- Number of slides: 39
iGrid 2002: International Virtual Laboratory
www.igrid2002.org
23-26 September 2002, Amsterdam Science and Technology Centre, The Netherlands
September 26, 2002
Maxine Brown, STAR TAP/StarLight co-Principal Investigator, Associate Director, Electronic Visualization Laboratory, University of Illinois at Chicago
iGrid 2002, September 23-26, 2002, Amsterdam, The Netherlands
• iGrid is a conference demonstrating application demands for increased bandwidth.
• iGrid is a testbed enabling the world's research community to work together briefly and intensely to advance the state of the art – moving from Grid-intensive computing to LambdaGrid-intensive computing, in which computational resources worldwide are connected by multiple lambdas.
www.startap.net/igrid2002
iGrid 2002 Application Demonstrations
• 28 demonstrations from 16 countries: Australia, Canada, CERN/Switzerland, France, Finland, Germany, Greece, Italy, Japan, the Netherlands, Singapore, Spain, Sweden, Taiwan, the United Kingdom and the USA.
• Applications to be demonstrated: art, bioinformatics, chemistry, cosmology, cultural heritage, education, high-definition media streaming, manufacturing, medicine, neuroscience, physics and tele-science.
• Grid technologies to be demonstrated, with a major emphasis on grid middleware: data management grids, data replication grids, visualization grids, data/visualization grids, computational grids, access grids and grid portals.
iGrid 2002 Featured Network Infrastructures
• NetherLight, developed by SURFnet within the context of the Dutch Next Generation Internet project (GigaPort), is an advanced optical infrastructure and proving ground for network services optimized for high-performance applications, located at the Amsterdam Internet Exchange facility.
• StarLight, developed by the University of Illinois at Chicago, Northwestern University and Argonne National Laboratory, in partnership with Canada's CANARIE and Holland's SURFnet and with funding from the USA NSF, is a persistent infrastructure that supports advanced applications, middleware research and aggressive advanced networking services.
iGrid 2002 Enabling Technologies and Projects
• The EU-funded DataGrid Project aims to develop, implement and exploit a computational and data-intensive grid of resources for the analysis of scientific data. www.eu-datagrid.org
• The EU-funded DataTAG Project is creating an intercontinental testbed (Trans-Atlantic Grid) for data-intensive grids, with a focus on networking techniques and interoperability issues among different grid domains. www.datatag.org
iGrid 2002 Enabling Technologies and Projects
• The Globus Project conducts research and development on the application of Grid concepts to scientific and engineering computing. The Globus Project provides software tools (the Globus Toolkit) that make it easier to build computational grids and grid-based applications. www.globus.org
• Quanta, the Quality of Service (QoS) Adaptive Networking Toolkit, is backward compatible with CAVERNsoft and provides application developers with an easy-to-use system to efficiently utilize the extremely high bandwidth afforded by optical networks. www.evl.uic.edu/cavern/teranode/quanta
iGrid 2002 Singapore, Australia and Japan
APBioGrid of APBioNet
• Bio Informatics Centre (BIC), National University of Singapore
• Kooprime, Singapore
• Cray, Singapore
Using BIC's APBioGrid (the Asia Pacific Bioinformatics Grid, a collection of networked computational resources) and KOOP testbed technology, biologists can quickly build a complex series of computations and database management activities on top of computational grids to solve real-world problems. APBioGrid mimics tasks typical of a bioinformatician – it does resource discovery over the network, remotely distributing tasks that perform data acquisition, data transfer, data processing, data upload to databases, data analysis, computational calculations and visualizations.
www.bic.nus.edu.sg, www.bic.nus.edu.sg/biogrid, www.apbionet.org, http://s-star.org/main.htm
iGrid 2002 Canada, CERN and The Netherlands
ATLAS Canada LightPath Data Transfer Trial
• TRIUMF, Canada
• Carleton University, Canada
• University of Victoria, British Columbia, Canada
• University of Alberta, Canada
• University of Toronto, Canada
• Simon Fraser University, Canada
• BCNet, British Columbia, Canada
• CANARIE, Canada
• CERN, Switzerland
• University of Amsterdam, The Netherlands
The LightPath Trial hopes to transmit 1 TB of ATLAS Monte Carlo data from TRIUMF (Canada's National Laboratory for Particle and Nuclear Physics) to CERN in under 2 hours. Using Canada's 2.5 Gbps link to StarLight, SURFnet's 2.5 Gbps link from StarLight to NetherLight, and the link from NetherLight to CERN, an end-to-end lightpath will be built between TRIUMF in Vancouver and CERN.
www.triumf.ca
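As a sanity check on the trial's target, the numbers above are enough for a back-of-the-envelope estimate; the 80% efficiency figure below is an assumption for illustration, not a measured value:

```python
# Can 1 TB cross a 2.5 Gbps path in under 2 hours?
# Assumes the link is the only bottleneck; efficiency models protocol overhead.

def transfer_time_seconds(bytes_to_send: float, link_gbps: float,
                          efficiency: float = 1.0) -> float:
    """Time to move `bytes_to_send` over a `link_gbps` link at the given efficiency."""
    bits = bytes_to_send * 8
    return bits / (link_gbps * 1e9 * efficiency)

one_tb = 1e12  # 1 TB as 10^12 bytes
ideal = transfer_time_seconds(one_tb, 2.5)            # perfect utilization
realistic = transfer_time_seconds(one_tb, 2.5, 0.8)   # assumed 80% efficiency

print(f"ideal: {ideal / 60:.0f} min")     # ~53 minutes
print(f"at 80%: {realistic / 60:.0f} min")  # ~67 minutes, still under 2 hours
```

So the 2-hour goal is feasible on paper only if the lightpath sustains a large fraction of its nominal rate end to end, which is exactly what the trial sets out to prove.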
iGrid 2002 Netherlands, USA, Canada, CERN, France, Italy, Japan, UK
Bandwidth Challenge from the Low-Lands
• SLAC, USA
• NIKHEF, The Netherlands
• Participating sites: APAN, Japan; ANL, USA; Lab, USA; Caltech, USA; CERN, Switzerland; Daresbury Laboratory, UK; ESnet, USA; Fermilab, USA; NASA GSFC, USA; IN2P3, France; INFN/Milan, Italy; INFN/Rome, Italy; Internet2, USA; JLab, USA; KEK High Energy Accelerator Research Organization, Japan; LANL, USA; LBNL/NERSC, USA; Manchester University, UK; NIKHEF, The Netherlands; ORNL, USA; Rice University, USA; RIKEN Accelerator Research Facility, Japan; Rutherford Appleton Laboratory, UK; SDSC/UCSD, USA; SLAC, USA; Stanford University, USA; Sun Microsystems, USA; TRIUMF, Canada; University College London, UK; University of Delaware, USA; University of Florida, USA; University of Michigan, USA; University of Texas at Dallas, USA; University of Wisconsin, Madison, USA
Current data transfer capabilities to several international sites with high-performance links are demonstrated. iGrid 2002 serves as a HENP "Tier 0" or "Tier 1" site (an accelerator or major computing site), distributing data to multiple replica sites. Researchers investigate and demonstrate issues regarding TCP implementations for high-bandwidth, long-latency links, and create a repository of trace files of a few interesting flows to help explain the behavior of transport protocols over various production networks.
http://www-iepm.slac.stanford.edu/monitoring/bulk/igrid2002
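One reason high-bandwidth, long-latency links stress standard TCP is the bandwidth-delay product: the sender's window must hold a full round-trip's worth of unacknowledged data to keep the pipe full. A rough illustration (the 100 ms RTT is an assumed transatlantic-scale figure, not a measurement from the demo):

```python
# Bandwidth-delay product: bytes that must be "in flight" to saturate a path.
# Classic TCP windows of ~64 KB fall far short on transoceanic links.

def bdp_bytes(bandwidth_bps: float, rtt_seconds: float) -> float:
    """Return the bandwidth-delay product in bytes."""
    return bandwidth_bps * rtt_seconds / 8

# Assumed example: a 2.5 Gbps path with 100 ms round-trip time
bdp = bdp_bytes(2.5e9, 0.100)
print(f"required window: {bdp / 1e6:.2f} MB")              # 31.25 MB
print(f"64 KB window fills {64e3 / bdp:.2%} of the pipe")  # well under 1%
```

This is why the trace-file repository focuses on window scaling and TCP stack tuning rather than raw link capacity.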
iGrid 2002 USA and CERN
Bandwidth Gluttony ― Distributed Grid-Enabled Particle Physics Event Analysis
• Argonne National Laboratory (ANL), USA
• Caltech, USA
• CERN (EU DataGrid Project)
This demonstration is a joint effort between Caltech (HEP) and ANL (Globus/GridFTP). Requests for remote virtual data collections are issued by Grid-based software that is itself triggered from a customized version of the High Energy Physics (HEP) analysis tool called ROOT. These requests cause the data to be moved across a wide-area network using both striped and standard GridFTP servers. In addition, at iGrid, an attempt is made to saturate a 10 Gbps link between Amsterdam, ANL and StarLight and a 2.5 Gbps link between Amsterdam and CERN, using striped GridFTP channels and specially tuned TCP/IP stacks applied to memory-cached data.
http://pcbunn.cacr.caltech.edu/iGrid2002/demo.htm
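The striping idea can be sketched as dividing a byte range into pieces so each piece moves over its own parallel channel. This is a conceptual illustration only, with a hypothetical helper name, not the GridFTP protocol itself, which coordinates multiple server nodes and TCP streams:

```python
# Conceptual sketch of striped transfer: split a file's byte range into N
# stripes so each stripe can be sent over a separate parallel connection.

def stripe_ranges(total_bytes: int, stripes: int) -> list[tuple[int, int]]:
    """Return (offset, length) pairs covering [0, total_bytes) across `stripes` channels."""
    base, extra = divmod(total_bytes, stripes)
    ranges, offset = [], 0
    for i in range(stripes):
        length = base + (1 if i < extra else 0)  # spread the remainder evenly
        ranges.append((offset, length))
        offset += length
    return ranges

print(stripe_ranges(10_000, 4))
# [(0, 2500), (2500, 2500), (5000, 2500), (7500, 2500)]
```

With each stripe on its own TCP connection, the aggregate window across connections can cover a bandwidth-delay product that a single untuned connection cannot.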
iGrid 2002 USA
Beat Box
• Indiana University, USA
• Res Umbrae, USA
Beat Box presents networked CAVE participants with a playful arena of interactive sound machines. Participants cycle through sound selections and give voice to an interval by introducing it to a thoroughly odd indigenous head. Each head represents a distinct moment in a sequence that contributes to the resultant delivery of the collective instruments.
http://dolinsky.fa.indiana.edu/beatbox
iGrid 2002 USA
Collaborative Visualization over the Access Grid
• Argonne National Laboratory/University of Chicago, USA
• Northern Illinois University, USA
This demonstration shows next-generation Access Grid applications, where the Access Grid is coupled to high-speed networks and vast computational resources. Using the Globus Toolkit, MPICH-G2 and Access Grid technology, scientists can collaboratively and interactively analyze time-varying datasets that are multiple terabytes in size.
www.mcs.anl.gov/fl/events/igrid2002.html, www.accessgrid.org, www.globus.org/mpi, www.globus.org, www.teragrid.org
iGrid 2002 The Netherlands and USA
D0 Data Analysis
• NIKHEF, The Netherlands
• Fermi National Accelerator Laboratory (Fermilab), USA
• Michigan State University, USA
The D0 Experiment, which relies on the Tevatron Collider at Fermilab, is a worldwide collaboration of scientists conducting research on the fundamental nature of matter. Currently, raw data from the D0 detector is processed at Fermilab's computer farm and results are written to tape. At iGrid, researchers show that by using the transoceanic StarLight/NetherLight network, it is possible for Fermilab to send raw data to NIKHEF for processing and then have NIKHEF send the results back to Fermilab.
www-d0.fnal.gov, www.nikhef.nl
iGrid 2002 USA, Germany, Japan, Taiwan and UK
Distributed, On-Demand, Data-Intensive and Collaborative Simulation Analysis
• Sandia National Laboratories, USA
• Pittsburgh Supercomputing Center, USA
• Tsukuba Advanced Computing Center, Japan
• Manchester Computing Centre, UK
• National Center for High Performance Computing, Taiwan
• High Performance Computing Center, Rechenzentrum Universität Stuttgart, Germany
Grid tools applied to bioinformatics are demonstrated – specifically, predicting identifiable intron/exon splice sites in human genes based on RNA secondary structures. Modeling and simulation programs scale to geographically distributed supercomputer centers. Results are visualized in a collaborative environment, displaying spatial relationships and insights into identifying exonic splicing enhancers.
www.cs.sandia.gov/ilab, www.tbi.univie.ac.at/research/VirusPrj.html, www.hlrs.de
iGrid 2002 USA and UK
Dynamic Load Balancing of Structured Adaptive Mesh Refinement (SAMR) Applications on Distributed Systems
• CS Department, Illinois Institute of Technology, USA
• ECE Department, Northwestern University, USA
• Nuclear and Astrophysics Laboratory, Oxford University, UK
AMR applications result in load imbalance among processors on distributed systems. ENZO, a successful parallel implementation of structured AMR (SAMR) used in astrophysics and cosmology, incorporates dynamic load balancing across distributed systems.
www.ece.nwu.edu/~zlan/research.html
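A minimal sketch of the kind of heuristic dynamic load balancing relies on is greedy assignment of refined patches to the least-loaded processor. This is a common baseline for illustration only; ENZO's actual SAMR balancer is considerably more involved and also weighs data locality and patch-migration cost:

```python
import heapq

# Greedy load balancing: hand the largest remaining patch to the processor
# with the least accumulated work.

def balance(patch_costs: list[float], n_procs: int) -> list[list[int]]:
    heap = [(0.0, p) for p in range(n_procs)]   # (current load, processor id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(n_procs)]
    # Largest-first ordering reduces the worst-case imbalance of greedy assignment
    for idx in sorted(range(len(patch_costs)), key=lambda i: -patch_costs[i]):
        load, proc = heapq.heappop(heap)
        assignment[proc].append(idx)
        heapq.heappush(heap, (load + patch_costs[idx], proc))
    return assignment

print(balance([8, 7, 6, 5, 4], 2))  # [[0, 3, 4], [1, 2]] -> loads 17 and 13
```

In SAMR the patch costs change every time the mesh refines, which is why the balancing must be dynamic rather than computed once up front.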
iGrid 2002 USA and CERN
Fine-Grained Authorization for GARA Automated Bandwidth Reservation
• University of Michigan, USA
• CERN, Switzerland
This demonstration shows modifications to the Globus General-purpose Architecture for Reservation and Allocation (GARA). Also shown is a secure and convenient Web interface for making reservation requests based on Kerberos credentials. GARA modifications are demonstrated by reserving bandwidth for a video application running between sites with distinct security domains. Traffic generators overload the router interface servicing the video receiver, degrading the video quality when bandwidth is not reserved.
www.citi.umich.edu/projects/qos
iGrid 2002 Italy and CERN
GENIUS
• Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Catania, Italy
• Università di Catania, Italy
• NICE srl, Camerano Casasco, Italy
• CERN, Switzerland
The grid portal GENIUS (Grid Enabled web eNvironment for site Independent User job Submission) is an interactive data management tool being developed on the EU DataGrid testbed. At iGrid 2002, researchers demonstrate GENIUS's data movement and discovery, security mechanisms and system monitoring techniques, as well as optimization and fail-safe mechanisms ― for example, how to find network-optimized files and how to detect system failure.
https://genius.ct.infn.it
iGrid 2002 USA, Japan and Taiwan
Global Telescience Featuring IPv6
• National Center for Microscopy and Imaging Research (NCMIR), UCSD, USA
• San Diego Supercomputer Center, UCSD, USA
• Cybermedia Center, Osaka University, Japan
• National Center for High Performance Computing, Taiwan
Utilizing native IPv6 and a mixture of high bandwidth and low latency, this demonstration features a network-enabled end-to-end system for 3D electron tomography that utilizes richly connected resources to remotely control the intermediate high-voltage electron microscope in San Diego and the ultra-high-voltage electron microscope in Osaka.
https://gridport.npaci.edu/Telescience
iGrid 2002 The Netherlands and USA
Griz: Grid Visualization over Optical Networks
• Vrije Universiteit, The Netherlands
• Electronic Visualization Laboratory, University of Illinois at Chicago, USA
Aura, a distributed parallel rendering toolkit, is used to remotely render data on available graphics resources (in Chicago and in Amsterdam) for local display at the iGrid conference. Aura is applied to real-world scientific problems; notably, the visualization of high-resolution isosurfaces of the Visible Human dataset and an interactive molecular dynamics simulation.
www.cs.vu.nl/~renambot/vr/html/intro.htm
iGrid 2002 USA, Canada, The Netherlands, Sweden and UK
High Performance Data Webs
• Laboratory for Advanced Computing, University of Illinois at Chicago, USA
• Dalhousie University, Halifax, Canada
• Imperial College of Science, Technology & Medicine, University of London, UK
• Universiteit van Amsterdam, The Netherlands
• SARA, The Netherlands
• Center for Parallel Computers, Royal Institute of Technology, Sweden
DataSpace is a high-performance data web for the remote analysis, mining and real-time interaction of scientific, engineering, business and other complex data. DataSpace applications are designed to exploit the capabilities of high-performance networks so that gigabyte and terabyte datasets can be remotely explored in real time.
www.ncdm.uic.edu, www.dataspaceweb.net
iGrid 2002 Spain and USA
HDTV Transmission over IP
• Universitat Politècnica de Catalunya, Barcelona, Spain
• Ovide Broadcast Services, Barcelona, Spain
• ResearchChannel, Pacific Northwest GigaPoP, USA
• iCAIR, Northwestern University, USA
• Starmaze, Spain
First transcontinental HDTV broadcast, using "Year Gaudi 2002" footage:
• UPC: 1.5 Gbps (HD-SDI) compressed and transmitted at 270 Mbps (SDTI) over IP
• ResearchChannel: uncompressed bidirectional HDTV/IP using prototype Tektronix hardware at 1.5 Gbps, Sony HDCAM/IP at 270 Mbps, MPEG-2 at 10 Mbps, video-on-demand at 5.6 Mbps and audio-on-demand at 1.4 Mbps
• iCAIR: streaming 270 Mbps over IP
www.i2cat.net, www.researchchannel.com, www.washington.edu, www.icair.org
iGrid 2002 Taiwan and Germany
Image Feature Extraction on a Grid Testbed
• National Center for High Performance Computing, Taiwan
• Institute of Statistical Science, Academia Sinica, Taiwan
• High Performance Computing Center, Rechenzentrum Universität Stuttgart, Germany
For medical imagery (confocal laser-scanning microscopes, CT, MRI and PET), NCHC does image processing, analysis and 3D reconstruction. For biotechnology imagery (e.g., microarray biochips), NCHC uses a data-clustering procedure for feature extraction that provides insight into an image, such as identifying diseases caused by a particular protein. Grid techniques enable the use of distributed computing resources and shared data. High-speed networks enable fast processing; medical doctors typically want the procedure accomplished in 5 seconds for use in daily operations.
http://motif.nchc.gov.tw/DataGrid
iGrid 2002 USA, Canada, France, Japan, The Netherlands, Singapore
Kites Flying In and Out of Space
• Jacqueline Matisse Monnier, visiting artist
• NCSA, UIUC, USA
• Mountain Lake Workshop, Virginia Tech Foundation, USA
• Virginia Polytechnic Institute and State University (Virginia Tech), USA
• EVL, University of Illinois at Chicago, USA
• SARA, The Netherlands
• Sorbonne/La Cité Museum de Musique Paris, France
• Tohwa University, Japan
• Institute of High Performance Computing, Singapore
• New Media Innovation Center, Canada
This virtual reality art piece is a study of the physical properties of the flying kinetic artwork of Jacqueline Matisse Monnier. One PC supports the simulation of one kite. For iGrid, distributed grid computing for the arts is demonstrated.
http://calder.ncsa.uiuc.edu/ART/MATISSE/
iGrid 2002 Germany and USA
Network-Intensive Grid Computing and Visualization
• Max Planck Institut für Gravitationsphysik, Albert Einstein Institut, Golm, Germany
• Konrad-Zuse-Zentrum für Informationstechnik, Berlin, Germany
• Lawrence Berkeley National Laboratory/National Energy Research Scientific Computing Center, USA
Scientists run an astrophysics simulation at a USA supercomputing center and then compute detailed remote visualizations of the results. One part of the demo shows remote online visualization – as the simulation continues, each time step's raw data is streamed from the USA to a Linux cluster in Amsterdam for parallel volume rendering. The other part demonstrates remote offline visualization, using grid technologies to access data on remote data servers, as well as new rendering techniques for network-adaptive visualizations.
www.cactuscode.org, www.griksl.org
iGrid 2002 USA
PAAPAB
• Department of Media Study, University at Buffalo, USA
• Res Umbrae, USA
• New York State Center for Engineering Design and Industrial Innovation, University at Buffalo, USA
PAAPAB (Pick An Avatar, Pick A Beat) is a shared virtual reality disco environment inhabited by life-size puppets (user avatars). Users tour the dance floor to see the puppets they animate, dance with the puppets, and dance with avatars of other users. This research focuses on creating interactive drama in virtual reality; that is, immersive stories. PAAPAB serves as a testbed for technology development as well as character and world design.
http://resumbrae.com/projects/paapab, www.ccr.buffalo.edu/anstey/VR/PAAPAB, www.nyscedii.buffalo.edu
iGrid 2002 USA and The Netherlands
Photonic TeraStream
• iCAIR, Northwestern University, USA
• EVL, University of Illinois at Chicago, USA
• Materials Sciences Research Center, Northwestern University, USA
• Universiteit van Amsterdam, The Netherlands
• Argonne National Laboratory, USA
The Photonic TeraStream, supported by OMNInet, demonstrates that photonic-enabled applications are possible. OMNInet is used to prototype tools for intelligent application signaling, dynamic lambda provisioning, and extensions to lightpaths through dynamically provisioned L2 and L3 configurations – to access edge resources. The goal is to develop "Global Services on Demand" technologies for optical networks, enabling scientists to find, gather, integrate and present information – large-scale datasets, scientific visualizations, streaming digital media, computational results – from resources worldwide.
www.icair.org/igrid2002, www.uva.nl, www.icair.org/omninet
iGrid 2002 Japan
TACC Quantum Chemistry Grid/Gaussian Portal
• Grid Technology Research Center (GTRC), National Institute of Advanced Industrial Science and Technology (AIST), Japan
The Gaussian code, used in computational chemistry, sometimes receives inadequate computational resources when run on large computers. The Tsukuba Advanced Computing Center (TACC) Gaussian Grid Portal efficiently utilizes costly computational resources without requiring users to know the specifications of each system environment. It consists of a Web interface, a meta-scheduler, computational resources, archival resources and Grid software.
http://unit.aist.go.jp/grid/GSA/gaussian
iGrid 2002 USA
TeraScope: Visual Tera Mining
• Electronic Visualization Laboratory, University of Illinois at Chicago (UIC), USA
• National Center for Data Mining, UIC, USA
TeraScope is a massively parallelized set of information visualization tools for visual data mining that interactively queries and mines terabyte datasets, correlates the data, and then visualizes the data using parallelized rendering software on tiled displays. TeraScope's main foci are to develop techniques to create TeraMaps (visualizations that summarize rather than plot enormous datasets) and to develop a distributed memory cache to collect pools of memory from optically connected clusters. These caches are used by TeraScope to bridge the impedance mismatch between large, slow distributed data stores and fast local memory.
www.evl.uic.edu/cavern/teranode/terascope, www.dataspaceweb.net
iGrid 2002 USA
TeraVision: Visualization Streaming over Optical Networks
• Electronic Visualization Laboratory, University of Illinois at Chicago, USA
TeraVision is a hardware-assisted, high-resolution graphics streaming system for the Access Grid, enabling anyone to deliver a presentation without installing software or distributing data files in advance. A user giving a presentation on a laptop, or showing output from a node of a graphics cluster, plugs the VGA or DVI output of the computer into the TeraVision Box. The Box captures the signal at its native resolution, digitizes it, and broadcasts it to another networked TeraVision Box, which is connected to a PC and a DLP projector. Two Boxes can be used to stream stereoscopic computer graphics. Multiple Boxes can be used for an entire tiled display.
www.evl.uic.edu/cavern/teranode/teravision
iGrid 2002 USA and UK
The Universe
• NCSA, UIUC, USA
• University of California, San Diego, USA
• Information Sciences Institute, University of Southern California, USA
• Stephen Hawking Laboratory, University of Cambridge, UK
Virtual Director and related technologies enable multiple users to remotely collaborate in a shared, astrophysical virtual world. Users collaborate via video, audio and 3D avatar representations, and through discrete interactions with the data. Astrophysical scenes are rendered using several techniques, including an experimental renderer that creates time-series volume animations using pre-sorted points and billboard splats, allowing visualizations of very large datasets in real time.
http://virdir.ncsa.uiuc.edu/virdir.html
iGrid 2002 USA, France, Germany and Italy
Video IBPster
• Logistical Computing and Internetworking (LoCI) Lab, University of Tennessee, USA
• Innovative Computing Lab, University of Tennessee, USA
• University of California, Santa Barbara, USA
• University of California, San Diego, USA
• ENS, Lyon, France
• Università del Piemonte Orientale, Alessandria, Italy
• High Performance Computing Center, Rechenzentrum Universität Stuttgart, Germany
Logistical Networking is the global scheduling and optimization of data movement, storage and computation. LoCI develops tools for fast data transfer, such as the Data Mover, using as much bandwidth as is available. At iGrid, a geographically distributed abstraction of a file is replicated, transported to depots that are closer according to network proximity values, and downloaded from the nearest site in a completely transparent way.
http://loci.cs.utk.edu, http://nws.cs.ucsb.edu
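Choosing the download source "according to network proximity values" can be sketched as a simple nearest-replica pick. The depot names and proximity numbers below are invented for illustration; LoCI's actual tools would also weigh depot load and measured throughput:

```python
# Nearest-replica selection: given depots holding copies of the same data,
# download from the one with the best (lowest) proximity value.

def pick_depot(replicas: dict[str, float]) -> str:
    """Return the depot name with the lowest proximity value (e.g. RTT in ms)."""
    return min(replicas, key=replicas.get)

# Hypothetical proximity measurements from the client's vantage point
proximity_ms = {"depot-amsterdam": 4.0, "depot-chicago": 95.0, "depot-tokyo": 210.0}
print(pick_depot(proximity_ms))   # depot-amsterdam
```

Because every replica holds the same bytes, the selection can change mid-transfer without the application noticing, which is what makes the download "completely transparent".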
iGrid 2002 The Netherlands
Virtual Laboratory on a National Scale
• University of Amsterdam, The Netherlands
This demonstration of upper middleware complements Grid services, enabling scientists to easily extract information from raw datasets utilizing multiple computing resources. The Virtual Laboratory develops a formal series of steps, or process flow, to solve a particular problem (data analysis, visualization, etc.) in a particular application domain. Various clusters (DAS-2) are assigned parts of the problem (retrieval, analysis, visualization, etc.).
www.vl-e.nl/VLAM-G/
iGrid 2002 Greece and USA
Virtual Visit to the Site of Ancient Olympia
• Foundation of the Hellenic World (FHW), Greece
• University of Macedonia, Greece
• Greek Research & Technology Network, Greece
• EVL, University of Illinois at Chicago, USA
In preparation for the 2004 Olympic Games hosted by Greece, the FHW, a cultural heritage institution based in Athens, is developing an accurate 3D reconstruction of the site of Olympia as it was in antiquity. One of the most important monuments of the site is the Temple of Zeus, which housed the famous statue of Zeus, one of the seven wonders of the ancient world, of which nothing remains today. If access to a high-performance network were available, the FHW's museum could serve as a centre of excellence, delivering educational and heritage content to a number of sites worldwide.
www.fhw.gr/fhw/en/projects, www.fhw.gr/fhw/en/projects/3dvr/templezeus.html, www.grnet.gr/grnet2/index_en.htm
iGrid 2002 The Netherlands, Finland, UK and USA
vlbiGrid
• Joint Institute for VLBI in Europe, The Netherlands
• Metsähovi Radio Observatory, Finland
• Jodrell Bank Observatory, University of Manchester, UK
• Haystack Observatory, MIT, USA
• University of Manchester, UK
• University College London, UK
• University of Amsterdam, The Netherlands
Very Long Baseline Interferometry (VLBI) is a technique in which an array of physically independent radio telescopes observes simultaneously to yield high-resolution images of cosmic radio sources. The European VLBI Network (EVN) has access to multiple data sources that can deliver 1 Gbps each and a data processor that can process 16 data streams simultaneously. High-speed networks would enable the EVN to achieve manyfold improvements in bandwidth.
www.jive.nl, www.jb.man.ac.uk, www.haystack.edu
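The aggregate rate the data processor would have to absorb follows directly from the figures above, assuming every one of the 16 streams runs at the full 1 Gbps in real time:

```python
# Aggregate data rate into the VLBI correlator if all stations stream in real time.
streams, gbps_each = 16, 1.0
aggregate_gbps = streams * gbps_each
bytes_per_hour = aggregate_gbps * 1e9 / 8 * 3600
print(f"{aggregate_gbps:.0f} Gbps aggregate, ~{bytes_per_hour / 1e12:.1f} TB per hour")
# -> 16 Gbps aggregate, ~7.2 TB per hour
```

Sustaining tens of terabytes per observing session is why shipping disks was the norm and why high-speed networks promise the "manyfold" bandwidth improvement cited above.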
iGrid 2002 …in addition, SURFnet Streaming Video and Documentary!
iGrid 2002 to be streamed live on the Internet!
• Live streams. SURFnet is streaming live conference plenary sessions and demonstration material over the Internet using Real SureStream, IP multicast H.261, MPEG-1 and MPEG-2.
• Documentary. SURFnet is making a documentary of iGrid 2002 demonstrations.
• On-demand video. After the conference, all video, both plenary sessions and documentary, will be available for on-demand viewing through the iGrid 2002 website and the SURFnet A/V streaming service.
http://www.igrid2002.org/webcast.html
Acknowledgments: Organizing Institutions
The Netherlands:
• Amsterdam Science & Technology Centre
• GigaPort Project
• SARA Computing and Networking Services
• SURFnet
• Universiteit van Amsterdam/Science Faculty
United States of America:
• Argonne National Laboratory/Mathematics and Computer Science Division
• Indiana University/Office of the Vice President for Information Technology
• Northwestern University/International Center for Advanced Internet Research
• University of Illinois at Chicago/Electronic Visualization Laboratory
Acknowledgments: Participating Organizations
• CANARIE
• Internet Educational Equal Access Foundation (IEEAF)
• Global Grid Forum
• Globus Project
• GRIDS Center
• National Lab for Applied Network Research, Distributed Applications Support Team (NLANR/DAST)
• Pacific Rim Applications and Grid Middleware Assembly (PRAGMA)
• TERENA
• UCAID/Internet2
• University of California, San Diego/California Institute for Telecommunications and Information Technology [Cal-(IT)2]
Acknowledgments: Sponsors
• Amsterdam Internet Exchange
• Amsterdam Science & Technology Centre
• Cisco Systems, Inc.
• City of Amsterdam
• GEOgraphic Network Affiliates–International
• GigaPort Project
• Glimmerglass Networks
• HP
• IBM
• Juniper Networks
• Level 3 Communications, Inc.
• National Computer Facilities (NWO/NCF), The Netherlands
• National Science Foundation, USA
• Royal Philips Electronics
• SARA Computing and Networking Services
• Stichting FOM, Foundation for Fundamental Research on Matter
• Stichting HEF
• Stichting SURFnet
• Tyco Telecommunications
• Unilever NV
• Universiteit van Amsterdam
For More Information
University of Illinois at Chicago: Maxine Brown, maxine@uic.edu; Tom DeFanti, tom@uic.edu
Universiteit van Amsterdam: Cees de Laat, delaat@science.uva.nl