
HPGC 2006 Workshop on High-Performance Grid Computing at IPDPS 2006, Rhodes Island, Greece, April 25 – 29, 2006
Major HPC Grid Projects: From Grid Testbeds to Sustainable High-Performance Grid Infrastructures
Wolfgang Gentzsch, D-Grid, RENCI, GGF GFSG, e-IRG, wgentzsch@d-grid.de
Thanks to: Eric Aubanel, Virendra Bhavsar, Michael Frumkin, Rob F. Van der Wijngaart, and INTEL

Focus … on HPC capabilities of grids … on sustainable grid infrastructures … selected six major HPC grid projects: UK e-Science, US TeraGrid, NAREGI Japan, EGEE and DEISA Europe, D-Grid Germany … and I apologize for not mentioning your favorite grid project, but…

Too Many Major Grids to mention them all:

UK e-Science Grid: started in early 2001, $400 Mio, application independent. Centres on the map: Glasgow, Edinburgh, Belfast, Newcastle, DL, Manchester, Cambridge, Oxford, Hinxton, RAL, Cardiff, London, Southampton.

NGS Overview: User view
• Resources
– 4 core clusters
– UK’s National HPC services
– A range of partner contributions
• Access
– Support UK academic researchers
– Lightweight peer review for limited “free” resources
• Central help desk – www.grid-support.ac.uk
(Neil Geddes, CCLRC e-Science)

NGS Overview: Organisational view
• Management
– GOSC Board: strategic direction
– Technical Board: technical coordination and policy
• Grid Operations Support Centre
– Manages the NGS
– Operates the UK CA + over 30 RAs
– Operates the central helpdesk
– Policies and procedures
– Manages and monitors partners
(Neil Geddes, CCLRC e-Science)

NGS Use: over 320 users. Charts show files stored, CPU time by user, users by institution, and users by discipline (Sociology, Medicine, Humanities, PP + Astronomy, Env. Sci, Eng. + Phys. Sci, Large facilities, Biology).

NGS Development: Baseline Services
• Core node refresh
• Expand partnership – HPC, campus grids, data centres, digital repositories, experimental facilities
• Baseline services – aim to map user requirements onto standard solutions; support convergence/interoperability
• Move further towards project (VO) support – support collaborative projects
• Mixed economy – core resources, shared resources, project/contract-specific resources
Baseline services include: Storage Element, Basic File Transfer, Reliable File Transfer, Catalogue Services, Data Management tools, Compute Element, Workload Management, VO-specific services, VO Membership Services, Database Services, POSIX-like I/O, Application Software Installation Tools, Job Monitoring, Reliable Messaging, Information System.
(Neil Geddes, CCLRC e-Science)

The Architecture of Gateway Services: the user’s desktop talks to a Grid Portal Server backed by a Proxy Certificate Server / vault. TeraGrid Gateway Services include a User Metadata Catalog, Application Workflow, Application Deployment, Resource Broker, Application/Resource Catalogs, Replica Management, and Application Events. These sit on Core Grid Services (Security, Accounting Service, Notification Service, Policy, Resource Allocation, Reservations and Scheduling, Grid Orchestration, Data Management Service, Administration & Monitoring), built on the Web Services Resource Framework / Web Services Notification above the Physical Resource Layer. (Courtesy Jay Boisseau)

TeraGrid Use: 1600 users. (Charlie Catlett, cec@uchicago.edu)

Delivering User Priorities in 2005: results of in-depth discussions with 16 TeraGrid user teams during the first annual user survey (August 2004), scored by overall depth of need and by the number of partners in need (breadth of need). Capabilities requested, by type (Data, Grid Computing, Science Gateways): Remote File Read/Write, High-Performance File Transfer, Coupled Applications and Co-scheduling, Grid Portal Toolkits, Grid Workflow Tools, Batch Metascheduling, Global File System, Client-Side Computing Tools, Batch Scheduled Parameter Sweep Tools, Advanced Reservations. (Charlie Catlett, cec@uchicago.edu)

National Research Grid Infrastructure (NAREGI), 2003-2007
• Petascale grid infrastructure R&D for future deployment
– $45 mil (US) + $16 mil x 5 (2003-2007) = $125 mil total
– PL: Ken Miura (Fujitsu/NII); Sekiguchi (AIST), Matsuoka (Titech), Shimojo (Osaka-U), Aoyagi (Kyushu-U), …
– Participation by multiple (>= 3) vendors: Fujitsu, NEC, Hitachi, NTT, etc.
– NOT an academic project: ~100 FTEs
– Follow and contribute to GGF standardization, esp. OGSA
• Focused “Grand Challenge” grid application areas: Nanotech Grid Apps (“NanoGrid”, IMS, ~10 TF), (Biotech Grid Apps: BioGrid, RIKEN), (other apps: other institutes)
• National AAA infrastructure and grid middleware R&D on a grid R&D infrastructure growing from 15 TF to 100 TF, connected via SuperSINET; sites include NII, Titech, Osaka-U, AIST, IMS, U-Kyushu, Fujitsu, NEC, Hitachi

NAREGI Software Stack (Beta Ver. 2006)
• Grid-Enabled Nano-Applications (WP6)
• Grid Visualization (WP3), Grid PSE, Data (WP4), Packaging, Grid Programming (WP2: GridRPC, GridMPI), Grid Workflow (WFML: Unicore + WF)
• Super Scheduler (WP1), Distributed Information Service (CIM) (WSRF (GT4 + Fujitsu WP1) + GT4 and other services), Grid VM (WP1)
• Grid Security and High-Performance Grid Networking (WP5)
• SuperSINET connects NII, IMS, research organizations, and major university computing centers: computing resources and virtual organizations

GridMPI
• MPI applications run on the grid environment
• Metropolitan-area, high-bandwidth environment (10 Gbps, 500 miles, less than 10 ms one-way latency): parallel computation
• Larger than metropolitan area: MPI-IO
• A single (monolithic) MPI application runs over the grid environment, with computing resources at site A and site B joined by a wide-area network (a minimal sketch follows below)
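
The point of the slide is that the MPI program itself does not change when it spans sites; only latency and bandwidth do. Below is a minimal, hedged sketch of that idea, written with the mpi4py bindings purely for illustration (GridMPI itself is a C MPI library, and nothing here is taken from its API): a ping-pong between two ranks whose measured round-trip time is microseconds inside a cluster and tens of milliseconds across a 500-mile WAN.

```python
# Minimal MPI ping-pong sketch. The same code runs unchanged on one cluster
# or, with a grid-enabled MPI such as GridMPI, across two sites; only the
# measured latency changes. mpi4py is used here purely for illustration.
from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

payload = bytearray(1024)   # 1 KB message
rounds = 100

comm.Barrier()
start = time.time()
for _ in range(rounds):
    if rank == 0:
        comm.send(payload, dest=1, tag=0)      # rank 0 -> rank 1
        payload = comm.recv(source=1, tag=1)   # and back again
    elif rank == 1:
        payload = comm.recv(source=0, tag=0)
        comm.send(payload, dest=0, tag=1)
elapsed = time.time() - start

if rank == 0:
    # Intra-cluster: roughly microseconds per round trip.
    # Across a 500-mile WAN: tens of milliseconds, dominated by latency.
    print("average round trip: %.3f ms" % (elapsed / rounds * 1000.0))
```

Run with something like `mpirun -np 2 python pingpong.py`; the same script works whether the two ranks share a switch or sit at different sites, which is exactly the property GridMPI exploits.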

EGEE Infrastructure (Enabling Grids for E-sciencE). Scale: more than 180 sites in 39 countries, ~20,000 CPUs, more than 5 PB of storage, more than 10,000 concurrent jobs per day, more than 60 virtual organisations.

The EGEE project (Enabling Grids for E-sciencE)
• Objectives
– A large-scale, production-quality infrastructure for e-Science, leveraging national and regional grid activities worldwide; consistent, robust and secure
– Improving and maintaining the middleware
– Attracting new resources and users from industry as well as science
• EGEE: 1 April 2004 to 31 March 2006; 71 leading institutions in 27 countries, federated in regional grids
• EGEE-II: proposed start 1 April 2006 (for 2 years); expanded consortium of more than 90 partners in 32 countries (including non-European partners); related projects incl. BalticGrid, SEE-GRID, EUMedGrid, EUChinaGrid, EELA

Applications on EGEE (Enabling Grids for E-sciencE)
• More than 20 applications from 7 domains
– High Energy Physics: 4 LHC experiments (ALICE, ATLAS, CMS, LHCb); BaBar, CDF, DØ, ZEUS
– Biomedicine: bioinformatics (Drug Discovery, GPS@, Xmipp_MLrefine, etc.); medical imaging (GATE, CDSS, gPTM3D, SiMRI3D, etc.)
– Earth Sciences: Earth Observation, Solid Earth Physics, Hydrology, Climate
– Geo-Physics: EGEODE
– Financial Simulation: E-GRID
– Astronomy: MAGIC, Planck
– Computational Chemistry
• Another 8 applications from 4 domains are in the evaluation stage

Steps for “Grid-enabling” applications II (Enabling Grids for E-sciencE)
• Tools to easily access grid resources through high-level grid middleware (gLite): VO management (VOMS etc.), workload management, data management, information and monitoring
• Applications can interface directly to gLite, or use higher-level services such as portals, application-specific workflow systems, etc.

EGEE Performance Measurements (Enabling Grids for E-sciencE)
• Information about resources (static & dynamic)
– Computing: machine properties (CPUs, memory architecture, …), platform properties (OS, compiler, other software, …), load
– Data: storage location, access properties, load
– Network: bandwidth, load
• Information about applications
– Static: computing and data requirements, to reduce the search space
– Dynamic: changes in computing and data requirements (might need rescheduling)
• Plus: information about grid services (static & dynamic): which services are available, their status and capabilities
(A minimal matchmaking sketch over such resource records follows below.)
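
To make concrete how static and dynamic resource information can be combined for scheduling, here is a small illustrative sketch. The record fields and the ranking rule are assumptions chosen to mirror the bullet points above; they are not part of any EGEE/gLite API.

```python
# Illustrative matchmaking over resource-information records of the kind the
# slide lists (static machine/platform properties plus dynamic load).
# Field names and the ranking heuristic are assumptions for this sketch.

resources = [
    {"site": "A", "cpus": 256, "os": "Linux", "software": {"gcc", "mpi"}, "load": 0.85},
    {"site": "B", "cpus": 64,  "os": "Linux", "software": {"gcc"},        "load": 0.20},
    {"site": "C", "cpus": 512, "os": "AIX",   "software": {"xlc", "mpi"}, "load": 0.40},
]

job = {"min_cpus": 32, "os": "Linux", "needs": {"gcc"}}

def matches(res, job):
    """Static requirements prune the search space, as the slide suggests."""
    return (res["cpus"] >= job["min_cpus"]
            and res["os"] == job["os"]
            and job["needs"] <= res["software"])

def rank(res):
    """Dynamic information (current load) then orders the remaining candidates."""
    return res["load"]

candidates = sorted((r for r in resources if matches(r, job)), key=rank)
print("schedule on:", candidates[0]["site"] if candidates else "no match")
```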

Sustainability: Beyond EGEE-II (Enabling Grids for E-sciencE)
• Need to prepare for a permanent grid infrastructure
– Maintain Europe’s leading position in global science grids
– Ensure reliable and adaptive support for all sciences
– Independent of project funding cycles
– Modelled on the success of GÉANT: infrastructure managed centrally in collaboration with national bodies
• Goal: a permanent Grid Infrastructure

e-Infrastructures Reflection Group (e-IRG). Mission: … to support, on the political, advisory and monitoring level, the creation of a policy and administrative framework for the easy and cost-effective shared use of electronic resources in Europe (focusing on grid computing, data storage, and networking resources) across technological, administrative and national domains.

DEISA Perspectives: Towards cooperative extreme computing in Europe. Victor Alessandrini, IDRIS - CNRS, va@idris.fr. GGF 16, Athens, February 13-16, 2006.

The DEISA Supercomputing Environment (21,900 processors and 145 TF in 2006, more than 190 TF in 2007)
• IBM AIX super-cluster
– FZJ Jülich: 1312 processors, 8.9 teraflops peak
– RZG Garching: 748 processors, 3.8 teraflops peak
– IDRIS: 1024 processors, 6.7 teraflops peak
– CINECA: 512 processors, 2.6 teraflops peak
– CSC: 512 processors, 2.6 teraflops peak
– ECMWF: 2 systems of 2276 processors each, 33 teraflops peak
– HPCx: 1600 processors, 12 teraflops peak
• BSC: IBM PowerPC Linux system (MareNostrum), 4864 processors, 40 teraflops peak
• SARA: SGI Altix Linux system, 1024 processors, 7 teraflops peak
• LRZ: Linux cluster (2.7 teraflops) moving to an SGI Altix system (5120 processors and 33 teraflops peak in 2006, 70 teraflops peak in 2007)
• HLRS: NEC SX-8 vector system, 646 processors, 12.7 teraflops peak
(Fourth EGEE Conference, Pisa, October 23-28, 2005; V. Alessandrini, IDRIS-CNRS)

DEISA objectives
• To enable Europe’s terascale science by the integration of Europe’s most powerful supercomputing systems
• Enabling scientific discovery across a broad spectrum of science and technology is the only criterion for success
• DEISA is a European supercomputing service built on top of existing national services
• Integration of national facilities and services, together with innovative operational models
• Main focus is HPC and extreme-computing applications that cannot be supported by the isolated national services
• The service-provision model is the transnational extension of national HPC centres: operations, user support and applications enabling, network deployment and operation, middleware services
(Fourth EGEE Conference, Pisa, October 23-28, 2005; V. Alessandrini, IDRIS-CNRS)

About HPC
• Dealing with large complex systems requires exceptional computational resources; for algorithmic reasons, resource needs grow much faster than system size and complexity
• Huge datasets, involving large files; typical datasets are several PBytes
• Little usage of commercial or public-domain packages; most applications are corporate codes incorporating specialized know-how, so specialized user support is important
• Codes are fine-tuned and targeted for a relatively small number of well-identified computing platforms, and are extremely sensitive to the production environment
• The main requirement for high performance is bandwidth (processor to memory, processor to processor, node to node, system to system)
(Fourth EGEE Conference, Pisa, October 23-28, 2005; V. Alessandrini, IDRIS-CNRS)

HPC and Grid Computing
• Problem: the speed of light is not big enough. Finite signal propagation speed boosts message-passing latencies in a WAN from a few microseconds to tens of milliseconds (if A is in Paris and B in Helsinki; see the estimate below)
• If A and B are two halves of a tightly coupled complex system, communications are frequent and the enhanced latencies will kill performance
• Grid computing works best for embarrassingly parallel applications, or coupled software modules with limited communications. Example: A is an ocean code and B an atmospheric code; there is no bulk interaction
• Large, tightly coupled parallel applications should be run on a single platform. This is why we still need high-end supercomputers
• DEISA implements this requirement by rerouting jobs and balancing the computational workload at a European scale
(Fourth EGEE Conference, Pisa, October 23-28, 2005; V. Alessandrini, IDRIS-CNRS)
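
A back-of-envelope check of that latency claim, assuming a roughly 2,000 km Paris-Helsinki path and a signal speed in optical fibre of about two thirds of c (both round figures, not taken from the slide):

```latex
t_{\text{one-way}} \;\approx\; \frac{d}{v_{\text{fibre}}}
                   \;\approx\; \frac{2\,000\ \text{km}}{\tfrac{2}{3}\times 300\,000\ \text{km/s}}
                   \;\approx\; 10\ \text{ms}
```

That is already three to four orders of magnitude above the few-microsecond latencies of a cluster interconnect, before routing, switching and protocol overhead are added, which is why tightly coupled codes do not distribute well over a grid.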

Applications for Grids
• Single-CPU jobs: job mix, many users, many serial applications; suitable for the grid (e.g. in universities and research centres)
• Array jobs: 100s/1000s of jobs, one user, one serial application, varying input parameters; suitable for the grid (e.g. parameter studies in optimization, CAE, genomics, finance); a small sketch of this pattern follows below
• Massively parallel jobs, loosely coupled: one job, one user, one parallel application, no/low communication, scalable; fine-tune for the grid (time-explicit algorithms, film rendering, pattern recognition)
• Parallel jobs, tightly coupled: one job, one user, one parallel application, high inter-process communication; not suitable for distribution over the grid, but for a parallel system in the grid (time-implicit algorithms, direct solvers, large linear-algebra equation systems)
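
Array jobs are the easiest of these classes to grid-enable because every task is independent. The sketch below is purely illustrative (the task function, parameter grid and file naming are invented, not from the talk): a grid or batch scheduler launches many copies of one script and hands each copy its own task index.

```python
# Array-job sketch: one independent task per parameter value.
# A grid or batch scheduler would launch N copies of this script and pass
# each copy its own task index (here taken from the command line); there is
# no communication between tasks, which is what makes the pattern grid-friendly.
import sys

def run_task(param):
    """Stand-in for the real serial application (simulation, scoring run, ...)."""
    return param * param

def main():
    task_id = int(sys.argv[1])               # e.g. 0 .. 999, assigned by the scheduler
    params = [0.1 * i for i in range(1000)]  # the parameter study
    result = run_task(params[task_id])
    # Each task writes its own output file; results are merged afterwards.
    with open("result_%04d.txt" % task_id, "w") as f:
        f.write("%f\n" % result)

if __name__ == "__main__":
    main()
```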

German D-Grid Project, part of the 100 Mio Euro e-Science initiative in Germany. Objectives of the e-Science Initiative:
• Building one grid infrastructure in Germany
• Combine existing German grid activities
• Development of e-science services for the research community
• Science Service Grid: “Services for Scientists”
• Important: sustainability
• Production grid infrastructure after the funding period
• Integration of new grid communities (2nd generation)
• Evaluation of new business models for grid services

e-Science Projects: D-Grid and Knowledge Management (diagram). Recognisable project labels include C3-Grid, HEP Grid, InGrid, MediGrid, TextGrid, ONTOVERSE, WIKINGER, WISENT, Im Wissensnetz, and eSciDoc, built on Generic Grid Middleware and Grid Services (VIOLA) and held together by the Integration Project.

DGI D-Grid Middleware Infrastructure (diagram)
• User layer: application development and user access (GAT API, GridSphere)
• High-level grid services: scheduling, workflow management, monitoring, data management, accounting, billing, user/VO management
• Basic grid services: UNICORE, LCG/gLite, Globus 4.0.1, security
• Resources in D-Grid: plug-ins, distributed compute resources, distributed data archives, data/software, network infrastructure

Key Characteristics of D-Grid
• Generic grid infrastructure for German research communities
• Focus on sciences and scientists, not industry
• Strong influence of international projects: EGEE, DEISA, CrossGrid, CoreGrid, GridLab, GridCoord, UniGrids, NextGrid, …
• Application-driven (80% of funding), not infrastructure-driven
• Focus on implementation, not research
• Phases 1 & 2: 50 MEuro, 100 research organizations

Conclusion: moving towards Sustainable Grid Infrastructures, or: Why Grids are here to stay!

Reason #1: Benefits
• Resource utilization: increase from 20% to 80+%
• Productivity: more work done in shorter time
• Agility: flexible actions and reactions
• On demand: get resources when you need them
• Easy access: transparent, remote, secure
• Sharing: enable collaboration over the network
• Failover: migrate/restart applications automatically
• Resource virtualization: access compute services, not servers
• Heterogeneity: platforms, OSs, devices, software
• Virtual organizations: build & dismantle on the fly

Reason #2: Standards. The Global Grid Forum
• Community-driven set of working groups that are developing standards and best practices for distributed computing efforts
• Three primary functions: community, standards, and operations
• Standards areas: Infrastructure, Data, Compute, Architecture, Applications, Management, Security, and Liaison
• Community areas: Research Applications, Industry Applications, Grid Operations, Technology Innovations, and Major Grid Projects
• A Community Advisory Board represents the different communities and provides input and feedback to GGF

Reason #3: Industry. EGA, the Enterprise Grid Alliance
• Industry-driven consortium to implement standards in industry products and make them interoperable
• Founding members: EMC, Fujitsu Siemens Computers, HP, NEC, Network Appliance, Oracle and Sun, plus 20+ associate members
• May 11, 2005: Enterprise Grid Reference Model v1.0
• Feb 2006: GGF and EGA signed a letter of intent to merge; a joint team is planning the transition, expected to be complete in summer 2006

Reason #4: OGSA, the ONE Open Grid Services Architecture
• OGSA integrates grid technologies with Web Services (OGSA => WS-RF)
• Defines the key components of the grid
• “OGSA enables the integration of services and resources across distributed, heterogeneous, dynamic, virtual organizations, whether within a single enterprise or extending to external resource-sharing and service-provider relationships.”

Reason #5: Quasi-Standard Tools. Example: The Globus Toolkit
• The Globus Toolkit provides four major functions for building grids:
1. secure environment (GSI)
2. discover resources (MDS)
3. submit jobs (GRAM)
4. transfer data (GridFTP)
(A command-line usage sketch follows below.)
(Courtesy Gridwise Technologies)
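
The four functions can be exercised from a simple script. The sketch below assumes the classic GT2-era command-line clients (grid-proxy-init, globus-job-run, globus-url-copy) are installed and on the PATH; the host name and file paths are placeholders, and exact options vary between installations.

```python
# Sketch of the four Globus Toolkit functions driven from a script, assuming
# the GT2-era command-line clients are available; host names and paths are
# placeholders for this illustration.
import subprocess

def run(cmd):
    print("$ " + " ".join(cmd))
    subprocess.check_call(cmd)

# 1. Secure environment (GSI): create a short-lived proxy credential
#    from the user's X.509 certificate (prompts for the passphrase).
run(["grid-proxy-init"])

# 2. Discover resources (MDS): normally queried with grid-info-search
#    (an LDAP-style search); omitted here because the filter syntax is
#    deployment-specific.

# 3. Submit a job (GRAM): run a command on a remote gatekeeper.
run(["globus-job-run", "gatekeeper.example.org", "/bin/hostname"])

# 4. Transfer data (GridFTP): copy a result file back to the local disk.
run(["globus-url-copy",
     "gsiftp://gatekeeper.example.org/tmp/result.dat",
     "file:///tmp/result.dat"])
```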

… and
• Seamless, secure, intuitive access to distributed resources & data
• Available as open source
• Features: intuitive GUI with single sign-on, X.509 certificates for AA, workflow engine for multi-site multi-step workflows, job monitoring, application support, secure data transfer, resource management, and more
• In production
(Courtesy: Achim Streit, FZJ)

UNICORE and Globus 2.4 (interoperability diagram): UNICORE Client, Gateway, NJS, IDB, UUDB and TSI on the UNICORE side; GRAM Client, MDS, GRAM Gatekeeper, GRAM Job-Manager, GridFTP Client/Server and Uspace on the Globus 2 side. A WS-Resource-based resource-management framework for dynamic resource information and resource negotiation ties them together: portal and command-line access through a WS-RF gateway and service registry, with WS-RF services for the workflow engine, file transfer, user management (AAA), network job supervisor, resource management, application support, and monitoring. (Courtesy: Achim Streit, FZJ)

Reason #6: Global Grid Community

#7: Projects, Initiatives, Testbeds, and Companies
• Projects/initiatives: ActiveGrid, BIRN, Condor-G, DEISA, DAME, EGA, EnterTheGrid, GGF, Globus Alliance, GridBus, GridLab, GridPortal, GRIDtoday, GriPhyN, I-WAY, Knowledge Grid, Legion, MyGrid, NMI, OGCE, OGSA, OMII, PPDG, Semantic Grid, TheGridReport, UK eScience, Unicore, …
• Testbeds: CO Grid, Compute-against-Cancer, D-Grid, DeskGrid, DOE Science Grid, EGEE, EuroGrid, European DataGrid, FightAIDS@home, Folding@home, GRIP, NASA IPG, NC BioGrid, NC Startup Grid, NC Statewide Grid, NEESgrid, NextGrid, Nimrod, Ninf, NRC-BioGrid, OpenMolGrid, OptIPuter, Progress, SETI@home, TeraGrid, UniGrids, Virginia Grid, WestGrid, White Rose Grid, …
• Companies: Altair, Avaki, Axceleon, Cassatt, DataSynapse, Egenera, Entropia, eXludus, GridFrastructure, GridIron, GridSystems, Gridwise, GridXpert, HP Utility Data Center, IBM Grid Toolbox, Kontiki, Metalogic, Noemix, Oracle 10g, Parabon, Platform, Popular, Powerllel/Aspeed, Proxima, Softricity, Sun N1, TurboWorx, United Devices, Univa, …

#8: FP6 Grid Technologies Projects. EU funding: 124 M€; Call 5 start: summer 2006; supporting the NESSI ETP and the grid community. The project map spans themes such as trust and security, grid services and business models, industrial simulation platforms and business experiments, user environments, mobile services, agents and semantics, service architectures, a Linux-based grid operating system, six virtual laboratories, and data/knowledge/semantics/mining. Projects shown include Challengers, NESSI-Grid, GridCoord, Degree, Grid@Asia, GridEcon, GridTrust, AssessGrid, ArguGrid, Provenance, SIMDAT, NextGRID, QosCosGrid, CoreGRID, g-Eclipse, GridComp, UniGrids, A-Ware, BREIN, Akogrimo, Grid4All, Gredia, BEinGRID, XtreemOS, HPC4U, Edutain@Grid, Sorma, KnowARC, K-WF Grid, Chemomentum, InteliGrid, DataMiningGrid, and OntoGrid, as specific support actions, integrated projects, networks of excellence, and specific targeted research projects. (Information Society and Media Directorate-General, European Commission, Unit Grid Technologies; GGF 16, Athens, 15 February 2006)

Reason #9: Enterprise Grids (example architecture diagram): SunRay access, browser access via GEP, and workstation access; an optional control network (Gbit-E); Myrinet-connected servers, blades and visualization nodes, Myrinet Linux racks, grid manager workstations, and Sun Fire Link; a data network (Gbit-E) with Gbit-E switches, V240/V880 NFS, a V880 QFS/NFS server, an FC switch and NAS/NFS, covering simple NFS, HA NFS, and scalable QFS/NFS storage.

Enterprise Grid Reference Architecture: the same layout viewed as three tiers: Access (SunRay access, browser access via GEP, workstation access), Compute (Myrinet-connected servers, blades and viz nodes, Myrinet Linux racks, grid manager workstations, Sun Fire Link), and Data (Gbit-E data network with Gbit-E switches, V240/V880 NFS, V880 QFS/NFS server, FC switch, NAS/NFS; simple NFS, HA NFS, scalable QFS/NFS).

1000s of Enterprise Grids in Industry
• Life Sciences: startup and cost efficient; custom research or limited-use applications; multi-day application runs (BLAST); exponential combinations; limited administrative staff; complementary techniques
• Electronic Design: time to market; fastest platforms, largest grids; license management; well-established application suite; large legacy investment; platform ownership issues
• Financial Services: market simulations; time IS money; proprietary applications; multiple platforms; multiple scenario execution; need instant results & analysis tools
• High Performance Computing: parallel reservoir simulations; geophysical ray tracing; custom in-house codes; large-scale, multi-platform execution

Reason #10: Grid Service Providers. Example: BT
• Inside the data centre, within the firewall: virtual use of the enterprise’s own IT assets
• The grid virtualiser engine inside the firewall opens up under-used ICT assets and improves TCO, ROI and application performance
• But an intra-enterprise grid is self-limiting: the pool of virtualised assets is restricted by the firewall and does not support inter-enterprise usage, so BT is focussing on a managed grid solution across enterprise WANs and LANs
• Pre-grid IT asset usage: 10-15%; post-grid IT asset usage: 70-75%
(Courtesy: Piet Bel, BT)

BT’s Virtual Private Grid (VPG) (diagram): virtualised IT assets and grid engines in the LANs and WANs of the participating enterprises, connected through a grid engine in the BT network. (Courtesy: Piet Bel, BT)

Reason #11: There will be a Market for Grids

General Observations on Grid Performance
• Today there are 100s of important grid projects around the world; GGF identifies about 15 research projects with major impact
• Most research grids focus on HPC and collaboration; most industry grids focus on utilization and automation
• Many grids are driven by user and application needs; few grid projects are driven by infrastructure research
• Few projects focus on performance and benchmarks, and performance is mostly seen at the job/computation/application level
• We need metrics and measurements that help us understand grids
• In a grid, application performance has three major areas of concern: system capabilities, network, and software infrastructure
• Evaluating performance in a grid is different from classic benchmarking, because grids are dynamically changing systems incorporating new components

The Grid Engine. Thank You! wgentzsch@d-grid.de. HPGC 2006, IPDPS, Rhodes Island, Greece, 29.4.2006