
  • Number of slides: 34

LHC Experiments and the PACI: A Partnership for Global Data Analysis
Harvey B. Newman, Caltech
Advisory Panel on Cyberinfrastructure, National Science Foundation, November 29, 2001
http://l3www.cern.ch/~newman/LHCGrids.PACI.ppt

Global Data Grid Challenge
"Global scientific communities, served by networks with bandwidths varying by orders of magnitude, need to perform computationally demanding analyses of geographically distributed datasets that will grow by at least 3 orders of magnitude over the next decade, from the 100 Terabyte to the 100 Petabyte scale [from 2000 to 2007]."

The Large Hadron Collider (2006-)
  • The next-generation particle collider: the largest superconducting installation in the world
  • Bunch-bunch collisions at 40 MHz, each generating ~20 interactions
    - Only one in a trillion may lead to a major physics discovery
  • Real-time data filtering: Petabytes per second down to Gigabytes per second
  • Accumulated data of many Petabytes/year
  • Large data samples explored and analyzed by thousands of globally dispersed scientists, in hundreds of teams

Four LHC Experiments: The Petabyte-to-Exabyte Challenge
  • ATLAS, CMS, ALICE, LHCb: Higgs + new particles; quark-gluon plasma; CP violation
  • Data stored: ~40 Petabytes/year and up; CPU: 0.30 Petaflops and up
  • 0.1 Exabyte (2007) to 1 Exabyte (~2012?) for the LHC experiments (1 EB = 10^18 Bytes)

Evidence for the Higgs at LEP at M ~ 115 GeV
The LEP program has now ended.

LHC: Higgs Decay into 4 Muons
1000X the LEP data rate: 10^9 events/sec, selectivity 1 in 10^13 (1 person in a thousand world populations)

LHC Data Grid Hierarchy
  • CERN/Outside resource ratio ~1:2; Tier 0 : (sum of Tier 1) : (sum of Tier 2) ~ 1:1:1
  • Online system (experiment, ~PByte/sec) → Tier 0+1 at CERN (700k SI95; ~1 PB disk; tape robot) at ~100-400 MBytes/sec
  • Tier 0+1 → Tier 1 centers (e.g. FNAL: 200k SI95, 600 TB; IN2P3, INFN, RAL) at ~2.5 Gbits/sec
  • Tier 1 → Tier 2 centers at ~2.5 Gbps
  • Tier 2 → Tier 3 institute servers (~0.25 TIPS, physics data cache) at 100-1000 Mbits/sec
  • Tier 4: workstations; physicists work on analysis "channels"; each institute has ~10 physicists working on one or more channels
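A back-of-the-envelope sketch (mine, not from the slides) of what the link figures above imply: how long it takes to move a dataset between tiers, assuming the 50% maximum link occupancy used elsewhere in this talk. The dataset sizes in the usage lines are examples only.

def transfer_days(dataset_tb, link_gbps, occupancy=0.5):
    """Days needed to move dataset_tb terabytes over a link_gbps link
    used at the given average occupancy (0..1)."""
    bits = dataset_tb * 1e12 * 8                      # dataset size in bits
    seconds = bits / (link_gbps * 1e9 * occupancy)    # effective line rate
    return seconds / 86400.0

if __name__ == "__main__":
    # e.g. shipping a 100 TB sample from Tier 0+1 to a Tier 1 over 2.5 Gb/s:
    print("100 TB over 2.5 Gb/s @ 50%%: %.1f days" % transfer_days(100, 2.5))
    # and a 1 TB analysis sample to an institute over a 622 Mb/s path:
    print("1 TB over 0.622 Gb/s @ 50%%: %.2f days" % transfer_days(1, 0.622))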

TeraGrid (NCSA, ANL, SDSC, Caltech) and StarLight: A Preview of the Grid Hierarchy and Networks of the LHC Era
  • StarLight: international optical peering point in Chicago (see www.startap.net)
  • DTF sites at Pasadena (Caltech), San Diego (SDSC), Urbana (NCSA/UIUC) and ANL, linked via an Abilene backplane (4 x 10 Gbps), multiple 10 GbE (Qwest and I-WIRE dark fiber), and I-WIRE OC-48 (2.5 Gb/s, Abilene)
  • Solid lines in place and/or available in 2001; dashed I-WIRE lines planned for Summer 2002
  • Chicago-area fabric: Starlight / NW Univ, multiple carrier hubs, Ill Inst of Tech, Univ of Chicago; Indianapolis (Abilene NOC)
  • Source: Charlie Catlett, Argonne

Current Grid Challenges: Resource Discovery, Co-Scheduling, Transparency
  • Discovery and efficient co-scheduling of computing, data handling, and network resources
    - Effective, consistent replica management
    - Virtual data: recomputation versus data transport decisions (see the toy sketch below)
  • Reduction of complexity in a "Petascale" world
    - "GA3": Global Authentication, Authorization, Allocation
    - VDT: transparent access to results (and data when necessary)
    - Location independence of the user analysis, Grid, and Grid-development environments
    - Seamless multi-step data processing and analysis: DAGMan (Wisconsin), MOP+IMPALA (FNAL)
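A toy illustration (my own, not a GriPhyN/VDT algorithm) of the "recomputation versus data transport" decision: given a derived dataset, regenerate it on locally free CPUs or pull an existing replica over the WAN, whichever is estimated to finish sooner. The cost model and the numbers in the usage line are invented for the example.

def recompute_cost_s(cpu_hours_needed, free_cpus):
    """Wall-clock seconds to regenerate the data on locally free CPUs."""
    return cpu_hours_needed * 3600.0 / max(free_cpus, 1)

def transfer_cost_s(size_gb, link_mbps, occupancy=0.5):
    """Wall-clock seconds to fetch an existing replica over the WAN."""
    return size_gb * 8e3 / (link_mbps * occupancy)

def plan(cpu_hours, free_cpus, size_gb, link_mbps):
    """Return ("recompute" | "transfer", estimated seconds)."""
    r = recompute_cost_s(cpu_hours, free_cpus)
    t = transfer_cost_s(size_gb, link_mbps)
    return ("recompute", r) if r < t else ("transfer", t)

if __name__ == "__main__":
    # 500 CPU-hours of simulation vs. fetching the 200 GB result over 155 Mb/s
    print(plan(cpu_hours=500, free_cpus=50, size_gb=200, link_mbps=155))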

CMS Production: Event Simulation and Reconstruction
  • "Grid-enabled," automated production at 12 sites worldwide: CERN, FNAL, Moscow, INFN, Caltech, UCSD, Imperial College, Bristol, Wisconsin, IN2P3, Helsinki, UFL (status ranging from fully operational to in progress / not yet operational)
  • Chain: simulation, then digitization with and without pile-up (PU), using common production tools (IMPALA) and GDMP

US CMS TeraGrid Seamless Prototype
  • Caltech/Wisconsin Condor/NCSA production
  • Simple job launch from Caltech (a sketch of this launch sequence follows below)
    - Authentication using the Globus Security Infrastructure (GSI)
    - Resources identified using the Globus Information Infrastructure (GIS)
  • CMSIM jobs (batches of 100, 12-14 hours, 100 GB output) sent to the Wisconsin Condor flock using Condor-G
    - Output files automatically stored in NCSA UniTree (GridFTP)
  • ORCA phase: read in and process jobs at NCSA
    - Output files automatically stored in NCSA UniTree
  • Future: multiple CMS sites; storage in Caltech HPSS also, using GDMP (with LBNL's HRM)
  • Animated flow diagram of the DTF prototype: http://cmsdoc.cern.ch/~wisniew/infrastructure.html
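A hedged sketch of the kind of launch-and-store sequence described above: write a Condor-G submit description, submit it toward a remote Globus gatekeeper, then stage the output to mass storage with GridFTP. The hostnames, paths and the CMSIM wrapper name are hypothetical; the Condor-G keywords and globus-url-copy usage reflect the tools of that era, not the prototype's actual scripts.

import subprocess, textwrap

def submit_cmsim_batch(batch_id, gatekeeper="condor.example.wisc.edu/jobmanager-condor"):
    # Build a Condor-G submit description for one batch of CMSIM events.
    submit = textwrap.dedent(f"""\
        universe        = globus
        globusscheduler = {gatekeeper}
        executable      = run_cmsim.sh
        arguments       = --batch {batch_id} --events 100
        output          = cmsim_{batch_id}.out
        error           = cmsim_{batch_id}.err
        log             = cmsim_{batch_id}.log
        queue
        """)
    fname = f"cmsim_{batch_id}.sub"
    with open(fname, "w") as f:
        f.write(submit)
    subprocess.run(["condor_submit", fname], check=True)   # valid GSI proxy assumed

def archive_output(local_file, storage_host="unitree.example.ncsa.edu"):
    # Final step of the prototype flow: store the output via GridFTP.
    subprocess.run(["globus-url-copy", f"file://{local_file}",
                    f"gsiftp://{storage_host}/cms/{local_file}"], check=True)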

Baseline BW for the US-CERN Link: HENP Transatlantic WG (DOE+NSF)
  • Transoceanic networking integrated with the TeraGrid, Abilene, regional nets and continental network infrastructures in the US, Europe, Asia, and South America
  • US-CERN plans: 155 Mbps to 2 x 155 Mbps this year; 622 Mbps in April 2002; DataTAG 2.5 Gbps research link in Summer 2002; 10 Gbps research link in ~2003

Transatlantic Net WG (HN, L. Price): Bandwidth Requirements [*]
  [*] Installed BW; a maximum link occupancy of 50% is assumed.
  The network challenge is shared by both next- and present-generation experiments.

Internet2 HENP Networking WG [*] Mission
  • To help ensure that the required
    - national and international network infrastructures,
    - standardized tools and facilities for high-performance and end-to-end monitoring and tracking, and
    - collaborative systems
    are developed and deployed in a timely manner, and used effectively to meet the needs of the US LHC and other major HENP programs, as well as the general needs of our scientific community.
  • To carry out these developments in a way that is broadly applicable across many fields, within and beyond the scientific community.
  [*] Co-Chairs: S. McKee (Michigan), H. Newman (Caltech); with thanks to R. Gardner and J. Williams (Indiana)

Grid R&D: Focal Areas for NPACI/HENP Partnership
  • Development of Grid-enabled user analysis environments
    - CLARENS (+IGUANA) project for portable Grid-enabled event visualization, data processing and analysis
    - Object integration: backed by an ORDBMS, and file-level virtual data catalogs
  • Simulation toolsets for systems modeling and optimization
    - For example: the MONARC system
  • Globally scalable agent-based realtime information marshalling systems
    - To face the next-generation challenge of dynamic global Grid design and operations
    - Self-learning (e.g. SONN) optimization
    - Simulation ("now-casting") enhanced: to monitor, track and forward-predict site, network and global system state
  • 1-10 Gbps networking development and global deployment
    - Work with the TeraGrid, STARLIGHT, Abilene, the iVDGL GGGOC, HENP Internet2 WG, Internet2 E2E, and DataTAG
  • Global collaboratory development: e.g. VRVS, Access Grid

CLARENS: A Data Analysis Portal to the Grid — Steenberg (Caltech)
  • A highly functional graphical interface, Grid-enabling the working environment for "non-specialist" physicists' data analysis
  • Clarens consists of a server communicating with various clients via the commodity XML-RPC protocol; this ensures implementation independence (a minimal client sketch follows below)
  • The server is implemented in C++ to give access to the CMS OO analysis toolkit
  • The server will provide a remote API to Grid tools:
    - Security services provided by the Grid (GSI)
    - The Virtual Data Toolkit: object collection access
    - Data movement between Tier centers using GSI-FTP
    - CMS analysis software (ORCA/COBRA)
  • The current prototype is running on the Caltech proto-Tier 2
  • More information at http://heppc22.hep.caltech.edu, along with a web-based demo
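Because Clarens speaks commodity XML-RPC, any standard client library can talk to such a server. The sketch below uses Python's standard xmlrpc client; the URL is a placeholder and the commented analysis calls are hypothetical illustrations of a remote API, not the actual Clarens method names.

import xmlrpc.client

# Placeholder endpoint; substitute the real Clarens server URL.
server = xmlrpc.client.ServerProxy("http://tier2.example.caltech.edu:8080/clarens")

# Standard XML-RPC introspection, if the server implements it:
print(server.system.listMethods())

# Hypothetical calls, shown only to illustrate the remote-API idea:
# session = server.authenticate(grid_proxy_pem)        # GSI-based security
# files   = server.file.ls("/store/higgs/4mu")         # object collection access
# hist    = server.analysis.plot(session, "muon_pt")   # ORCA/COBRA-backed analysis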

Modeling and Simulation: the MONARC System
  • Modelling and understanding current systems, their performance and limitations, is essential for the design of future large-scale distributed processing systems.
  • The simulation program developed within the MONARC (Models Of Networked Analysis At Regional Centers) project is based on a process-oriented approach to discrete event simulation. It is built on Java(TM) technology and provides a realistic modelling tool for such large-scale distributed systems.
  SIMULATION of Complex Distributed Systems
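To give a flavor of the modelling style (MONARC itself is a much richer Java tool; this is only a toy of my own), here is a minimal discrete-event model of jobs arriving at a regional-centre farm. The arrival rate, job length and farm size are invented parameters.

import heapq, random

def run(n_cpus=10, n_jobs=50, mean_job_h=2.0, seed=1):
    """Simulate n_jobs arriving at a farm of n_cpus and report completion times."""
    random.seed(seed)
    cpus = [0.0] * n_cpus                    # time (hours) at which each CPU is next free
    heapq.heapify(cpus)
    finish_times = []
    t_submit = 0.0
    for _ in range(n_jobs):
        t_submit += random.expovariate(5.0)  # jobs arrive ~5 per hour on average
        free_at = heapq.heappop(cpus)
        start = max(free_at, t_submit)       # job waits for a free CPU if necessary
        end = start + random.expovariate(1.0 / mean_job_h)
        heapq.heappush(cpus, end)
        finish_times.append(end)
    print("makespan %.1f h, mean completion %.1f h"
          % (max(finish_times), sum(finish_times) / len(finish_times)))

if __name__ == "__main__":
    run()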

MONARC SONN: 3 Regional Centres Learning to Export Jobs (Day 9)
  [Figure: three regional centres — CERN (30 CPUs), CALTECH (25 CPUs), NUST (20 CPUs) — connected by links of ~0.8-1.2 MB/s with 150-200 ms RTT (CERN-CALTECH: 1 MB/s, 150 ms RTT); <E> values of 0.83, 0.73 and 0.66; Day = 9]

Maximizing US-CERN TCP Throughput (S. Ravot, Caltech)
  TCP protocol study: limits
  • We determined precisely
    - the parameters which limit the throughput over a high-bandwidth, long-delay (170 ms) network (a worked bandwidth-delay example follows below)
    - how to avoid intrinsic limits and unnecessary packet loss
  Methods used to improve TCP
  • Linux kernel programming in order to tune TCP parameters
  • We modified the TCP algorithm
  • A Linux patch will soon be available
  Result: the current state of the art for reproducible throughput
  • 125 Mbps between CERN and Caltech
  • 135 Mbps between CERN and Chicago
  Status: ready for tests at higher bandwidth (622 Mbps) in Spring 2002
  [Figure: congestion window behavior of a TCP connection over the transatlantic line — (1) a packet is lost; (2) Fast Recovery (a temporary state to repair the loss), with new losses occurring when cwnd grows beyond 3.5 MByte; (3) back to slow start when Fast Recovery cannot repair the loss and the lost packet is detected by timeout (cwnd = 2 MSS). Reproducible 125 Mbps between CERN and Caltech/CACR.]
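The limit being tuned here is essentially the bandwidth-delay product: a TCP connection can have at most one window of data in flight per round trip, so throughput is capped at window/RTT. A small worked example of my own (not Ravot's code), using the 170 ms RTT quoted above:

def window_for_rate(rate_mbps, rtt_ms):
    """TCP window (MBytes) needed to sustain rate_mbps over a path with rtt_ms."""
    return rate_mbps * 1e6 * (rtt_ms / 1000.0) / 8 / 1e6

def rate_for_window(window_mbytes, rtt_ms):
    """Throughput (Mbps) achievable with a fixed window over a path with rtt_ms."""
    return window_mbytes * 1e6 * 8 / (rtt_ms / 1000.0) / 1e6

if __name__ == "__main__":
    rtt = 170.0  # ms, CERN-Caltech as quoted above
    print("Window to fill 622 Mbps: %.1f MB" % window_for_rate(622, rtt))      # ~13 MB
    print("Rate with a 64 KB window: %.1f Mbps" % rate_for_window(0.064, rtt)) # ~3 Mbps
    print("Rate with a 2.7 MB window: %.0f Mbps" % rate_for_window(2.7, rtt))  # ~127 Mbps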

Agent-Based Distributed System: JINI Prototype (Caltech/Pakistan)
  • Includes "Station Servers" (static) that host mobile "Dynamic Services"
  • Servers are interconnected dynamically to form a fabric in which mobile agents travel, with a payload of physics analysis tasks
  • The prototype is highly flexible and robust against network outages
  • Amenable to deployment on leading-edge and future portable devices (WAP, iAppliances, etc.): "the" system for the travelling physicist
  • The design and studies with this prototype use the MONARC Simulator, and build on SONN studies
  [Figure: Station Servers register with a Lookup Discovery Service; services are found via the Lookup Service, with service listeners receiving remote notifications; Station Servers exchange proxies]
  See http://home.cern.ch/clegrand/lia/

Globally Scalable Monitoring Service
  [Figure: farm monitors register with Lookup Services and are discovered via proxies; data is gathered by push & pull using rsh & ssh, existing scripts, RC and snmp; a Farm Monitor Client (or another service) accesses the Farm Monitor Service]
  • Component factory
  • GUI marshaling
  • Code transport
  • RMI data access
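A hedged sketch of the push & pull idea behind such a farm monitor (the real prototype is JINI/RMI-based; this is only an illustration of the data flow): pull metrics from existing scripts over ssh, then push the samples to whatever monitor endpoint has been discovered. The host names, the /proc/loadavg metric and the publish URL are placeholders.

import subprocess, time, json, urllib.request

FARM_NODES = ["node01.example.org", "node02.example.org"]   # placeholder farm

def pull_load(node):
    """Pull a metric with an existing command over ssh (the 'pull' path)."""
    out = subprocess.run(["ssh", node, "cat", "/proc/loadavg"],
                         capture_output=True, text=True, check=True)
    return float(out.stdout.split()[0])

def push(monitor_url, sample):
    """Push the collected sample to the discovered monitor service (the 'push' path)."""
    req = urllib.request.Request(monitor_url, data=json.dumps(sample).encode(),
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

if __name__ == "__main__":
    while True:
        sample = {node: pull_load(node) for node in FARM_NODES}
        push("http://monitor.example.org/publish", sample)
        time.sleep(60)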

Examples
  • GLAST meeting: 10 participants connected via VRVS (and 16 participants in audio only)
  • VRVS: 7300 hosts; 4300 registered users in 58 countries; 34 reflectors, 7 in Internet2; annual growth 250%
  • US CMS will use the CDF/KEK remote control room concept for Fermilab Run II as a starting point. However, we will (1) expand the scope to encompass a US-based physics group and US LHC accelerator tasks, and (2) extend the concept to a Global Collaboratory for realtime data acquisition + analysis

Next Round of Grid Challenges: Global Workflow Monitoring, Management, and Optimization
  • Workflow management: balancing policy versus moment-to-moment capability to complete tasks (a toy sketch of this balance follows below)
    - Balance high levels of usage of limited resources against better turnaround times for priority jobs
    - Goal-oriented, according to (yet-to-be-developed) metrics
  • Maintaining a global view of resources and system state
    - Global system monitoring, modeling, quasi-realtime simulation; feedback on the macro and micro scales
    - Adaptive learning: new paradigms for execution optimization and decision support (eventually automated)
  • Grid-enabled user environments
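A toy illustration (not any existing Grid scheduler) of the policy-versus-turnaround balance: rank waiting jobs by a mix of the owning group's remaining fair share, the job's own priority, and how long it has waited. The weights and the metric itself are ad hoc examples.

def rank(job, group_usage, group_quota, w_share=0.6, w_priority=0.3, w_age=0.1):
    """Higher score runs first; blends policy (share deficit) with turnaround terms."""
    share_deficit = max(0.0, group_quota[job["group"]] - group_usage[job["group"]])
    return (w_share * share_deficit
            + w_priority * job["priority"]
            + w_age * job["hours_waiting"])

if __name__ == "__main__":
    usage = {"higgs": 0.50, "qcd": 0.20}      # fraction of the farm used so far
    quota = {"higgs": 0.40, "qcd": 0.40}      # policy-allocated shares
    jobs = [
        {"id": 1, "group": "higgs", "priority": 0.9, "hours_waiting": 1},
        {"id": 2, "group": "qcd",   "priority": 0.3, "hours_waiting": 6},
    ]
    for j in sorted(jobs, key=lambda j: rank(j, usage, quota), reverse=True):
        print(j["id"], round(rank(j, usage, quota), 3))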

PACI, TeraGrid and HENP
  • The scale, complexity and global extent of the LHC data analysis problem are unprecedented
  • The solution of the problem, using globally distributed Grids, is mission-critical for frontier science and engineering
  • HENP has a tradition of deploying new, highly functional systems (and sometimes new technologies) to meet its technical and ultimately its scientific needs
  • HENP problems are mostly "embarrassingly" parallel, but potentially "overwhelming" in their data and network intensiveness
  • HENP/Computer Science synergy has increased dramatically over the last two years, focused on Data Grids
    - Successful collaborations in GriPhyN, PPDG, EU DataGrid
  • The TeraGrid (present and future) and its development program is scoped at an appropriate level of depth and diversity
    - to tackle the LHC and other "Petascale" problems over a 5-year time span
    - matched to the LHC time schedule, with full operations in 2007

Some Extra Slides Follow

Computing Challenges: LHC Example
  • Geographical dispersion: of people and resources
  • Complexity: the detector and the LHC environment
  • Scale: tens of Petabytes per year of data
  5000+ physicists, 250+ institutes, 60+ countries
  Major challenges associated with:
  • Communication and collaboration at a distance
  • Network-distributed computing and data resources
  • Remote software development and physics analysis
  R&D: new forms of distributed systems: Data Grids

Why Worldwide Computing? Regional Center Concept Goals
  • Managed, fair-shared access for physicists everywhere
  • Maximize total funding resources while meeting the total computing and data handling needs
  • Balance proximity of datasets to large central resources against regional resources under more local control
    - The Tier-N model
  • Efficient network use: higher throughput on short paths
    - Local > regional > national > international
  • Utilize all intellectual resources, in several time zones
    - CERN, national labs, universities, remote sites
    - Involving physicists and students at their home institutions
  • Greater flexibility to pursue different physics interests, priorities, and resource allocation strategies by region
    - And/or by common interests (physics topics, subdetectors, …)
  • Manage the system's complexity
    - Partitioning facility tasks, to manage and focus resources

HENP-Related Data Grid Projects
  Funded projects:
  • PPDG I (USA, DOE): $2M, 1999-2001
  • GriPhyN (USA, NSF): $11.9M + $1.6M, 2000-2005
  • EU DataGrid (EU, EC): €10M, 2001-2004
  • PPDG II (CP) (USA, DOE): $9.5M, 2001-2004
  • iVDGL (USA, NSF): $13.7M + $2M, 2001-2006
  • DataTAG (EU, EC): €4M, 2002-2004
  Project about to be funded:
  • GridPP* (UK, PPARC): >$15M?, 2001-2004
  Many national projects of interest to HENP:
  • Initiatives in the US, UK, Italy, France, NL, Germany, Japan, …
  • EU networking initiatives (Géant, SURFnet)
  • US Distributed Terascale Facility ($53M, 12 TFLOPS, 40 Gb/s network)
  * = in final stages of approval

Network Progress and Issues for Major Experiments
  • Network backbones are advancing rapidly to the 10 Gbps range: "Gbps" end-to-end data flows will soon be in demand
    - These advances are likely to have a profound impact on the major physics experiments' computing models
  • We need to work on the technical and political network issues
    - Share technical knowledge of TCP: windows, multiple streams, OS kernel issues; provide a user toolset
    - Getting higher bandwidth to regions outside Western Europe and the US: China, Russia, Pakistan, India, Brazil, Chile, Turkey, etc., even to enable their collaboration
  • Advanced integrated applications, such as Data Grids, rely on seamless, "transparent" operation of our LANs and WANs
    - With reliable, quantifiable (monitored), high performance
    - Networks need to become part of the Grid(s) design
    - New paradigms of network and system monitoring and use need to be developed, in the Grid context

Grid-Related R&D Projects in CMS: Caltech, FNAL, UCSD, UWisc, UFl
  • Installation, configuration and deployment of prototype Tier 2 centers at Caltech/UCSD and Florida
  • Large-scale automated distributed simulation production
    - DTF "TeraGrid" (micro-)prototype: CIT, Wisconsin Condor, NCSA
    - Distributed MOnte Carlo Production (MOP): FNAL
  • "MONARC" distributed systems modeling; simulation system applications to Grid hierarchy management
    - Site configurations, analysis model, workload
    - Applications to strategy development, e.g. inter-site load balancing using a "Self-Organizing Neural Net" (SONN)
    - Agent-based system architecture for distributed dynamic services
  • Grid-enabled object-oriented data analysis

MONARC Simulation System Validation
  [Figure: measurement vs. simulation for the CMS proto-Tier 1 production farm at FNAL and the CMS farm at CERN; mean measured value ~48 MB/s; muon <0.90>, jet <0.52>]

MONARC SONN: 3 Regional Centres Learning to Export Jobs (Day 0)
  [Figure: three regional centres — CERN (30 CPUs), CALTECH (25 CPUs), NUST (20 CPUs) — connected by links of ~0.8-1.2 MB/s with 150-200 ms RTT (CERN-CALTECH: 1 MB/s, 150 ms RTT); Day = 0]

US CMS Remote Control Room for LHC

Bandwidth-Greedy Grid-Enabled Object Collection Analysis for Particle Physics (SC2001 Demo)
Julian Bunn, Ian Fisk, Koen Holtman, Harvey Newman, James Patton
  [Diagram: a "Tag" database of ~140,000 small objects on the client; full event databases of ~40,000 and ~100,000 large objects on the Tier 2 servers; requests sent to the servers, with files returned by parallel tuned GSI FTP]
  The object of this demo is to show Grid-supported interactive physics analysis on a set of 144,000 physics events. Initially we start out with 144,000 small Tag objects, one for each event, on the Denver client machine. We also have 144,000 LARGE objects, containing full event data, divided over the two Tier 2 servers.
  • Using the local Tag event database, the user plots event parameters of interest
  • The user selects a subset of events to be fetched for further analysis
  • Lists of matching events are sent to the Caltech and San Diego Tier 2 servers
  • The Tier 2 servers begin sorting through their databases, extracting the required events
  • For each required event, a new large virtual object is materialized in the server-side cache; this object contains all tracks in the event
  • The database files containing the new objects are sent to the client using Globus FTP, and the client adds them to its local cache of large objects
  • The user can now plot event parameters not available in the Tag
  • Future requests take advantage of previously cached large objects on the client
  http://pcbunn.cacr.caltech.edu/Tier2_Overall_JJB.htm
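A hedged sketch of the demo's flow in code form (mine, not the SC2001 implementation): cut on the small Tag objects locally, then fetch only the matching large event objects from the Tier 2 servers, reusing anything already in the client-side cache. The tag fields, the server objects and their extract() call are placeholders.

local_cache = {}   # event_id -> large event object already on the client

def select_events(tag_db, cut):
    """Plot/cut on the local Tag objects; return the ids wanted in full."""
    return [t["event_id"] for t in tag_db if cut(t)]

def fetch_full_events(event_ids, servers):
    """Materialize and transfer only the events not already cached locally."""
    missing = [e for e in event_ids if e not in local_cache]
    for server in servers:                        # e.g. the Caltech and San Diego Tier 2s
        for eid, obj in server.extract(missing):  # placeholder for the server-side
            local_cache[eid] = obj                # materialize-and-ship step (GSI FTP)
    return [local_cache[e] for e in event_ids]

# Usage (hypothetical tag field and server handles):
#   ids    = select_events(tags, lambda t: t["n_muons"] >= 4)
#   events = fetch_full_events(ids, [caltech_t2, sandiego_t2])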