
Grid Team – see David… working system: Applications / Middleware / Software / Hardware

LHC Computing at a Glance
• The investment in LHC computing will be massive – the LHC Review estimated 240 MCHF, with 80 MCHF/year afterwards
• These facilities will be distributed – for political as well as sociological and practical reasons
• Europe: 267 institutes, 4603 users; elsewhere: 208 institutes, 1632 users

Rare Phenomena – Huge Background
• The Higgs vs. all interactions: 9 orders of magnitude!

CPU Requirements
• Complex events – large number of signals; “good” signals are covered with background
• Many events – 10^9 events/experiment/year, 1–25 MB/event raw data, several passes required
→ Need world-wide: 7×10^6 SPECint95 (3×10^8 MIPS)
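
As a rough cross-check (an estimate derived from the numbers on this slide, not a figure quoted on it), the raw-data volume these event rates imply is:

```latex
% Raw data per experiment per year, from 10^9 events/year at 1-25 MB/event:
\[
  10^{9}\ \text{events/year} \times (1\text{--}25)\ \text{MB/event}
  \;\approx\; 1\text{--}25\ \text{PB/year per experiment}.
\]
```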

LHC Computing Challenge
• 1 TIPS = 25,000 SpecInt95; PC (1999) = ~15 SpecInt95
• One bunch crossing per 25 ns, 100 triggers per second, each event is ~1 MByte
• Detector → Online System: ~PBytes/sec
• Online System → Offline Farm (~20 TIPS): ~100 MBytes/sec
• Offline Farm → Tier 0, CERN Computer Centre (>20 TIPS): ~100 MBytes/sec
• Tier 1: Regional Centres (US, French, Italian, RAL) – ~Gbits/sec or air freight
• Tier 2: Centres of ~1 TIPS (e.g. ScotGRID++ ~1 TIPS) – ~Gbits/sec
• Tier 3: Institute servers (~0.25 TIPS) with physics data cache – 100–1000 Mbits/sec
• Tier 4: Physicists’ workstations
• Physicists work on analysis “channels”; Glasgow has ~10 physicists working on one or more channels, and data for these channels is cached by the Glasgow server
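
The figures in the tier picture hang together; for example (simple arithmetic on the numbers above, not values stated on the slide):

```latex
% Offline data rate: trigger rate times event size
\[
  100\ \text{triggers/s} \times 1\ \text{MB/event} \;\approx\; 100\ \text{MB/s},
\]
% and the Tier-0 capacity expressed in 1999-era PCs:
\[
  20\ \text{TIPS} = 20 \times 25\,000 = 5\times10^{5}\ \text{SpecInt95}
  \;\Rightarrow\;
  \frac{5\times10^{5}}{15\ \text{SpecInt95 per PC (1999)}} \approx 33\,000\ \text{PCs}.
\]
```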

Starting Point

CPU Intensive Applications
• Numerically intensive simulations – minimal input and output data
• ATLAS Monte Carlo (gg → H → bb): 182 sec / 3.5 MB per event on a 1000 MHz Linux box
• Compiler tests (MFlops): Fortran (g77) 27, C (gcc) 43, Java (jdk) 41
• Standalone physics applications:
  1. Simulation of neutron/photon/electron interactions for 3D detector design
  2. NLO QCD physics simulation
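
For scale (an illustrative calculation from the 182 s/event figure above, not a number on the slide), one such CPU produces roughly:

```latex
\[
  \frac{86\,400\ \text{s/day}}{182\ \text{s/event}} \approx 475\ \text{events/day},
  \qquad
  475 \times 3.5\ \text{MB} \approx 1.7\ \text{GB/day of output per CPU}.
\]
```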

Timeline (by quarter, 2002–2005)
• Prototype of Hybrid Event Store (Persistency Framework)
• Hybrid Event Store available for general users
• Applications: distributed production using grid services
• Full Persistency Framework
• Distributed end-user interactive analysis
• First Global Grid Service (LCG-1) available
• LCG-1 reliability and performance targets
• Grid “50% prototype” (LCG-3) available
• LHC Global Grid TDR
• ScotGRID: ~300 CPUs + ~50 TBytes

ScotGRID Processing nodes at Glasgow
• 59 IBM xSeries 330: dual 1 GHz Pentium III, 2 GB memory
• 2 IBM xSeries 340: dual 1 GHz Pentium III, 2 GB memory, dual ethernet
• 3 IBM xSeries 340: dual 1 GHz Pentium III, 2 GB memory, 100 + 1000 Mbit/s ethernet
• 1 TB disk
• LTO/Ultrium tape library
• Cisco ethernet switches
ScotGRID Storage at Edinburgh
• IBM xSeries 370: PIII Xeon, 32 × 512 MB RAM
• 70 × 73.4 GB IBM FC hot-swap HDD
Griddev test rig at Glasgow
• 4 × 233 MHz Pentium II
CDF equipment at Glasgow
• 8 × 700 MHz Xeon IBM xSeries 370, 4 GB memory, 1 TB disk
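
For reference, the aggregates implied by the configurations above (simple multiplication of the listed parts, not totals quoted on the slide):

```latex
\[
  (59 + 2 + 3)\ \text{dual-CPU nodes} \;\Rightarrow\; 128 \times 1\ \text{GHz Pentium III at Glasgow},
\]
\[
  70 \times 73.4\ \text{GB} \approx 5.1\ \text{TB of disk},
  \qquad
  32 \times 512\ \text{MB} = 16\ \text{GB of RAM at Edinburgh}.
\]
```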

EDG TestBed 1 Status
• Web interface showing the status of the (~400) servers at TestBed 1 sites
• Grid extended to all experiments

Glasgow within the Grid

GridPP – a £17m, 3-year project funded by PPARC (http://www.gridpp.ac.uk)
• CERN – LCG (start-up phase), funding for staff and hardware: £5.67m
• DataGrid: £3.78m
• Tier-1/A: £3.66m
• Applications (start-up phase): £1.99m – BaBar, CDF+D0 (SAM), ATLAS/LHCb, CMS, (ALICE), UKQCD
• Operations: £1.88m
• EDG – UK contributions: Architecture, Testbed-1, Network Monitoring, Certificates & Security, Storage Element, R-GMA, LCFG, MDS deployment, GridSite, SlashGrid, Spitfire, Optor, GridPP Monitor Page (• = Glasgow element on the original slide)

Overview of SAM

Spitfire – Security Mechanism
• An HTTP + SSL request with a client certificate arrives at the Servlet Container (SSLServletSocketFactory)
• TrustManager checks against the Trusted CAs store: is the certificate signed by a trusted CA?
• Revoked Certs repository: has the certificate been revoked?
• Security Servlet / Authorization Module: does the user specify a role? If not, find the default; check the role against the Role repository (role ok?)
• Translator Servlet: map the role to a connection id via the role–connection mappings and request a connection ID from the Connection Pool to the RDBMS
A sketch of how this flow might look in servlet code follows below.
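
The flow above is only a diagram in the original slides. The sketch below shows how such a certificate-checked, role-mapped request might look in a Java servlet; it is a minimal illustration under stated assumptions, not Spitfire's actual code, and the RoleRepository, ConnectionMappings and ConnectionPool types are hypothetical stand-ins for the slide's Role repository, role–connection mappings and Connection Pool. The trusted-CA check itself happens earlier, during the SSL handshake (the SSLServletSocketFactory/TrustManager step), before the servlet runs.

```java
import java.io.IOException;
import java.security.cert.X509CRL;
import java.security.cert.X509Certificate;
import java.sql.Connection;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** Hypothetical stand-ins for the components named on the slide. */
interface RoleRepository {
    String defaultRoleFor(String subjectDn);
    boolean isAllowed(String subjectDn, String role);
}
interface ConnectionPool {
    Connection borrow(String connectionId) throws Exception;
}
interface ConnectionMappings {
    String connectionIdFor(String role);   // "map role to connection id"
    ConnectionPool pool();
}

/** Minimal sketch of a Spitfire-style security servlet (illustrative only). */
public class SecurityServletSketch extends HttpServlet {

    private X509CRL revokedCerts;        // revoked-certificates repository
    private RoleRepository roles;        // role repository
    private ConnectionMappings mappings; // role -> connection-id mappings

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // The SSL layer has already verified the chain against the trusted CAs;
        // the client certificate is exposed through this standard attribute.
        X509Certificate[] chain =
                (X509Certificate[]) req.getAttribute("javax.servlet.request.X509Certificate");
        if (chain == null || chain.length == 0) {
            resp.sendError(HttpServletResponse.SC_UNAUTHORIZED, "No client certificate");
            return;
        }
        X509Certificate cert = chain[0];
        String subject = cert.getSubjectX500Principal().getName();

        // Has the certificate been revoked?
        if (revokedCerts != null && revokedCerts.isRevoked(cert)) {
            resp.sendError(HttpServletResponse.SC_FORBIDDEN, "Certificate revoked");
            return;
        }

        // Does the user specify a role? If not, find the default; then check it.
        String role = req.getParameter("role");
        if (role == null) {
            role = roles.defaultRoleFor(subject);
        }
        if (!roles.isAllowed(subject, role)) {
            resp.sendError(HttpServletResponse.SC_FORBIDDEN, "Role not permitted");
            return;
        }

        // Map the role to a connection id and request a connection to the RDBMS.
        String connectionId = mappings.connectionIdFor(role);
        try (Connection db = mappings.pool().borrow(connectionId)) {
            // ... execute the translated query against the RDBMS ...
            resp.setStatus(HttpServletResponse.SC_OK);
        } catch (Exception e) {
            throw new ServletException(e);
        }
    }
}
```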

Optor – replica optimiser simulation
• Simulate a prototype Grid
• Input site policies and experiment data files
• Introduce a replication algorithm (sketched below):
  – Files are always replicated to the local storage
  – If necessary, the oldest files are deleted
• Even a basic replication algorithm significantly reduces network traffic and program running times
• New economics-based algorithms are under investigation
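
As a concrete illustration of the policy just described (“always replicate to local storage, delete the oldest files if necessary”), here is a minimal sketch in Java. The class and method names are illustrative and not taken from Optor; it models a single site's storage rather than the full simulation.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative model of the basic replication policy on the slide: every file a
 *  job reads is replicated to local storage, and when the store is full the
 *  oldest replicas are deleted first. Names are illustrative only. */
public class OldestFirstReplicaStore {

    private final long capacityBytes;
    private long usedBytes = 0;
    // Insertion-ordered map: iteration starts at the oldest replica.
    private final Map<String, Long> replicas = new LinkedHashMap<>();

    public OldestFirstReplicaStore(long capacityBytes) {
        this.capacityBytes = capacityBytes;
    }

    /** Returns true if the file was already local, false if it had to be
     *  fetched over the network and replicated into local storage. */
    public boolean access(String fileName, long sizeBytes) {
        if (replicas.containsKey(fileName)) {
            return true;                       // served from the local replica
        }
        // Files are always replicated to the local storage;
        // if necessary, the oldest files are deleted to make room.
        Iterator<Map.Entry<String, Long>> oldestFirst = replicas.entrySet().iterator();
        while (usedBytes + sizeBytes > capacityBytes && oldestFirst.hasNext()) {
            usedBytes -= oldestFirst.next().getValue();
            oldestFirst.remove();
        }
        replicas.put(fileName, sizeBytes);
        usedBytes += sizeBytes;
        return false;
    }

    public static void main(String[] args) {
        OldestFirstReplicaStore store = new OldestFirstReplicaStore(10_000_000_000L); // 10 GB site store
        System.out.println(store.access("run1.events", 3_500_000_000L)); // false: fetched and replicated
        System.out.println(store.access("run1.events", 3_500_000_000L)); // true: now local
    }
}
```

The slide's point is that even a fixed rule this simple already cuts network traffic and running times; the economics-based algorithms mentioned above are refinements of the eviction decision.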

Prototypes: real world… simulated world…
• Tools: Java Analysis Studio over TCP/IP
• Instantaneous CPU usage
• Scalable architecture
• Individual node info

Glasgow Investment in Computing Infrastructure
• Long tradition
• Significant departmental investment – £100,000 refurbishment (just completed)
• Long-term commitment (LHC era ~15 years)
• Strong system management team – underpinning role
• New Grid Data Management Group – fundamental to Grid development
• ATLAS/CDF/LHCb software
• Alliances with Glasgow Computing Science, Edinburgh, IBM

Summary (to be updated…)
• Grids are (already) becoming a reality
• Mutual interest – the ScotGRID example
• Glasgow emphasis on:
  – DataGrid core development – Grid Data Management – CERN + UK lead
  – Multidisciplinary approach – University + regional basis
  – Applications: ATLAS, CDF, LHCb
  – Large distributed databases – a common problem/challenge: CDF → LHC, genes → proteins
(Slide images: detector for the LHCb experiment, detector for the ALICE experiment, ScotGRID)