
“National Knowledge Network and Grid - A DAE Perspective”
B. S. Jagadeesh, Computer Division, BARC
India and CERN: Visions for the future

Our approach to Grids has been evolutionary:
• ANUPAM supercomputers
• To achieve supercomputing speeds - at least 10 times faster than the sequential machines available in BARC
• To build a general-purpose parallel computer - catering to a wide variety of problems - general-purpose compute nodes and interconnection network
• To keep the development cycle short - use readily available, off-the-shelf components

ANUPAM performance over the years (chart): from 0.034 GFLOPS in 1991 to 47,000 GFLOPS in 2012.

‘ANUPAM-ADHYA’ - 47 TeraFlops

Complete solution to scientific problems by exploiting parallelism for:
• Processing (parallelization of computation)
• I/O (parallel file system)
• Visualization (parallelized graphics pipeline / Tiled Display Unit)

Snapshots of Tiled Image Viewer

We now have:
• A large tiled display
• Rendering power with distributed rendering
• Scalability to many pixels and polygons
• An attractive alternative to high-end graphics systems
• A deep and rich scientific visualization system

Post-tsunami: Nagappattinam, India (Lat: 10.7906° N, Lon: 79.8428° E). This one-meter resolution image was taken by Space Imaging's IKONOS satellite on Dec. 29, 2004, just three days after the devastating tsunami hit. 1 m IKONOS image, acquired 29 December 2004. Credit: Space Imaging.

So, we need lots of resources: high-performance computers, visualization tools, data-collection tools, sophisticated laboratory equipment, etc.
“Science has become mega-science.”
“The laboratory has to be a collaboratory.”
The key concept is “sharing while respecting administrative policies”.

GRID CONCEPT (diagram): a user submits work at a User Access Point, a Resource Broker dispatches it to Grid Resources, and the Result is returned to the user.
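The broker flow in the diagram can be sketched as a toy simulation (an illustration only, not DAE/NKN or any real middleware; all names are hypothetical):

```python
# Toy model of the slide's grid concept: a user submits a job at an
# access point, a resource broker picks a grid resource, and the
# result flows back to the user.
def broker_submit(job, resources):
    """Dispatch the job to the least-loaded resource and return
    (resource name, job result)."""
    chosen = min(resources, key=lambda r: r["load"])
    chosen["load"] += 1  # account for the newly placed job
    return chosen["name"], job["run"]()

# Two hypothetical grid resources with different current loads.
resources = [
    {"name": "clusterA", "load": 3},
    {"name": "clusterB", "load": 1},
]

site, result = broker_submit({"run": lambda: 6 * 7}, resources)
print(site, result)  # the broker picks the less-loaded clusterB
```

Real brokers match jobs against many more attributes (queue depth, data locality, administrative policy), but the access-point / broker / resource separation is the same.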

Grid concept (another perspective):
• Many jobs per system -- early days of computation
• One job per system -- RISC / workstation era
• Two systems per job -- client-server model
• Many systems per job -- parallel / distributed computing
• All of the above viewed as a single unified resource -- Grid computing

LHC Computing
• The LHC (Large Hadron Collider) has become operational and is churning out data.
• Data rates per experiment of >100 Mbytes/sec.
• >1 Pbyte/year of storage for raw data per experiment.
• Computationally, the problem is so large that it cannot be solved by a single computer centre.
• World-wide collaboration and analysis: desirable to share computing and analysis throughout the world.
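The two LHC numbers on the slide are consistent with each other, as a quick back-of-the-envelope check shows (the seconds-per-year figure is an assumed accelerator-physics rule of thumb, not from the slide):

```python
# Check that ~100 MB/s of raw data per experiment implies roughly
# a petabyte per year, as the slide states.
MB_PER_SEC = 100           # data rate per experiment (from the slide)
SECONDS_PER_YEAR = 1e7     # assumed effective data-taking time per year

total_bytes = MB_PER_SEC * 1e6 * SECONDS_PER_YEAR
petabytes = total_bytes / 1e15
print(f"~{petabytes:.0f} PB of raw data per experiment per year")
```

So sustaining the quoted rate for an operational year fills about a petabyte of raw storage, matching the ">1 Pbyte/year" bullet.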

LEMON architecture

QUATTOR
• Quattor is a tool suite providing automated installation, configuration and management of clusters and farms.
• Highly suitable for installing, configuring and managing Grid computing clusters correctly and automatically.
• At CERN, currently used to auto-manage >2000 nodes with heterogeneous hardware and software applications.
• Centrally configurable and reproducible installations, with run-time management for functional and security updates to maximize availability.

Please visit: http://gridview.cern.ch/GRIDVIEW/

National Knowledge Network
• Nationwide networks like ANUNET, NICNET and SPACENET did exist.
• The idea is to synergize, integrate and leverage them to form national information highways.
• Take everyone on board (5,000 institutes of learning) and provide quality of service. Outlay of Rs 100 crores for a proof of concept.
• Envisaged applications: countrywide classrooms, telemedicine, e-governance, Grid technology…

NKN TOPOLOGY: All PoPs are covered by at least two NLDs.

NKN Topology (Core / Distribution / Edge diagram). Current status: 78 institutes connected over 15 PoPs; work in progress for 27 PoPs and 550+ institutes.

Achieve Higher Availability

NKN connectivity (diagram): educational institutions, NTRO, CERT-In, research labs (CSIR/DAE/ISRO/ICAR), EDUSAT, National Internet Exchange Points (NIXI), MPLS clouds, the Internet, connections to global networks (e.g. GEANT), broadband clouds, and national/state data centres.

DAE-wide Applications on NKN
• DAE-Grid: Grid resources at BARC, IGCAR, RRCAT and VECC
• WLCG and GARUDA
• Videoconferencing: with NIC, IITs, IISc
• Collab-CAD: collaborative design of sub-assemblies of the prototype 500 MW Fast Breeder Reactor by NIC, BARC and IGCAR
• Remote classrooms: amongst different training schools

ANUNET Wide Area Network (diagram): INSAT-3C, 8 carriers of 768 kbps each on a quarter transponder (9 MHz). Sites include BARC Trombay; CTCRS, BARC Anushaktinagar, Mumbai; BARC Hospital; AERB; HWB; BRIT; NPCIL; DAE, Mumbai; ECIL, Hyderabad; TIFR; NFC; CCCM; IRE; TMC; IGCAR, Kalpakkam; VECC, Kolkata; FACL; CAT, Indore; BARC, Mysore; Gauribidnur; Tarapur; BARC, Mt. Abu; AMD, Secunderabad; Shillong; Saha Institute; IMS, Chennai; IOP; IPR; HRI; TMH; HWB, Bhubaneshwar; UCIL sites; Allahabad; Ahmedabad; MRPU; Manuguru; Kota; Jaduguda; Navi Mumbai. Note: sites shown in yellow ovals are connected over dedicated landlines.
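The aggregate satellite capacity implied by the carrier plan above is easy to compute:

```python
# Aggregate ANUNET VSAT capacity from the slide's figures:
# 8 carriers of 768 kbps each on a quarter transponder.
carriers = 8
kbps_per_carrier = 768

total_kbps = carriers * kbps_per_carrier
print(f"Aggregate capacity: {total_kbps} kbps (~{total_kbps / 1000:.3f} Mbps)")
```

That ~6 Mbps total for the whole WAN puts into perspective the later move to leased terrestrial links and the 1 Gbps NKN connectivity.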

ANUNET Leased-Links Plan for the 11th Plan (map): Delhi, Mount Abu, Shillong, Allahabad, Kota, Jaduguda, Indore, Gandhinagar, Kolkata, Tarapur, Bhubaneshwar, Mumbai, Hyderabad, Vizag, Manuguru, Mysore, Gauribidnur, Chennai, Kalpakkam. Existing and proposed leased links all over India.

DAE & NKN
• BARC conducted the first NKN workshop to train network administrators in Dec 2008
• DAE-Grid
• Countrywide classrooms of HBNI
• Sharing of databases
• Disaster recovery systems (planned)
• DAE gets access to resources in the country
• DAE stands to gain in that we get access to resources on GEANT (34 European countries), GLORIAD (Russia, China, USA, Netherlands, Canada, …), TEIN3 (Australia, China, Japan, Singapore, …)
Applications…

Logical Communication Domains through NKN (diagram): NKN-General (national collaborations); NKN-Internet (Grenoble, France); WLCG collaboration; intranet segment of BARC; internet segment of BARC; NKN router; ANUNET (DAE units); BARC-IGCAR Common Users Group (CUG); national Grid computing (C-DAC, Pune).

LAYOUT OF VIRTUAL CLASSROOM (front and back elevations): 55" LED displays, projection screen, and HD cameras covering the teacher and the class.

An Example of a High-Bandwidth Application

A Collaboratory (ESRF, Grenoble). Collaboratory?

Depicts a one-degree oscillation photograph of crystals of the HIV-1 PR M36I mutant, recorded by remotely operating the FIP beamline at the ESRF, and the OMIT density for the mutated residue. (Credits: Dr. Jean-Luc Ferrer, Dr. Michel Pirochi & Dr. Jacques Joley, IBS/ESRF, France; Dr. M. V. Hosur & colleagues, Solid State Physics Division & Computer Division, BARC) E-Governance?

COLLABORATIVE DESIGN: Collaborative design of reactor components. Credits: IGCAR, Kalpakkam; NIC, Delhi; Computer Division, BARC.

DAEGrid UTKARSH: 4 sites, 6 clusters
• Utkarsh - dual-processor, quad-core, 80 nodes (BARC)
• Aksha-Itanium - dual-processor, 10 nodes (BARC)
• Ramanujam - dual-core, dual-processor, 14 nodes (RRCAT)
• Daksha - dual-processor, 8 nodes (RRCAT)
• Igcgrid-Xeon - dual-processor, 8 nodes (IGCAR)
• Igcgrid2-Xeon - quad-core, 16 nodes (IGCAR)
~800 cores in total. DAEGrid connectivity is through NKN at 1 Gbps.
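The "800 cores" figure can be roughly checked from the cluster list; the cores-per-node values below are inferred from each cluster's description (an assumption, since the slide does not state them explicitly):

```python
# Rough check of the DAEGrid "800 cores" figure.
# (nodes, assumed cores per node) per cluster, inferred from the slide.
clusters = {
    "Utkarsh (BARC)":        (80, 8),  # dual processor, quad core -> 8
    "Aksha-Itanium (BARC)":  (10, 2),  # dual processor -> 2
    "Ramanujam (RRCAT)":     (14, 4),  # dual core, dual processor -> 4
    "Daksha (RRCAT)":        (8, 2),   # dual processor -> 2
    "Igcgrid-Xeon (IGCAR)":  (8, 2),   # dual processor -> 2
    "Igcgrid2-Xeon (IGCAR)": (16, 4),  # quad core -> 4
}

total = sum(nodes * cores for nodes, cores in clusters.values())
print(f"Estimated DAEGrid cores: {total}")
```

Under these assumptions the sum comes to roughly 800, consistent with the slide's total.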

Categories of Grid Applications
• Distributed supercomputing - example: ab-initio molecular dynamics; characteristics: large CPU / memory required
• High throughput - example: cryptography; characteristics: harness idle cycles
• On demand - example: remote experiments; characteristics: cost effectiveness
• Collaborative - example: data exploration; characteristics: support communication
• Data intensive - example: CERN LHC; characteristics: information from large data sets / provenance

Moving Forward…
“From ‘fire and forget’ to ‘assured quality of service’”
- Effect of process migration in distributed environments
- A novel distributed-memory file system
- Implementation of network swap for performance enhancement
- Redundancy issues to address the failure of resource brokers
- Service-centre concept using gLite middleware
- Development of meta-brokers to direct jobs amongst middlewares
All of the above lead towards better quality of service and simplicity in Grid usage… clouds?

Acknowledgements
All colleagues from Computer Division, BARC, who helped in preparing the presentation material are gratefully acknowledged. We also acknowledge NIC-Delhi, IGCAR-Kalpakkam and the IT Division, CERN, Geneva.
THANK YOU