
Center for Information Services and High Performance Computing (ZIH) – Introduction to High Performance Computing at ZIH: Getting started. Zellescher Weg 16, Trefftz-Bau (HRSK-Anbau), Room HRSK/151, Tel. +49 351-463-39871. Guido Juckeland (guido.juckeland@tu-dresden.de)

Agenda
• Before you can get on – Paperwork
• When you first get on – Using ssh, VPN, environment modules, available file systems
• Things to know about the hardware you are/will be using
Slides at: http://wwwpub.zih.tu-dresden.de/~juckel/slides

Before you can get on – Paperwork

Project Proposal
• No login without a valid HPC project!
• Every HPC user account has to be associated with at least one project
• The project has to be endorsed (headed) by a Saxon research group leader
• Applications (pdf): http://tu-dresden.de/die_tu_dresden/zentrale_einrichtungen/zih/dienste/formulare
• Online project application: https://formulare.zih.tu-dresden.de/antraege/antrag_form.html
• A small amount of CPU time can be granted immediately
• The proposal is peer reviewed and decided upon (peers from all over Saxony)
• Projects have a lifetime – you need to reapply for follow-up projects

Login Application
• Paperwork at: http://www.tu-dresden.de/zih/hpc
• You need the signature of your project leader on the application
• What you get:
– ZIH standard login (e-mail account, personal storage, anti-virus software, VPN access, WLAN access via Eduroam, …)
– Account on the HPC systems you applied for
– Automatic entry in the ZIH HPC mailing lists (announcements and forum)
• Accounts usually expire every year at the end of October! You need to extend your login!

When you first get on – Using ssh, VPN, environment modules, available file systems

Access from within the TUD network
• You are on the TUD campus (you have an IP address that starts with 141.30 or 141.76)
• Simply "ssh/sftp" to the machine address (e.g. ssh deimos.hrsk.tu-dresden.de)
• No web access or similar (so do not try mars.hrsk.tu-dresden.de in your browser)
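
From a campus machine a plain ssh or sftp session is all you need; a minimal sketch (the user name "mustermann" and the file name are only placeholders):

    # interactive login to Deimos from within the TUD network
    ssh mustermann@deimos.hrsk.tu-dresden.de

    # copy results back to your workstation with sftp (or scp)
    sftp mustermann@deimos.hrsk.tu-dresden.de
    sftp> get results.tar
    sftp> quit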

Access from outside the TUD network
• You are sitting at an MPI or FhG institute (or at home)
• There is no direct access from outside the TUD local network (hardware firewall)
• 2 options (a command sketch follows this list):
• Double ssh connection (tough for file transfers)
– First ssh to one of the central ZIH login servers (login1.zih.tu-dresden.de or login2.zih.tu-dresden.de) using your standard ZIH login
– ssh to the desired HPC machine from there
• Use a ZIH VPN connection (preferred solution)
– Download and install a ZIH VPN client (more information under: http://tu-dresden.de/die_tu_dresden/zentrale_einrichtungen/zih/dienste/datennetz_dienste/vpn)
– Establish a VPN connection using your ZIH standard login
– Then open an ssh/sftp connection from your computer to the desired HPC system
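
A minimal sketch of the double-ssh route (user name, file name and target path are placeholders; with the VPN established you can skip the intermediate hop and connect to the HPC machine directly):

    # hop 1: central ZIH login server, reachable from the internet
    ssh mustermann@login1.zih.tu-dresden.de

    # hop 2: from the login server on to the HPC machine
    ssh mustermann@deimos.hrsk.tu-dresden.de

    # file transfer over the double hop is awkward; with an active
    # ZIH VPN connection you can scp/sftp directly instead:
    scp results.tar mustermann@deimos.hrsk.tu-dresden.de:/fastfs/mustermann/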

SSH fingerprints of the HRSK machines
mars.hrsk.tu-dresden.de:
1024 cf:89:20:a8:aa:36:3f:1f:7b:5e:f4:8e:57:99:15:35 ssh_host_dsa_key.pub
1024 1a:cc:4e:4f:ff:5f:b0:bc:25:9d:84:9f:39:12:d7:6d ssh_host_key.pub
1024 08:3b:da:02:1d:ff:a8:cf:26:27:96:16:86:07:a2:a9 ssh_host_rsa_key.pub
neptun.hrsk.tu-dresden.de:
1024 b0:0b:2c:3d:66:d9:d2:49:ec:fc:d1:89:6d:5b:4c:f7 ssh_host_key.pub
deimos10[1-4].hrsk.tu-dresden.de:
1024 48:f7:d6:37:d0:cf:b0:f4:49:67:b6:1f:c1:44:7d:9f ssh_host_dsa_key.pub
1024 5f:11:98:8a:29:20:c8:65:78:75:d7:a0:bb:d4:74:93 ssh_host_key.pub
1024 22:42:72:c6:38:57:71:03:90:72:2b:2c:72:e7:d0:cd ssh_host_rsa_key.pub
phobos.hrsk.tu-dresden.de:
1024 91:bd:d0:b0:8b:60:75:40:bc:4a:54:9d:54:2a:dc:b8 ssh_host_dsa_key.pub
1024 1b:1c:29:1f:d2:5c:a9:0b:ac:e6:cf:28:1c:4f:92:8f ssh_host_key.pub
1024 b8:14:54:9a:f5:06:f8:d5:da:cb:51:a8:21:fb:db:bd ssh_host_rsa_key.pub
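
On the very first connection ssh shows you the fingerprint of the host key it received; compare it against the list above before answering "yes". Roughly, this looks as follows (the exact wording depends on your OpenSSH version; user name is a placeholder):

    $ ssh mustermann@deimos.hrsk.tu-dresden.de
    The authenticity of host 'deimos.hrsk.tu-dresden.de' can't be established.
    RSA key fingerprint is 22:42:72:c6:38:57:71:03:90:72:2b:2c:72:e7:d0:cd.
    Are you sure you want to continue connecting (yes/no)? yes

    # once logged in, you can also print a host key fingerprint yourself:
    ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key.pub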

You are on – what do you find?
• HRSK: standard Linux enterprise installation (SuSE SLES 10 SP2)
• Phobos: SuSE SLES 9 SP3
• SX-6: SUPER-UX (special UNIX environment)
• Similar to a desktop Linux (some special programs missing)
• GCC, automake, and all the standard tools are there
• Only a limited number of GUI tools is available (usually not needed)
• Caution: the amount of CPU time on the login nodes is limited to 5 minutes
– This can cause problems for large file transfers; contact us in this case
• 3rd-party software, and anything else that is not in the Linux distribution, is provided via environment modules

Modules for environment variables
• Non-standard software is installed into special paths (not in the standard search path for applications)
• Modules set environment variables so that applications and libraries find their binaries/shared objects
• Show installed modules: module avail
• Show currently loaded modules: module list
• Load a module: module load <module>
• Unload a module: module rm <module>
• Exchange modules: module switch <module1> <module2>
An example session follows below.
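
A short sketch of a typical module session (the module names used here are only examples; run module avail to see what is actually installed on your system):

    # what is installed?
    module avail

    # load a compiler and an MPI library (example names, may differ)
    module load intel
    module load mvapich2

    # what is active right now?
    module list

    # swap the Intel compiler for PGI
    module switch intel pgi

    # drop a module again
    module rm mvapich2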

HRSK software: installed software on the HRSK systems (not complete, not all on all systems):
• Compilers: GCC, Intel, Pathscale, PGI
• Debuggers: ddd, ddt, idb
• Libraries: acml, atlas, blacs, blas, boost, hypre, lapack, mkl/clustermkl, netcdf, petsc
• Applications and tools: Abaqus, Ansys, CFX, Comsol, CP2K, Fluent, Gamess, Gaussian, Gromacs, Hmmer, Lammps, Totalview, Valgrind, LS-Dyna, Maple, Mathematica, Matlab, MSC, Namd, Numeca, Octave, R, Tecplot

File system layout

Altix 4700
CXFS – the same on all Altix partitions:
• work [ /work ]
– contains /work/home[0-9]/
– 8.8 TB
– backup
• fastfs [ /fastfs ]
– 60 TB
– DMF, no backup
– fastest file system
scratch [ /scratch ]
• local – only visible per Altix partition
• fast alternative to /tmp

Deimos
Lustre:
• work [ /work ]
– contains /work/home[0-9]/
– global, 16 TB
– backup
• fastfs [ /fastfs ]
– global, 48 TB
– no backup
– fastest available file system
Local (ext3):
• scratch [ /scratch ]
– local per node (about 40 GB per core)
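
To get a quick overview of these file systems and how full they are, a plain df on the mount points listed above is enough (a sketch, nothing Deimos-specific):

    # size and usage of the global and node-local file systems
    df -h /work /fastfs /scratch

    # your home directory lives below /work
    echo $HOME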

Deimos (2)
NFS:
• /hpc_fastfs
– /fastfs from the Altix
– dmf commands are also available here to access the archive
– Deimos-only users also have access here to archive data
• /hpc_work
– /work from the Altix
– incl. the home directories there

Project directories
• By default you are in a user group that has the same name as your project
• Your project has a shared "Home" and "Fastfs" directory for you to share applications and data
• There are symbolic links in your home directory to the project directories (see the sketch below)
• Please use them and do not install software into each of your project members' home directories!
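
A sketch of how this looks in practice (the project name "myproject" and the link names are only placeholders; the actual names follow your project):

    # the links to the shared project directories sit in your home directory
    ls -l ~
    # e.g.  myproject        -> /work/myproject
    #       myproject_fastfs -> /fastfs/myproject

    # install shared tools once, below the project directory,
    # instead of into every member's personal home directory
    cp -r mytool/ ~/myproject/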

DMF commands
DMF copies data back and forth automatically; manual invocation is possible to migrate data between disk and tape (see the sketch below).
• dmput
– moves data from disk to tape
– "-r" also removes the data from disk after moving
– moving is done in the background
• dmls
– extended ls
– displays the location of the file data (ONL=disk, OFL=tape, DUL=on disk and tape, MIG=currently moving to tape, UNM=currently being recalled to disk)
• dmget
– recalls data from tape to disk
Use dmput/dmget calls on full directories if needed!
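
A short sketch of these manual DMF calls (paths and file names are placeholders; dmls accepts the usual ls-style options):

    # where does the data of my files currently live? (ONL/OFL/DUL/...)
    dmls -l /fastfs/myproject/results/

    # push a finished archive to tape and free the disk space
    dmput -r /fastfs/myproject/results_2008.tar

    # later: bring it back to disk before reading it
    dmget /fastfs/myproject/results_2008.tar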

I/O recommendations
• Temporary data -> /fastfs
• Compile in /scratch
• Source code etc. -> home
• Checkpoints -> /fastfs
• Archive results as tar files (no need to compress) to /fastfs or /hpc_fastfs and run dmput -r on them afterwards (see the sketch below)
• Parallel file systems are bad for small I/O (e.g. compilation)!
• Large I/O bandwidth with
– lots of clients
– lots of processes (that may even write to the same file)
– large I/O blocks
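
A minimal sketch of the recommended archiving workflow (directory and file names are placeholders):

    # pack the results without compression, straight onto the DMF-backed file system
    tar -cf /fastfs/myproject/run_042.tar ./run_042/

    # hand the archive over to the tape archive and free the disk space
    dmput -r /fastfs/myproject/run_042.tar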

Things to know about the hardware you are/will be using

SGI Altix 4700 (5 partitions)
• 1024 Intel Itanium II / Montecito CPUs, 1.6 GHz, 18 MB L3 cache (2048 cores)
• 13.1 TFlop/s peak performance
• 6.6 TB memory (4 GB/core)
• NumaLink 4 interconnect
• Local disks + 68 TB SAN
• SuSE SLES 10 incl. SGI ProPack 4
• Intel compilers and tools
• Vampir
• Allinea DDT debugger
• Batch system: LSF

CPU
• Intel Itanium II (Montecito), ca. 1.7 billion transistors
• IA-64 (not x86!)
• 1.6 GHz, dual-core
• Per core:
– L1: 16 KB data (no floating-point data) / 16 KB instructions
– L2: 256 KB data / 1024 KB instructions
– L3: 9 MB
• Instruction bundles of 128 bit, 3 instructions per bundle
• No out-of-order execution
• Performance depends extremely on the compiler (do not use GCC!)
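
In practice this means loading a vendor compiler through the module system and building with it instead of gcc; a sketch (the module name and options are examples and may differ on the system):

    # load the Intel compiler (example module name)
    module load intel

    # build with optimization; icc/ifort instead of gcc/gfortran
    icc   -O3 -o mycode mycode.c
    ifort -O3 -o mysim  mysim.f90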

Connection to local memory and the rest of the system (block diagram): the Itanium II socket connects at 10.7 GB/s to the SHUB 2.0 hub, which drives the local DDR2 DIMMs and attaches to the rest of the system via NumaLink 4 at 2 x 6.4 GB/s.

The whole system architecture
• 1 chip (2 cores) per blade
• 8 blades per IRU
• 4 IRUs per rack
• 32 racks
• 1024 chips, 2048 cores, spread over 5 partitions
• One partition = 1 computer (1 operating system instance)

jupiter – Topology (topology diagram; slide courtesy of SGI)

Altix partitions
• On all partitions: 4 CPUs are set aside for the operating system
• mars:
– 384 GB main memory
– 32 processors for login
– 346 processors for batch operation
• jupiter, saturn, uranus:
– 2 TB main memory
– 506 CPUs for batch operation
• neptun:
– 124 processors for interactive use
– 2 FPGAs
– 4 graphics boards

User's view on the Altix
• Login via SSH -> terminal emulation
• Boot CPU set with 4 processors
• SuSE Enterprise Server 10 SP2, standard Linux kernel
• The batch system places user requests on the rest of the available processors (also on the other partitions)
(Diagram: access via ssh through the firewall to the login partition mars; from there LSF dispatches jobs to jupiter, saturn and uranus as well as to neptun with its FPGAs and graphics boards.)
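
Batch jobs are handed to LSF with bsub; a minimal sketch of a job script (resource values, file names and the program name are placeholders, and the queue setup is site-specific):

    #!/bin/bash
    #BSUB -J myjob           # job name
    #BSUB -n 16              # number of CPU cores
    #BSUB -W 02:00           # wall-clock time limit (hh:mm)
    #BSUB -o myjob.%J.out    # output file, %J = job id

    # the actual program (placeholder)
    ./mycode

Submit the script with "bsub < myjob.lsf" and check its state with "bjobs".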

Linux Networx PC-Farm (Deimos)
• 1292 AMD Opteron x85 dual-core CPUs (2.6 GHz)
• 726 compute nodes with 2, 4 or 8 CPU cores
• 2 GiByte main memory per core
• 2 Infiniband interconnects (MPI and I/O fabric)
• 68 TByte SAN storage
• 70, 150 or 290 GByte scratch disk per node
• OS: SuSE SLES 10
• Batch system: LSF
• Compilers: Pathscale, PGI, Intel, GNU
• 3rd-party applications: Ansys 100, CFX, Fluent, Gaussian, LS-DYNA, Matlab, MSC, …

Deimos – Partitions
• 2 master nodes
– not accessible for users, PC farm management
• 4 login nodes
– 4-core nodes
– accessible with DNS round robin under deimos.hrsk.tu-dresden.de
• Single, dual and quad nodes
– 1, 2 or 4 CPUs
– 4, 8 or 16 GiByte main memory (24 quads with 32 GiByte)
– 80, 160 or 300 GByte local disks
• Set up in phase 1 and phase 2 nodes
– identical hardware
– differences in the connection to the MPI and the I/O fabric (later)

Deimos – layout of a single-CPU node (diagram): one AMD Opteron 185 with 4 GiByte memory, connected via HyperTransport to the peripheral devices (Infiniband, Ethernet, disk).

Deimos – layout of a dual-CPU node (diagram): two AMD Opteron 285, each with 4 GiByte memory, coupled via HyperTransport; the peripheral devices (Infiniband, Ethernet, disk) are attached via HyperTransport as well.

Deimos – layout of a quad-CPU node (diagram): four AMD Opteron 885, each with 4 GiByte memory, connected to each other via HyperTransport; the peripheral devices (Infiniband, Ethernet, disk) hang off one of the HyperTransport links.

Deimos Infiniband layout (rough sketch, diagram): every node is attached to both the MPI network and the I/O network.

Deimos MPI fabric
• 3 288-port Voltaire ISR 9288 IB switches with 4x Infiniband ports
• Switch 1 (Rack 05) serves all phase 1 nodes, Switch 2 (Rack 20) the phase 2 duals and quads, Switch 3 (Rack 25) the phase 2 singles; the three switches are connected in a row with 30x inter-switch links.

Deimos I/O fabric
Tree structure with
• 1 Voltaire ISR 9288 IB core switch with 192 4x Infiniband ports (Rack 07)
• 36 passive 24-port Mellanox IB switches (4x) as leaf switches
(Diagram: the 24-port Mellanox leaf switches of phase 1 and phase 2 connect the nodes upward to the Voltaire core switch.)