
  • Number of slides: 35

Academic Compute Cloud Provisioning and Usage AAA Project
Peter Kunszt, ETH/SystemsX.ch, October 23, 2012

Project Goals
• How to extend current cluster services using cloud technology?
• Support new application models (MapReduce, specialized servers).
• Test real applications.
• Understand performance implications.
1. Define Service Models: how to move to cloud-like service-orientation models.
2. Define Business Models: how to accommodate pay-per-use and OpEx vs. CapEx, how to plan an academic private cloud, and how to use and offer public clouds.
3. Run real applications: run a regular, a compute-intensive, and a data-intensive application on the cloud.

Project Goals (continued)
• Provide input to the mid- and long-term strategy for cluster and cloud infrastructure at ETH and UZH.
• Disseminate results broadly in Switzerland to interested parties in academia and clouds (workshop at project end).

Project Organization
SWITCH AAA Project Lead: Peter Kunszt
UZH: Sergio Maffioletti, Riccardo Murri, Christian Panse, Tyanko Alekseyev, Antonio Messina
ETHZ: Olivier Byrde + Brutus team members, Sandro Mathys
Software: Peter Kunszt, SyBIT team, FGCZ, Malmström group, Guido Capitani, others as needed
Business Model: Dean Flanders, Markus Eurich, consultants

Motivation
• Today: World of Products
  – Hardware and software are bought as products.
  – Users buy, set up, install, configure, and use them.
• Evolving into: World of Services
  – Software and services are bought directly as apps.
  – Users make use of what they need immediately.
Users will buy more services in the future, not just products. These services will often be in the cloud. We too want to offer services, not just products.

DEFINITION Cloud Attributes: When do we talk about a cloud?
• Self-service, On-demand, Cost transparency
  – Access to immediately available resources, paying for usage only. No long-term commitments. No up-front investments needed. Operational expenses only.
• Elasticity, Multi-tenancy, Scalability
  – Grow and shrink the size of a resource on request. Sharing with other users without impacting each other. Economies of scale.

Definitions
• Self-service: A consumer can unilaterally provision computing capabilities, such as server time and network storage, without requiring human interaction.
• On-demand: As needed, at the time when needed, with automatic provisioning.
• Cost transparency: Accounting of actual usage is transparent to both the user and the service provider, measured in corresponding terms (hours of CPU time, GB per month, MB transferred, etc.).
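
A minimal sketch of what cost transparency implies in practice: meter each quantity in its own unit and bill it at a published rate, itemized so both sides see the same numbers. The rates below are invented for illustration, not actual provider prices.

```python
# Hypothetical unit rates -- illustrative only, not real provider prices.
RATES = {
    "cpu_hours": 0.05,      # per hour of CPU time
    "gb_months": 0.10,      # per GB stored per month
    "mb_transfer": 0.0001,  # per MB transferred
}

def bill(usage):
    """Itemize charges per metered quantity, so the user sees exactly
    what was measured and what each unit cost."""
    items = {k: round(v * RATES[k], 2) for k, v in usage.items()}
    return items, round(sum(items.values()), 2)

items, total = bill({"cpu_hours": 1000, "gb_months": 500, "mb_transfer": 20000})
print(items)  # one line item per measured unit
print(total)
```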

Definitions
• Elastic: Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand.
• Multi-tenant: The provider’s computing resources are pooled to serve multiple consumers, with resources dynamically assigned and reassigned according to consumer demand.
• Scalable: To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.
http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf
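
The "outward and inward" scaling in the elasticity definition reduces to a simple rule: size the pool to the demand, within fixed bounds. The job counts and capacities below are made up for illustration.

```python
def autoscale(queued_jobs, jobs_per_instance=10, min_inst=1, max_inst=100):
    """Toy elasticity rule: grow or shrink the instance pool so it
    matches queued demand, clamped to [min_inst, max_inst]."""
    wanted = -(-queued_jobs // jobs_per_instance)  # ceiling division
    return max(min_inst, min(max_inst, wanted))

print(autoscale(95))    # scale outward under load -> 10
print(autoscale(4))     # scale inward when demand drops -> 1
print(autoscale(5000))  # "unlimited" to the consumer, capped in reality -> 100
```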

HPC Pyramid
[Pyramid diagram: computing needs (vertical axis) vs. number of users (horizontal axis). From top: CSCS; Local Cluster (e.g. ETH Brutus); Servers / Mini-clusters; Laptop, Desktop, iPad.]

Relation to Cloud: As User (extension)
[Same pyramid diagram, extended with a Cloud block: the cluster tiers can burst usage into the cloud.]

Today, university clusters do not make use of the cloud:
• Technical details to be investigated:
  – Bursting the cluster into the cloud
    • Networking?
    • User management?
    • File system?
• Cloud-compatible licenses for commercial products are often not available.
• No billing mechanism to bill cluster users for pay-per-use services.
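
Bursting ultimately comes down to a scheduling decision: which queued jobs stay on the cluster and which overflow to rented cloud nodes. A deliberately simplified sketch of that split (the policy and names are invented; a real scheduler would also have to answer the networking, licensing, and billing questions above):

```python
def split_queue(jobs, local_free_slots):
    """Fill free local slots first; the overflow 'bursts' to the cloud."""
    return jobs[:local_free_slots], jobs[local_free_slots:]

jobs = [f"job-{i}" for i in range(8)]
local, cloud = split_queue(jobs, local_free_slots=5)
print(local)  # first 5 jobs stay on the cluster
print(cloud)  # remaining 3 would be sent to cloud instances
```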

Relation to Cloud: As Provider
[Same pyramid diagram, extended with a Cloud block: the cluster is exposed to the cloud, and usage is accounted and charged.]

Not clear how to be a cloud provider with a university cluster:
• A university cluster is not self-service.
• Capital expenses, not just pay-per-use.
• Long-term commitment.
• Not extensible on-demand, not elastic.
• Sharing with others only according to policies.
• More stringent terms of use; needs an account.
• We have examples to look at: SDSC, Cornell, Oslo.

Infrastructure and Platform as a Service
[Diagram from www.cloudadoption.org: the classic approach today walks from START through Infrastructure, Platform, and Software to FINISH; IaaS, PaaS, and SaaS each shortcut part of that path, with up to 95% time savings.]

Software & Apps run on platforms, NOT infrastructure (www.cloudadoption.org)

Cloud Stack
[Stack diagram: CLIENTS (users or portals; can directly use each layer) access a User Interface or Machine Interface; beneath these sit Software (SaaS), Services / Platform (PaaS), and Compute / Storage / Network Infrastructure (IaaS), all on HARDWARE.]
DEFINITION
• SaaS = Software as a Service: scientific / office / business / etc. software as a service. Interactive or programmable.
• PaaS = Platform as a Service: programming and deployment frameworks. Integrated programmable high-level services for composition.
• IaaS = Infrastructure as a Service: virtual or hosted hardware for HPC, compute, storage, network, and specialized servers (memory, GPU, DB). Any kind of infrastructure for any of the stacks.

Who can make use of what
[Diagram: User and Portal on top; SaaS, PaaS, IaaS, and Hardware layered beneath.]
• Users may use any service.
• Portals may use any service.
• SaaS may or may not be built on top of PaaS or IaaS.
• PaaS may or may not be built on top of IaaS.

DEFINITION Public, Private, Hybrid Clouds
• Private Cloud: own infrastructure only; in-house or hosted; internal use or for sale; full control of the cloud stack, accounting, etc.
• Public Cloud: offered by partner organizations or cloud providers; remote cloud resources on-demand; only operational expenses; no control of the cloud stack, dependency on an external partner.
• Hybrid Cloud: a private cloud connected to a public cloud across the institutional boundary; constraints on one's own stack, which needs to interoperate with the public cloud.

Goal: Understand the relationships...
• ...in terms of virtual servers
• ...in terms of storage
• ...in terms of networking

Goal: How to evolve the HPC service...
• ...to be able to offer a Platform as a Service.
• ...to be able to make use of public clouds seamlessly (hybrid model, cloud bursting).

Goal: New Software Services

Goal: New Business Models
• Cannot charge at full cost if we want to be the service provider (competitive advantage).
• Internal and external views.
• An efficient, fair, feasible, and generally accepted funding and charging model.
• New opportunities should not require changing existing business procedures for existing infrastructure (evolution, not revolution).
• A transparent financial accounting mechanism.

Status: Information and Survey
• We collected a lot of information and conducted a survey on existing solutions.
• Choices (we need to limit ourselves):
  – OpenStack
  – VMware
  – HP Matrix

Cloud Stack Comparison Matrix

OpenStack Distribution Comparison

Public IaaS Comparison

Status: Infrastructure 1
• ETH: HP CloudSystem Matrix testbed, operational as of THIS WEEK
  – 8 Intel and 8 AMD blades
  – 128 GB memory per blade
  – 10 TB of 3PAR storage
• The HP Matrix cloud software is fixed.
• This is on RENT; we have to give it back.

Status: Infrastructure 2
• ETH: Build our own from new components.
  – 16 standard cluster nodes, diskless
  – 128 GB RAM on each node
  – Very fast storage (SSD-based) for VM images
• Attach standard NAS storage from ETH.
• Cloud stack: OpenStack or VMware.
• Will be here in 2 weeks.
• This remains at ETH after the project.

Status: Infrastructure 3
• University of Zurich: Recycle existing components.
  – Set of old, heterogeneous cluster nodes
  – Cloud filesystem using local node storage (technologies will be evaluated):
    • GlusterFS
    • Ceph

Status: Software
• Use cases are defined and chosen.
• MapReduce (Hadoop)
  – Existing software deployment: Crossbow (genomics)
  – New development (proteomics)
• Compute-intensive
  – GAMESS
  – Rosetta
• Data-intensive
  – HCS (High Content Screening) image analysis
• Servers
  – Matlab, R, CLC Bio, etc. servers
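
The MapReduce use case follows the programming model Hadoop implements: a mapper emits key/value pairs and a reducer aggregates them per key. A self-contained, in-process sketch of that model, here counting length-3 fragments of sequencing reads (the task is invented for illustration; Crossbow's actual Hadoop pipeline is far more involved):

```python
from collections import defaultdict

def mapper(read, k=3):
    """Map step: emit (fragment, 1) for every length-k substring of a read."""
    for i in range(len(read) - k + 1):
        yield read[i:i + k], 1

def reducer(pairs):
    """Reduce step: sum the emitted counts per key."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

reads = ["GATTACA", "TTACAGA"]
counts = reducer(pair for read in reads for pair in mapper(read))
print(counts["TTA"])  # -> 2: the fragment occurs once in each read
```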

Status: Business Model
• Several models are being worked out:
  – Shareholder model: one-time fee for TFLOPS or TB
  – Subscription model: yearly fee
  – Pay-per-use model
• Self-service options:
  – Very detailed, like Amazon
  – High-level ‘virtual cluster’ or PaaS
  – Top-level SaaS user gateways
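
The subscription and pay-per-use models above differ mainly in where the break-even point lies for a given user. A toy comparison with invented prices (not actual ETH/UZH figures):

```python
def yearly_cost(cpu_hours, subscription_fee=5000.0, rate_per_cpu_hour=0.05):
    """Return (subscription cost, pay-per-use cost) for one year of usage.
    Both prices are hypothetical."""
    return subscription_fee, cpu_hours * rate_per_cpu_hour

def cheaper_model(cpu_hours):
    subscription, pay_per_use = yearly_cost(cpu_hours)
    return "subscription" if subscription < pay_per_use else "pay-per-use"

print(cheaper_model(50_000))   # light user -> pay-per-use
print(cheaper_model(500_000))  # heavy user -> subscription
```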

Lots of Interactions
• Cloud providers: IBM, Amazon, CloudSigma, HP, Google
• Software providers: VMware, HP, Dell, OpenStack flavors (Piston, ...)
• Universities: SWITCH, ZHAW, SDSC, Cornell, Imperial College, U Oslo, Zaragoza

Next Steps
• Cloud-burst the cluster into our own cloud and into Amazon (reproducing VM-MAD):
  – Startup and teardown times
  – Management tests
  – Performance
• Use cases are set up on the infrastructure.
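
Measuring the startup and teardown times above amounts to timestamping around the provisioning calls. A generic sketch; `provision` is a stand-in for whatever cloud API is actually used (e.g. an EC2 or OpenStack client), with a sleep simulating the delay:

```python
import time

def timed(fn, *args):
    """Run fn and return (result, elapsed seconds)."""
    start = time.monotonic()
    result = fn(*args)
    return result, time.monotonic() - start

def provision(n_vms):
    """Stand-in for a real provisioning call -- replace with the cloud API."""
    time.sleep(0.01)  # simulate startup delay
    return [f"vm-{i}" for i in range(n_vms)]

vms, startup = timed(provision, 4)
print(len(vms), round(startup, 3))
```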

SDCD 2012: Supporting Science with Cloud Computing
• November 19, 2012, University of Bern, http://www.swing-grid.ch/sdcd2012/
• The EcoCloud Project [EPFL: Anne Wiggins]
• Academic Compute Cloud Project at ETH [ETH/SystemsX: Peter Kunszt]
• From Bare-Metal to Cloud [ZHAW/ICClab: Andrew Edmonds]
• Review of CERN Data Center Infrastructure [CERN: Gavin McCance]
• Big Science in the Public Clouds: Watching ATLAS Proton Collisions at CloudSigma [CloudSigma: Michael Higgins]
• Supporting Research with Flexible Computational Resources [University of Oxford: David Wallom]
• The iPlant Collaborative: Science in the Cloud for Plant Biology [University of Arizona/iPlant: Edwin Skidmore]
• Tiny Particle within Huge Data [ETH: Christoph Grab]
• Roundtable discussion: Cloud Strategies and Thoughts for Researchers in Switzerland