
Power and Performance Management of Virtualized Computing Environments via Lookahead Control

Dara Kusic¹, Jeffrey O. Kephart², James E. Hanson², Nagarajan Kandasamy¹, and Guofei Jiang³
1 - Drexel University, Philadelphia, PA 19104
2 - IBM T. J. Watson Research Center, Hawthorne, NY 10532
3 - NEC Labs America, Princeton, NJ 08540

Presented by Tongping Liu

OUTLINE
• Motivation and problem statement
• Description of the experimental testbed
• Problem formulation and controller design
• Performance results
• Conclusions

DATA-CENTER ENERGY COSTS
• Server energy consumption is growing at 9% per year
• Data centers are projected to surpass the airline industry in CO₂ emissions by 2020
[Figure: carbon dioxide emissions as a percentage of the world total, comparing data centers, airlines, shipyards, and steel plants; and carbon emissions per year (metric tons of CO₂), comparing data centers with Argentina, the Netherlands, and Malaysia]
McKinsey & Co. Report: http://uptimeinstitute.org/content/view/168/57

SERVER UTILIZATION IN DATA CENTERS
• Server utilization averages about 6%, accounting for idle servers
• Up to 30% of servers are idle!
[Figure: scatter plot of peak daily server utilization (%) versus average daily server utilization (%)]
McKinsey & Co. Report: http://uptimeinstitute.org/content/view/168/57

VIRTUALIZATION AS THE ANSWER
• Performance-isolated platforms, called virtual machines (VMs), allow resources (e.g., CPU, memory) to be shared on a single server
• Enables consolidation of online services onto fewer servers
• Increases per-server utilization and mitigates “server sprawl”
• Enables on-demand computing, a provisioning model where resources are dynamically provisioned as per workload demand

Technique (efficiency impact):
• Selectively turn off core components to increase remaining unit efficiency: 3-5%
• Deploy virtualization for existing and new demand: 25-30%
• Implement free cooling: 0-15%
• Introduce greener and more power-efficient servers: 10-20%

McKinsey & Co. Report: http://uptimeinstitute.org/content/view/168/57

PROBLEM STATEMENT
• We address combined power and performance management in a virtualized computing environment
– The problem is posed as one of sequential optimization under uncertainty and solved using limited lookahead control (LLC)
– The notion of risk is encoded explicitly in the problem formulation
• Summary of main results
– A server cluster managed using LLC saves 26% in power-consumption costs over a 24-hour period when compared to an uncontrolled system
– Power savings are achieved with very few SLA violations (1.6% of the total number of requests)

OUTLINE
• Motivation and problem statement
• Description of the experimental testbed
• Problem formulation and controller design
• Performance results
• Conclusions

THE EXPERIMENTAL TESTBED
• The testbed is a two-tier architecture with front-end application servers and back-end databases
• It hosts two online services (Gold and Silver)
• Servers are virtualized
• Performance goals
– Minimize power consumption
– Minimize SLA violations
• We target the application and the database tiers

EXPERIMENTAL SYSTEM
• Six Dell servers (models 2950 and 1950) comprise the experimental testbed
• Virtualization of the CPU and memory is enabled by VMware ESX Server 3.0
• Virtual machines run SUSE Linux Enterprise Server 10
• Control directives use the VMware API, Linux shell commands, and IPMI
• The Silver application is Trade 6 only; the Gold application is Trade 6 plus extra CPU load

Host name | CPU speed | CPU cores | Memory
Apollo    | 2.3 GHz   | 8         | 8 GB
Bacchus   | 2.3 GHz   | 8         | 8 GB
Chronos   | 1.6 GHz   | 8         | 4 GB
Demeter   | 1.6 GHz   | 8         | 4 GB
Eros      | 1.6 GHz   | 8         | 4 GB
Poseidon  | 2.3 GHz   | 8         | 8 GB

CHARACTERISTICS OF THE INCOMING WORKLOAD
• We assume a session-less workload, i.e., incoming requests are independent of each other
• The transaction mix is fixed to a constant proportion of browse/buy requests
• The workload to the computing system is time-varying and shows significant variability over short time periods

APPLICATION ENVIRONMENT
• Online services are enabled by enterprise applications
• Trade 6 is an example
– It is a transaction-based stock trading application from IBM
– It can be hosted across one or more servers in a multi-tier architecture
[Diagram: web clients connect to the WebSphere Application Server (Trade servlets, TradeAction, Trade services, Trade server pages), which is backed by a DB2 database]

OUTLINE
• Motivation and problem statement
• Description of the experimental testbed
• Problem formulation and controller design
• Performance results
• Conclusions

PROBLEM FORMULATION
• The power/performance management problem is posed as a dynamic resource provisioning problem under dynamic operating constraints
• Objectives
– Maximize the profit generated by the system (i.e., minimize SLA violations and the power-consumption cost)
• Decisions to be optimized
– Number of servers to turn on or off
– Number of VMs to provision to each service
– The CPU share given to each VM
– How the incoming workload is distributed across servers

PROBLEM FORMULATION (Contd.)
• Dollars generated (revenue)
– Obtained as per a (nonlinear) reward-refund curve specified by the SLA (a sketch of such a curve follows below)
[Figure: reward-refund curve; the reward is defined by the SLA for each service class, and a violation of the SLA results in a refund to the client]
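To make the reward-refund idea concrete, here is a minimal sketch in Python. The curve shape, response-time target, and dollar amounts are illustrative assumptions, not the SLA parameters used in the paper.

```python
# Hypothetical reward-refund curve: requests served within the SLA target
# earn a per-request reward; violations refund the client. All constants
# below are assumptions for illustration only.

def sla_revenue(response_time_ms, target_ms=200.0, reward=0.01, refund=0.02):
    """Dollars earned (positive) or refunded (negative) for one request."""
    if response_time_ms <= target_ms:
        return reward        # SLA met: provider earns the reward
    return -refund           # SLA violated: provider refunds the client

# Total revenue over a batch of observed response times:
total = sum(sla_revenue(rt) for rt in [120.0, 180.0, 250.0])  # 0.01 + 0.01 - 0.02 = 0.0
```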

PROBLEM FORMULATION (Contd.)
• Power-consumption costs of operating servers
• Switching costs
– Opportunity cost lost due to the unavailability of servers/VMs involved in provisioning decisions
– Transient power-consumption costs
• Key characteristics of the control problem
– Some control actions have (long) dead times, e.g., switching on a server, instantiating VMs, migrating VMs
– Decisions must be optimized over a discrete domain
– Optimization must be performed quickly, given the dynamics of the input
• We use a limited lookahead control (LLC) concept

THE LLC FRAMEWORK
• LLC is essentially model predictive control, applied over a discrete domain and under a tight execution-time requirement
• Advantages
– Uses predictions to improve control performance
– Robust (iterative feedback) even under dynamic operating conditions
– Inherent compensation for dead times
– Multi-objective and nonlinear optimization in the discrete domain under explicit constraints

THE LLC FRAMEWORK (Contd.)
• Use a system model to estimate future system states over a prediction horizon
• Obtain an “optimal” sequence of control inputs
• Apply the first control input in the sequence and discard the rest; the optimization is repeated at time k+1
[Diagram: from the current state x(k), a tree of predicted states branches over the horizon steps k+1, k+2, k+3, k+4]
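The sketch below illustrates one receding-horizon iteration, assuming a system model f(state, input, workload), a stage-cost function, and a discrete set of candidate inputs; these names are placeholders, not the paper's actual interfaces.

```python
from itertools import product

def llc_step(x, candidate_inputs, forecast, f, cost, horizon=3):
    """One LLC iteration: simulate every input sequence over the horizon
    with the system model f, sum the stage costs, and return only the
    first input of the best sequence; the rest are discarded and the
    optimization repeats at the next sampling instant."""
    best_seq, best_cost = None, float("inf")
    for seq in product(candidate_inputs, repeat=horizon):
        x_k, total = x, 0.0
        for step, u in enumerate(seq):
            x_k = f(x_k, u, forecast[step])   # estimated next state
            total += cost(x_k, u)
        if total < best_cost:
            best_seq, best_cost = seq, total
    return best_seq[0]
```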

WORKLOAD ESTIMATION USING A PREDICTIVE FILTER
• A Kalman filter is used to estimate the workload over the prediction horizon
• Prediction error is about 8%
[Figure: predicted vs. actual arrival rate, including the training phase]
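A minimal one-dimensional Kalman filter for arrival-rate estimation might look like the sketch below, assuming a random-walk workload model; the paper's filter and its noise parameters may differ.

```python
class ArrivalRateKalman:
    """Scalar Kalman filter for the request arrival rate (random-walk model)."""

    def __init__(self, process_noise=1.0, measurement_noise=25.0):
        self.x, self.p = 0.0, 1.0                  # estimate and its variance
        self.q, self.r = process_noise, measurement_noise

    def update(self, measured_rate):
        self.p += self.q                           # predict: variance grows
        k = self.p / (self.p + self.r)             # Kalman gain
        self.x += k * (measured_rate - self.x)     # correct with measurement
        self.p *= (1.0 - k)
        return self.x

    def forecast(self, steps):
        # Under a random-walk model, the best h-step-ahead forecast is the
        # current estimate repeated across the prediction horizon.
        return [self.x] * steps
```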

CONSTRUCTING THE SYSTEM MODEL
• The system model maps the observed state, the control input, and the estimated workload to the next state
• The behavior of each application is captured using simulation-based learning and stored in an approximation structure (e.g., a lookup table or a neural network); this learning happens in OFFLINE mode
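A lookup-table version of such an approximation structure could be sketched as follows; the key layout and the workload quantization are assumptions for illustration, not the paper's actual data structure.

```python
# Offline-learned state-transition table:
# (state, control input, quantized workload) -> next state.
transition_table = {}

def _bucket(workload, width=10):
    # Quantize the workload so that nearby inputs share a table entry.
    return int(workload // width)

def learn(state, control, workload, observed_next_state):
    """Called during simulation-based learning (OFFLINE mode)."""
    transition_table[(state, control, _bucket(workload))] = observed_next_state

def predict_next_state(state, control, estimated_workload):
    """Called online by the controller to estimate the next state."""
    return transition_table.get((state, control, _bucket(estimated_workload)))
```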

CONSTRUCTING THE SYSTEM MODEL
• Example 1: Given a 3 GHz CPU share and 1 GB of memory, how many requests can a Gold VM handle before incurring SLA violations?
• An average response time below the limit does not mean that there are no violations.

CONSTRUCTING THE SYSTEM MODEL
• Example 2: Given a 6 GHz CPU share and 1 GB of memory, how many requests can a Gold VM handle before incurring SLA violations?

Observation – nonlinear behavior
• At 3 GHz the VM handles 22 requests; at 6 GHz it handles 29. Why can't we achieve a 2x speedup when the CPU share is doubled?
• Is memory or I/O contention not being considered?

CONSTRUCTING THE SYSTEM MODEL (Contd.)
• Power = current × voltage
• Two observations:
– Power consumption during boot is higher than in the idle state.
– Power consumption with VMs instantiated is not much higher than in the idle state.

CONSTRUCTING THE SYSTEM MODEL (Contd.) – Power consumption
• Power consumption is closely related to CPU usage.

Does increasing CPU utilization increase computer power consumption?
• The higher the CPU utilization, the more signals are generated and processed by the CPU.
• Consequently, the higher the CPU utilization, the greater the energy requirement.
• Energy = power × time.
• We can therefore conclude that the greater the CPU utilization, the greater the power consumption.

Key Observations
(1) An idle machine consumes 70% or more of the power drawn at full utilization.
• Conclusion (1): Power down machines to achieve maximum power savings.
(2) The intensity of the workload at the VMs does not significantly affect power consumption or CPU utilization.
• Conclusion (2): Only the number of VMs affects power consumption.
(3) The power consumed by a server is a function of the VMs instantiated on it.
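These observations suggest a simple affine power model: a large fixed idle component plus a term that grows with the number of instantiated VMs. The sketch below uses illustrative constants, not measured values from the testbed.

```python
P_IDLE_FRACTION = 0.70   # observation 1: idle draw is ~70% of peak
P_PER_VM = 0.05          # assumed incremental fraction per instantiated VM

def server_power_fraction(powered_on, num_vms):
    """Approximate power draw as a fraction of peak (observation 3:
    power depends on the VMs instantiated, not on their load)."""
    if not powered_on:
        return 0.0       # conclusion 1: powering down maximizes savings
    return min(1.0, P_IDLE_FRACTION + P_PER_VM * num_vms)
```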

EXPERIMENTAL SYSTEM
• Six Dell servers (models 2950 and 1950) comprise the experimental testbed
• Virtualization of the CPU and memory is enabled by VMware ESX Server 3.0
• Virtual machines run SUSE Linux Enterprise Server 10
• Control directives use the VMware API, Linux shell commands, and IPMI
• The Silver application is Trade 6 only; the Gold application is Trade 6 plus extra CPU load

Host name | CPU speed | CPU cores | Memory
Apollo    | 2.3 GHz   | 8         | 8 GB
Bacchus   | 2.3 GHz   | 8         | 8 GB
Chronos   | 1.6 GHz   | 8         | 4 GB
Demeter   | 1.6 GHz   | 8         | 4 GB
Eros      | 1.6 GHz   | 8         | 4 GB
Poseidon  | 2.3 GHz   | 8         | 8 GB

CPU scheduling modes
• Work-conserving mode (WC-mode): keeps server resources well utilized
– Under WC-mode, the shares are merely guarantees, and the CPU is idle if and only if there is no runnable work.
• Non-work-conserving mode (NWC-mode):
– Under NWC-mode, the shares are caps, i.e., each client owns its fraction of the CPU.
– This means that if a VM is assigned a 3 GHz CPU share, it cannot use more than that even if the system has 10 GHz available and no other VM is running at all.
• Assumptions:
– The ESX server operates in non-work-conserving mode
– The CPU assignment does not exceed the hardware's maximum capacity
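The NWC-mode assumption reduces to one line: a VM's attainable CPU is its demand clipped to its share, regardless of spare capacity on the host. A tiny sketch:

```python
def attainable_cpu_ghz(demand_ghz, share_ghz):
    """Under NWC-mode the share is a hard cap, not just a guarantee."""
    return min(demand_ghz, share_ghz)

# A VM with a 3 GHz share cannot exceed 3 GHz even on an otherwise idle
# 10 GHz host:
assert attainable_cpu_ghz(5.0, 3.0) == 3.0
```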

DEVELOPING THE OPTIMIZER
• Issue 1: Risk-aware control
– Due to the energy and opportunity costs incurred when switching hosts and VMs on/off, excessive switching caused by workload variability may actually reduce profits
– We need to encode a notion of risk in the cost function
– The opportunity cost is the cost that accumulates during the time a server is being turned on but is unavailable to perform any useful service

RISK-AWARE CONTROL
• Environment-input estimates will have prediction errors
• We encode a notion of risk in the optimization problem
– Generate a set of expected next states for many of the predicted environment inputs
– Construct an uncertainty bound around the estimated environment input of interest, using the averaged past observed error between the actual and forecasted arrival rates

RISK-AWARE CONTROL (Contd.)
• A utility function encodes risk into the objective function
– Apply a mean-square variance model of utility, with the uncertainty captured as variance
– The utility model has a tunable risk-preference parameter β:
  β < 0: risk-seeking
  β > 0: risk-averse
  β = 0: risk-neutral
– Formulate a utility maximization problem: maximize utility over the prediction horizon and the client classes
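Assuming the standard mean-variance form U = E[profit] - β · Var[profit], the risk adjustment can be sketched as below; the sample profits are invented for illustration.

```python
from statistics import mean, pvariance

def risk_adjusted_utility(profit_samples, beta=0.0):
    """beta > 0: risk-averse; beta = 0: risk-neutral; beta < 0: risk-seeking."""
    return mean(profit_samples) - beta * pvariance(profit_samples)

# A risk-averse controller (beta = 2) prefers the steadier provisioning
# option even when both have the same expected profit:
steady, volatile = [10, 11, 9, 10], [0, 25, -5, 20]
assert risk_adjusted_utility(steady, 2) > risk_adjusted_utility(volatile, 2)
```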

DEVELOPING THE OPTIMIZER (Contd.)
• Issue 2: Execution-time overhead of the controller
– “Curse of dimensionality”: the problem shows an exponential increase in worst-case complexity with more control options and longer prediction horizons
• We use a control hierarchy to reduce execution-time overhead
– An L0 controller decides the CPU share to assign to VMs
– An L1 controller decides the number of VMs for each service and the number of servers to keep powered on
– The average execution time of the L1 controller is about 10 seconds

OUTLINE
• Motivation and problem statement
• Description of the experimental testbed
• Problem formulation and controller design
• Performance results
• Conclusions

EXPERIMENTAL SYSTEM
• Six Dell servers (models 2950 and 1950) comprise the experimental testbed
• Virtualization of the CPU and memory is enabled by VMware ESX Server 3.0
• Virtual machines run SUSE Linux Enterprise Server 10
• Control directives use the VMware API, Linux shell commands, and IPMI
• The Silver application is Trade 6 only; the Gold application is Trade 6 plus extra CPU load

Host name | CPU speed | CPU cores | Memory
Apollo    | 2.3 GHz   | 8         | 8 GB
Bacchus   | 2.3 GHz   | 8         | 8 GB
Chronos   | 1.6 GHz   | 8         | 4 GB
Demeter   | 1.6 GHz   | 8         | 4 GB
Eros      | 1.6 GHz   | 8         | 4 GB
Poseidon  | 2.3 GHz   | 8         | 8 GB

EXPERIMENTAL PARAMETERS

Parameter | Value
Cost per kilowatt-hour | $0.30
Time delay to power on a VM | 1 min 45 sec
Time delay to power on a host | 2 min 55 sec
Prediction horizon | L1: 3 steps, L0: 1 step
Control sampling period | L1: 150 sec, L0: 30 sec
Initial configuration for Gold service (application tier) | 3 VMs
Initial configuration for Silver service (application tier) | 3 VMs
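Given the $0.30-per-kWh rate above, the power-consumption cost term reduces to a simple conversion; a quick sketch (the 300 W example figure is illustrative):

```python
def energy_cost_dollars(avg_power_watts, hours, rate_per_kwh=0.30):
    """Energy cost = average power (kW) x time (h) x price ($/kWh)."""
    return avg_power_watts / 1000.0 * hours * rate_per_kwh

# e.g., a server averaging 300 W over a 24-hour run:
print(energy_cost_dollars(300, 24))   # ~2.16 dollars
```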

MAIN RESULTS
• A risk-neutral controller conserves, on average, 26% more energy than a system without dynamic control, with very few SLA violations

Workload   | Energy savings | SLA violations (Silver) | SLA violations (Gold)
Workload 1 | 18%            | 3.2%                    | 2.3%
Workload 2 | 17%            | 1.2%                    | 0.5%
Workload 3 | 17%            | 1.4%                    | 0.4%
Workload 4 | 45%            | 1.1%                    | 0.2%
Workload 5 | 32%            | 3.5%                    | 1.8%

• There are more SLA violations for Silver requests than for Gold requests.

RESULTS (Contd.)
• CPU shares assigned to the Gold and Silver applications over a 24-hour period (L0 layer)

RESULTS (Contd.)
• Number of virtual machines assigned to the Gold and Silver applications over a 24-hour period (L1 layer)

EFFECT OF THE RISK-PREFERENCE PARAMETER
• A risk-averse (β = 2) controller conserves about the same amount of energy as a risk-neutral (β = 0) controller

Workload   | Energy savings (risk-neutral, β = 0) | Energy savings (risk-averse, β = 2)
Workload 6 | 20.8%                                | 20.9%
Workload 7 | 25.3%                                | 25.2%

EFFECT OF THE RISK-PREFERENCE PARAMETER (Contd.)
• A risk-averse controller (β = 2) maintains a higher QoS (fewer violations) than a risk-neutral (β = 0) controller by reducing switching activity

Workload   | SLA violations (risk-neutral, β = 0) | SLA violations (risk-averse, β = 2) | % reduction in SLA violations
Workload 6 | 28,635 (2.3%)                        | 15,672 (1.7%)                       | 45%
Workload 7 | 34,201 (2.7%)                        | 25,606 (2.0%)                       | 25%

Workload   | Switching activity (risk-neutral, β = 0) | Switching activity (risk-averse, β = 2) | % reduction in switching activity
Workload 6 | 30                                       | 28                                      | 7%
Workload 7 | 40                                       | 30                                      | 25%

• Best-case risk-averse controller: β = 2

OPTIMALITY CONSIDERATIONS
• The controller cannot achieve optimal performance
– Limited by errors in workload predictions
– Limited by constrained control inputs
– Limited by a finite prediction horizon
• To evaluate optimality, the profit gains of a risk-neutral and a best-case risk-averse controller were compared against an “oracle” controller with perfect knowledge of the future

Controller   | Total energy savings | Total SLA violations | Num. times hosts switched
Risk-neutral | 25.3%                | 34,201 (2.7%)        | 40
Risk-averse  | 25.2%                | 25,606 (2.0%)        | 38
Oracle       | 16.3%                | 14,228 (1.1%)        | 32

CONCLUSIONS
• We have addressed power and performance management in a virtualized computing environment within an LLC framework
• The cost of control and the notion of risk are encoded explicitly in the problem formulation
• A server cluster managed using LLC saves 26% in power-consumption costs over a 24-hour period when compared to an uncontrolled system
• Power savings are achieved with very few SLA violations (1.6% of the total number of requests)
• Our recommendation is a risk-averse controller, since it reduces SLA violations and switching activity

Conclusion (1) – Why significant?
• Using virtualization, the work implements a dynamic resource provisioning model
• It integrates power and performance management, reducing energy cost (26%) while causing few SLA (service level agreement) violations (less than 3%)

Conclusion (2) – Alternate approaches?

Technique (efficiency impact):
• Selectively turn off core components to increase remaining unit efficiency: 3-5%
• Deploy virtualization for existing and new demand: 25-30%
• Implement free cooling: 0-15%
• Introduce greener and more power-efficient servers: 10-20%

Conclusion (3) – Improvements?
• Simplify the control logic to reduce the execution time
• Take memory usage into account when modifying a VM's configuration
• Provide a mechanism to decide the granularity at which to create VMs
– One 6 GHz VM can handle more requests than two 3 GHz VMs.

SCALABILITY
• The execution time of the controller can be reduced through various techniques
– Approximating control
– Implementing the controller in hardware
– Increasing the number of tiers in the control hierarchy
– Simplifying the iterative search process to “hold” a control input constant over the prediction horizon (see the sketch below)
• A neural network or regression tree can be trained to learn the decision-making behavior of the optimizer
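As a rough illustration of the hold-constant simplification, the sketch below searches |U| candidates instead of |U|^h sequences, trading optimality for speed; it reuses the placeholder names (f, cost, forecast) from the llc_step sketch earlier.

```python
def llc_step_held(x, candidate_inputs, forecast, f, cost, horizon=3):
    """Hold one control input constant across the prediction horizon,
    reducing the search from |U|**horizon sequences to |U| candidates."""
    best_u, best_cost = None, float("inf")
    for u in candidate_inputs:              # one input, applied h times
        x_k, total = x, 0.0
        for step in range(horizon):
            x_k = f(x_k, u, forecast[step])
            total += cost(x_k, u)
        if total < best_cost:
            best_u, best_cost = u, total
    return best_u
```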

Scalability problem
• Scalability is limited: the current results are based on only 5 hosts, but an actual data center can have dozens or thousands of servers.
– 5 hosts: < 10 sec
– 10 hosts: 2 min 30 sec
– 15 hosts: 30 min

Questions? Thank you!