- Number of slides: 97
Slide 1: Software Performance Modeling
Dorina C. Petriu, Mohammad Alhaj, Rasha Tawhid
Carleton University, Department of Systems and Computer Engineering
Ottawa, Canada, K1S 5B6
http://www.sce.carleton.ca/faculty/petriu.html
Slide 2: Analysis of Non-Functional Properties
- Model-Driven Engineering enables the analysis of non-functional properties (NFP) of software models
  - examples of NFPs: performance, scalability, reliability, security, etc.
  - many existing formalisms and tools for NFP analysis: queueing networks, Petri nets, process algebras, Markov chains, fault trees, probabilistic timed automata, formal logic, etc.
  - research challenge: bridge the gap between MDD and existing NFP analysis formalisms and tools rather than "reinventing the wheel"
- Approach:
  - add annotations expressing different NFPs to the software models
  - define model transformations from annotated software models to different NFP analysis models
  - using existing solvers, analyze the NFP models and give feedback to designers
- In the UML world: define extensions as UML Profiles for expressing NFPs
  - UML Profile for Schedulability, Performance and Time (SPT)
  - UML Profile for Modeling and Analysis of Real-Time and Embedded systems (MARTE)
Slide 3: Software Performance Evaluation in MDE
[Diagram: a UML tool produces a UML+MARTE software model, used both for code generation (model-to-code transformation) and, via model-to-model transformation, to derive a performance model; a performance analysis tool solves it and the results are fed back to the designers.]
Software performance evaluation in the context of Model-Driven Engineering:
- starting point: a UML software model, also used for code generation
- add performance annotations (using the MARTE profile)
- generate a performance analysis model (queueing network, Petri net, stochastic process algebra, Markov chain, etc.)
- solve the analysis model to obtain quantitative results
- analyze the results and give feedback to designers
Slide 4: PUMA Transformation Approach
PUMA project: Performance from Unified Model Analysis.
[Diagram of the PUMA tool chain: a software model with performance annotations (Smodel) is transformed to the Core Scenario Model (S2C); the CSM is transformed to some performance model (C2P); the Pmodel is solved to explore the solution space, and the performance results and design advice are used to improve the Smodel.]
Slide 5: Transformation Target: Performance Models
Slide 6: Performance Modeling Formalisms
- Analytic models
  - Queueing Networks (QN)
    - capture contention for resources well
    - efficient analytical solutions exist for a class of QN ("separable" QN): steady-state performance measures can be derived without resorting to the underlying state space
  - Stochastic Petri Nets
    - good flow models, but not as good for resource contention
    - the Markov chain-based solution suffers from state-space explosion
  - Stochastic Process Algebra
    - introduced in the mid-90s by merging process algebras and Markov chains
  - Stochastic Automata Networks
    - communicating automata synchronized by events; random execution times
    - Markov chain-based solution (corresponds to the system state space)
- Simulation models
  - less constrained in their modeling power; can capture more detail
  - harder to build and more expensive to solve (the model must be run repeatedly)
Slide 7: Queueing Networks (QN)
- A queueing network model is a directed graph:
  - nodes are service centres, each representing a resource
  - customers, representing the jobs, flow through the system and compete for these resources
  - arcs with associated routing probabilities (or visit ratios) determine the paths that customers take through the network
- used to model systems with stochastic characteristics
- multiple customer classes: each class has its own workload intensity (arrival rate or number of customers), service demands and visit ratios
- bottleneck service centre: the one that saturates first (highest demand and utilization)
[Figures: an open QN system and a closed QN system]
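The "separable" networks mentioned on the previous slide can be solved without building the underlying state space. As a concrete illustration, here is a minimal sketch of exact Mean Value Analysis (MVA) for a single-class closed QN in Python; the demand values and the think time are invented for the example, not taken from the slides:

```python
def mva(demands, n_customers, think_time=0.0):
    """Exact single-class MVA for a separable closed QN.

    demands[k]  = total service demand of one job at centre k (sec)
    think_time  = average client delay between requests (sec)
    Returns (throughput, per-centre residence times, queue lengths).
    """
    x, r = 0.0, list(demands)
    q = [0.0] * len(demands)            # queue lengths with 0 customers
    for n in range(1, n_customers + 1):
        # arrival theorem: an arriving job sees the queues left by n-1 jobs
        r = [d * (1.0 + qk) for d, qk in zip(demands, q)]
        x = n / (think_time + sum(r))   # system throughput (jobs/sec)
        q = [x * rk for rk in r]        # Little's law, per centre
    return x, r, q

# Illustrative demands (sec) for CPU, Disk1, Disk2; 20 users thinking 5 s
x, r, q = mva([0.040, 0.030, 0.025], n_customers=20, think_time=5.0)
print(f"throughput = {x:.2f} jobs/s, response time = {sum(r)*1000:.1f} ms")
```

The centre with the largest demand saturates first: as the number of customers grows, the throughput approaches 1/max(demands), which is exactly the bottleneck behaviour described above.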
Slide 8: Single Service Centre: Non-Linear Performance
- Typical non-linear behaviour of queue length and waiting time:
  - the server reaches saturation at a certain arrival rate (utilization close to 1)
  - at low workload intensity, an arriving customer meets little competition, so its residence time is roughly equal to its service demand
  - as the workload intensity rises, congestion increases, and the residence time along with it
  - as the service centre approaches saturation, small increases in arrival rate result in dramatic increases in residence time
[Plots: utilization, residence time and queue length versus arrival rate]
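A back-of-the-envelope check of this saturation behaviour, using the classic M/M/1 residence-time formula R = D/(1 − U); the 10 ms service demand is an invented figure:

```python
service_demand = 0.010                        # D = 10 ms per job (illustrative)
for arrival_rate in (10, 50, 80, 95, 99):     # lambda, jobs/sec
    u = arrival_rate * service_demand         # utilization U = lambda * D
    r = service_demand / (1.0 - u)            # M/M/1 residence time; needs U < 1
    print(f"lambda={arrival_rate:3d}/s  U={u:.2f}  R={r*1000:7.1f} ms")
```

At U = 0.50 the residence time merely doubles the demand (20 ms), but at U = 0.99 it reaches 1000 ms: the dramatic increase near saturation.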
Slide 9: Layered Queueing Network (LQN) Model
http://www.sce.carleton.ca/rads/lqn-documentation
- LQN is an extension of QN:
  - models both software tasks (rectangles) and hardware devices (circles)
  - represents nested services (a server is also a client to other servers)
  - software components have entries corresponding to their different services
  - arcs represent service requests (synchronous, asynchronous and forwarding)
  - multi-servers are used to model components with internal concurrency
[Figure: a ClientT task with entry clientE on the Client CPU calls entries service1 and service2 of the Appl task on the Appl CPU, which in turn calls entries query1 and query2 of the DB task on the DB CPU with devices Disk1 and Disk2.]
Slide 10: LQN Extensions: Activities, Fork/Join
[Figure: an e-commerce LQN with Local Clients (1..m) on Local Wks and Remote Clients (1..n) on Remote Wks reaching the system over the Internet; a Web Server task (entries e1..e5) on Web Proc; an eComm Server task whose entry e6 is refined by an activity graph (a1..a4) with AND-fork/AND-join branches on eComm Proc; and Secure Proc with SDisk, Secure DB, and a DB task (entry e7) with its Disk on DB Proc.]
Slide 11: LQN Metamodel
[Class diagram of the LQNmetamodel package: a Processor (multiplicity, schedulerType) hosts allocated Tasks (multiplicity, priorityOnHost, schedulerType); a Task owns one or more Entries; an Entry owns an activity set of Activities (thinkTime, hostDemand, hostDemCV, deterministicFlag, repetitionsForLoop, probForBranch), connected by Precedence elements specialized into Sequence, Branch, Merge, Fork and Join. Calls (SyncCall, AsyncCall, with meanCount) connect Activities to Entries; Forward links (fwdByEntry/fwdToEntry, probForward) model forwarded requests. Phase 1 activities reply (replyFlag = true) and are followed by Phase 2 activities (replyFlag = false).]
Slide 12: Performance versus Schedulability
- Difference between performance and schedulability analysis:
  - performance analysis: timing properties of best-effort and soft real-time systems (e.g., information processing systems, web-based applications and services, enterprise systems, multimedia, telecommunications)
  - schedulability analysis: applied to hard real-time systems with strict deadlines; the analysis is often based on worst-case execution times and deterministic assumptions
- Statistical performance results (analysis outputs):
  - mean (and variance) of throughput, delay (response time), queue length
  - resource utilization
  - probability of missing a target response time
- Input parameters to the analysis are also probabilistic:
  - random arrival process
  - random execution time for an operation
  - probability of requesting a resource
- A performance model represents the system at runtime:
  - it must include characteristics of both the software application and the underlying platforms
Slide 13: UML Profiles for Performance Annotations: SPT and MARTE
Slide 14: UML SPT Profile Structure
[Package diagram: the General Resource Modeling Framework contains the profiles «RTresourceModeling», «RTconcurrencyModeling» and «RTtimeModeling», linked by «import»; the Analysis Models profiles «SAProfile», «RSAprofile» and «PAprofile» import them; the Infrastructure Models part contains the «modelLibrary» RealTimeCORBAModel.]
Slide 15: SPT Performance Profile: Fundamental Concepts
- Scenarios define execution paths with externally visible end points.
  - QoS requirements can be placed on scenarios.
- Each scenario is executed by a workload:
  - open workload: requests arriving in some predetermined pattern
  - closed workload: a fixed number of active or potential users or jobs
- Scenario steps: the elements of a scenario, joined by predecessor-successor relationships which may include forks, joins and loops.
  - a step may be an elementary operation or a whole sub-scenario
- Resources are used by scenario steps; quantitative resource demands for each step must be given in performance annotations.
- The main reason for building performance models is to compute the additional delays due to competition for resources!
- Performance results include resource utilizations, waiting times, response times and throughputs.
- Performance analysis is applied to real-time systems with stochastic characteristics and soft deadlines (using mean value analysis methods).
Slide 16: SPT Performance Profile: The Domain Model
[Class diagram: a PerformanceContext contains Workloads, PScenarios and PResources. A Workload (ClosedWorkload with population and externalDelay; OpenWorkload with occurrencePattern) drives a root PScenario (hostExecDemand, responseTime). A PScenario is an ordered graph of PSteps (probability, repetition, delay, operations, interval, executionTime) linked by predecessor/successor associations. PResource (utilization, schedulingPolicy, throughput) specializes into PProcessingResource (processingRate, contextSwitchTime, priorityRange, isPreemptible), which hosts PSteps, and PPassiveResource (waitingTime, responseTime, capacity, accessTime).]
Slide 17: MARTE Overview
The MARTE domain model is organized into three packages:
- MarteFoundations: foundations for modeling and analysis of RT/E systems — CoreElements, NFPs, Time, Generic resource modeling, Generic component modeling, Allocation
- MarteDesignModel: specialization of the MARTE foundations for modeling purposes (specification, design, etc.) — RTE model of computation and communication, Software resource modeling, Hardware resource modeling
- MarteAnalysisModel: specialization of the MARTE foundations for annotating models for analysis purposes — Generic quantitative analysis, Schedulability analysis, Performance analysis
Slide 18: GQAM Dependencies and Architecture
- GQAM (Generic Quantitative Analysis Modeling): common concepts for analysis
- SAM: modeling support for schedulability analysis techniques
- PAM: modeling support for performance analysis techniques
Slide 19: Annotated Deployment Diagram
- commRcvOvh and commTxOvh are host-specific costs of receiving and sending messages
- blockT describes a pure latency for the link
- resMult = 5 describes a symmetric multiprocessor with 5 processors
[Deployment diagram: «commHost» nodes internet {blockT = (100, us)} and lan {blockT = (10, us), capacity = (100, Mb/s)} connect three «execHost» nodes: ebHost {commRcvOverhead = (0.1, ms/KB), commTxOverhead = (0.2, ms/KB), resMult = 5} deploying artifact ebA «manifest» EBrowser; webServerHost {commRcvOverhead = (0.15, ms/KB), commTxOverhead = (0.1, ms/KB), resMult = 5} deploying webServerA «manifest» WebServer; and dbHost {commRcvOverhead = (0.14, ms/KB), commTxOverhead = (0.07, ms/KB), resMult = 3} deploying databaseA «manifest» Database.]
Slide 20: Simple Scenario
- the initial step is stereotyped for the workload (open), the execution demand and the request message size
- a swimlane or lifeline stereotyped «PaRunTInstance» references a runtime active instance; poolSize specifies its multiplicity
[Sequence diagram: lifelines eb: EBrowser, webServer: WebServer «PaRunTInstance» {poolSize = (webthreads=80), instance = webserver} and database: Database «PaRunTInstance» {poolSize = (dbthreads=5), instance = database}. Message 1 carries «GaWorkloadEvent» {open (interArrT = (exp(17, ms)))} and «PaStep» {hostDemand = (4.5, ms)}; message 2 is a «PaStep»/«PaCommStep» {hostDemand = (12.4, ms), rep = (1.3, -, mean), msgSize = (2, KB)}; the replies 3 and 4 are «PaCommStep» with {msgSize = (50, KB)} and {msgSize = (75, KB)}.]
Slide 21: Transformation Principles from SModels to PModels
Slide 22: UML Model for Performance Analysis
For performance analysis, a UML model should contain:
- key use cases realized by representative scenarios
  - frequently executed, with performance constraints
  - each scenario is a graph of steps (partial ordering)
- the resources used by each scenario
  - resource types: active or passive, physical or logical, hardware or software (examples: processor, disk, process, software server, lock, buffer)
  - quantitative resource demands for each scenario step (how much? how many times?)
- the workload intensity for each scenario
  - open workload: arrival rate of requests for the scenario
  - closed workload: number of simultaneous users
Slide 23: Direct UML to LQN Transformation: Our First Approach
- Mapping principle:
  - software and hardware resources → service centres
  - scenarios → job flow from centre to centre
- Generate the LQN model structure (tasks, devices and their connections) from the structural view (see the sketch below):
  - active software instances → LQN tasks
  - deployment nodes → LQN devices
- Generate the detailed LQN elements (entries, phases, activities and their parameters) from the behavioural view:
  - identify communication patterns in key scenarios due to architectural patterns (client/server, forwarding server chain, pipeline, blackboard, etc.)
  - aggregate scenario steps according to each pattern and map them to entries, phases, etc.
  - compute the LQN parameters from the resource demands of the scenario steps
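A minimal sketch of the structural mapping rules above in Python; the in-memory representation (a dict from deployment node to deployed instances) and all names are invented for illustration and do not reflect the actual PUMA implementation:

```python
from dataclasses import dataclass, field

@dataclass
class LqnProcessor:
    name: str
    multiplicity: int = 1

@dataclass
class LqnTask:
    name: str
    host: LqnProcessor
    multiplicity: int = 1
    entries: list = field(default_factory=list)

def build_lqn_structure(deployment):
    """Map deployment nodes -> LQN processors, active instances -> LQN tasks.

    `deployment` is assumed to be a dict: node name -> list of
    (active instance name, pool size) pairs deployed on that node.
    """
    processors, tasks = {}, {}
    for node, instances in deployment.items():
        proc = processors.setdefault(node, LqnProcessor(node))
        for inst, pool in instances:
            tasks[inst] = LqnTask(inst, host=proc, multiplicity=pool)
    return processors, tasks

# Toy deployment extracted from an annotated UML model
procs, tasks = build_lqn_structure({
    "ebHost": [("EBrowser", 5)],
    "webServerHost": [("WebServer", 80)],
    "dbHost": [("Database", 5)],
})
```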
Slide 24: Generating the LQN Model Structure
[Figure: (a) a high-level architecture with CLIENT and SERVER components, Client instances (1..n) connected to the Server through a «client/server» connector, the deployment of the instances to nodes, and the LQN structure generated from them.]
Slide 25: Client-Server Pattern
- Structure: the participants and their relationships
- Behaviour: synchronous communication style — the client sends the request and remains blocked until the server replies
[Figures: (a) the ClientServer collaboration with Client (1..n) and Server (1) roles; (b) the ClientServer behaviour: the client requests the service and waits for the reply; the server serves the request and replies, optionally completing the service afterwards; the client then continues its work.]
Slide 26: Mapping the Client-Server Pattern to LQN
[Figure: the client-server behaviour (request service, wait for reply, serve request and reply, continue work, complete service) mapped to an LQN: a Client task with entry e1 [ph1] on the Client CPU calls the Server entry e2 [ph1, ph2] on the Server CPU; the work done before the reply becomes phase 1 of e2, and the optional completion after the reply becomes phase 2.]
For each subset of scenario steps mapped to an LQN phase or activity, compute the execution time S:
    S = Σ_{i=1..n} r_i · s_i
where r_i is the number of repetitions and s_i the execution time of step i.
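In code, this aggregation is a one-line weighted sum; the step demands below are made up:

```python
def phase_demand(steps):
    """steps: list of (repetitions r_i, execution time s_i in ms)."""
    return sum(r * s for r, s in steps)

# A phase aggregating three scenario steps (illustrative values)
print(phase_demand([(1, 4.5), (1.3, 12.4), (1, 0.8)]), "ms")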
Slide 27: Identify Patterns in a Scenario
[Figure: an annotated scenario with UserInterface, ECommServ and DBMS swimlanes (steps such as "browse and select items", "check valid item code", "add item to query") and the client-server patterns identified in it, with the resulting LQN entries and their phase 1 / phase 2 aggregations for each server.]
Slide 28: Transformation Using a Pivot Language
Slide 29: Pivot Languages
- A pivot language, also called a bridge or intermediate language, can be used as an intermediary for translation; it avoids the combinatorial explosion of translators across every combination of languages.
- Direct transformations from N source languages to M target languages require N×M transformations; with a pivot language, only N+M are needed. Each transformation also has a smaller semantic gap.
- Examples of pivot languages for performance analysis:
  - Core Scenario Model (CSM)
  - Klaper
  - PMIF + S-PMIF
  - Palladio Model
[Figure: source languages L1..LN connected to targets L'1..L'M directly, versus through the pivot Lp.]
Slide 30: Core Scenario Model
- CSM: a pivot Domain Specific Language used in the PUMA project at Carleton University (Performance from Unified Model Analysis)
- Semantically between the software and performance domains:
  - focused on scenarios and resources
  - performance data is intrinsic to CSM: the quantitative resource demands made by scenario steps, and the workloads
[Figure: the PUMA transformation chain — UML+SPT, UML+MARTE and UCM models are transformed into CSM, which is transformed into LQN, QN, Petri net or simulation models.]
Slide 31: CSM Metamodel
[Metamodel diagram, organized into scenario/steps, resources and workload parts.]
Slide 32: CSM Metamodel
- Basic scenario elements, similar to the SPT Performance Profile:
  - a scenario is composed of steps; a step may be refined as a sub-scenario
  - precedence relationships among steps: sequence, branch, merge, fork, join, loop
  - steps are performed by components running on hosts (Processor resources)
  - resources and acquire/release operations on resources; these are inferred for component-based resources (processes)
- Four kinds of resources in CSM:
  - ProcessingResource (a node in a deployment diagram)
  - ComponentResource (process or active object): a component in a deployment; a lifeline in a SD or a swimlane in an AD may correspond to a runtime component
  - LogicalResource (declared as a GRM resource)
  - extOp resource: an implied resource that executes external operations
Slide 33: CORBA-Based Case Study
- Two CORBA-based client-server systems:
  - H-ORB (handle-driven ORB): the client gets the address of the server from the agent and communicates with the server directly
  - F-ORB (forwarding ORB): the agent forwards the client request to the appropriate server, which returns the results of the computation directly to the client
- Synthetic application:
  - contains two services, A and B; two copies of each service are provided; the clients connect to these services through the ORB
  - each client executes a cycle repeatedly, making one request to server A (distributed randomly between copies A1 and A2) and one to server B (distributed randomly between copies B1 and B2); the client performs a bind operation before every request
  - since the experiments were performed on a local area network, the inter-node delay that would appear in a wide-area network was simulated by making the sender process sleep for D time units before sending a message
Slide 34: H-ORB Deployment and Scenario
[Figures: the deployment diagram and the scenario as an activity diagram.]
Slide 35: H-ORB Scenario as a Sequence Diagram
[Sequence diagram «GaAnalysisContext» sd HORB, with «PaRunTInstance» lifelines Client, Agent, ServerA1, ServerA2, ServerB1 and ServerB2. The workload is «GaWorkloadEvent» {pattern = (closed(Population = $N))}. The Client calls GetHandle() on the Agent («PaStep» {hostDemand = (4, ms)}), then an alt fragment sends A1Work() or A2Work(), each guarded by «PaStep» {prob = 0.5} with {hostDemand = ($SA, ms)}; the same pattern repeats for GetHandle() and B1Work()/B2Work() with {hostDemand = ($SB, ms)}. Sleep interaction occurrences (ref) model the simulated network delays.]
Slide 36: Transformation from UML+MARTE to CSM
- Structural elements are generated first:
  - CSM ProcessingResource
  - CSM Component
- Scenarios are described by sequence diagrams:
  - a CSM Start PathConnection is generated first, and the workload information is attached to it
  - lifelines stereotyped «PaRunTInstance» correspond to active runtime instances
  - the translation follows the message flow of the scenario, generating the corresponding Steps and PathConnections (see the sketch below)
  - a UML ExecutionOccurrence generates a simple Step
  - complex CSM Steps with a nested scenario correspond to operand regions of UML CombinedFragments and to InteractionOccurrences
- MARTE stereotypes are mapped to CSM model elements.
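A sketch of the scenario walk described above, assuming a toy representation of the sequence diagram as an ordered message list; the tuple layout and the emitted element names are illustrative, not the real CSM metamodel classes:

```python
def sd_to_csm(messages, workload):
    """Follow the message flow of a SD and emit CSM-like steps and
    path connections. `messages` is assumed to be an ordered list of
    (sender, receiver, host_demand) tuples taken from the «PaStep»
    annotations of the diagram.
    """
    csm = [("Start", workload)]          # workload attached to the Start
    for sender, receiver, demand in messages:
        csm.append(("Step", f"{sender}->{receiver}", {"hostDemand": demand}))
        csm.append(("Sequence",))        # path connection to the next step
    csm.append(("End",))
    return csm

scenario = sd_to_csm(
    [("Client", "Agent", "(4, ms)"), ("Client", "ServerA1", "($SA, ms)")],
    workload={"pattern": "closed", "population": "$N"},
)
```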
Slide 37: Transformation from SD to CSM
[The H-ORB sequence diagram of slide 35 again, shown side by side with the CSM scenario generated from it.]
Slide 38: Transformation from CSM to LQN
- The first transformation phase parses the CSM resources and generates:
  - an LQN Task for each CSM Component
  - an LQN Processor for each CSM ProcessingResource
- The second transformation phase traverses the CSM to determine:
  - the branching structure and the sequencing of Steps within branches
  - the calling interactions between Components
- A new LQN Entry is generated whenever a task receives a call (see the sketch below):
  - the entry internals are described either by LQN Activities representing a graph of CSM Steps, or by Phases
[Figure: the LQN model for the H-ORB system.]
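The entry-generation rule of the second phase can be sketched as follows; the call-tuple representation is an assumption made for the example, not the actual implementation:

```python
from collections import defaultdict

def csm_to_lqn_entries(calls):
    """Create one LQN entry per distinct service called on a task.

    `calls` is assumed to be a list of (caller_task, callee_task,
    service) tuples collected while traversing the CSM steps.
    Returns the entries per task and the request arcs between them.
    """
    entries = defaultdict(set)           # task -> set of entry names
    requests = []                        # (caller, callee, entry) arcs
    for caller, callee, service in calls:
        entry = f"{service}_e"
        entries[callee].add(entry)       # added on the first call only
        requests.append((caller, callee, entry))
    return dict(entries), requests

entries, arcs = csm_to_lqn_entries([
    ("Client", "Agent", "GetHandle"),
    ("Client", "ServerA1", "A1Work"),
    ("Client", "ServerB1", "B1Work"),
])
```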
Slide 39: Validation of LQN against Measurements
- The LQN results are compared with measurements of a performance prototype implemented on a commercial-off-the-shelf (COTS) middleware product, driven by a synthetic workload and running on a network of Sun workstations under Solaris 2.6.
[Plots: predicted versus measured results for the H-ORB and the F-ORB.]
Slide 40: Extending PUMA for Service-Oriented Architecture
Slide 41: PUMA4SOA Approach
- Extensions:
  - Smodel adapted to service-based systems: business process model and service model
  - separation between PIM (platform independent) and PSM (platform specific) models
  - a Performance Completion feature model specifies the platform variability
- Techniques:
  - aspect-oriented models for platform operations; aspect composition may take place at different levels: UML, CSM or LQN
  - traceability between the different kinds of models
Slide 42: Eligibility Referral System (ERS) — Source PIM: (a) Business Process Model
Slide 43: Source PIM: (b) Service Architecture Model
Slide 44: Source PIM: (c) Service Behaviour Model
[The diagram marks the join points for the platform aspects.]
Slide 45: Models Describing the Platform
[Figure: deployment of the primary model on the Admission, Insurance and Transferring nodes.]
Slide 46: Performance Completion Feature Model
- describes the variability in the service platform
- the Service Platform feature model in the example defines:
  - three mandatory feature groups: Operation, Message Protocol and Realization
  - two optional feature groups: Communication and Data Compression
- each feature is described by an aspect model to be composed with the base model (a configuration-checking sketch follows)
[Feature diagram: Service Platform with its mandatory and optional feature groups, e.g. Data Compression as an <1-1> choice.]
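A small sketch of what checking a configuration against such <1-1> groups could look like; the contents of the Message Protocol group in the example (SOAP/REST) are invented, only compressed/uncompressed appears in the slides:

```python
def valid_configuration(selected, mandatory_groups, optional_groups):
    """Check a platform feature configuration against <1-1> groups.

    Both group dicts map a group name to its set of alternative
    features: exactly one alternative must be chosen from every
    mandatory group, and at most one from every optional group.
    """
    for name, feats in mandatory_groups.items():
        if len(selected & feats) != 1:
            return False, f"choose exactly one feature of {name}"
    for name, feats in optional_groups.items():
        if len(selected & feats) > 1:
            return False, f"at most one feature of {name}"
    return True, "ok"

ok, msg = valid_configuration(
    {"SOAP", "compressed"},
    mandatory_groups={"MessageProtocol": {"SOAP", "REST"}},      # illustrative
    optional_groups={"DataCompression": {"compressed", "uncompressed"}},
)
```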
Slide 47: Generic Aspect Model: Service Invocation
- The generic aspect model (e.g., the Service Invocation aspect):
  - defines the structure and behaviour of the platform aspect in a generic format
  - uses generic names (i.e., formal parameters) for software and hardware resources
  - uses generic performance annotations (MARTE variables)
  - advantage: reusability
- Context-specific aspect model:
  - after a join point is identified, the generic aspect model is bound to the context of that join point
  - the context-specific aspect model is composed with the primary model
Slide 48: Binding Generic to Concrete Resources
- generic names (parameters) are bound to concrete names corresponding to the context of the join point
- sometimes new resources are added to the primary model
- user input is required (e.g., in the form of an Excel spreadsheet, as discussed later)
Slide 49: Binding Performance Annotation Variables
- annotation variables allowed in MARTE are used as generic performance annotations
- they are bound to concrete, reusable platform-specific annotations
Slide 50: PSM: Scenario after Composition in UML
[Figure: the PIM scenario with the composed service invocation aspect and the composed service response aspect inserted at the join points.]
Slide 51: Aspect Composition at the CSM Level
- Steps:
  - the PIM and the platform aspect models in UML are transformed into CSM separately
  - the UML workflow model is transformed into the CSM top-level model
  - the UML service behaviour models are transformed into a set of CSM sub-scenario models
  - AOM is used to perform the aspect composition, generating the CSM of the PSM
  - the CSM is then transformed into LQN
- Advantages:
  - CSM has a lightweight metamodel compared with UML
  - it is easier to implement the aspect composition in CSM and to ensure its consistency
- Drawbacks:
  - pointcuts cannot be defined completely at the CSM level, because not all details of the UML service architecture model are transformed to CSM
Slide 52: Aspect Composition at the LQN Level
- Steps:
  - the PIM and platform aspect models in CSM are first transformed into LQN models separately
  - the CSM top-level scenario model is transformed into a top-level LQN activity graph
  - the CSM sub-scenario models are transformed into a set of tasks with entries that represent the steps
  - AOM aspect composition is performed to generate the LQN of the PSM
- Advantages:
  - LQN has a lightweight metamodel, similar to CSM
- Drawbacks:
  - pointcuts cannot be defined completely at the LQN level, because many details of the UML service architecture model are lost
  - the granularity of the aspects should correspond to entries; otherwise the composition becomes more difficult
Slide 53: Traceability of Model Transformations
- PUMA4SOA defines two modeling layers for the software:
  - the workflow layer, represented by the workflow model
  - the service layer, represented by the service architecture model and the service behaviour model
- Model traceability is maintained by separating the transformation of the workflow layer from that of the service layer.
- Why traceability is desirable:
  - it makes the model transformation more modular, especially when there are many workflows in the UML design model
  - it facilitates reporting the performance results in software model terms
Slide 54: LQN Model for the ERS Case Study
Slide 55: Finer Service Granularity
- A: the base case; the multiplicity of all tasks and hardware devices is 1, except for the number of users. The Transferring processor is the system bottleneck.
- B: resolve the bottleneck by increasing the multiplicity of the bottleneck processor node to 4 processors. Only a slight improvement, because the next bottleneck, the middleware task MW_NA, kicks in.
- C: the software bottleneck is resolved by multi-threading MW_NA.
- D: increase the number of disk units of Disk1 to 2 and add threads to the next software bottleneck tasks, dm1 and MW_DM1. The throughput goes up by 24% with respect to case C, and the bottleneck moves to the DM1 processor.
- E: increasing the number of DM1 processors to 2 has a considerable effect.
Slide 56: Coarser Service Granularity
- A: the base case; the software task dm1 is the initial bottleneck.
- B: the software bottleneck is resolved by multi-threading dm1. The response time is reduced slightly and the bottleneck moves to Disk1.
- C: increasing the number of disk units of Disk1 to 2 has a considerable effect; the maximum throughput goes up by 60% with respect to case B. The bottleneck moves to the Transferring processor.
- D: increasing the multiplicity of the Transferring processor to 2 and adding threads to the next software bottleneck task, MW_NA, grows the throughput by 11%.
Slide 57: Coarser versus Finer Service Granularity
- Comparing the D cases of the two alternatives: the configurations are similar in number of processors, disks and threads, except that the system with coarser granularity performs fewer service invocations through the web service middleware.
Slide 58: Extending PUMA for Software Product Lines
Slide 59: Software Product Lines (SPL)
- Software Product Line (SPL) engineering takes advantage of the commonality and variability between the products of a family.
  - SPL challenges: model and manage the commonality and variability between family members, and support the generation of specific products by reusing the core family assets.
- Objective of the research:
  - integrate performance analysis into the UML-based model-driven development process for SPL
  - parametric performance annotations become part of the reusable family assets
- Why?
  - early performance analysis helps developers gain insight into the performance properties of a system during its development
  - it helps developers choose between different design alternatives early in the lifecycle, in order to build systems that will meet their performance requirements
Slide 60: Challenge
- Main research challenge: the large semantic gap between an SPL model and a performance model.
  - SPL model: a collection of "generic" core asset models, which are building blocks for many different products with all kinds of options and alternatives; it cannot be implemented, run and measured as such
  - performance model: a "model @ runtime" focusing on how a running system uses its resources in a certain operational mode, under a well-defined workload
- Proposed approach: a two-phase process for automating the derivation of a performance model for a concrete product from an annotated SPL model:
  - transform the annotated SPL model into an annotated model of the given product, including the binding of the parametric performance annotations
  - transform that model further into a performance model by known techniques (PUMA)
Slide 61: Features
- A feature is a concept for modeling variability; it represents a requirement or characteristic provided by one or more members of the product line.
- The feature model is essential for both variability management and product derivation.
- Feature models are used in this work to represent two different variability spaces:
  - regular feature model: the functional variability between products
  - Performance Completion (PC) feature model: the variability in the platform
- Mapping features to the model elements realizing them:
  - regular features: PL stereotypes indicating the feature or condition
  - PC features: MARTE performance-related stereotypes and attributes
Slide 62: Transformation Approach
[Workflow diagram. Domain engineering produces the SPL model (UML+MARTE+PL), the feature model and the PC-feature model. Application engineering, first phase (software domain): a feature configuration drives an M2M transformation that instantiates the specific product model with generic annotations; an M2M transformation generates a parameter spreadsheet; the user enters concrete values; an M2M transformation performs the binding, yielding a UML+MARTE product model with concrete annotations. Second phase (performance domain): the PUMA transformation produces an LQN performance model, the LQN solver computes the results, and performance feedback/diagnosis returns to the designers.]
Slide 63: E-commerce SPL Feature Model: FODA Notation
Slide 64: E-commerce SPL Feature Model: UML
[UML feature diagram around the «common feature» E-Commerce Kernel: «exactly-one-of» groups Customer (Business Customer / Home Customer, mutually exclusive), Catalog (Static / Dynamic) and Data Storage (Centralized / Distributed); «at-least-one-of» groups Customer Attractions (Sales, Promotions, Membership Discount), Delivery (Electronic / Shipping), Payment (CreditCard, DebitCard, Check), Invoices (On-line Display, Printed Invoice), ShippingType (Normal, Express), Customer Inquiries and International Sale (Several Languages, I/E Laws, Tariffs Calculation, Currency Conversion); optional features such as Purchase Order, Help Desk, Call Center, Switching Menu and Package Slip; dependencies expressed with «requires», «mutually includes» and «more-than-one-required».]
Slide 65: Modeling Variability in Design Models
- Variability in use case models:
  - stereotypes applied to use cases: «kernel», «optional», «alternative»
  - feature-based grouping of use cases in packages
  - variation points inside a use case: complex variations use "extend" and "include" relationships; small variations define variation points in the scenarios realizing the use case
- Variability in structural models:
  - stereotypes for classes: «kernel class», «optional class», «variant class»
- Variability in behavioural models:
  - scenario models (one for every scenario of every use case): stereotypes «kernel», «optional» or «alternative» on interaction diagrams
  - variation points defined as alternative fragments
  - variability can also be modeled with inherited and parameterized statecharts (not used in this work)
Slide 66: E-commerce SPL: Use Case Model
[Use case diagram with feature-labelled groups: kernel use cases Browse Catalog {vp=Catalog}, Process Delivery Order {vp=ShippingType} and Confirm Shipping; optional use cases Customer Inquiry {vp=Inquiries}, Customer Attractions {vp=Attractions}, Electronic, Shipping, International Sales {vp=International}, Prepare Purchase Order, Deliver Purchase Order, Confirm Delivery, Send Invoice and Bill Customer {ext point=Payment, vp=Switching Menu}; alternatives Create Requisition / Make Purchase Order {vp=DataStorage} and Check Customer Account {vp=Data Storage}, plus the payment alternatives CreditCard, DebitCard and Check; actors Customer, Supplier, Bank, Authorization Center and Wholesaler. Each group is labelled with the feature it realizes (e.g., Business Customer, Home Customer, Electronic Delivery, Shipping Delivery, Purchase Order).]
Slide 67: E-commerce SPL: Fragment of the Class Diagram
Slide 68: E-commerce SPL: Browse Catalog Scenario
[Sequence diagram sd Browse Catalog, «GaAnalysisContext» {contextParams = $N1, $Z1, $ReqT, $FSize, $Blocks}. Lifelines: «variant» CustomerInterface «PaRunTInstance» {instance=$CBrowser, host=$CustNode}; «kernel» Catalog «PaRunTInstance» {instance=$CatServer, host=$CatNode}; «optional» StaticStorage «PaRunTInstance» {instance=$DiskT, host=$DeskTNode}; «optional» ProductDB «PaRunTInstance» {instance=$ProDB, host=$ProDBNode}; «optional» ProductDisplay «PaRunTInstance» {instance=$ProDis, host=$ProDisNode}. The getList message carries «GaWorkloadEvent» {pattern=$PattBC} and «PaStep» {hostDemand=($GetLD, ms), respT=($ReqT, ms, calc)}; «PaCommStep» annotations give {msgSize=($GetL*0.2, KB), commTxOvh=($GetLSend, ms), commRcvOvh=($GetLRcv, ms)} and {msgSize=($RetL, KB)}. An alt fragment «AltDesignTime» {VP=Catalog} «SingleChoiceFeature» {RegB=True} separates the [Static] and [Dynamic] catalogInfo variants.]
Slide 69: E-commerce SPL: Bill Customer Scenario
[Sequence diagram sd Bill Customer, «GaAnalysisContext» {contextParams = $N1, $Z1, $ReqT, $FSize, $Blocks}. Lifelines: «variant» SupplierInterface {instance=$Supplier, host=$SupplierNode}; «optional» Billing {instance=$Billing, host=$BillingNode}; «kernel» DeliveryOrder {instance=$DeliOrder, host=$DeliOrdNode}; «optional» CustomerAccount {instance=$CAccount, host=$CAccountNode}; «variant» CustomerInterface {instance=$CBrowser, host=$CustNode}; «optional» DisplayMenu {instance=$DMenu, host=$DMenuNode}. An opt fragment «OptDesignTime» {VP=Switching Menu} «SingleChoiceFeature» {OptB=True} guards [Switching Menu]; three opt fragments «OptDesignTime» {VP=Payment} «MultiChoiceFeature» {AltB=True} «SingleChoiceFeature» {RegB=True} reference the Pay by DebitCard, Pay by CreditCard and Pay by Check interactions.]
Slide 70: Product Model Derivation
- Select the desired feature configuration for the product:
  - a feature configuration is a set of compatible features that uniquely characterizes a product
- The generated UML+MARTE model for a specific product contains:
  - the use case model for the specific product
  - the class diagram
  - sequence diagrams for each scenario of each use case selected for the product
- Each diagram of the generated product model is obtained from an SPL diagram by selecting only the model elements that realize the desired features (see the sketch below).
- Profile use in the generated product model:
  - only MARTE is used (still with generic parameters)
  - the PL profile has been eliminated, as the variability dependent on regular features has been resolved
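A sketch of this feature-driven pruning; the element table and the configuration are invented, and a real implementation would walk the UML repository rather than a dict:

```python
def derive_product(spl_elements, configuration):
    """Keep only the SPL model elements whose feature annotation is in
    the chosen configuration; kernel elements are always kept.

    `spl_elements` maps element name -> (PL stereotype, realized feature).
    """
    product = {}
    for elem, (stereotype, feature) in spl_elements.items():
        if stereotype == "kernel" or feature in configuration:
            product[elem] = feature      # the PL stereotype is dropped
    return product

product = derive_product(
    {"BrowseCatalog": ("kernel", None),
     "PayByCheck": ("alternative", "Check"),
     "PayByCreditCard": ("alternative", "CreditCard")},
    configuration={"CreditCard", "Home Customer"},
)
# -> keeps BrowseCatalog and PayByCreditCard, drops PayByCheck
```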
Slide 71: Feature Configuration for the Home Customer Product
[Figure: the SPL feature model pruned to the chosen configuration — E-Commerce Kernel; Home Customer; Sales; Dynamic catalog; CreditCard and DebitCard payment; Electronic and Shipping delivery; Switching Menu; On-line Display; Printed Invoice; Normal shipping.]
Slide 72: Use Case Model for the Home Customer Product
[Use case diagram of the derived product: the kernel use cases (Browse Catalog, Process Delivery Order, Confirm Shipping) and the optional/alternative use cases retained by the Home Customer configuration (e.g., Customer Inquiry, Customer Attractions, Electronic, Shipping, International Sales, Bill Customer with its payment alternatives, Confirm Delivery, Send Invoice), with actors Customer, Supplier, Bank, Authorization Center and Wholesaler.]
Slide 73: Implicit Selection of Non-Annotated Elements
- Use case - actor association: only the use cases are stereotyped; the actor and its association are selected implicitly when a use case attached to them is selected.
[Figure: an excerpt of the UML repository — ModelA: Model {id=MA} with packagedElements Customer (an Actor) and Browse: UseCase marked as selected — showing how the non-annotated elements follow the selected ones.]
Slide 74: Class Diagram (Fragment) for the Home Customer Product
Slide 75: Implicit Selection of Non-Annotated Elements
- Association between two classes: only the classes are stereotyped; the association is selected implicitly when both classes are selected.
[Figure: repository excerpt with packagedElement Payment: Class {id=CA} and its ownedAttribute association ends marked as selected.]
Slide 76: Transformation of the Browse Catalog Scenario for the Home Customer Product
[The SPL Browse Catalog sequence diagram of slide 68, with the «AltDesignTime» {VP=Catalog} fragment resolved according to the configuration: the [Dynamic] alternative is kept for this product.]
Slide 77: Generated Browse Catalog Scenario
Slide 78: Transformation of the Bill Customer Scenario
[The SPL Bill Customer sequence diagram of slide 69, with the «OptDesignTime» fragments for {VP=Switching Menu} and {VP=Payment} resolved according to the chosen feature configuration.]
Slide 79: Generated Bill Customer Scenario
Slide 80: Handling Generic Parameters
- We propose a user-friendly solution compared to an older approach:
  - previously, the binding information was given as a set of couples created manually by the developer after inspecting the generated product model
- New solution: automatically collect all the generic parameters that need binding from the generated UML+MARTE product model
  - present them to developers in a spreadsheet format, together with context and guiding information
  - developers enter the concrete binding values in the same spreadsheet
- Automatically collect the hardware resources (e.g., hosts) and present their list when the developer needs to choose a resource for software-to-hardware allocation.
- Automate the mapping between PC-features and MARTE annotations.
- The transformation performs the actual binding after reading the concrete values from the spreadsheet (see the sketch below).
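A sketch of the collect-and-bind round trip, using a CSV file to stand in for the Excel spreadsheet; the file layout and the helper names are assumptions made for the example:

```python
import csv
import re

VAR = re.compile(r"\$\w+")   # MARTE annotation variables look like $GetLD

def collect_parameters(annotations):
    """Collect all $-variables occurring in MARTE annotation strings."""
    return sorted({v for text in annotations for v in VAR.findall(text)})

def bind(annotations, spreadsheet_path):
    """Substitute concrete values, read from a two-column (parameter,
    value) CSV filled in by the developer, into the annotations;
    parameters without a value are left untouched.
    """
    with open(spreadsheet_path, newline="") as f:
        values = dict(csv.reader(f))
    return [VAR.sub(lambda m: values.get(m.group(), m.group()), a)
            for a in annotations]

params = collect_parameters(["hostDemand=($GetLD, ms)", "msgSize=($RetL, KB)"])
# -> ['$GetLD', '$RetL']; after the user fills in the spreadsheet:
# bind(annotations, "bindings.csv") -> ["hostDemand=(12.5, ms)", ...]
```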
Slide 81: Parameter Spreadsheet: Derivation and Use
[The transformation workflow of slide 62 again, highlighting the "Generate Parameter Spreadsheet" and "Perform Binding" steps between the software and performance domains.]
Slide 82: Kinds of Generic Parameters
- The generic parameters of a product model derived from the SPL model are of different kinds:
  - product-specific resource demands, such as execution times, numbers of repetitions and probabilities of different steps
  - software-to-hardware allocation, such as the allocation of component instances to processors
  - platform/environment-specific performance details (a.k.a. performance completions)
- Binding to concrete values:
  - the performance analyst needs to provide concrete values for all generic parameters
  - this transforms the generic product model into a platform-specific model describing the run-time behaviour of the product in a specific run-time environment
Slide 83: Performance Completion Feature Model
- Performance completions close the gap between the high-level design model and its different implementations by introducing details of the execution environment/platform into the product model.
[Feature diagram: secureCommunication <1-1> (secured, with SSL Protocol / TLS Protocol and securityLevel <1-1> lowSecurity / mediumSecurity / highSecurity, or unsecured); channelType <1-1> (LAN, Internet, PAN, Power-line, Wireless) with internetConnection <1-1> DSL; dataCompression <1-1> (compressed, uncompressed); externalDeviceType <1-1> (monitor, disk: CD, DVD, Hard Disk, USB); PlatformChoice <1-1> (Enterprise JavaBeans, CORBA, .NET, COM).]
Slide 84: Mapping PC-Features to MARTE

PC-feature           | Affected performance attribute        | MARTE stereotype | MARTE attribute
secureCommunication  | communication overhead                | PaCommStep       | commRcvOverhead, commTxOverhead
channelType          | channel capacity, channel latency     | GaCommHost       | capacity, blockT
dataCompression      | message size, communication overhead  | PaCommStep       | msgSize, commRcvOverhead, commTxOverhead
messageType          | communication overhead                | PaCommStep       | commTxOverhead
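The table reads directly as a lookup structure; a sketch that encodes it for use by a spreadsheet or binding generator (the dict layout is our own, the contents come from the table):

```python
# PC-feature -> (MARTE stereotype, affected MARTE attributes), per the table
PC_FEATURE_TO_MARTE = {
    "secureCommunication": ("PaCommStep", ["commRcvOverhead", "commTxOverhead"]),
    "channelType":         ("GaCommHost", ["capacity", "blockT"]),
    "dataCompression":     ("PaCommStep", ["msgSize", "commRcvOverhead",
                                           "commTxOverhead"]),
    "messageType":         ("PaCommStep", ["commTxOverhead"]),
}

stereotype, attributes = PC_FEATURE_TO_MARTE["dataCompression"]
```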
Slide 85: Generate Parameter Spreadsheet: Context
[The transformation workflow of slide 62, highlighting the "M2MT: Generate Parameter Spreadsheet" step.]
Slide 86: Generate Parameter Spreadsheet: Details
- Multi-step transformation based on: Hugo Brunelière, "ATL Transformation Example: Microsoft Office Excel Extractor", from the Eclipse ATL website.
Slide 87: Generated Spreadsheet Example
Slide 88: Message and Its Context
Slide 89: Mapping PC-Features to MARTE
Slide 90: Guidelines for Choosing Concrete Values
[Table: a guideline for each value.]
Slide 91: Spreadsheet with the User Input
Slide 92: Using the Attribute "host" for Allocation
[Example lifeline: «optional» «PaRunTInstance» {instance=$ProDis, host=$ProDisNode} : ProductDisplay]
Slide 93: Perform Binding: Transformation Context
[The transformation workflow of slide 62, highlighting the "M2MT: Perform Binding" step.]
Slide 94: Perform Binding: Details
[Workflow: M2MT (a) generates an XML model from the concrete-annotations spreadsheet; M2MT (b) generates an XML model with the required syntax; M2MT (c) generates the XML file with the required syntax; M2MT (d) performs the binding, combining the product model with generic annotations and the product deployment into the final UML+MARTE product model.]
Slide 95: Conclusions
- Integrating performance analysis within the model-driven development of service-oriented systems has many potential benefits:
  - for service consumers: how to choose the "best" services available
  - for service providers: how to design and configure their systems to optimize the use of resources and meet performance requirements
  - for software developers: analyze design and configuration alternatives, evaluate tradeoffs
  - for performance analysts: automate the generation of Pmodels from Smodels to keep them in sync, and reuse platform performance annotations
- Benefits of integrating performance analysis in the early phases of the SPL development process:
  - reusability applied to performance annotations
  - annotate the SPL model once with generic performance annotations instead of starting from scratch for every product
  - a user-friendly approach for handling a large number of generic performance annotations
Slide 96: Challenges (1)
- Human qualifications:
  - software developers are not trained in all the formalisms used for the analysis of non-functional properties (NFPs)
  - there is a need to hide the analysis from developers, yet the software models have to be annotated with extra information for each NFP
  - who interprets the analysis results and gives feedback to developers for changing the software?
- Abstraction level:
  - different NFPs may require source software models at different levels of abstraction/detail
  - how do we keep all the models consistent?
- Tool interoperability:
  - it is difficult to integrate so many different tools
  - some tools run on different platforms
Slide 97: Challenges (2)
- Integrate NFP analysis into the software development process:
  - for each NFP, explore the state space of different design alternatives, configurations, workload parameters, etc.
  - in what order should the NFPs be evaluated? is there a leading NFP?
- Impact of software model changes on the NFP analysis:
  - propagate changes throughout the transformation chain
  - incremental transformation instead of starting from scratch after every change
- A lot more to do, theoretically and practically:
  - merge performance modeling and measurement: use runtime monitoring data for better performance models; use performance models to support runtime changes (autonomic systems)
  - apply variability modeling to service-oriented systems: manage runtime changes, adapt to context
  - provide ideas and background for building better tools