User-session based Testing of Web Applications
Two Papers
• A Scalable Approach to User-session based Testing of Web Applications through Concept Analysis
  – Uses concept analysis to reduce test suite size
• An Empirical Comparison of Test Suite Reduction Techniques for User-session-based Testing of Web Applications
  – Compares concept analysis to other test suite reduction techniques
Talk Outline
• Introduction
• Background
  – User-session Testing
  – Concept Analysis
• Applying Concept Analysis
  – Incremental Reduced Test Suite Update
  – Empirical Evaluation (Incremental vs. Batch)
• Empirical Comparison of Concept Analysis to Other Test Suite Reduction Techniques
• Conclusions
Characteristics of Web-based Applications
• Short time to market
• Integration of numerous technologies
• Dynamic generation of content
• May contain millions of LOC
• Extensive use
  – Need for high reliability, continuous availability
• Significant interaction with users
• Changing user profiles
• Frequent small maintenance changes
User-session Testing
• User session
  – A collection of user requests in the form of URLs and name-value pairs
• User sessions are transformed into test cases
  – Each logged request in a user session is converted into an HTTP request that can be sent to the web server
• Previous studies of user-session testing
  – Demonstrated fault detection capability and cost effectiveness
  – Will not uncover faults associated with rarely entered data
  – Effectiveness improves as the number of sessions increases (downside: cost increases as well)
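A minimal sketch of turning a logged user session into replayable HTTP requests. The one-request-per-line log format below is a hypothetical assumption for illustration; the papers' actual capture tooling and log format are not shown on the slides.

```python
# Hypothetical log format, one request per line:
#   GET /bookstore/Default.jsp
#   POST /bookstore/Login.jsp user=alice&pass=secret
import urllib.request

def replay_session(log_path, base_url):
    """Replay each logged request in a user session against a web server."""
    responses = []
    with open(log_path) as log:
        for line in log:
            parts = line.strip().split(maxsplit=2)
            if not parts:
                continue
            method, path = parts[0], parts[1]
            data = parts[2].encode() if len(parts) > 2 else None
            req = urllib.request.Request(base_url + path,
                                         data=data if method == "POST" else None,
                                         method=method)
            with urllib.request.urlopen(req) as resp:
                responses.append((path, resp.status, resp.read()))
    return responses
```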
Contributions
• View user sessions as use cases
• Apply concept analysis for test suite reduction
• Perform incremental test suite update
• Automate the testing framework
• Evaluate cost effectiveness
  – Test suite size
  – Program coverage
  – Fault detection
Concept Analysis
• Technique for clustering objects that have common discrete attributes
• Input:
  – Set of objects O
  – Set of attributes A
  – Binary relation R
    • Relates objects to attributes
    • Implemented as a Boolean-valued table
      – A row for each object in O
      – A column for each attribute in A
    • Table entry [o, a] is true if object o has attribute a, otherwise false
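To make the input concrete, here is a minimal sketch of (O, A, R) with the relation stored as a Boolean table. The objects and attributes are illustrative toy data, not from the papers.

```python
objects = ["o1", "o2", "o3"]
attributes = ["a1", "a2", "a3"]

# R as a set of (object, attribute) pairs; the Boolean table view follows.
R = {("o1", "a1"), ("o1", "a2"), ("o2", "a2"), ("o3", "a2"), ("o3", "a3")}

# Table entry [o, a] is True exactly when (o, a) is in R.
table = {(o, a): (o, a) in R for o in objects for a in attributes}
print(table[("o1", "a1")])  # True: object o1 has attribute a1
```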
Concept Analysis (2)
• Identifies concepts given (O, A, R)
• A concept is a tuple (Oi, Aj)
  – Concepts form a partial order
• Output:
  – Concept lattice represented by a DAG
    • Node represents a concept
    • Edge denotes the partial ordering
  – Top element (⊤) = most general concept
    • Contains the attributes that are shared by all objects in O
  – Bottom element (⊥) = most special concept
    • Contains the objects that have all attributes in A
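A sketch of the two derivation operators and a naive enumeration of all concepts, reusing the toy relation above: a concept is a pair (O', A') that forms a maximal "rectangle" of True entries in the table. This brute-force closure is for illustration only; practical tools use far more efficient algorithms.

```python
from itertools import combinations

objects = ["o1", "o2", "o3"]
attributes = ["a1", "a2", "a3"]
R = {("o1", "a1"), ("o1", "a2"), ("o2", "a2"), ("o3", "a2"), ("o3", "a3")}

def common_attrs(objs):
    """Attributes shared by every object in objs."""
    return frozenset(a for a in attributes if all((o, a) in R for o in objs))

def common_objs(attrs):
    """Objects that have every attribute in attrs."""
    return frozenset(o for o in objects if all((o, a) in R for a in attrs))

def concepts():
    """Enumerate all concepts by closing every subset of objects (naive)."""
    found = set()
    for k in range(len(objects) + 1):
        for objs in combinations(objects, k):
            attrs = common_attrs(objs)
            found.add((common_objs(attrs), attrs))
    return found  # includes the top and bottom concepts
```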
Concept Analysis for Web Testing
• Binary relation table
  – User session s = object
  – URL u = attribute
  – A pair (s, u) is in the relation table if s requests u
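A sketch of building the web-testing relation: sessions become objects and URLs become attributes. The session data is illustrative only (PL, GS, GM, GB stand in for the URL names used on the slides).

```python
# Each user session maps to the set of URLs it requested.
sessions = {
    "us1": {"PL", "GS"},
    "us2": {"PL", "GM"},
    "us3": {"PL", "GS", "GB"},
}

urls = set().union(*sessions.values())            # the attribute set A
R = {(s, u) for s, reqs in sessions.items() for u in reqs}  # the relation
```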
Concept Lattice Explained
• Top node (⊤)
  – Most general concept
  – Contains the URLs that are requested by all user sessions
• Bottom node (⊥)
  – Most special concept
  – Contains the user sessions that request all URLs
• Examples:
  – Identification of common URLs requested by two user sessions
    • us3 and us4
  – Identification of user sessions that jointly request two URLs
    • PL and GS
Concept Analysis for Test Suite Reduction
• Exploit the lattice's hierarchical use-case clustering
• Heuristic
  – Identify the smallest set of user sessions that covers all URLs executed by the original suite
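The papers' heuristic keeps one user session per next-to-bottom node of the lattice. As a simplified stand-in (not the exact lattice algorithm), sessions whose URL sets are maximal with respect to set inclusion sit in concepts next to the bottom, and one representative per distinct maximal URL set still covers every URL of the original suite:

```python
def reduce_suite(sessions):
    """sessions: dict mapping session name -> set of requested URLs."""
    reduced = {}
    for name, urls in sessions.items():
        # Skip sessions strictly contained in another session's URL set.
        if any(urls < other for other in sessions.values()):
            continue
        # Keep one representative per distinct maximal URL set.
        reduced.setdefault(frozenset(urls), name)
    return set(reduced.values())
```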
Incremental Test Suite Update
Incremental Test Suite Update (2)
• Incremental algorithm by Godin et al.
  – Creates new nodes/edges
  – Modifies existing nodes/edges
• Next-to-bottom nodes may rise up in the lattice
• Existing internal nodes never sink to the bottom
• Test cases are not maintained for internal nodes
• The set of next-to-bottom nodes (user sessions) forms the test suite
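A hedged sketch of the incremental update under the same simplified maximal-set view; Godin et al.'s actual lattice-maintenance algorithm is considerably more involved. When a new session arrives, kept sessions it strictly subsumes drop out, and the new session enters only if no kept session already covers it:

```python
def update_suite(kept, new_name, new_urls):
    """kept: dict session name -> frozenset of URLs (current reduced suite)."""
    new_urls = frozenset(new_urls)
    # Discard kept sessions whose URL sets the new session strictly covers.
    kept = {n: u for n, u in kept.items() if not u < new_urls}
    # Add the new session unless an existing session already covers it.
    if not any(new_urls <= u for u in kept.values()):
        kept[new_name] = new_urls
    return kept
```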
Web Testing Framework
Empirical Evaluation
• Test suite reduction
  – Test suite size
  – Replay time
  – Oracle time
• Cost effectiveness of incremental vs. batch concept analysis
• Program coverage
• Fault detection capabilities
Experimental Setup
• Bookstore Application
  – 9,748 LOC
  – 385 methods
  – 11 classes
• JSP front end, MySQL back end
• 123 user sessions
• 40 seeded faults
Test Suite Reduction
• Metrics
  – Test suite size
  – Replay time
  – Oracle time
Incremental vs. Batch Analysis
• Metric
  – Space costs: relative sizes of the files required by the incremental and batch techniques
• Methodology
  – Batch: all 123 user sessions processed at once
  – Incremental: 100 sessions processed first, then 23 added incrementally
Program Coverage
• Metrics
  – Statement coverage
  – Method coverage
• Methodology
  – Instrumented the Java classes using Clover
  – Restored database state before each replay
  – Used Wget for replaying user sessions
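A hedged sketch of replaying one session's requests with wget, roughly in the spirit of the methodology; the exact flags and scripts used in the papers are not shown on the slides. Cookies are carried across requests so the server sees one continuous session.

```python
import subprocess

def replay_with_wget(requests, cookie_jar="session.cookies"):
    """requests: list of (url, post_data-or-None) pairs for one session."""
    open(cookie_jar, "a").close()  # ensure the cookie jar file exists
    for i, (url, post_data) in enumerate(requests):
        cmd = ["wget", "--quiet",
               "--load-cookies", cookie_jar,
               "--save-cookies", cookie_jar,
               "--keep-session-cookies",
               "-O", f"response_{i}.html"]
        if post_data is not None:
            cmd.append(f"--post-data={post_data}")
        cmd.append(url)
        subprocess.run(cmd, check=True)
```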
Fault Detection Capability
• Metric
  – Number of faults detected
• Methodology
  – Manually seeded 40 faults into separate copies of the application
  – Replayed user sessions through
    • the correct version, to generate expected output
    • each faulty version, to generate actual output
  – Diffed the expected and actual outputs
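A minimal sketch of the diff-based oracle: a replayed session detects a fault when the faulty version's output differs from the expected output produced by the correct version. The file layout is an assumption for illustration.

```python
import difflib

def detects_fault(expected_path, actual_path):
    """True if the actual output differs from the expected output."""
    with open(expected_path) as f:
        expected = f.readlines()
    with open(actual_path) as f:
        actual = f.readlines()
    # unified_diff yields nothing when the two outputs are identical.
    return any(difflib.unified_diff(expected, actual))
```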
Empirical Comparison of Test Suite Reduction Techniques
Empirical Comparison of Test Suite Reduction Techniques
• Compared 3 variations of Concept with 3 requirements-based reduction techniques
  – Random
  – Greedy
  – Harrold, Gupta, and Soffa's reduction (HGS)
• Each requirements-based reduction technique satisfies a program- or URL-based coverage criterion
  – Statement, method, conditional, URL
Random and Greedy Reduction
• Random
  – Selection continues until the reduced test suite satisfies some coverage criterion
• Greedy
  – Each subsequent test case selected provides the maximum additional coverage of some criterion
  – Example:
    • Select us6: maximum URL coverage
    • Then select us2: greatest marginal improvement toward the all-URL coverage criterion
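The greedy technique is the classic greedy set-cover strategy applied to coverage requirements (here, URLs). A minimal sketch:

```python
def greedy_reduce(sessions):
    """sessions: dict session name -> set of covered requirements."""
    uncovered = set().union(*sessions.values())
    suite = []
    while uncovered:
        # Pick the session covering the most still-uncovered requirements.
        best = max(sessions, key=lambda s: len(sessions[s] & uncovered))
        if not sessions[best] & uncovered:
            break  # remaining requirements cannot be covered
        suite.append(best)
        uncovered -= sessions[best]
    return suite
```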
HGS Reduction
• Selects a representative set from the original suite by approximating the optimal reduced set
• Requirement cardinality = number of test cases covering that requirement
• Select the most frequently occurring test case among the requirements with the lowest cardinality
• Example:
  – Consider the requirement with cardinality 1: GM
    • Select us2
  – Consider the requirements with cardinality 2: PL and GB
    • Select the test case that occurs most frequently in their union
    • us6 occurs twice; us3 and us4 once each
    • Select us6
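A simplified sketch of the core of HGS reduction; the full algorithm additionally breaks ties by examining successively higher cardinalities, which is omitted here. It assumes every requirement is covered by at least one test case.

```python
def hgs_reduce(req_map):
    """req_map: dict requirement -> set of test cases covering it."""
    unmarked = dict(req_map)
    suite = set()
    # Requirements covered by exactly one test force that test in.
    for tests in unmarked.values():
        if len(tests) == 1:
            suite |= tests
    unmarked = {r: t for r, t in unmarked.items() if not (t & suite)}
    while unmarked:
        card = min(len(t) for t in unmarked.values())
        current = [t for t in unmarked.values() if len(t) == card]
        # Pick the test occurring most often among lowest-cardinality reqs.
        counts = {}
        for tests in current:
            for t in tests:
                counts[t] = counts.get(t, 0) + 1
        chosen = max(counts, key=counts.get)
        suite.add(chosen)
        unmarked = {r: t for r, t in unmarked.items() if chosen not in t}
    return suite
```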
Empirical Evaluation
• Test suite size
• Program coverage
• Fault detection effectiveness
• Time cost
• Space cost
Experimental Setup
• Bookstore application
• Course Project Manager (CPM)
  – Create grader/group accounts
  – Assign grades, create schedules for demo time
  – Send notification emails about account creation and grade postings
Test Suite Size
• Suite Size Hypothesis
  – Larger suites than HGS and Greedy
  – Smaller suites than Random
  – More diverse in terms of use-case representation
• Results
  – Bookstore application: HGS-S, HGS-C, GRD-S, and GRD-C created larger suites
  – CPM: larger suites than HGS and Greedy, smaller than Random
Test Suite Size (2)
Program Coverage
• Coverage Hypothesis
  – Similar coverage to the original suite
  – Less coverage than suites that satisfy program-based requirements
  – Higher URL coverage than Greedy and HGS with the URL criterion
• Results
  – Program coverage comparable to (within 2% of) the PRG_REQ techniques
  – Slightly less program coverage than the original suite and Random
  – More program coverage than the URL_REQ techniques, Greedy and HGS
Program Coverage (2)
Fault Detection Effectiveness
• Fault Detection Hypothesis
  – Greater fault detection effectiveness than requirements-based techniques with the URL criterion
  – Similar fault detection effectiveness to the original suite and to requirements-based techniques with program-based criteria
• Results
  – Random PRG_REQ had the best fault detection, but a low number of faults detected per test case
  – Similar fault detection to the best PRG_REQ techniques
  – Detected more faults than HGS-U
Fault Detection Effectiveness (2)
Time and Space Costs
• Costs Hypothesis
  – Less space and time than HGS, Greedy, and Random
  – Space for the concept lattice vs. space for the requirement mappings
• Results
  – Costs considerably less than the PRG_REQ techniques
  – Collecting coverage information for each session is the clear bottleneck of the requirements-based approaches
Conclusions
• Problems with Greedy and Random reduction
  – Non-determinism
  – Generated suites with a wide range in size, coverage, and fault detection effectiveness
• Test suite reduction based on concept-analysis clustering of user sessions
  – Achieves a large reduction in test suite size
  – Saves oracle and replay time
  – Preserves program coverage
  – Preserves fault detection effectiveness
    • Chooses test cases based on use-case representation
• Incremental test suite reduction/update
  – A scalable approach to user-session-based testing of web applications
  – Necessary for web applications that undergo constant maintenance, evolution, and usage changes
References
• Sreedevi Sampath, Valentin Mihaylov, Amie Souter, and Lori Pollock, "A Scalable Approach to User-session based Testing of Web Applications through Concept Analysis," Automated Software Engineering Conference (ASE), September 2004.
• Sara Sprenkle, Sreedevi Sampath, Emily Gibson, Amie Souter, and Lori Pollock, "An Empirical Comparison of Test Suite Reduction Techniques for User-session-based Testing of Web Applications," International Conference on Software Maintenance (ICSM), September 2005.