Advanced Software Engineering: Software Testing
COMP 3702 (L2)
Anneliese Andrews, Sada Narayanappa, Thomas Thelin
News & Project
¨News
¨ Updated course program
¨ Reading instructions
¨ The book, deadline 23/3
¨Project Option 1
¨ IMPORTANT to read the project description thoroughly
¨ Schedule, deadlines, activities
¨ Requirements (7-10 papers), project areas
¨ Report, template, presentation
¨Project Option 2: date!
Lecture
¨Some more testing fundamentals
¨Chapter 4 (Lab 1)
¨ Black-box testing techniques
¨Chapter 12 (Lab 2)
¨ Statistical testing
¨ Usage modelling
¨ Reliability

Terminology
¨ Unit testing: testing a procedure, function, or class.
¨ Integration testing: testing the connections between units and components.
¨ System testing: testing the entire system.
¨ Acceptance testing: testing to decide whether to purchase the software.

Terminology (2)
¨Alpha testing: system testing by a user group within the developing organization.
¨Beta testing: system testing by select customers.
¨Regression testing: retesting after a software modification.

Test Scaffolding
Allows us to test incomplete systems.
¨ Test drivers: test components.
¨ Stubs: test a system when some components it uses are not yet implemented. Often a short, dummy program --- a method with an empty body.
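To make the driver/stub distinction concrete, here is a minimal Python sketch; the billing and tax-service names are invented for illustration. A stub stands in for the missing component, and a driver exercises the unit under test.

```python
# A minimal sketch of test scaffolding (all names hypothetical):
# a `billing` unit depends on a not-yet-implemented tax component.

def tax_service_stub(amount):
    """Stub: stands in for the unimplemented tax component.
    Returns a fixed, known value so the caller can be tested."""
    return 0.0

def compute_invoice(amount, tax_service):
    """Unit under test: depends on the tax component via injection."""
    return amount + tax_service(amount)

def test_compute_invoice():
    """Test driver: exercises the unit with the stub in place."""
    assert compute_invoice(100.0, tax_service_stub) == 100.0

test_compute_invoice()
```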
Test Oracles
¨ Determine whether a test run completed with or without errors.
¨ Often a person, who monitors output.
¨ Not a reliable method.
¨ Automatic oracles check output using another program.
¨ Requires some kind of executable specification.

Testing Strategies: Black Box Testing
¨Test data derived solely from specifications. Also called “functional testing”.
¨Statistical testing. Used for reliability measurement and prediction.

Testing Theory: Why Is Testing So Difficult?
¨Theory often tells us what we can’t do.
¨Testing theory main result: perfect testing is impossible.

An Abstract View of Testing
¨ Let program P be a function with an input domain D (e.g., the set of all integers).
¨ We seek test data T, which will include selected inputs of type D.
¨ T is a subset of D.
¨ T must be of finite size. Why?

We Need a Test Oracle
¨ Assume the best possible oracle --- the specification S, which is a function with input domain D.
¨ On a single test input i, our program passes the test when P(i) = S(i), or, if we think of the spec as a Boolean function that compares the input to the output: S(i, P(i)).
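As a minimal illustration of an automatic oracle, the sketch below compares a program under test against an executable specification; the integer square-root example is our own, not from the slides.

```python
# A minimal sketch of an automatic test oracle, assuming we have an
# executable specification `spec` for the program under test `prog`.
def oracle(prog, spec, test_inputs):
    """Return the inputs on which the program disagrees with the spec."""
    return [i for i in test_inputs if prog(i) != spec(i)]  # pass iff P(i) = S(i)

# Example: an integer square-root implementation (hypothetical) checked
# against a slow but trusted executable specification.
def prog(n):
    return int(n ** 0.5)

def spec(n):
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

print(oracle(prog, spec, range(100)))  # inputs where P(i) != S(i)
```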
Requirement For Perfect Testing [Howden 76]
1. If all of our tests pass, then the program is correct.
∀x[x ∈ T → P(x) = S(x)] ⟹ ∀y[y ∈ D → P(y) = S(y)]
¨ If for all tests t in test set T, P(t) = S(t), then we are sure that the program will work correctly for all elements in D.
¨ If any tests fail, we look for a bug.
Requirement For Perfect Testing
2. We can tell whether the program will eventually halt and give a result for any t in our test set T.
∀x[x ∈ T → ∃ a computable procedure for determining if P halts on input x]
But, Both Requirements Are Impossible to Satisfy
¨ The 1st requirement can be satisfied only if T = D, i.e., we test all elements of the input domain.
¨ The 2nd requirement depends on a solution to the halting problem, which has no solution.
We can demonstrate the problem with Requirement 1 [Howden 78].
Other Undecidable Testing Problems
¨Is a control path feasible? Can I find data to execute a program control path?
¨Is some specified code reachable by any input data?
These questions cannot, in general, be answered.

Software Testing Limitations
¨There is no perfect software testing.
¨Testing can show defects, but can never show correctness.
We may never find all of the program errors during testing.
Why test techniques?
¨Exhaustive testing (use of all possible inputs and conditions) is impractical
¨ must use a subset of all possible test cases
¨ want a subset with a high probability of detecting faults
¨Need processes that help us select test cases
¨ Different people – equal probability to detect faults
¨Effective testing – detect more faults
¨ Focus attention on specific types of fault
¨ Know you’re testing the right thing
¨Efficient testing – detect faults with less effort
¨ Avoid duplication
¨ Systematic techniques are measurable
Dimensions of testing
¨Testing combines techniques that focus on
¨ Testers – who does the testing
¨ Coverage – what gets tested
¨ Potential problems – why you're testing (risks / quality)
¨ Activities – how you test
¨ Evaluation – how to tell whether the test passed or failed
¨All testing should involve all five dimensions
¨ Testing standards (e.g. IEEE)
Black-box testing
Equivalence partitioning
Partitioning is based on input conditions, e.g.: mouse picks on menu, user queries, numerical data, output format requests, responses to prompts, command key input.
Equivalence partitioning
Input condition:
¨is a range
¨ one valid and two invalid classes are defined
¨requires a specific value
¨ one valid and two invalid classes are defined
¨is a boolean
¨ one valid and one invalid class are defined

Test Cases
¨Which test cases have the best chance of uncovering faults?
¨ as near to the mid-point of the partition as possible
¨ the boundaries of the partition
¨Mid-point of a partition typically represents the “typical values”
¨Boundary values represent the atypical or unusual values
¨Usually identify equivalence partitions based on specs and experience

Equivalence Partitioning Example
¨Consider a system specification which states that a program will accept between 4 and 10 input values (inclusive), where the input values must be 5-digit integers greater than or equal to 10000
¨What are the equivalence partitions?

Example Equivalence Partitions
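Since the partition figure itself is not reproduced here, the following Python sketch spells out one plausible set of partitions for this spec, with one test case per class; the partition names and representative values are our own choices.

```python
# A sketch of equivalence partitioning for the example spec:
# 4-10 input values, each a 5-digit integer >= 10000 (so 10000-99999).

partitions = {
    # number-of-values condition (a range: one valid, two invalid classes)
    "too few values (< 4)":        [10000] * 3,
    "valid count (4-10)":          [10000] * 7,
    "too many values (> 10)":      [10000] * 11,
    # value condition (a range: one valid, two invalid classes)
    "value below range (< 10000)": [9999] * 5,
    "valid values (10000-99999)":  [50000] * 5,
    "value above range (> 99999)": [100000] * 5,
}

def accepts(values):
    """Hypothetical implementation of the specification."""
    return 4 <= len(values) <= 10 and all(10000 <= v <= 99999 for v in values)

for name, test in partitions.items():
    print(f"{name}: accepted={accepts(test)}")
```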
Boundary value analysis
Based on input conditions and the output domain: mouse picks on menu, user queries, numerical data, output format requests, responses to prompts, command key input.

Boundary value analysis
¨Range a..b: test a, b, just above a, just below b
¨Number of values: test max, min, just below min, just above max
¨Output bounds should be checked
¨Boundaries of externally visible data structures shall be checked (e.g. arrays)
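A small sketch of the rules above, assuming we test each boundary plus the values just inside and just outside it:

```python
# A sketch of boundary value analysis for a range a..b: the boundaries
# themselves plus the values just inside and just outside them.
def boundary_values(a, b, step=1):
    return [a - step, a, a + step, b - step, b, b + step]

# Applied to the earlier example spec:
print(boundary_values(10000, 99999))  # values condition
print(boundary_values(4, 10))         # number-of-values condition
```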
Some other black-box techniques
¨Risk-based testing, random testing
¨Stress testing, performance testing
¨Cause-and-effect graphing
¨State-transition testing

Black Box Testing: Random Testing
¨Generate tests randomly.
¨“Poorest methodology of all” [Meyers].
¨Promoted by others.
¨Statistical testing:
¨ Test inputs from an operational profile.
¨ Based on reliability theory.
¨ Adds discipline to random testing.
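A minimal sketch of drawing test inputs from an operational profile; the operations and probabilities below are invented for illustration.

```python
# A sketch of statistical testing: test inputs drawn at random from an
# operational profile (operation names and probabilities are invented).
import random

operational_profile = {
    "open_file":  0.50,   # probabilities reflect expected field usage
    "save_file":  0.30,
    "print_doc":  0.15,
    "export_pdf": 0.05,
}

def sample_operations(n, profile, seed=0):
    rng = random.Random(seed)
    ops, weights = zip(*profile.items())
    return rng.choices(ops, weights=weights, k=n)

print(sample_operations(10, operational_profile))
```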
Black Box Testing: Cause-Effect Analysis
¨Rely on pre-conditions and post-conditions and dream up cases.
¨Identify impossible combinations.
¨Construct a decision table between input and output conditions. Each column corresponds to a test case.
Error guessing
¨Exploratory testing, happy testing, ...
¨Always worth including
¨Can detect some failures that systematic techniques miss
¨Consider
¨ Past failures (fault models)
¨ Intuition
¨ Experience
¨ Brainstorming
¨ “What is the craziest thing we can do?”
¨ Lists in literature
Black Box Testing: Error Guessing
¨“Some people have a knack for ‘smelling out’ errors” [Meyers].
¨Enumerate a list of possible errors or error-prone situations.
¨Write test cases based on the list.
Depends upon having fault models: theories on the causes and effects of program faults.
Usability testing
Characteristics:
¨ Accessibility
¨ Responsiveness
¨ Efficiency
¨ Comprehensibility
Environments:
¨ Free-form tasks
¨ Procedure scripts
¨ Paper screens
¨ Mock-ups
¨ Field trial
Specification-based testing
¨Formal method
¨Test cases derived from a (formal) specification (requirements or design)
Flow: Specification → Model (state chart) → Test case generation → Test execution
Model-based Testing
[Figure: V-model. Development phases Requirements, Specification, Top-level Design, Detailed Design, and Coding map to test phases (validation, integration, unit test); a usage model supports the validation test phase.]
Need For Test Models
¨ Testing is a search problem.
¨ Search for specific input & state combinations that cause failures.
¨ These combinations are rare.
¨ Brute force cannot be effective. Brute force can actually lead to overconfidence.
¨ Must target & test specific combinations.
¨ Targets based on fault models.
¨ Testing is automated to ensure repeatable coverage of targets: “target coverage”.
Model-Based Coverage
¨ We cannot enumerate all state/input combinations for the implementation under test (IUT).
¨ We can enumerate these combinations for a model.
¨ Models allow automated target testing.
“Automated testing replaces the tester’s slingshot with a machine gun.”
“The model paints the target & casts the bullets.”
Test Model Elements
¨ Subject: the IUT.
¨ Perspective: focus on aspects likely to be buggy based on a fault model.
¨ Representation: often graphs (one format is UML) or checklists.
¨ Technique: method for developing a model and generating tests to cover model targets.
The Role of Models in Testing
¨ Model validation: does it model the right thing?
¨ Verification: is the implementation correct? Informal: checklist. Formal: proof.
¨ Consistency checking: is the representation an instance of the meta-model?
¨ Meta-model: UML, graphs, etc. + technique
¨ Instance model: the representation constructed
Models’ Roles in Testing
¨ Responsibility-based testing. Does behavior conform to the model representation?
¨ Implementation-based testing. Does behavior conform to a model of the implementation?
¨ Product validation. Does behavior conform to a requirements model, for example, Use Case models?
Models That Support Testability
¨Represent all features to be tested.
¨Limit detail to reduce testing costs, while preserving essential detail.
¨Represent all state events so that we can generate them.
¨Represent all states & state actions so that we can observe them.
Model Types
¨Combinational models:
¨ Model combinations of input conditions.
¨ Based on decision tables.
¨State Machines:
¨ Output depends upon current & past inputs.
¨ Based on finite state machines.
¨UML models: model OO structures.
Combinational Model – Spin Lock [Binder Fig. 6.1]
[Figure: a spin-lock dial with the digits 0–9]
Combinational Model: Use a Decision Table
¨One of several responses is selected based on distinct input variables.
¨Cases can be modeled as mutually exclusive Boolean expressions of input variables.
¨Response does not depend upon prior input or output.

Decision Table Construction (Combinational Models)
1. Identify decision variables & conditions.
2. Identify resultant actions to be selected.
3. Identify the actions to be produced in response to condition combinations.
4. Derive the logic function.

Combinational Models: Auto Insurance Model
¨ Renewal decision table: Table 6.1.
¨ Column-wise format: Table 6.2.
¨ Decision tree: Fig. 6.2.
¨ Truth table format: Fig. 6.3.
¨ Tractability note:
¨ A decision table with n conditions has a maximum of 2^n variants.
¨ Some variants are implausible.
Insurance Renewal Decision Table [Binder Table 6.1]

Variant | # Claims  | Age | Premium Incr. | Send Warning | Cancel
1       | 0         | <26 | 50            | no           | no
2       | 0         | >25 | 25            | no           | no
3       | 1         | <26 | 100           | yes          | no
4       | 1         | >25 | 50            | no           | no
5       | 2 to 4    | <26 | 400           | yes          | no
6       | 2 to 4    | >25 | 200           | yes          | no
7       | 5 or more | Any | 0             | no           | yes
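The decision table translates directly into code. Below is a Python sketch of the renewal logic with one test per explicit variant; the variant-7 actions (no warning, cancel) follow our reading of the reconstructed table.

```python
# A sketch of the insurance renewal decision table (Binder Table 6.1).
# Each variant maps a (claims, age) condition to
# (premium increase, send warning, cancel).
def renewal_action(claims, age):
    if claims == 0:
        return (50, False, False) if age < 26 else (25, False, False)
    if claims == 1:
        return (100, True, False) if age < 26 else (50, False, False)
    if 2 <= claims <= 4:
        return (400, True, False) if age < 26 else (200, True, False)
    # 5 or more claims: cancel regardless of age
    return (0, False, True)

# One test per explicit variant (the "all explicit variants" strategy):
assert renewal_action(0, 25) == (50, False, False)   # variant 1
assert renewal_action(0, 26) == (25, False, False)   # variant 2
assert renewal_action(1, 25) == (100, True, False)   # variant 3
assert renewal_action(1, 26) == (50, False, False)   # variant 4
assert renewal_action(3, 25) == (400, True, False)   # variant 5
assert renewal_action(3, 26) == (200, True, False)   # variant 6
assert renewal_action(7, 40) == (0, False, True)     # variant 7
```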
Insurance Renewal Decision Table, Column-wise Format [Binder Table 6.2]
The same seven variants as above, with the conditions (number of claims, insured age) and actions (premium increase, send warning, cancel) as rows and variants 1–7 as columns; the age conditions are written out as “25 or younger” / “26 or older”.
Implicit Variants
Variants that can be inferred, but are not given.
¨ In the insurance example, we don’t care about age if there are five or more claims. The action is to cancel for any age.
¨ Other implicit variants are those that cannot happen --- one cannot be both 25-or-younger and 26-or-older.
Test Strategies
All explicit variants at least once:
– works when decision boundaries are systematically exercised
– weak if “can’t happen” conditions or undefined boundaries result in implicit variants
Non-binary Variable Domain Analysis
¨Exactly 0 claims, age 16-25
¨Exactly 0 claims, age 26-85
¨Exactly 1 claim, age 16-25
¨Exactly 1 claim, age 26-85
¨Two, 3, or 4 claims, age 16-25
¨Two, 3, or 4 claims, age 26-85
¨Five to 10 claims, age 16-85
Test Cases
[Table: domain test matrix. For Number of Claims (variant-1 boundary == 0): on point 0, off point above 1, off point below -1, typical 0. For Insured Age (>= 16, <= 25): on points 16 and 25, off points 15 and 26, typical 20. Each test case holds one variable at a boundary point and the other at a typical value; expected results give accept/reject and the premium increase (50, 100, 25), send-warning, and cancel actions per variant.]
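A sketch of how such domain test points could be generated, using the boundary values from the reconstructed matrix above; the pairing strategy (one variable at a boundary while the other stays typical) is an assumption.

```python
# A sketch of domain-analysis test points: for each boundary an "on"
# point, an "off" point, and a typical in-bounds value for the other
# variable (specific values follow the reconstructed table above).
claims_points = {"on": 0, "off_above": 1, "off_below": -1, "typical": 0}
age_points    = {"on_low": 16, "off_low": 15, "on_high": 25,
                 "off_high": 26, "typical": 20}

# Hold age typical while probing claims boundaries, and vice versa:
test_cases = sorted(set(
    [(c, age_points["typical"]) for c in claims_points.values()] +
    [(claims_points["typical"], a) for a in age_points.values()]
))
print(test_cases)
```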
Additional Heuristics
¨Vary order of input variables
¨Scramble test order
¨Purposely corrupt inputs
Statistical testing / Usage-based testing
Flow: Usage specification → Test case generation → Test execution → Failure logging → Reliability estimation
Usage specification models
¨Algorithmic models: grammar model
¨Domain-based models: operational profile, Markov model
Operational profiles
[Figures: example operational profiles]
Statistical testing / Usage-based testing
Flow: Usage model → random sample of test cases → execution against the code
Usage Modelling
[State diagram: Invoke → Main Window; Right-click → Dialog Box; “Click on OK with non-valid hour” stays in the Dialog Box; “CANCEL or OK with valid hour” returns to Main Window; Move and Resize loop on Main Window; Close Window → Terminate]
¨Each transition corresponds to an external event
¨Probabilities are set according to the future use of the system
¨Reliability prediction
Markov model
¨ System states, seen as nodes
¨ Probabilities of transitions
Conditions for a Markov model:
¨ Probabilities are constants
¨ No memory of past states

Transition matrix (from node / to node):

From | N1  | N2  | N3  | N4
N1   |     | P12 | P13 | P14
N2   | P21 | P22 | 0   | P24
N3   | P31 | 0   | P33 | P34
N4   | P41 | 0   | 0   | P44
Model of a program
¨ The program is seen as a graph
¨ One entry node (invoke) and one exit node (terminate)
¨ Every transition from node Ni to node Nj has a probability Pij
¨ If there is no connection between Ni and Nj, then Pij = 0
[Figure: graph with Input → N1, transitions P12, P13, P14, P21, P24, P31, P34 among nodes N1–N4, and N4 → Output]
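A minimal sketch of generating one test case as a random walk through such a model; the node names and probabilities are illustrative, and the check mirrors the Markov-chain rules (exit probabilities sum to 1).

```python
# A sketch of generating a random test case (a usage path) from a Markov
# usage model; node names and probabilities are illustrative only.
import random

# transition matrix: P[i][j] = probability of moving from node i to node j
P = {
    "Input":  {"N1": 1.0},
    "N1":     {"N2": 0.5, "N3": 0.3, "N4": 0.2},
    "N2":     {"N1": 0.4, "N4": 0.6},
    "N3":     {"N1": 0.3, "N4": 0.7},
    "N4":     {"Output": 1.0},
    "Output": {},
}

# Markov-chain rule check: exit probabilities from each state sum to 1
for node, arcs in P.items():
    assert not arcs or abs(sum(arcs.values()) - 1.0) < 1e-9, node

def sample_path(P, start="Input", end="Output", seed=0):
    rng = random.Random(seed)
    path, node = [start], start
    while node != end:
        node = rng.choices(list(P[node]), weights=list(P[node].values()))[0]
        path.append(node)
    return path

print(sample_path(P))  # e.g. ['Input', 'N1', 'N2', 'N4', 'Output']
```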
Clock Software

Input Domain – Subpopulations
¨Human users – keystrokes, mouse clicks
¨System clock – time/date input
¨Combination usage – time/date changes from the OS while the clock is executing
¨Create one Markov chain to model the input from the user

Operation modes of the clock
¨Window = {main window, change window, info window}
¨Setting = {analog, digital}
¨Display = {all, clock only}
¨Cursor = {time, date, none}
State of the system
¨A state of the system under test is an element of the set S, where S is the cross product of the operational modes.
¨States of the clock:
{main window, analog, all, none}
{main window, analog, clock-only, none}
{main window, digital, all, none}
{main window, digital, clock-only, none}
{change window, analog, all, time}
{change window, analog, all, date}
{change window, digital, all, time}
{change window, digital, all, date}
{info window, analog, all, none}
{info window, digital, all, none}
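The ten states can be derived mechanically as a filtered cross product. The constraints encoded below are our reading of the slide's list:

```python
# A sketch of building the state space as the cross product of the
# operational modes, then filtering out combinations that cannot occur.
from itertools import product

windows  = ["main window", "change window", "info window"]
settings = ["analog", "digital"]
displays = ["all", "clock only"]
cursors  = ["time", "date", "none"]

def valid(window, setting, display, cursor):
    if window == "main window":     # main window: no cursor
        return cursor == "none"
    if window == "change window":   # change window: full display, cursor set
        return display == "all" and cursor in ("time", "date")
    if window == "info window":     # info window: full display, no cursor
        return display == "all" and cursor == "none"

states = [s for s in product(windows, settings, displays, cursors) if valid(*s)]
print(len(states))   # 10, matching the slide's list
```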
Top Level Markov Chain
The Window operational mode is chosen as the primary modeling mode.
Rules for Markov chains:
¨ Each arc is assigned a probability between 0 and 1 inclusive.
¨ The sum of the exit arc probabilities from each state is exactly 1.
Top Level Model – Data Dictionary
¨ invoke: Invoke the clock software. The main window is displayed in full. Tester should verify window appearance and setting, and that it accepts no illegal input options.
¨ change: Select the “Change Time/Date...” item from the “Options” menu. All window features must be displayed in order to execute this command. The change window should appear and be given the focus. Tester should verify window appearance and modality and ensure that it accepts no illegal input options.
¨ info: Select the “Info...” item from the “Options” menu. The title bar must be on to apply this input. The info window should appear and be given the focus. Tester should verify window appearance and modality and ensure that it accepts no illegal input options.
¨ exit: Select the “Exit” option from the “Options” menu. The software will terminate; end of test case.
¨ end: Choose any action and return to the main window. The change window will disappear and the main window will be given the focus.
¨ ok: Press the OK button on the info window. The info window will disappear and the main window will be given the focus.
Level 2 Markov Chain Submodel for the Main Window
Data Dictionary – Level 2
¨ invoke: Invoke the clock software. Main window displayed in full. Invocation may require that the software be calibrated by issuing either an options.analog or an options.digital input. Tester should verify window appearance and setting, and ensure that it accepts no illegal input options.
¨ change: Select the “Change Time/Date...” item from the “Options” menu. All window features must be displayed in order to execute this command. The change window should appear and be given the focus. Tester should verify window appearance and modality and ensure that it accepts no illegal input options.
¨ info: Select the “Info...” item from the “Options” menu. The title bar must be on to apply this input. The info window should appear and be given the focus. Tester should verify window appearance and modality and ensure that it accepts no illegal input options.
¨ exit: Select the “Exit” option from the “Options” menu. The software will terminate; end of test case.
¨ end: Choose any action (cancel or change the time/date) and return to the main window. The change window will disappear and the main window will be given the focus. Note: this action may require that the software be calibrated by issuing either an options.analog or an options.digital input.
Data Dictionary – Level 2 (continued)
¨ ok: Press the OK button on the info window. The info window will disappear and the main window will be given the focus. Note: this action may require that the software be calibrated by issuing either an options.analog or an options.digital input.
¨ options.analog: Select the “Analog” item from the “Options” menu. The digital display should be replaced by an analog display.
¨ options.digital: Select the “Digital” item from the “Options” menu. The analog display should be replaced by a digital display.
¨ options.clockonly: Select the “Clock Only” item from the “Options” menu. The clock window should be replaced by a display containing only the face of the clock, without a title, menu or border.
¨ options.seconds: Select the “Seconds” item from the “Options” menu. The second hand/counter should be toggled either on or off depending on its current status.
¨ options.date: Select the “Date” item from the “Options” menu. The date should be toggled either on or off depending on its current status.
¨ double-click: Double click, using the left mouse button, on the face of the clock. The clock face should be replaced by the entire clock window.
Level 2 Markov Chain Submodel for the Change Window
Data Dictionary – Change Window
¨ options.change: Select the “Change Time/Date...” item from the “Options” menu. All window features must be displayed in order to execute this command. The change window should appear and be given the focus. Tester should verify window appearance and modality and ensure that it accepts no illegal input.
¨ end: Choose either the “Ok” button or hit the cancel icon and return to the main window. The change window will disappear and the main window will be given the focus. Note: this action may require that the software be calibrated by issuing either an options.analog or an options.digital input.
¨ move: Hit the tab key to move the cursor to the other input field, or use the mouse to select the other field. Tester should verify cursor movement and also verify both options for moving the cursor.
¨ edit time: Change the time in the “new time” field or enter an invalid time. The valid input format is shown on the screen.
¨ edit date: Change the date in the “new date” field or enter an invalid date. The valid input format is shown on the screen.
Software Reliability
¨Techniques
¨ Markov models
¨ Reliability growth models

Dimensions of dependability
Costs of increasing dependability
[Figure: cost rises steeply as dependability moves from low through medium, high, and very high to ultra-high]
Availability and reliability
¨Reliability
¨ The probability of failure-free system operation over a specified time in a given environment for a given purpose
¨Availability
¨ The probability that a system, at a point in time, will be operational and able to deliver the requested services
Both of these attributes can be expressed quantitatively.
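A small numeric sketch, assuming the conventional formulas: availability as MTTF/(MTTF + MTTR) and reliability under a constant failure rate. The numbers are illustrative, not from the slides.

```python
# A sketch relating the two quantitative attributes, assuming the usual
# definitions: availability from MTTF/MTTR, reliability from a constant
# failure rate (exponential model).
import math

mttf = 1000.0   # mean time to failure (hours) -- illustrative
mttr = 2.0      # mean time to repair (hours)  -- illustrative

availability = mttf / (mttf + mttr)
failure_rate = 1.0 / mttf
reliability_100h = math.exp(-failure_rate * 100.0)  # P(no failure in 100 h)

print(f"availability = {availability:.4f}")
print(f"R(100 h)     = {reliability_100h:.4f}")
```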
Reliability terminology

Usage profiles / Reliability
Removing X% of the faults in a system will not necessarily improve the reliability by X%!

Reliability achievement
¨Fault avoidance
¨ Minimise the possibility of mistakes
¨ Trap mistakes
¨Fault detection and removal
¨ Increase the probability of detecting and correcting faults
¨Fault tolerance
¨ Run-time techniques

Reliability quantities
¨Execution time
¨ is the CPU time that is actually spent by the computer in executing the software
¨Calendar time
¨ is the time people normally experience in terms of years, months, weeks, etc.
¨Clock time
¨ is the elapsed time from start to end of computer execution in running the software

Reliability metrics
Nonhomogeneous Poisson Process (NHPP) Models
¨N(t) follows a Poisson distribution. The probability that N(t) is a given integer n is:
P(N(t) = n) = \frac{[m(t)]^n}{n!} e^{-m(t)}, \quad n = 0, 1, 2, \ldots
¨m(t) = E[N(t)] is called the mean value function; it describes the expected cumulative number of failures in [0, t)
The Goel-Okumoto (GO) model
Assumptions:
¨The cumulative number of failures detected at time t follows a Poisson distribution
¨All failures are independent and have the same chance of being detected
¨All detected faults are removed immediately and no new faults are introduced
¨The failure process is modelled by an NHPP model with mean value function m(t) given by:
m(t) = a(1 - e^{-bt}), \quad a > 0, \; b > 0
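A sketch evaluating the GO model's quantities, with illustrative (not fitted) parameters a and b; the intensity λ(t) = m′(t) = ab·e^(−bt) and the NHPP probability follow the formulas above.

```python
# A sketch of the GO model's quantities, assuming parameters a (expected
# total number of faults) and b (per-fault detection rate) are known.
import math

a, b = 100.0, 0.02   # hypothetical parameters, for illustration only

def m(t):
    """Mean value function: expected failures in [0, t)."""
    return a * (1.0 - math.exp(-b * t))

def lam(t):
    """Intensity function: lambda(t) = m'(t) = a*b*e^(-bt)."""
    return a * b * math.exp(-b * t)

def p_failures(n, t):
    """NHPP: P(N(t) = n) = m(t)^n * e^(-m(t)) / n!."""
    return m(t) ** n * math.exp(-m(t)) / math.factorial(n)

print(m(100.0), lam(100.0), p_failures(5, 10.0))
```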
Goel-Okumoto
[Figure: the shape of the mean value function m(t) and the intensity function λ(t) = ab e^{-bt} of the GO model, plotted against t]
S-shaped NHPP model
¨m(t) = a[1 - (1 + bt)e^{-bt}], \quad b > 0
[Figure: the S-shaped mean value function plotted against t]
The Jelinski-Moranda (JM) model
Assumptions:
1. Times between failures are independent, exponentially distributed random quantities
2. The number of initial faults is an unknown but fixed constant
3. A detected fault is removed immediately and no new fault is introduced
4. All remaining faults contribute the same amount to the software failure intensity
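A sketch simulating the JM model under assumptions 1–4: with N initial faults and per-fault intensity φ, the failure intensity before the i-th failure is λ_i = φ(N − (i − 1)). The parameter values are illustrative.

```python
# A sketch of the Jelinski-Moranda model: with N initial faults and
# per-fault intensity phi, times between failures are exponential with
# rate lambda_i = phi * (N - (i - 1)).
import random

def simulate_jm(N=20, phi=0.01, seed=0):
    rng = random.Random(seed)
    times = []
    for i in range(1, N + 1):
        rate = phi * (N - (i - 1))     # intensity drops as faults are removed
        times.append(rng.expovariate(rate))
    return times                       # inter-failure times t_1..t_N

print(simulate_jm()[:5])
```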
Next week
¨Next week (April 11)
¨ Lab 1 – Black-box testing