
Advanced Software Engineering: Software Testing
COMP 3702 (L2)
Anneliese Andrews, Sada Narayanappa, Thomas Thelin

News & Project
• News
  - Updated course program
  - Reading instructions
  - The book, deadline 23/3
• Project Option 1
  - IMPORTANT to read the project description thoroughly
  - Schedule, deadlines, activities
  - Requirements (7-10 papers), project areas
  - Report, template, presentation
• Project Option 2: date!

Lecture
• Some more testing fundamentals
• Chapter 4 (Lab 1)
  - Black-box testing techniques
• Chapter 12 (Lab 2)
  - Statistical testing
  - Usage modelling
  - Reliability

Terminology
• Unit testing: testing a procedure, function, or class.
• Integration testing: testing the connections between units and components.
• System testing: testing the entire system.
• Acceptance testing: testing to decide whether to purchase the software.

Terminology (2)
• Alpha testing: system testing by a user group within the developing organization.
• Beta testing: system testing by select customers.
• Regression testing: retesting after a software modification.

Test Scaffolding
Allows us to test incomplete systems.
• Test drivers: test components.
• Stubs: let us test a system when some components it uses are not yet implemented. Often a short, dummy program, e.g. a method with an empty body.
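
A minimal sketch of both scaffolding pieces (all names here are invented for illustration, not from the slides): the driver exercises the unit under test, while a stub stands in for a collaborator that does not exist yet.

```python
# Hypothetical scaffolding sketch: a driver exercising a unit whose
# collaborator (a tax-rate service) is not yet implemented.

def tax_rate_stub(region):
    """Stub: stands in for the real tax service; returns a fixed dummy value."""
    return 0.25

def net_price(gross, region, tax_rate=tax_rate_stub):
    """Unit under test: computes the price after tax using a rate provider."""
    return gross * (1.0 - tax_rate(region))

def driver():
    """Test driver: calls the unit with chosen inputs and checks the result."""
    assert net_price(100.0, "EU") == 75.0
    print("net_price test passed")

if __name__ == "__main__":
    driver()
```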

Test Oracles
• Determine whether a test run completed with or without errors.
• Often a person who monitors output. Not a reliable method.
• Automatic oracles check output using another program. Requires some kind of executable specification.

Testing Strategies: Black Box Testing
• Test data derived solely from specifications. Also called "functional testing".
• Statistical testing: used for reliability measurement and prediction.

Testing Theory: Why Is Testing So Difficult?
• Theory often tells us what we can't do.
• Testing theory's main result: perfect testing is impossible.

An Abstract View of Testing
• Let program P be a function with an input domain D (e.g., the set of all integers).
• We seek test data T, which will include selected inputs from D.
• T is a subset of D.
• T must be of finite size. Why?

We Need a Test Oracle
• Assume the best possible oracle: the specification S, which is a function with input domain D.
• On a single test input i, our program passes the test when P(i) = S(i); or, if we think of the spec as a Boolean function that compares the input to the output, when S(i, P(i)) holds.
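
A sketch of an automatic oracle under these definitions: S is a trusted executable specification, P the implementation under test, and the oracle mechanically checks P(i) = S(i). The sorting example is invented for illustration.

```python
# Automatic oracle sketch: because the specification S is executable,
# for any test input i we can check P(i) == S(i) mechanically.

def spec_sort(xs):          # S: executable specification (trusted, may be slow)
    return sorted(xs)

def program_sort(xs):       # P: implementation under test (bubble sort)
    result = list(xs)
    for i in range(len(result)):
        for j in range(len(result) - 1 - i):
            if result[j] > result[j + 1]:
                result[j], result[j + 1] = result[j + 1], result[j]
    return result

def oracle(i):
    """Boolean-spec view: S(i, P(i)) holds iff the outputs agree."""
    return program_sort(i) == spec_sort(i)

for t in ([3, 1, 2], [], [5, 5, 1]):
    assert oracle(t)
```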

Requirement For Perfect Testing [Howden 76]
1. If all of our tests pass, then the program is correct.
   ∀x[x ∈ T → P(x) = S(x)] → ∀y[y ∈ D → P(y) = S(y)]
• If for all tests t in test set T, P(t) = S(t), then we are sure that the program will work correctly for all elements in D.
• If any test fails, we look for a bug.

Requirement For Perfect Testing
2. We can tell whether the program will eventually halt and give a result for any t in our test set T.
   ∀x[x ∈ T → ∃ a computable procedure for determining if P halts on input x]

But, Both Requirements Are Impossible to Satisfy
• The 1st requirement can be satisfied only if T = D, i.e., we test all elements of the input domain.
• The 2nd requirement depends on a solution to the halting problem, which has no solution.
We can demonstrate the problem with Requirement 1 [Howden 78].

Other Undecidable Testing Problems
• Is a control path feasible? Can I find data to execute a program control path?
• Is some specified code reachable by any input data?
These questions cannot, in general, be answered.

Software Testing Limitations
• There is no perfect software testing.
• Testing can show defects, but can never show correctness. We may never find all of the program errors during testing.

Why test techniques?
• Exhaustive testing (use of all possible inputs and conditions) is impractical
  - must use a subset of all possible test cases
  - want a high probability of detecting faults
• Need processes that help us select test cases
  - different people should have an equal probability of detecting faults
• Effective testing: detect more faults
  - focus attention on specific types of fault
  - know you're testing the right thing
• Efficient testing: detect faults with less effort
  - avoid duplication
  - systematic techniques are measurable

Dimensions of testing
• Testing combines techniques that focus on
  - Testers: who does the testing
  - Coverage: what gets tested
  - Potential problems: why you're testing (risks / quality)
  - Activities: how you test
  - Evaluation: how to tell whether the test passed or failed
• All testing should involve all five dimensions
• Testing standards (e.g. IEEE)

Black-box testing

Equivalence partitioning
Partitioning is based on input conditions, e.g.:
• mouse picks on menu
• user queries
• numerical data
• output format requests
• responses to prompts
• command key input

Equivalence partitioning
Input condition:
• is a range: one valid and two invalid classes are defined
• requires a specific value: one valid and two invalid classes are defined
• is a boolean: one valid and one invalid class are defined

Test Cases
• Which test cases have the best chance of successfully uncovering faults?
  - those as near to the mid-point of the partition as possible
  - those at the boundaries of the partition
• The mid-point of a partition typically represents the "typical" values
• Boundary values represent the atypical or unusual values
• Usually identify equivalence partitions based on specs and experience

Equivalence Partitioning Example
• Consider a system specification which states that a program will accept between 4 and 10 input values (inclusive), where the input values must be 5-digit integers greater than or equal to 10000.
• What are the equivalence partitions?

Example Equivalence Partitions
(The original slide figure is lost; the partitions follow directly from the specification above.)
• Number of inputs: fewer than 4 (invalid), 4 to 10 (valid), more than 10 (invalid)
• Input values: less than 10000 (invalid), 10000 to 99999 (valid), 100000 or more (invalid)
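
A sketch of selecting one test case per partition for this example (the `accepts` helper is invented; it simply encodes the stated specification):

```python
# Equivalence-partition test selection for the example spec:
# 4..10 input values, each a 5-digit integer >= 10000 (i.e., 10000..99999).

def accepts(values):
    """Validity check derived from the specification."""
    return 4 <= len(values) <= 10 and all(10000 <= v <= 99999 for v in values)

valid_value = 50000                     # drawn from the valid value partition

test_cases = {
    "valid count, valid values":   [valid_value] * 7,
    "too few inputs (< 4)":        [valid_value] * 3,
    "too many inputs (> 10)":      [valid_value] * 11,
    "value below range (< 10000)": [valid_value] * 6 + [9999],
    "value above range (> 99999)": [valid_value] * 6 + [100000],
}

for name, tc in test_cases.items():
    print(f"{name}: accepted={accepts(tc)}")
```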

Boundary value analysis
Based on the same input conditions as equivalence partitioning (mouse picks on menu, user queries, numerical data, output format requests, responses to prompts, command key input), but also applied to the output domain.

Boundary value analysis
• Range a..b: test a, b, just above a, just below b
• Number of values: test max, min, just below min, just above max
• Output bounds should be checked
• Boundaries of externally visible data structures shall be checked (e.g. arrays)
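
A sketch of generating these boundary values mechanically (helper names are invented; the rules follow the two bullets above):

```python
# Boundary-value generation sketch.

def range_boundaries(a, b):
    """Range a..b per the slide: a, b, just above a, just below b.
    Many testers also add the outside values a - 1 and b + 1."""
    return [a, a + 1, b - 1, b]

def count_boundaries(min_n, max_n):
    """Number of values: min, max, just below min, just above max."""
    return [min_n - 1, min_n, max_n, max_n + 1]

# Applied to the earlier example (4..10 inputs, values 10000..99999):
print(count_boundaries(4, 10))           # [3, 4, 10, 11]
print(range_boundaries(10000, 99999))    # [10000, 10001, 99998, 99999]
```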

Some other black-box techniques
• Risk-based testing, random testing
• Stress testing, performance testing
• Cause-and-effect graphing
• State-transition testing

Black Box Testing: Random Testing
• Generate tests randomly.
• "Poorest methodology of all" [Myers].
• Promoted by others.
• Statistical testing:
  - test inputs drawn from an operational profile
  - based on reliability theory
  - adds discipline to random testing
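
A sketch contrasting the two: plain random selection versus selection weighted by an operational profile. The profile and operation names are invented for illustration.

```python
import random

# Assumed operational profile: relative frequencies of user operations.
profile = {"browse": 0.70, "search": 0.25, "checkout": 0.05}

def random_test(n):
    """Plain random testing: every operation equally likely."""
    ops = list(profile)
    return [random.choice(ops) for _ in range(n)]

def statistical_test(n):
    """Statistical testing: operations drawn according to the profile, so
    the test mirrors expected field usage (the basis for reliability claims)."""
    ops, weights = zip(*profile.items())
    return random.choices(ops, weights=weights, k=n)

print(random_test(10))
print(statistical_test(10))
```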

Black Box Testing: Cause-Effect Analysis
• Rely on pre-conditions and post-conditions and dream up cases.
• Identify impossible combinations.
• Construct a decision table between input and output conditions. Each column corresponds to a test case.

Error guessing
• Exploratory testing, happy testing, ...
• Always worth including
• Can detect some failures that systematic techniques miss
• Consider:
  - past failures (fault models)
  - intuition
  - experience
  - brainstorming: "What is the craziest thing we can do?"
  - lists in literature

Black Box Testing: Error Guessing
• "Some people have a knack for 'smelling out' errors" [Myers].
• Enumerate a list of possible errors or error-prone situations.
• Write test cases based on the list.
Depends upon having fault models: theories on the causes and effects of program faults.

Usability testing
Characteristics:
• Accessibility
• Responsiveness
• Efficiency
• Comprehensibility
Environments:
• Free-form tasks
• Procedure scripts
• Paper screens
• Mock-ups
• Field trial

Specification-based testing
• Formal method
• Test cases derived from a (formal) specification (requirements or design)
Flow: Specification → Model (state chart) → Test case generation → Test execution

Model-based Testing
[V-model figure: development phases (Requirements, Specification, Top-level Design, Detailed Design, Coding) paired with test phases (Validation, Integration, Unit); model usage drives test derivation at each level.]

Need For Test Models
• Testing is a search problem: search for the specific input & state combinations that cause failures.
• These combinations are rare, so brute force cannot be effective. Brute force can actually lead to overconfidence.
• Must target & test specific combinations.
  - targets based on fault models
• Testing is automated to ensure repeatable coverage of targets: "target coverage".

Model-Based Coverage
• We cannot enumerate all state/input combinations for the implementation under test (IUT).
• We can enumerate these combinations for a model.
• Models allow automated target testing.
"Automated testing replaces the tester's slingshot with a machine gun."
"The model paints the target & casts the bullets."

Test Model Elements
• Subject: the IUT.
• Perspective: focus on aspects likely to be buggy, based on a fault model.
• Representation: often graphs (one format is UML) or checklists.
• Technique: method for developing a model and generating tests to cover model targets.

The Role of Models in Testing
• Model validation: does it model the right thing?
• Verification: is the implementation correct?
  - informal: checklist
  - formal: proof
• Consistency checking: is the representation an instance of the meta-model?
  - meta-model: UML, graphs, etc. + technique
  - instance model: the representation constructed

Models' Roles in Testing
• Responsibility-based testing: does behavior conform to the model representation?
• Implementation-based testing: does behavior conform to a model of the implementation?
• Product validation: does behavior conform to a requirements model, for example, Use Case models?

Models That Support Testability
• Represent all features to be tested.
• Limit detail to reduce testing costs, while preserving essential detail.
• Represent all state events so that we can generate them.
• Represent all states & state actions so that we can observe them.

Model Types
• Combinational models:
  - model combinations of input conditions
  - based on decision tables
• State machines:
  - output depends upon current & past inputs
  - based on finite state machines
• UML models: model OO structures.

Combinational Model - Spin Lock [Binder Fig. 6.1]
[Figure: a spin-lock combination dial with the digits 0-9.]

Combinational Model: Use a Decision Table
• One of several responses is selected based on distinct input variables.
• Cases can be modeled as mutually exclusive Boolean expressions of input variables.
• Response does not depend upon prior input or output.

Decision Table Construction (Combinational Models)
1. Identify decision variables & conditions.
2. Identify resultant actions to be selected.
3. Identify the actions to be produced in response to condition combinations.
4. Derive the logic function.

Combinational Models: Auto Insurance Model
• Renewal decision table: Table 6.1.
• Column-wise format: Table 6.2.
• Decision tree: Fig. 6.2.
• Truth table format: Fig. 6.3.
• Tractability note:
  - a decision table with n conditions has a maximum of 2^n variants
  - some variants are implausible

Insurance Renewal Decision Table [Binder Table 6.1]
(Conditions: Claims, Age. Actions: Premium Increase, Send Warning, Cancel.)

Variant | Claims    | Age | Premium Incr. | Send Warning | Cancel
1       | 0         | <26 | 50            | no           | no
2       | 0         | >25 | 25            | no           | no
3       | 1         | <26 | 100           | yes          | no
4       | 1         | >25 | 50            | no           | no
5       | 2 to 4    | <26 | 400           | yes          | no
6       | 2 to 4    | >25 | 200           | yes          | no
7       | 5 or more | any | 0             | no           | yes

Insurance Renewal Decision Table, Column-wise Format [Binder Table 6.2]

Variant:                 1      2      3      4      5       6       7
Condition section
  Number of claims:      0      0      1      1      2 to 4  2 to 4  5 or more
  Insured age:           <=25   >=26   <=25   >=26   <=25    >=26    any
Action section
  Premium increase ($):  50     25     100    50     400     200     0
  Send warning:          no     no     yes    no     yes     yes     no
  Cancel:                no     no     no     no     no      no      yes
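
A sketch encoding this table: each variant's condition is a mutually exclusive Boolean expression over the inputs, and "all explicit variants at least once" means one test per column. (The send-warning entry for variant 7 was not fully recoverable from the slide and is assumed "no" here.)

```python
# Decision-table sketch for the insurance renewal example (Binder Table 6.1).
# Each entry: (predicate over (claims, age), (premium_increase, warn, cancel)).

VARIANTS = [
    (lambda c, a: c == 0 and a < 26,       (50,  False, False)),   # variant 1
    (lambda c, a: c == 0 and a > 25,       (25,  False, False)),   # variant 2
    (lambda c, a: c == 1 and a < 26,       (100, True,  False)),   # variant 3
    (lambda c, a: c == 1 and a > 25,       (50,  False, False)),   # variant 4
    (lambda c, a: 2 <= c <= 4 and a < 26,  (400, True,  False)),   # variant 5
    (lambda c, a: 2 <= c <= 4 and a > 25,  (200, True,  False)),   # variant 6
    (lambda c, a: c >= 5,                  (0,   False, True)),    # variant 7
]

def renewal_action(claims, age):
    for predicate, action in VARIANTS:
        if predicate(claims, age):
            return action
    raise ValueError("no variant matched (implausible input)")

# "All explicit variants at least once": one test per column of the table.
assert renewal_action(0, 20) == (50, False, False)    # variant 1
assert renewal_action(3, 40) == (200, True, False)    # variant 6
assert renewal_action(6, 30) == (0, False, True)      # variant 7
```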

Implicit Variants
Variants that can be inferred, but are not given.
• In the insurance example, we don't care about age if there are five or more claims. The action is to cancel for any age.
• Other implicit variants are those that cannot happen: one cannot be both 25 or younger and 26 or older.

Test Strategies
• All explicit variants at least once:
  - decision boundaries are systematically exercised
  - weak if "can't happen" conditions or undefined boundaries result in implicit variants

Non-binary Variable Domain Analysis
• Exactly 0 claims, age 16-25
• Exactly 0 claims, age 26-85
• Exactly 1 claim, age 16-25
• Exactly 1 claim, age 26-85
• Two, 3, or 4 claims, age 16-25
• Two, 3, or 4 claims, age 26-85
• Five to 10 claims, age 16-85

Test Cases (variant 1: 0 claims, age 16-25; boundaries exercised one at a time)

Test | Number of claims | Insured age     | Expected result
1    | 0 (on, ==0)      | 20 (typical)    | accept variant 1; premium 50; no warning; no cancel
2    | 1 (off, above)   | 20 (typical)    | accept variant 3; premium 100; warning; no cancel
3    | -1 (off, below)  | 20 (typical)    | reject
4    | 0 (in)           | 16 (on, >=16)   | accept variant 1; premium 50; no warning; no cancel
5    | 0 (in)           | 15 (off, below) | reject
6    | 0 (in)           | 25 (on, <=25)   | accept variant 1; premium 50; no warning; no cancel
7    | 0 (in)           | 26 (off, above) | accept variant 2; premium 25; no warning; no cancel
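
The on/off/typical points in such a matrix can be generated mechanically; a sketch for the two conditions of variant 1 (helper names invented), varying one variable's boundary at a time while the other stays at a typical in-bounds value:

```python
# Domain-testing sketch: on points, off points, and typical (in) points
# for variant 1 of the insurance example (claims == 0, age 16..25).

def points_eq(v):
    """==v condition: on point v, off points just above and below."""
    return {"on": [v], "off": [v + 1, v - 1], "typical": [v]}

def points_range(lo, hi, typical):
    """lo..hi condition: on points at both bounds, off points just outside."""
    return {"on": [lo, hi], "off": [lo - 1, hi + 1], "typical": [typical]}

claims = points_eq(0)                 # on: 0; off: 1 and -1
age = points_range(16, 25, 20)        # on: 16 and 25; off: 15 and 26

# One-at-a-time strategy, as in the matrix above.
tests = [(c, age["typical"][0]) for c in claims["on"] + claims["off"]]
tests += [(claims["typical"][0], a) for a in age["on"] + age["off"]]
print(tests)  # [(0, 20), (1, 20), (-1, 20), (0, 16), (0, 25), (0, 15), (0, 26)]
```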

Additional Heuristics
• Vary the order of input variables
• Scramble test order
• Purposely corrupt inputs

Statistical testing / Usage-based testing
Flow: Usage specification → Test case generation → Test execution → Failure logging → Reliability estimation

Usage specification models
• Algorithmic models
• Grammar model, e.g.:
  <test_case> ::= <no_commands> @ <command> ...
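
A sketch of generating test cases from a grammar-style usage model like the rule above: a test case is a number of commands followed by that many commands, each drawn according to the usage profile. The command set and probabilities are invented for illustration.

```python
import random

# Usage-model sketch for a rule like:
#   <test_case> ::= <no_commands> @ <command> ...
# Assumed usage profile: probability of each command in field use.
USAGE_PROFILE = {"open": 0.4, "edit": 0.3, "save": 0.2, "close": 0.1}

def generate_test_case(max_commands=10):
    n = random.randint(1, max_commands)              # <no_commands>
    commands, weights = zip(*USAGE_PROFILE.items())
    return [random.choices(commands, weights=weights)[0] for _ in range(n)]

for _ in range(3):
    print(generate_test_case())
```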