CS 4723 Software Validation and Quality Assurance
Lecture 02: Overview of Software Testing
Approaches to remove bugs
- Testing
  - Feed inputs to the software and run it to see whether its behavior is as expected
  - Limitations:
    - Impossible to cover all cases
    - Test oracles (knowing what output is expected)
- Static checking
  - Identify specific problems (e.g., memory leaks) in the software by scanning the code or all possible paths
  - Limitations:
    - Limited problem types
    - False positives
Approaches to remove bugs (continued)
- Formal proof
  - Formally prove that the program implements the specification
  - Limitations:
    - Difficult to obtain a formal specification
    - The proof costs a lot of human effort
- Inspection
  - Manually review the code to detect faults
  - Limitations:
    - Hard to evaluate
    - Sometimes hard to make progress
The answer is testing. Why?
- "50% of my employees are testers, and the rest spend 50% of their time testing" -- Bill Gates, 1995
- More reliable than inspection, and relatively cheap
  - In the old days, when testing was expensive, inspection was the main answer
  - You get what you pay for (linear rewards)
- Compared to the other three approaches:
  - Inspection, static checking, formal proof
Testing: Concepts
- Test case
- Test oracle
- Test suite
- Test script
- Test driver
- Test coverage
Testing: Concepts
- Test case
  - An execution of the software with a given list of input values
  - Includes:
    - Input values, sometimes fed in several steps
    - Expected outputs
- Test oracle
  - The expected outputs of the software when fed a list of input values
  - A part of a test case
  - Hardest problem in automated testing: the test oracle problem
Testing: Concepts: Example
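As a concrete illustration of a test case with an explicit oracle, here is a minimal Python sketch; the discount function, its prices, and the test names are assumptions made for this example, not taken from the slides.

    # Hypothetical function under test: members pay half price (prices in cents).
    def discount(price, is_member):
        return price // 2 if is_member else price

    # A test case: concrete input values plus the expected output (the test oracle).
    def test_member_pays_half_price():
        actual = discount(1000, is_member=True)   # input values
        expected = 500                            # test oracle
        assert actual == expected

    def test_non_member_pays_full_price():
        assert discount(1000, is_member=False) == 1000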
Testing: Concepts
- Test suite
  - A collection of test cases
  - Usually these test cases share similar pre-conditions and configuration
  - Usually can be run together in sequence
  - Different test suites serve different purposes
    - Smoke tests, certain platforms, certain features, performance, ...
- Test script
  - A script that runs a sequence of test cases or a test suite automatically (a minimal sketch follows)
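A minimal sketch of a test suite whose cases share a pre-condition, with the `__main__` block acting as the script that runs the suite in sequence; the login system, credentials, and class name are assumptions for illustration only.

    import unittest

    class LoginSmokeSuite(unittest.TestCase):
        """A suite of related test cases sharing the same pre-conditions."""

        def setUp(self):
            # Shared pre-condition / configuration for every case in the suite.
            self.valid_user, self.valid_password = "alice", "s3cret"

        def test_login_succeeds_with_valid_credentials(self):
            self.assertTrue(fake_login(self.valid_user, self.valid_password))

        def test_login_fails_with_wrong_password(self):
            self.assertFalse(fake_login(self.valid_user, "wrong"))

    # Hypothetical system under test, stubbed so the example runs on its own.
    def fake_login(user, password):
        return (user, password) == ("alice", "s3cret")

    if __name__ == "__main__":   # plays the role of the "test script"
        unittest.main()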
Testing: Concepts
- Test driver
  - A software framework that can load a collection of test cases or a test suite (a toy driver is sketched below)
  - It usually handles the configuration and the comparison between expected and actual outputs
- Test coverage
  - A measurement of how well the testing has been done
  - The measure can be based on multiple elements:
    - Code
    - Input combinations
    - Specifications
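A toy test driver, sketched to show the idea only: it loads a table of test cases, runs each one against the function under test, and compares actual with expected outputs. The function under test and its cases are assumptions for this example.

    def run_suite(function_under_test, test_cases):
        failures = 0
        for inputs, expected in test_cases:
            actual = function_under_test(*inputs)
            if actual != expected:
                failures += 1
                print(f"FAIL {inputs}: expected {expected}, got {actual}")
        print(f"{len(test_cases) - failures}/{len(test_cases)} test cases passed")

    # Hypothetical function under test and its suite of (inputs, expected output).
    def absolute_value(x):
        return x if x >= 0 else -x

    run_suite(absolute_value, [((5,), 5), ((-5,), 5), ((0,), 0)])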
Granularity of Testing: the V-model
Granularity of testing
- Unit / module testing
  - Test of a single module (see the sketch after this list)
- Integration testing
  - Test the interaction between modules
- System testing
  - Test the system as a whole, by developers, on test cases
- Acceptance testing
  - Validate the system against user requirements, by customers, with no formal test cases
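A small sketch of the difference between a unit test and an integration test; the two modules (a parser and a function that uses it) are hypothetical and exist only to make the distinction concrete.

    # Hypothetical modules: a parser (one unit) and a function that depends on it.
    def parse_amount(text):            # module A
        return float(text.strip())

    def add_amounts(a_text, b_text):   # module B, built on module A
        return parse_amount(a_text) + parse_amount(b_text)

    # Unit test: exercises a single module in isolation.
    def test_parse_amount_unit():
        assert parse_amount(" 3.5 ") == 3.5

    # Integration test: exercises the interaction between the two modules.
    def test_add_amounts_integration():
        assert add_amounts("1.5", "2.5") == 4.0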
Stages of software testing
- Development-time testing
  - Unit testing, integration testing
- Before-release testing
  - System testing, acceptance testing
- User testing
  - Actual usage -> field bugs & patches
Types of testing by how they are designed
- Black box testing
  - The testers are just like normal users
  - They just try to cover the input space and corner cases
- White box testing
  - The tester knows everything about the implementation
  - They know where bugs are more likely to be
  - They can exercise paths in the code
Black Box Testing: General Guidelines
- Divide the value range and cover each part
- Cover boundary values
- Try to reach all error messages
- Try to trigger potential exceptions
- Feed invalid inputs (see the sketch after this list)
  - Wrong formats, too long, too short, empty, ...
- Try combinations of all of the above
- If the input is a sequence, repeat the same inputs and vary the inputs many times
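A sketch of black-box tests that cover boundary values and feed invalid inputs, written with pytest; the username validator and its 3-16 character rule are assumptions chosen for illustration, not part of the slides.

    import pytest

    # Hypothetical function under test: validates a username of 3-16 characters.
    def validate_username(name):
        if not isinstance(name, str) or not name:
            raise ValueError("empty or non-string username")
        if not (3 <= len(name) <= 16):
            raise ValueError("username must be 3-16 characters")
        return True

    # Boundary values chosen without looking at the implementation.
    @pytest.mark.parametrize("name", ["abc", "a" * 16])
    def test_valid_boundaries(name):
        assert validate_username(name)

    # Invalid inputs: empty, too short, too long, absurdly long.
    @pytest.mark.parametrize("name", ["", "ab", "a" * 17, "x" * 1000])
    def test_invalid_inputs_raise(name):
        with pytest.raises(ValueError):
            validate_username(name)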
Black Box Testing Techniques
- Boundary value testing
  - Boundary value analysis
  - Robustness testing
  - Worst case testing
- Equivalence class testing
- Decision table based testing
Boundary Value Analysis
- Errors tend to occur near the extreme values of input variables
- Boundary value analysis focuses on the boundary of the input space to identify test cases
- Boundary value analysis selects input variable values at:
  - The minimum
  - Just above the minimum
  - A nominal value
  - Just below the maximum
  - The maximum
Example of Boundary Value Analysis
- Assume a program accepting two inputs y1 and y2, such that a < y1 < b and c < y2 < d
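A sketch of how the boundary value test inputs for this two-variable example could be generated; the concrete bounds below are assumed purely for illustration, since the slide only names them a, b, c, d.

    # Assumed ranges for y1 and y2 (illustrative values only).
    A, B = 1, 100    # range for y1
    C, D = 1, 31     # range for y2

    def bva_values(lo, hi):
        nominal = (lo + hi) // 2
        return [lo, lo + 1, nominal, hi - 1, hi]   # min, min+, nominal, max-, max

    nominal_y1 = (A + B) // 2
    nominal_y2 = (C + D) // 2

    # Vary one variable at a time while holding the other at its nominal value;
    # for n = 2 variables this yields 4n + 1 = 9 distinct test inputs.
    cases = {(y1, nominal_y2) for y1 in bva_values(A, B)}
    cases |= {(nominal_y1, y2) for y2 in bva_values(C, D)}
    print(sorted(cases))   # 9 test inputs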
Single Fault Assumption for Boundary Value Analysis
- Boundary value analysis is also augmented by the single fault assumption principle: "Failures occur rarely as the result of the simultaneous occurrence of two (or more) faults"
- In this respect, boundary value analysis test cases can be obtained by holding all but one variable at their nominal values and letting that variable assume its extreme values
Generalization of Boundary Value Analysis
- Basic boundary value analysis can be generalized in two ways:
  - By the number of variables: (4n + 1) test cases for n variables
  - By the kinds of ranges of the variables: map them to an order
    - Strings
    - Sequences
    - Complex data structures, e.g., trees
Application Scenario of Boundary Value Analysis
- Several independent variables that represent bounded physical quantities
- No consideration of the function of the program, nor of the semantic meaning of the variables
- Good news: we can distinguish between physical and other types of variables
Robustness Testing
- A simple extension of boundary value analysis
- In addition to the five boundary value analysis values of a variable, we add a value slightly greater than the maximum (max+) and a value slightly less than the minimum (min-)
- The main value of robustness testing is to force attention on exception handling
Example of Robustness Testing
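A sketch of robustness testing built on the same assumed ranges as the earlier boundary example; the program under test is a stand-in written only so the example runs, and the point of interest is how it handles the out-of-range values.

    # Hypothetical program under test with assumed valid ranges 1..100 and 1..31.
    def hypothetical_program(y1, y2):
        if not (1 <= y1 <= 100 and 1 <= y2 <= 31):
            raise ValueError("input out of range")
        return y1 * y2

    def robustness_values(lo, hi):
        nominal = (lo + hi) // 2
        # The five boundary values plus min- and max+ (out-of-range values).
        return [lo - 1, lo, lo + 1, nominal, hi - 1, hi, hi + 1]

    # The out-of-range values force attention on exception handling.
    for y1 in (0, 101):                   # min- and max+ for the 1..100 range
        try:
            hypothetical_program(y1, 16)
            print(f"FAIL: y1={y1} was silently accepted")
        except ValueError:
            print(f"OK: y1={y1} rejected as expected")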
Worst Case Testing
- No single fault assumption: errors can happen when more than one variable has an extreme value
- Considering the n inputs of boundary value analysis, we take the Cartesian product of the five values of variables 1, 2, 3, ..., n
- This gives 5^n test cases for n input variables
- The more the inputs interact, the more worst case testing pays off
  - Input partitions: Length & Width vs. Length & Price
Example of Worst Case Testing
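A sketch of worst case test input generation for two variables, reusing the assumed ranges from the boundary value example; the Cartesian product of the five values per variable gives 5^n cases.

    from itertools import product

    def bva_values(lo, hi):
        nominal = (lo + hi) // 2
        return [lo, lo + 1, nominal, hi - 1, hi]   # min, min+, nominal, max-, max

    # Cartesian product of the five values of each variable: 5**2 = 25 cases.
    worst_case_inputs = list(product(bva_values(1, 100), bva_values(1, 31)))
    print(len(worst_case_inputs))   # 25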
Equivalence Class Testing
- Divide the value range of an input into a number of subsets
  - The subsets are disjoint
  - The union of the subsets is the value range
  - Values within one subset make no difference to the software concerned
    - Water temperature in a car: <32, 32-212, >=212
    - Normal colors vs. metallic colors
Example of Equivalence Class Testing
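A sketch of equivalence class testing for the water temperature example; the class names (ice/liquid/steam) and the representative values are assumptions added to make the slide's partition <32, 32-212, >=212 runnable.

    # Equivalence classes for water temperature in degrees Fahrenheit.
    def classify_water(temp_f):
        if temp_f < 32:
            return "ice"
        elif temp_f < 212:
            return "liquid"
        return "steam"

    # One representative value per class; boundary values (31/32, 211/212)
    # are usually added on top of these.
    representatives = {"ice": 10, "liquid": 100, "steam": 300}

    def test_one_value_per_equivalence_class():
        for expected, temp in representatives.items():
            assert classify_water(temp) == expected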
Equivalence Class Testing
- The use of equivalence class testing has two motivations:
  - A sense of complete testing
    - Representing the entire set provides a form of completeness
  - Avoiding redundancy
    - The disjointness assures a form of non-redundancy
- Note:
  - Also check boundaries
  - Combinations of inputs also follow the rule: more interaction -> more combinations
Equivalence Classes for Non-numeric Inputs
- Feature extraction
  - For string and structured inputs
  - Split the possible value set by a certain feature
  - Example: String passwd => {contains space}, {no space}
- It is possible to extract multiple features from one input
  - Example: String name => {capitalized first letter}, {not}; {contains space}, {not}; {length >10}, {2-10}, {1}, {0}
- One test case may cover multiple features (see the sketch below)
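A sketch of feature extraction for a string input, using the features named on this slide; the helper names and the sample string are assumptions for illustration.

    # Bucket a string length into the classes named on the slide.
    def length_bucket(n):
        if n > 10:
            return ">10"
        if n >= 2:
            return "2-10"
        return "1" if n == 1 else "0"

    # Extract the features of a name and read off which classes it falls into.
    def features(name):
        return {
            "capitalized first letter": name[:1].isupper(),
            "contains space": " " in name,
            "length": length_bucket(len(name)),
        }

    # One concrete test input covers several feature classes at once.
    print(features("Mary Jane"))   # capitalized, contains space, length 2-10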
Decision Table
- Makes it easy to observe that all possible conditions are accounted for
- Decision tables can be used for:
  - Specifying complex program logic
  - Generating test cases with oracles
Example of Decision Table: Printer Troubleshooting
- Conditions (one row each, combined as Y/N values per rule column): printer does not print; a red light is flashing; printer is unrecognized
- Actions (marked with X for the rules that call for them): check the power cable; check the printer-computer cable; ensure printer software is installed; check/replace ink; check for paper jam
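A sketch of how a decision table like this one can be encoded as data and used to generate test cases with oracles; the three rules shown and their condition-to-action assignments are illustrative, not a reconstruction of the full table from the slide.

    # Each rule maps a combination of condition values to the actions to take.
    RULES = [
        # (does_not_print, red_light_flashing, unrecognized) -> actions
        ((True,  True,  True ), {"check the printer-computer cable",
                                 "ensure printer software is installed",
                                 "check/replace ink"}),
        ((True,  True,  False), {"check/replace ink", "check for paper jam"}),
        ((True,  False, True ), {"check the power cable",
                                 "check the printer-computer cable",
                                 "ensure printer software is installed"}),
    ]

    def actions_for(conditions):
        for rule_conditions, actions in RULES:
            if rule_conditions == conditions:
                return actions
        return set()

    # Each rule (column) doubles as a test case: the conditions are the inputs
    # and the marked actions are the expected outputs (the oracle).
    assert "check for paper jam" in actions_for((True, True, False))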
Decision Table Usage
- The decision-table model is applicable when:
  - The specification is given as, or can be converted to, a decision table
  - The order in which the predicates are evaluated does not affect the interpretation of the resulting actions
- Note:
  - A decision table need not cover all combinations
White Box Testing: General Guidelines
- Try to cover all branches
  - Study the relationship between input values and branch logic (see the sketch below)
- Test complex modules more
  - Measure module complexity by code size, number of branches and loops, and number of calls and recursions
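A sketch of the white-box view of branch coverage: the tester reads the code, finds the branches, and picks inputs that exercise each one. The shipping function and its threshold are assumptions for this example; a tool such as coverage.py can be used to confirm that both branches are hit.

    # Function under test with two branches (hypothetical business rule).
    def apply_shipping(total):
        if total >= 50:        # branch 1: free shipping
            return total
        else:                  # branch 2: flat fee
            return total + 5

    # One test case per branch gives full branch coverage of this function.
    def test_free_shipping_branch():
        assert apply_shipping(80) == 80

    def test_flat_fee_branch():
        assert apply_shipping(20) == 25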
White Box Testing: Techniques
- More difficult than black box testing
- Seldom done manually
- Automatic support:
  - Symbolic execution
  - Complexity measurement and defect prediction
Review: Testing Overview
- Testing is the practical choice: the best affordable approach
- Concepts: test case, test oracle, test suite, test driver, test script, test coverage
- Granularity: unit, integration, system, acceptance
- Types by design principle: black box, white box
  - Black box testing: boundary, equivalence class, decision table
  - White box testing: branch coverage, complexity
A Short Review
- Software bugs affect every aspect of our lives
- Software bugs may cause severe consequences
- Software bugs are prevalent
- Aspects of software quality
  - Dependability, efficiency, usability, maintainability (bad smells -> bugs)
- Approaches to enhance software quality
  - Bug prevention, bug removal, bug avoidance


