Welcome to CS 119: Reliable Software Testing and Monitoring

Welcome to CS 119
• Reliable Software: Testing and Monitoring
• First half of lectures will cover testing
  • Taught by me ([email protected])
  • http://www.cs.cmu.edu/~agroce/CS119
• Last half of class will focus on monitoring
  • Taught by Klaus Havelund
"In runtime verification a software component, an observer, monitors the execution of a program and checks its conformity with a requirement specification." - Klaus

Today
• Some general background
  • Topics of the class
  • Testing project
• Basic definitions
• Black box testing (FSM) algorithms
  • Why is testing difficult, in theory and practice?

Before we start
• What do I know about testing, anyway?
  • I've written programs and tested them
    • So have most of you, I would bet
  • Split my time at JPL between model checking & testing research
  • E.g., testing the file systems that will be used in the Mars Science Laboratory – JPL's next big Mars mission

they turn the file system off during EDL (Entry, Descent, and Landing), which helps me sleep at night

Topics in Testing We'll Cover
• Black box (Finite State Machine) testing
• Design for testability
• Coverage measures
• Random testing
• Constraint-based testing
• Debugging and test case minimization
• Using model checkers for testing
• Coverage revisited ("small model property")

Read All About It
• No textbook for this class, only papers
• Books I like that have something important to say about testing (though none of these are about testing):
  • The Practice of Programming, Kernighan and Pike
  • Programming Pearls, Bentley
  • Why Programs Fail: A Guide to Systematic Debugging, Zeller
  • Code Complete, McConnell
  • The Mythical Man-Month, Brooks

Read All About It
• Book about testing: Introduction to Software Testing, Ammann and Offutt
  • I like it myself
  • Recommended by colleagues who've taught classes on testing (and are first-rate testing researchers)
  • Book is thorough and cleverly organized, provokes some real thought about how to test programs
• I might just follow this book if doing a whole class on testing
  • As it is, we'll take more of a "hit the highlights" approach
    • More concentrated on automated techniques: random testing, constraint-based testing, and model checking
  • Won't stop me from using some of their slides for areas they cover well

Testing Project
• Start with source code for YAFFS (Yet Another Flash File System): an open source flash file system
  • Plus a set of (buggy) variations of the YAFFS code
  • A (minimal) automated testing framework
• Project:
  • Write a better automated tester
    • Must be efficient: spend no more than 45 minutes to test a version of YAFFS
  • Write a test report on the YAFFS versions

Testing Project
• You will also turn in two new buggy variations of YAFFS
  • Make sure your tester finds the bug
  • Produce a test case
  • Make sure the test case succeeds for original YAFFS!
  • You get to debug programs in lots of classes and in real life – in this class you get to bug a program intentionally, for once in your life
• I will apply all of your testers to these bugs plus another (top secret) set of mutations of YAFFS that I generate
• Let me know any time you think you find a bug in the original YAFFS!

Testing Project
• Grading criteria
  • Design/implementation of the tester
  • Effectiveness of the tester
  • Quality of the test report
    • Can I figure out how you tested YAFFS?
    • Can I figure out what wasn't tested?
    • Can I figure out how reliable you think YAFFS is, and how buggy the various versions are?
  • How "interesting" and hard-to-find your new bugs are

Testing Project
• Expectations
  • "he who learns to play the harp learns to play by playing it" - Aristotle, Metaphysics, Book IX
  • You can program in C
  • You can use makefiles / a build system
• Hopes
  • Maybe we'll find some previously unknown bugs in YAFFS – that would be cool
  • I hope you'll help me make sure MSL figures out if there was life on Mars
• Get started with YAFFS at http://www.yaffs.net

Basic Definitions: Testing
• What is software testing?
  • Running a program
  • In order to find faults
    • a.k.a. defects
    • a.k.a. errors
    • a.k.a. flaws
    • a.k.a. faults
    • a.k.a. BUGS

Bugs
"an analyzing process must equally have been performed in order to furnish the Analytical Engine with the necessary operative data; and that herein may also lie a possible source of error. Granted that the actual mechanism is unerring in its processes, the cards may give it wrong orders." – Ada, Countess Lovelace (notes on Babbage's Analytical Engine)
Hopper's "bug" (moth stuck in a relay on an early machine)
"It has been just so in all of my inventions. The first step is an intuition, and comes with a burst, then difficulties arise—this thing gives out and [it is] then that 'Bugs'—as such little faults and difficulties are called—show themselves and months of intense watching, study and labor are requisite..." – Thomas Edison

Testing
• What isn't software testing?
  • Purely static analysis: examining a program's source code or binary in order to find bugs, but not executing the program
    • Good stuff, and very important, but it's not testing
  • Fuzzy borderline: if we only symbolically execute the program
  • For this class, we'll stick to testing where the program actually runs (but maybe in a virtual machine)

Why Testing?
• Ideally: we prove code correct, using formal mathematical techniques (with a computer, not chalk)
  • Extremely difficult: has been done for some trivial programs (100 lines) and many small (5K line) programs
  • Simply not practical to prove correctness in most cases – often not even for safety or mission critical code

Why Testing?
• Nearly ideally: use symbolic or abstract model checking to prove the system correct
  • Automatically extracts a mathematical abstraction from a system
  • Proves properties over all possible executions
  • In practice, can work well for very simple properties ("this program never crashes in this particular way"), but can't handle complex properties ("this is a working file system")
  • Doesn't work well for programs with complex data structures (like a file system)

As a last resort…
• … we can actually run the program, to see if it works
• This is software testing
  • Always necessary, even when you can prove correctness – because the proof is seldom directly tied to the actual code that runs
"Beware of bugs in the above code; I have only proved it correct, not tried it" – Knuth

Why Does Testing Matter?
• NIST report, "The Economic Impacts of Inadequate Infrastructure for Software Testing" (2002)
  • Inadequate software testing costs the US alone between $22 and $59 billion annually
  • Better approaches could cut this amount in half
• Major failures: Ariane 5 explosion, Mars Polar Lander, Intel's Pentium FDIV bug
• Insufficient testing of safety-critical software can cost lives:
  • THERAC-25 radiation machine: 3 dead
  • Ariane 5: exception-handling bug forced self destruct on maiden flight (64-bit to 16-bit conversion: about $370 million lost)
• We want our programs to be reliable
  • Testing is how, in most cases, we find out if they are
(figures: Mars Polar Lander crash site?, THERAC-25 design)

Testing and Monitoring
• In this first half of the class, we'll look at which executions of a program to run
  • I'll call this problem "the" testing problem
• Second problem: how do we know if an execution reveals a bug?
  • Key question when monitoring deployed programs to handle faults or send in bug reports from the field
  • I'll (mostly) take this for granted: we have a reference model or assertions to check

Example: File System Testing
• File system is a library, called by other components of the flight software
• Accepts a fixed set of operations that manipulate files:

  Operation                      Result
  mkdir ("/eng", …)              SUCCESS
  mkdir ("/data", …)             SUCCESS
  creat ("/data/image01", …)     SUCCESS
  creat ("/eng/fsw/code", …)     ENOENT
  mkdir ("/data/telemetry", …)   SUCCESS
  unlink ("/data/image01")       SUCCESS

(figure: resulting file system tree: /, /eng, /data, /data/image01 (unlinked), /data/telemetry)
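
To make this concrete, here is a minimal sketch (not from the original slides) that replays the same operation sequence against the host's POSIX file system under a scratch directory; the /tmp/fs-demo path is an invented stand-in, and the expected results in the comments are the ones from the table above.

    #include <stdio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/stat.h>

    int main(void) {
        int fd, rc;

        mkdir("/tmp/fs-demo", 0777);                      /* scratch root, stands in for "/" */

        rc = mkdir("/tmp/fs-demo/eng", 0777);             /* expect SUCCESS (0) */
        printf("mkdir /eng            -> %d\n", rc);
        rc = mkdir("/tmp/fs-demo/data", 0777);            /* expect SUCCESS (0) */
        printf("mkdir /data           -> %d\n", rc);
        fd = creat("/tmp/fs-demo/data/image01", 0666);    /* expect a valid descriptor */
        printf("creat /data/image01   -> %d\n", fd);
        if (fd >= 0) close(fd);
        fd = creat("/tmp/fs-demo/eng/fsw/code", 0666);    /* expect -1, errno == ENOENT,
                                                             since /eng/fsw does not exist */
        printf("creat /eng/fsw/code   -> %d (errno %d)\n", fd, fd < 0 ? errno : 0);
        rc = mkdir("/tmp/fs-demo/data/telemetry", 0777);  /* expect SUCCESS (0) */
        printf("mkdir /data/telemetry -> %d\n", rc);
        rc = unlink("/tmp/fs-demo/data/image01");         /* expect SUCCESS (0) */
        printf("unlink /data/image01  -> %d\n", rc);
        return 0;
    }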

Example: File System Testing
• Easy to detect many errors: we have access to many working file systems, and can just compare results
• The basic loop (see the sketch below):
  • Choose operation F
  • (inject a fault?)
  • Perform F on tested FS
  • Perform F on reference FS (if applicable)
  • Compare return values
  • Compare error codes
  • Compare file systems
  • Check invariants
• (in this unusual case, the problem Klaus will discuss is not much of a problem)
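
Here is a minimal sketch of that differential loop, assuming nothing about the course's actual framework: both the "tested" and "reference" sides below are just the host file system under two different scratch roots, with a comment marking where a real tester would call into YAFFS instead; the path pool and iteration count are arbitrary.

    #include <stdio.h>
    #include <stdlib.h>
    #include <errno.h>
    #include <sys/stat.h>

    /* Apply mkdir under a given scratch root; return 0 on success, -errno on failure. */
    static int do_mkdir(const char *root, const char *path) {
        char buf[256];
        snprintf(buf, sizeof buf, "%s%s", root, path);
        return mkdir(buf, 0777) == 0 ? 0 : -errno;
    }

    int main(void) {
        const char *paths[] = { "/a", "/b", "/a/x", "/b/y" };  /* small argument pool */
        mkdir("/tmp/tested", 0777);                            /* stand-in for YAFFS  */
        mkdir("/tmp/ref", 0777);                               /* reference FS        */
        srand(42);                                             /* reproducible run    */

        for (int i = 0; i < 1000; i++) {
            const char *p = paths[rand() % 4];                 /* choose operation F  */
            int rt = do_mkdir("/tmp/tested", p);               /* perform F on tested FS
                                                                  (a real tester would call
                                                                  into YAFFS here)    */
            int rr = do_mkdir("/tmp/ref", p);                  /* perform F on reference */
            if ((rt == 0) != (rr == 0) || (rt < 0 && rr < 0 && rt != rr)) {
                /* compare return values and error codes */
                printf("MISMATCH on mkdir(%s): tested=%d reference=%d\n", p, rt, rr);
                return 1;
            }
            /* a full tester would also compare directory contents and check invariants */
        }
        printf("no mismatches in 1000 operations\n");
        return 0;
    }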

Example: File System Testing
• How hard would it be to just try "all" the possibilities?
• Consider only the 7 core operations (mkdir, rmdir, creat, open, close, read, write)
  • Most of these take either a file name or a numeric argument, or both
  • Even for a "reasonable" (but not provably safe) limitation of the parameters, there are 266^10 executions of length 10 to try
  • Not a realistic possibility (unless we have 10^12 years to test)
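
To see where the 10^12-year figure comes from, a quick back-of-the-envelope computation; the 266 choices per step come from the slide, while the throughput of 10,000 tests per second is an assumed, fairly generous rate.

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double choices_per_step = 266.0;                 /* parameter choices per operation (from the slide) */
        double executions = pow(choices_per_step, 10);   /* all executions of length 10 */
        double tests_per_sec = 1.0e4;                    /* assumed testing throughput */
        double years = executions / tests_per_sec / (365.0 * 24 * 3600);
        printf("executions: %.2e, years to try them all: %.2e\n", executions, years);
        /* prints roughly 1.8e+24 executions and 5.6e+12 years */
        return 0;
    }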

The Testing Problem
• This is the topic of the first half of the class: what "questions" do we pose to the software, i.e.,
  • How do we select a small set of executions out of a very large set of executions?
• Fundamental problem of software testing research and practice
• An open (and essentially unsolvable, in the general case) problem

The Testing Problem / Terms
• This is not a class in the management or even the basic practices of testing
  • Hard, important problems
  • But not the focus of this class
• This class is going to focus on state-of-the-art automated approaches
  • Using tools
  • To catch the bugs that you don't catch with basic practices
• I will briefly cover some basic terms of testing and testing management today, then we'll mostly dive into "How To Test It" at a more technical level

Terms: Verification and Validation
• These two terms appear a lot, often in vague or sloppy ways, in the literature
  • Verification is checking that a program matches a specification
  • Validation is making sure it meets the original requirements – satisfies customers, operates ok onboard the spacecraft, etc.
• Verification: "you built it right" (our focus, for the most part)
• Validation: "you built the right thing"

Terms: Unit, Integration, System Testing
• Stages of testing
  • Unit testing is the first phase, done by developers of modules
  • Integration testing combines unit tested modules and tests how they interact
  • System testing tests a whole program to make sure it meets requirements
  • "Design testing" is testing prototypes or very abstract models before implementation – seldom mentioned, but when possible it can save your bacon
    • Exhaustive model checking may be possible at this stage

Terms: Functional Testing
• Functional testing is a related term
  • Tests a program from a "user's" perspective – does it do what it should?
  • Opposed to unit testing, which often proceeds from the perspective of other parts of the program
    • Module spec/interface, not user interaction
  • Sort of a fuzzy line – consider a file system: how different is use by a program from use of UNIX commands at a prompt by a user?
  • A building inspector does "unit testing"; you, walking through the house to see if it's livable, perform "functional testing"
    • Kick the tires vs. take it for a spin?

Terms: Regression Testing
• Regression testing
  • Changes can break code, reintroduce old bugs
    • Things that used to work may stop working (e.g., because of another "fix") – the software regresses
  • Usually a set of cases that have failed (& then succeeded) in the past
  • Finding small regression suites is an ongoing research area – analyze dependencies
"... as a consequence of the introduction of new bugs, program maintenance requires far more system testing... Theoretically, after each fix one must run the entire batch of test cases previously run against the system, to ensure that it has not been damaged in an obscure way. In practice, such regression testing must indeed approximate this theoretical idea, and it is very costly." - Brooks, The Mythical Man-Month

Terms: The Oracle Problem (Klaus)
• The oracle problem
  • How to know if a test fails
  • If the oracle says every execution is good, why bother running the program?
  • (oracle: a magical source of truth, often cryptic, given by the gods)
• Some obvious, easily automated approaches:
  • The program probably shouldn't crash
  • Assertions shouldn't be violated
• Automatable, but more difficult to apply:
  • Differential testing (McKeeman, etc.) – when you have another program, likely correct, that does the same thing, just compare outputs over same inputs
• Last resort, not automatable:
  • Hand inspection of executions
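
As a tiny illustration of the assertion flavor of oracle (a sketch using the host file system and an invented scratch path, not the course framework): after an operation, check an invariant that any correct file system must satisfy, here "a freshly created file to which we wrote n bytes reports size n".

    #include <assert.h>
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/stat.h>

    int main(void) {
        const char *path = "/tmp/oracle-demo";           /* invented scratch file */
        const char *data = "hello, oracle";

        int fd = open(path, O_CREAT | O_TRUNC | O_WRONLY, 0666);
        assert(fd >= 0);                                 /* the test setup itself must work */
        ssize_t n = write(fd, data, strlen(data));
        assert(n == (ssize_t)strlen(data));
        close(fd);

        struct stat st;
        assert(stat(path, &st) == 0);
        assert(st.st_size == n);                         /* the invariant acting as oracle */
        printf("invariant held: size == bytes written (%zd)\n", n);
        return 0;
    }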

Terms: Test (Case) vs. Test Suite
• Test (case): one execution of the program, that may expose a bug
• Test suite: a set of executions of a program, grouped together
  • A test suite is made of test cases
• Tester: a program that generates tests
• Line gets blurry when testing functions, not programs – especially with persistent state

Terms: Black Box Testing
• Black box testing
  • Treats a program or system as a black box
  • That is, testing that does not look at source code or internal structure of the system
  • Send a program a stream of inputs, observe the outputs, decide if the system passed or failed the test
  • Abstracts away the internals – a useful perspective for integration and system testing
  • Sometimes you don't have access to source code, and can make little use of object code
    • True black box? Access only over a network

Terms: White Box Testing
• White box testing
  • Opens up the box!
  • (also known as glass box, clear box, or structural testing)
  • Use source code (or other structure beyond the input/output spec.) to design test cases
  • Brings us to the idea of coverage

Terms: Coverage
• Coverage measures or metrics
  • Abstraction of "what a test suite tests" in a structural sense
  • Best explained by giving examples
• Common measures:
  • Statement coverage
    • A.k.a. line coverage or basic block coverage
    • Which statements execute in a test suite
  • Decision coverage
    • Which boolean expressions in control structures evaluated to both true and false during suite execution
  • Path coverage
    • Which paths through a program's control flow graph are taken in the test suite
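
A toy C function (made up for illustration) showing how the three measures differ on the same two-test suite:

    #include <stdio.h>

    /* Toy function to illustrate coverage measures. */
    int classify(int x, int y) {
        int r = 0;
        if (x > 0)        /* decision 1 */
            r += 1;
        if (y > 0)        /* decision 2 */
            r += 2;
        return r;
    }

    int main(void) {
        /* classify(1, 1) alone executes every statement (full statement coverage),
           but each decision only ever evaluates to true, so decision coverage is incomplete. */
        printf("%d\n", classify(1, 1));
        /* Adding classify(-1, -1) makes both decisions take both outcomes
           (full decision coverage), yet only 2 of the 4 paths (TT, TF, FT, FF)
           have been exercised, so path coverage is still 50%. */
        printf("%d\n", classify(-1, -1));
        return 0;
    }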

Terms: Coverage Measures
• In general, used to measure the quality of a test suite
  • Even in cases where the suite was designed for some other purpose (such as testing lots of different use scenarios)
  • Not always a very good measure of suite quality, but "better than nothing"
  • We "open the box" in white box testing partly in order to look at (and design tests to achieve) coverage
• We'll cover coverage in much more detail later

Terms: Mutation Testing
• A mutation of a program is a version of the program with one or more random changes
• Mutation testing is another way to measure the quality of a test suite
  • Ammann and Offutt call it syntax-based coverage
• Idea: generate a large number of mutants
  • Run the test suite on these
  • If few mutants are detected, the test suite may not be very good
• Difficulties
  • Cost of testing many versions of a program
  • How to generate mutants (operators)
• In principle, can subsume many other forms of coverage
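
A concrete, invented example of a single mutant and a test that "kills" it; the mutation operator here simply flips a relational operator.

    #include <assert.h>
    #include <string.h>

    #define N 4

    /* Original: fill exactly N slots. */
    void fill(char *buf, char c) {
        for (int i = 0; i < N; i++)    /* a mutant might change "<" to "<=", writing one slot too many */
            buf[i] = c;
    }

    int main(void) {
        char buf[N + 1];
        memset(buf, 0, sizeof buf);
        fill(buf, 'x');
        /* This assertion passes on the original but fails on the "<=" mutant,
           which overwrites buf[N]; a suite containing this test kills that mutant. */
        assert(buf[N] == 0);
        return 0;
    }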

Black Box (Finite State Machine) Testing

Black box (FSM) testing
• Let's step back from software testing
• Let's look at a simpler model
  • Finite state machines
• Software is a finite state machine
  • What? Software is a Turing machine, right? Only with an infinite tape. That is, only if your software has access to infinite memory.
(figure: Lego "Turing machine")

Black box (FSM) testing
• With static memory allocation or with limited dynamic allocation nothing is infinite
  • Even if you add in disk or network storage
  • We don't have infinite electrons, much less memory
    • there are only ~10^79 of these little guys, y'know?
• So software systems are finite state machines, in reality
• Don't you feel better now? No more late nights worrying about the halting problem!

Black box (FSM) testing
• Theoretical issues aside, why do we care about testing finite state machines?
  • Abstraction: designs can often be best understood as finite-state machines
    • String processing/searching
    • Protocols – communication, cache coherence, etc.
    • Control component of any discrete system
  • Automatic abstraction:
    • Tools that take systems and produce (coarse) finite state abstractions

Black box (FSM) testing
• Useful for modeling aspects of many designs
(figure: a state machine over file-descriptor operations: FD = open("/foo"), close(FD), read(FD, buf, nbytes), write(FD, buf, nbytes))

Very Simple FSM Model
• An FSM is a tuple <S, Σ, T, I>
  • S is a set of states
  • Σ is the input alphabet
  • T is the transition relation
    • T ⊆ S × Σ × S
  • I ∈ S is the initial state
• Further assume:
  • The machine is deterministic
    • T is a (partial) function S × Σ → S
  • Given an input from Σ, the machine either
    • Outputs 0 (if no transition)
    • Or outputs 1 and takes the transition to s'
(figure: an example machine with transitions labeled a, b, c, d)
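
One way to code up this deterministic, partial-transition model in C; the 4-state machine below is invented for illustration (it is not the machine drawn on the slide), and the convention that the machine stays put on output 0 is one reasonable reading of the model.

    #include <stdio.h>

    #define NSTATES 4
    #define NSYMS   4                  /* alphabet {a, b, c, d} encoded as 0..3 */

    /* trans[s][x]: next state on symbol x from state s, or -1 where the
       (partial) transition function T is undefined. State 0 is I. */
    static const int trans[NSTATES][NSYMS] = {
        /* a   b   c   d */
        {  1, -1, -1, -1 },   /* state 0 */
        {  2, -1, -1,  3 },   /* state 1 */
        { -1,  0, -1, -1 },   /* state 2 */
        {  0,  1, -1, -1 },   /* state 3 */
    };

    /* One step: output 1 and move if a transition exists, else output 0 and stay. */
    static int step(int *state, int sym) {
        int next = trans[*state][sym];
        if (next < 0)
            return 0;
        *state = next;
        return 1;
    }

    int main(void) {
        int state = 0;                 /* the initial state I */
        const char *input = "abcda";   /* an arbitrary input word */
        for (const char *p = input; *p; p++)
            printf("%d", step(&state, *p - 'a'));
        printf("\n");                  /* prints 10011 for this machine and input */
        return 0;
    }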

Conformance Testing
• How do we test finite state machines?
• Let's say we have
  • Known FSM A
    • Know all states and transitions
  • Unknown FSM B (same alphabet)
    • Can only perform experiments
• How do we tell if A = B?
• Known as the conformance testing or equivalence testing problem
  • As stated, we cannot solve the problem
  • Why?
(figure: the known machine A)

Combination Lock Machine
• How many states does B have?
  • If we don't know, we can never be sure it is the same machine as A
• B is a combination lock: it looks like A unless we input the exact sequence "b u g" – in which case it deadlocks
(figure: Machine A next to Machine B; B's extra "lock" states are reached only by the inputs b, then u, then g, while every other input (a, c-z; a-t, v-z; a-f, h-z) leads back)

Combination Lock Machine
• Even if we know an upper limit n on B's size, for an alphabet of size |Σ|
  • It takes on the order of |Σ|^n tests to check equivalence to this particular A
  • This pathological case imposes some limits on conformance testing in general
(figure: Machine A and the combination-lock Machine B, as on the previous slide)

Conformance Testing (VC Algorithm)
• Algorithm due to Vasilevskii and Chow for conformance testing
• Assumptions
  • A is minimized, has m states
  • B has no more than n states
  • A, B both have a reliable reset
    • We can start from the initial state at will
• Worst-case complexity: O(n^2 m |Σ|^(n-m+1))
• I'll cover this quickly and informally, skipping over the sub-algorithms

Conformance Testing (VC Algorithm)
• First, we find a path to each state of A
• Typically, we compute a spanning tree
  • For example, by a depth first search (DFS)
• Read the paths off of the tree: <>, a, aa, aad, ab
• Call this set P
(figure: the example machine A with its spanning tree and access paths)
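
A sketch of how the access-path set P can be computed from a transition table; it reuses the invented 4-state machine from the earlier sketch (so the paths printed differ from the slide's P), and uses breadth-first search, though a depth-first search builds a spanning tree just as well.

    #include <stdio.h>

    #define NSTATES 4
    #define NSYMS   4                  /* alphabet {a, b, c, d} */

    /* The same invented machine as in the earlier FSM sketch; -1 = no transition. */
    static const int trans[NSTATES][NSYMS] = {
        /* a   b   c   d */
        {  1, -1, -1, -1 },
        {  2, -1, -1,  3 },
        { -1,  0, -1, -1 },
        {  0,  1, -1, -1 },
    };

    int main(void) {
        char path[NSTATES][16];        /* one access path per state: the set P */
        int  seen[NSTATES]  = { 1, 0, 0, 0 };
        int  queue[NSTATES] = { 0 };
        int  head = 0, tail = 1;

        path[0][0] = '\0';             /* the empty path <> reaches the initial state */

        /* BFS builds a spanning tree; each tree edge extends a parent's path by one symbol. */
        while (head < tail) {
            int s = queue[head++];
            for (int x = 0; x < NSYMS; x++) {
                int t = trans[s][x];
                if (t >= 0 && !seen[t]) {
                    seen[t] = 1;
                    snprintf(path[t], sizeof path[t], "%s%c", path[s], 'a' + x);
                    queue[tail++] = t;
                }
            }
        }
        for (int s = 0; s < NSTATES; s++)
            printf("state %d: \"%s\"\n", s, path[s]);   /* prints "", "a", "aa", "ad" */
        return 0;
    }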

Conformance Testing (VC Algorithm)
• Next, compute a characterizing (or distinguishing) set for A
• A set W of input sequences such that
  • for all s, s' ∈ S with s ≠ s'
  • there exists w ∈ W such that
    • the output for w from s is not equal to the output for w from s'
  • i.e., we can use W to tell what state we're in

Conformance Testing (VC Algorithm)
• Next, compute a characterizing (or distinguishing) set for A
• For example, W for A might be {aa, b}
  • aa: outputs 11, 10, 01, 00, 10 from the five states
    • Distinguishes all but these two states
    • Which are distinguished by b (1 vs. 0)
• Can we find another (better?) set?
(figure: the example machine A)
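
A sketch of how one can check that a candidate W really is a characterizing set: compute each state's output "signature" under W and make sure all signatures differ. It again uses the invented 4-state machine from the earlier sketches, for which {aa, b} happens to work, rather than the slide's machine A.

    #include <stdio.h>
    #include <string.h>

    #define NSTATES 4
    #define NSYMS   4
    #define NW      2

    /* The same invented machine as in the earlier sketches; -1 = no transition. */
    static const int trans[NSTATES][NSYMS] = {
        {  1, -1, -1, -1 },
        {  2, -1, -1,  3 },
        { -1,  0, -1, -1 },
        {  0,  1, -1, -1 },
    };

    /* Output word produced by running input w from state s (0 = no transition, stay put). */
    static void run(int s, const char *w, char *out) {
        for (; *w; w++) {
            int t = trans[s][*w - 'a'];
            *out++ = (t < 0) ? '0' : '1';
            if (t >= 0)
                s = t;
        }
        *out = '\0';
    }

    int main(void) {
        const char *W[NW] = { "aa", "b" };     /* candidate characterizing set */
        char sig[NSTATES][32], out[16];
        int ok = 1;

        /* A state's signature is the concatenation of its output words on each w in W. */
        for (int s = 0; s < NSTATES; s++) {
            sig[s][0] = '\0';
            for (int i = 0; i < NW; i++) {
                run(s, W[i], out);
                strcat(sig[s], out);
            }
            printf("state %d signature: %s\n", s, sig[s]);   /* 110, 100, 001, 111 */
        }
        /* W characterizes the machine iff all signatures are pairwise distinct. */
        for (int s = 0; s < NSTATES; s++)
            for (int t = s + 1; t < NSTATES; t++)
                if (strcmp(sig[s], sig[t]) == 0) {
                    printf("W fails to distinguish states %d and %d\n", s, t);
                    ok = 0;
                }
        printf(ok ? "W is a characterizing set\n" : "W is not a characterizing set\n");
        return 0;
    }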

Conformance Testing (VC Algorithm)
• Now we can compute Z:
  • Z = W ∪ Σ·W ∪ Σ²·W ∪ … ∪ Σ^(n-m)·W
• To test B for conformance with A
  • Run the tests produced by taking the cross-product of P and Z on both A and B
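
A sketch of this final step as pure string manipulation, using the P and W from the next slide and n - m = 1 (so Z = W ∪ Σ·W); each printed line is one test to run, after a reset, on both A and B.

    #include <stdio.h>

    int main(void) {
        /* P, W, and the alphabet of the slides' example; with n - m = 1 the
           set Z is W ∪ Σ·W, so the full suite is P·W ∪ P·Σ·W. */
        const char *P[]   = { "", "a", "aa", "aad", "ab" };
        const char *W[]   = { "aa", "b" };
        const char *Sigma = "abcd";
        int count = 0;

        for (int p = 0; p < 5; p++)
            for (int w = 0; w < 2; w++) {
                printf("%s %s\n", P[p], W[w]);                   /* p · w     */
                count++;
                for (int x = 0; Sigma[x]; x++) {
                    printf("%s %c %s\n", P[p], Sigma[x], W[w]);  /* p · x · w */
                    count++;
                }
            }
        printf("%d tests in total\n", count);                    /* 50 tests  */
        return 0;
    }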

Conformance Testing (VC Algorithm)
• P: {<>, a, aa, aad, ab}
• W: {aa, b}
• Let's say we know B has no more than 6 states
• The complete testing sequence (with reset before each test on each machine) is:
  • {aa, b, a aa, a b, aa aa, aa b, aad aa, aad b, ab aa, ab b, a aa, b aa, c aa, d aa, a b, b b, c b, d b, a a aa, a b aa, a c aa, a d aa, a a b, a b b, a c b, a d b, aa a aa, aa b aa, aa c aa, aa d aa, aa a b, aa b b, aa c b, aa d b, aad a aa, aad b aa, aad c aa, aad d aa, aad a b, aad b b, aad c b, aad d b, ab a aa, ab b aa, ab c aa, ab d aa, ab a b, ab b b, ab c b, ab d b}
(figure: the example machine A)

Conformance Testing (VC Algorithm)
• As this small example shows, exhaustive tests can be very expensive
  • In general, we cannot computationally afford to perform complete testing
  • We will always face the risk of missing errors
• Even when we reduce our problem to the simplest model
  • The complexity of testing full equivalence to a reference model is simply too high
  • Exhaustion is exhausting

From FSM Testing to the Big Picture
• Testing (almost always) is an attempt to cover some measure of a structure
  • Nodes of a graph (e.g., VC's spanning tree)
  • Inputs that give different outputs (e.g., VC's distinguishing set)
  • All possible inputs (e.g., the Σ^(n-m) part of VC)
  • Logical expression evaluations
  • Predicates over program variables
  • Pairs of where a variable is defined and where it is used (data flow)
• Usually, we can't even guarantee that coverage directly correlates to more bugs found

Basic Definitions: Testing
• What is software testing?
  • Running a program
  • In order to find faults
    • a.k.a. defects
    • a.k.a. errors
    • a.k.a. flaws
    • a.k.a. faults
    • a.k.a. BUGS
  • But also, in order to
    • Increase our confidence that the program has high quality and low risk
    • Because we can never be sure we caught all bugs
• How does a set of executions increase confidence?
  • Sometimes, by algorithmic argument (VC)
  • Sometimes, by less formal arguments (coverage in general)

FSM Testing: Further Reading
• Lee and Yannakakis, "Principles and Methods of Testing Finite State Machines: A Survey"
• Chow, "Testing Software Design Modeled by Finite-State Machines"
• Links on the class website

"Assignment 0"
• Download the YAFFS tarball from the class website
• cd direct
• make
• look at yaffscfg2k.c
• run ./directtest2k
• write a very simple test (e.g., open a file, write something, and read it back) and add it to the main
• get it to compile and run, and email the test case (and results) to me at [email protected]
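
A sketch of what such a test might look like, to be adapted to the actual YAFFS direct interface in the tarball: the yaffs_* calls below come from yaffsfs.h in YAFFS2 releases, but the mount point ("/yaffs2"), the open flags, and the mode value are assumptions to check against yaffscfg2k.c and the existing directtest2k code.

    #include <stdio.h>
    #include <string.h>
    #include "yaffsfs.h"          /* YAFFS direct interface (yaffs_mount, yaffs_open, ...) */

    /* Write a short message to a file, read it back, and compare.
       Call this from the main in directtest2k (or a copy of it). */
    int simple_write_read_test(void)
    {
        const char *msg = "hello, yaffs";
        char buf[32];

        yaffs_mount("/yaffs2");                       /* assumed mount point; see yaffscfg2k.c */
        int fd = yaffs_open("/yaffs2/assign0", O_CREAT | O_TRUNC | O_RDWR, 0666);
        if (fd < 0) { printf("open failed\n"); return 1; }

        if (yaffs_write(fd, msg, strlen(msg)) != (int)strlen(msg)) return 1;
        yaffs_lseek(fd, 0, SEEK_SET);                 /* rewind before reading back */
        int n = yaffs_read(fd, buf, sizeof buf - 1);
        buf[n > 0 ? n : 0] = '\0';
        yaffs_close(fd);
        yaffs_unmount("/yaffs2");

        if (strcmp(buf, msg) != 0) { printf("MISMATCH: got \"%s\"\n", buf); return 1; }
        printf("read back ok: \"%s\"\n", buf);
        return 0;
    }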