- Number of slides: 18
March 25, 2012
Organizing committee: Hana Chockler (IBM), Daniel Kroening (Oxford), Natasha Sharygina (USI), Leonardo Mariani and Giovanni Denaro (UniMiB)
Program
09:15 – 09:45 Software Upgrade Checking Using Interpolation-based Function Summaries (Ondrej Sery)
09:45 – 10:30 Finding Races in Evolving Concurrent Programs Through Check-in Driven Analysis (Alastair Donaldson)
coffee
11:00 – 11:45 SymDiff: Leveraging and Extending Program Verification Techniques for Comparing Programs (Shuvendu K. Lahiri)
11:45 – 12:30 Regression Verification for Multi-Threaded Programs (Ofer Strichman)
lunch
14:00 – 14:45 Empirical Analysis of Evolution of Vulnerabilities (Fabio Massacci)
14:45 – 15:30 Testing Evolving Software (Alex Orso)
coffee
16:00 – 16:45 Automated Continuous Evolutionary Testing (Peter M. Kruse)
Motivation: challenges of validating evolving software
• Large software systems are usually built incrementally:
  • Maintenance (fixing errors and flaws, hardware changes, etc.)
  • Enhancements (new functionality, improved efficiency, extensions, new regulations, etc.)
• Changes are made frequently during the lifetime of most systems and can introduce new software errors or expose old ones
• Upgrades are rolled out gradually, so the old and new versions have to coexist in the same system
• Changes often require re-certification of the system, especially for mission-critical systems
"Upgrading a networked system is similar to upgrading the software of a car while the car's engine is running and the car is moving on a highway. Unfortunately, in networked systems we don't have the option of shutting the whole system down while we upgrade and verify a part of it." (source: ABB)
What does it mean to validate a change in a software system?
• Equivalence checking – when the new version should be equivalent to the previous version in terms of functionality
  • Changes in the underlying hardware
  • Optimizations
• No crashes – when several versions need to co-exist in the same system, we need to ensure that the update will not crash the system
  • When there is no correctness specification, this is often the only thing we can check
• Checking that a specific bug was fixed
  • A counterexample trace can be viewed as a specification of a behavior that must be eliminated in the new version
• Validation of the new functionality
  • If a correctness specification for the change exists, we can check whether the new (or changed) behaviors satisfy it
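The equivalence-checking idea above can be sketched with a minimal randomized differential check: run the old and new versions of a routine on the same inputs and report any input on which they disagree. The function names and versions below are illustrative, not from the slides, and random testing is only a lightweight stand-in for the formal equivalence checking the talks address.

```python
import random

# Hypothetical old and new versions of the same routine (illustrative).
def abs_old(a):
    return a if a >= 0 else -a

def abs_new(a):
    return abs(a)  # upgraded implementation, intended to be equivalent

def check_equivalence(old, new, trials=10_000, seed=0):
    """Randomized differential check: return the first input on which
    the two versions disagree, or None if no divergence is found."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = rng.randint(-10**6, 10**6)
        if old(x) != new(x):
            return x  # counterexample: the behaviors diverge here
    return None

print(check_equivalence(abs_old, abs_new))  # None: no divergence found
```

A found counterexample plays exactly the role described above: a concrete trace distinguishing the two versions, which can then guide debugging or re-verification.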
Why is validation of evolving software different from standard software validation?
• Software systems are too large to be formally verified or exhaustively tested at once
• Even when validating the whole system is feasible, the process is often too long and expensive and does not fit into the schedule of small, frequent changes
• When validating the whole system, there is a danger of overlooking the change
How can we use the fact that we are validating evolving software?
• If the previous version was validated in some way, we can assume it is correct and avoid re-validating the parts that were not changed
• If the results of previous validation exist, we can use them as a basis for the current validation – especially useful when there are many versions that differ from each other only slightly
• The previous version can be used as a specification
PINCETTE Project – Validating Changes and Upgrades in Networked Software
[Architecture diagram: a front end feeds a Static Analysis Component (checking for crashes, using function summaries, verifying only the change) and a Dynamic Analysis Component (black-box testing, white-box testing); a methodology book ties the approach together]
PINCETTE: exchange of information between static analysis and dynamic analysis techniques
• Using a static slicer as a preprocessing step for the dynamic analysis tools
  • The slicer reduces the size of the program so that only the parts relevant to the change remain
  • The resulting slice is then extended to an executable program
• Specification mining: obtaining candidate assertions from dynamic analysis and using them in static analysis
Slicing procedure
[Diagram: Program → Control Flow Graph (CFG) → Program Dependence Graph (PDG)]
©Ajitha Rajan, Oxford
Forward Slicing from Changes
• Compute the nodes corresponding to changed statements in the PDG, and
• Compute a transitive closure over all forward dependencies (control + data) from these nodes.
Backward Slicing from Assertions
• Identify the assertions to be rechecked after the changes
• Compute a transitive closure of backward dependencies (control + data) from these assertions
©Ajitha Rajan, Oxford
Example
int main() {
  int a, b;
  if (a >= 0)
    b = a;
  else
    b = -a;
  assert(b >= 0);
  return 0;
}
[PDG diagram: "if (a>=0)" has control-dependence edges to "b=a" and "b=-a"; "b=a" and "b=-a" have data-dependence edges to "assert(b>=0)"; "return 0" depends on nothing]
Forward slice from the changed node "b=-a" (depth-first traversal): b=-a, assert(b>=0)
Backward slice from "assert(b>=0)": if (a>=0), b=a, b=-a, assert(b>=0)
©Ajitha Rajan, Oxford
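The two transitive closures on this example can be sketched in a few lines. The PDG below is a hand-encoded adjacency map for the example above (control and data dependencies combined, edges pointing from a statement to the statements that depend on it); the encoding is illustrative, not the tool's actual data structure.

```python
# Hypothetical PDG for the example: node -> nodes depending on it
# (control + data dependencies combined).
deps = {
    "if (a>=0)": ["b=a", "b=-a"],   # control dependencies
    "b=a": ["assert(b>=0)"],        # data dependencies on b
    "b=-a": ["assert(b>=0)"],
    "assert(b>=0)": [],
    "return 0": [],
}

def forward_slice(pdg, changed):
    """Changed statements plus all nodes transitively dependent on them."""
    seen, stack = set(changed), list(changed)
    while stack:
        for succ in pdg[stack.pop()]:
            if succ not in seen:
                seen.add(succ)
                stack.append(succ)
    return seen

def backward_slice(pdg, assertions):
    """Assertions plus all nodes they transitively depend on:
    a forward closure over the reversed dependence edges."""
    rev = {n: [] for n in pdg}
    for n, succs in pdg.items():
        for s in succs:
            rev[s].append(n)
    return forward_slice(rev, assertions)

print(sorted(forward_slice(deps, {"b=-a"})))
print(sorted(backward_slice(deps, {"assert(b>=0)"})))
```

Running this reproduces the slices on the slide: the forward slice from the changed statement reaches only the assertion, while the backward slice from the assertion pulls in both branches and the condition, but not "return 0".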
Slicing procedure
[Pipeline diagram, built up over three slides: the program is compiled with goto-cc into a GOTO program, from which the Control Flow Graph (CFG) and Program Dependence Graph (PDG) are built; the forward and backward slices are computed on the PDG and merged; the residual nodes and edges form the program slice, which is extended back into an executable program]
©Ajitha Rajan, Oxford
Static Pre-Pruning
[Diagram: the static slicer constrains the inputs fed to the dynamic analyser]
©Ajitha Rajan, Oxford
Dynamically Discovering Assertions to Support Formal Verification
Motivation:
• "Gray-box" components (such as OTS components) – poor specifications, partial view of internal details
• Lack of specification complicates validation and debugging
• Lack of a description of the correct behavior complicates integration
Idea: analyze gray-box components with dynamic analysis techniques:
• Monitor system executions by observing interactions at the component interface level and inside components
• Derive models of the expected behavior from the observed events
• Mark model violations as symptoms of faults
©Leonardo Mariani, UniMiB
Dynamically Discovering Assertions with BCT
• Combines dynamic analysis and model-based monitoring
• Combines classic dynamic analysis techniques (Daikon) with incremental finite-state generation techniques (kBehavior) to produce I/O models and interaction models
• FSAs are produced and refined based on subsequent executions
• Extracts information about likely causes of failures by automatically relating the detected anomalies
• Filters false positives in two steps:
  • Identify and eliminate false positives by comparing failing and successful executions, using heuristics already tried in other contexts
  • Rank the remaining anomalies by mutual correlation and use this information to push the related likely false positives away from the top of the list
©Leonardo Mariani, UniMiB
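The I/O-model side of this can be sketched very simply: mine candidate invariants from passing runs, then flag later observations that violate them as anomalies. This is a deliberately minimal Daikon-style sketch (range invariants only) and not BCT's actual algorithm; the variable names and data are made up for illustration.

```python
def mine_range_invariants(observations):
    """observations: list of dicts mapping variable name -> value,
    one dict per passing run. Returns {var: (min, max)} ranges."""
    inv = {}
    for obs in observations:
        for var, val in obs.items():
            lo, hi = inv.get(var, (val, val))
            inv[var] = (min(lo, val), max(hi, val))
    return inv

def violations(invariants, obs):
    """Variables in obs that fall outside their mined range --
    reported as anomalies, i.e. possible symptoms of a fault."""
    return [v for v, x in obs.items()
            if v in invariants
            and not (invariants[v][0] <= x <= invariants[v][1])]

runs = [{"b": 0}, {"b": 3}, {"b": 7}]   # observed passing executions
inv = mine_range_invariants(runs)       # mined invariant: 0 <= b <= 7
print(violations(inv, {"b": -1}))       # b falls outside the mined range
```

Real tools mine much richer invariants (linear relations, FSAs over call sequences) and, as the slide notes, must then filter and rank the resulting anomalies, since a violated mined invariant may just be an under-sampled behavior rather than a fault.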
"User in the Middle" Strategy
[Diagram: the dynamic analyser observes executions of the System Under Test and proposes candidate assertions; static analysis promotes some to true assertions with no user intervention, while the user approves the rest; the approved and true assertions then feed the static and dynamic analysis of the upgrade]
©Leonardo Mariani, UniMiB
PINCETTE Project – Validating Changes and Upgrades in Networked Software (next talk)
[Architecture diagram, as before: front end, Static Analysis Component (checking for crashes, using function summaries, verifying only the change), Dynamic Analysis Component (black-box testing, concolic testing), methodology book]