
Report from the Working Group on “Methods in HCSS” (High Confidence Software and Systems)
Azer Bestavros
Participants
• Azer Bestavros, BU
• Matthew Dwyer, UNL
• Allen Goldberg, Kestrel
• Paul Jones, FDA
• Martha Matzke, NCO
• Paul Miner, NASA
• Cesar Munoz, NIA
• David von Oheimb, Siemens
• Calton Pu, GTech
• John Rushby, SRI
• Lui Sha, UIUC
• Bill Spees, FDA
• Reinhard Wilhelm, Saarland
What do we mean by “Methods”?
• A “method” is a systematic process for the specification, analysis, verification, development, and validation of a system
[Diagram: methods classified along two axes – Formal vs. Informal, Generic vs. Domain-Specific]
Participants’ Backgrounds
Number of participants per area, classified along the Formal/Informal and Generic/Domain-Specific axes:
• Specification: 0, 5, 4, 1
• Analysis: 1, 6, 7, 0
• Verification: 0, 4, 5, 2
• Implementation: 2, 1, 3, 2
• Validation: 1, 1, 1, 2
Know thy method’s limit
• A method is only as good as the model of the system on which it is built
  – this is the “Heisenberg principle” at work…
• Claims of correctness must always carry a disclaimer about the limitations of the underlying model and assumptions (one standard way to make this explicit is shown below)
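A conventional way to state such a disclaimer explicitly, offered here as a general illustration rather than something prescribed by the report, is the assume-guarantee form of a correctness claim:

    M \models (A \Rightarrow G)

read as: the model M of the system satisfies the guarantee G only under the environment assumptions A; the claim is silent about the real system whenever A is violated or M misrepresents the implementation.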
Domain knowledge is key
• Successful methods are those that are well-integrated with an application domain
• Domain knowledge informs and enriches both the models and the methods
• It is rather hard to perform cross-domain analysis
  – e.g., how could timing analysis and power analysis leverage each other?
Cost/Benefit Proposition
• Integrating verification and certification processes with development processes reduces costs considerably
• It also results in better-developed systems and practices
• The application of verification methods must be seamless
  – the barrier to entry should be sufficiently low
Complexity Reduction
• Models need to be rich (complex) enough to deal with the “frame” problem
  – Models should be “resource aware”
  – Interfaces should reflect assumptions about behaviors (e.g., dynamics), as sketched below
• Models also need to be simple enough for verification to scale
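As a rough illustration of what a “resource aware” interface might look like, the hypothetical sketch below attaches an assumed timing envelope to an interface and rejects uses that violate it; all names and numbers are invented for this example.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TimingAssumption:
        """Assumed behavioral envelope for calls across this interface."""
        wcet_ms: float        # assumed worst-case execution time of the callee
        min_period_ms: float  # assumed minimum spacing between requests

    class SensorInterface:
        """Hypothetical component interface that carries its resource assumptions."""
        assumption = TimingAssumption(wcet_ms=2.0, min_period_ms=10.0)

        def read(self) -> float:
            raise NotImplementedError

    def check_period(last_call_ms: float, now_ms: float) -> None:
        """Reject uses that violate the assumption the verification relied on."""
        if now_ms - last_call_ms < SensorInterface.assumption.min_period_ms:
            raise RuntimeError("caller violates the assumed minimum period")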
Complexity Reduction (continued)
• Need to adopt an “hour-glass” paradigm for verification and validation
  – apply methods to intermediate abstract models of systems (see the sketch below)
• Need to figure out what these “right” abstract models are
• Interactions involving critical components must be simple
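A minimal sketch of the “hour-glass” idea, under the usual abstraction-based verification assumptions and not taken from the report itself: check that an intermediate abstract model simulates the concrete one, so that properties established on the abstract model carry over.

    def simulates(concrete_states, concrete_step, abstract_step, alpha) -> bool:
        """True if every concrete transition s -> s' is matched by an
        abstract transition alpha(s) -> alpha(s')."""
        return all(
            alpha(s_next) in abstract_step(alpha(s))
            for s in concrete_states
            for s_next in concrete_step(s)
        )

    # Toy instance: a saturating counter 0..5 abstracted to {"low", "high"}.
    def alpha(n): return "low" if n < 3 else "high"
    def concrete_step(n): return {min(n + 1, 5)}
    def abstract_step(a): return {"low", "high"} if a == "low" else {"high"}

    assert simulates(range(6), concrete_step, abstract_step, alpha)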
Manageability of “Bugs”
• Distinguish between manageable and unmanageable residual bugs
• Verify the absence of unmanageable bugs, and develop strategies for dealing with manageable ones
  – Bugs that are visible through an established interface may be manageable, whereas those that open up new “interfaces” are unmanageable (see the contrast sketched below)
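The interface-based distinction can be pictured with a small, entirely invented example: the first variant keeps its residual failure visible through a declared error that callers can plan for, while the second silently corrupts state that other components rely on, opening an interface nobody designed.

    class SensorError(Exception):
        """Failure mode declared in the interface: callers can plan a recovery."""

    shared_calibration = {"offset": 0.0}   # state other components depend on

    def read_manageable(raw: float) -> float:
        if raw < 0:                                    # residual bug condition
            raise SensorError("reading out of range")  # visible via the interface
        return raw + shared_calibration["offset"]

    def read_unmanageable(raw: float) -> float:
        if raw < 0:                                    # same bug condition
            shared_calibration["offset"] = raw         # silently corrupts shared state
        return raw + shared_calibration["offset"]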
Challenges (~3 years)
• Many specific challenges:
  – Automated methods fit for deployment
  – Timing validation of single-board IMA (Integrated Modular Avionics)
  – Leveraging byproducts of method application
• Develop methods and frameworks for integrating disparate analyses and verification methodologies
• Develop better methods and architectures for establishing and enabling non-interference (one standard formulation is given below)
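For reference, one standard statement of non-interference (in the Goguen–Meseguer style; the report itself does not fix a definition) says that what a critical partition observes must not depend on what the other partitions do:

    \forall i_1, i_2.\;\; i_1 =_P i_2 \;\Rightarrow\; \mathit{obs}_P(\mathit{run}(i_1)) = \mathit{obs}_P(\mathit{run}(i_2))

that is, any two input histories that agree on partition P’s own inputs yield identical observations for P, no matter how the inputs to other components differ.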
Which is the right perspective?
• “Design only what you can analyze” versus “Resilience against the unknowns”
• “Make COTS systems certifiable” versus “Make certifiable systems COTS” (COTS: commercial off-the-shelf)
• Aviation software is the tip of the iceberg, and is trivial compared to the complex medical systems of the future…
Education
• Recognize the professional nature of the practice
• Need to create demand for new programs (e.g., master’s programs)
• “Aviation software” may be too narrow to create enough demand
  – solicit a consortium of broader industries in “high-confidence software and systems”