
Automatic Trust Management for Adaptive Survivable Systems (ATM for ASS's)
Howard Shrobe, MIT AI Lab
Jon Doyle, MIT Lab for Computer Science
The Core Thesis
Survivable systems make careful judgments about the trustworthiness of their computational environment, and they make rational resource allocation decisions based on their assessment of trustworthiness.
The Thesis In Detail: Trust Model
• It is crucial to estimate to what degree and for what purposes a computational resource may be trusted.
• This influences decisions about:
 – What tasks should be assigned to which resources.
 – What contingencies should be provided for.
 – How much effort to spend watching over the resources.
• The trust estimate depends on having a model of the possible ways in which a computational resource may be compromised.
The Thesis in Detail: Adaptive Survivable Systems
• The application itself must be capable of self-monitoring and diagnosis:
 – It must know the purposes of its components.
 – It must check that these are achieved.
 – If these purposes are not achieved, it must localize and characterize the failure.
• The application itself must be capable of adaptation so that it can best achieve its purposes within the available infrastructure:
 – It must have more than one way to effect each critical computation.
 – It should choose an alternative approach if the first one fails.
 – It should make its initial choices in light of the trust model.
The Thesis in Detail: Rational Resource Allocation
• This depends on the ability of the application, monitoring, and control systems to engage in rational decision making about which resources they should use to achieve the best balance of expected benefit to risk.
• The amount of resources dedicated to monitoring should vary with the threat level.
• The methods used to achieve computational goals and the location of the computations should vary with the threat.
• Somewhat compromised systems will sometimes have to be used to achieve a goal.
• Sometimes doing nothing will be the best choice.
The Active Trust Management Architecture
[Architecture diagram: perpetual analytical monitoring, driven by trend templates and other information sources (intrusion detectors, system models & domain architecture), maintains a trust model of trustworthiness, compromises, and attacks; the trust model feeds rational decision making and rational resource allocation for self-adaptive survivable systems.]
Tiers of a Trust Model
• Attack Level: history of "bad" behaviors (penetration, denial of service, unusual access, flooding)
• Compromise Level: state of the mechanisms that provide:
 – Privacy: stolen passwords, stolen data, packet snooping
 – Integrity: parasitized, changed data, changed code
 – Authentication: changed keys, stolen keys
 – Non-repudiation: compromised keys, compromised algorithms
 – QoS: slow execution
 – Command and Control properties: compromises to the monitoring infrastructure
• Trust Level: degree of confidence in key properties:
 – Compromise states
 – Intent of attackers
 – Political situation
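A minimal sketch of how these three tiers might be held as data, assuming hypothetical class and field names (nothing here is prescribed by the slides):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AttackEvent:
    """Attack level: one observed 'bad' behavior (penetration, flooding, ...)."""
    kind: str          # e.g. "denial-of-service"
    target: str        # name of the attacked resource
    timestamp: float

@dataclass
class ResourceTrust:
    """Compromise and trust levels for a single computational resource."""
    # Compromise level: per-mechanism estimate that the mechanism is compromised.
    compromise: Dict[str, float] = field(default_factory=lambda: {
        "privacy": 0.0, "integrity": 0.0, "authentication": 0.0,
        "non-repudiation": 0.0, "qos": 0.0, "command-control": 0.0,
    })
    # Trust level: overall confidence that the resource can be relied on.
    trust: float = 1.0

@dataclass
class TrustModel:
    attacks: List[AttackEvent] = field(default_factory=list)
    resources: Dict[str, ResourceTrust] = field(default_factory=dict)
```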
Adaptive Survivable Systems
[Architecture diagram: for each component (e.g. Foo), a component asset base offers several methods and rational selection picks the most attractive one; synthesized sentinels in the plan structures monitor post-conditions and prerequisites (e.g. post-condition 1 of Foo because post-condition 2 of B and post-condition 1 of C; prerequisite 1 of B because post-condition 1 of A); alerts flow to a diagnostic service that performs diagnosis & recovery, and a repair plan selector chooses among rollback, re-selection of resources, and alternative plans; the development environment supplies plan structures to the runtime enactment environment, with a resource allocator mediating.]
Context for the Project: The Intelligent Room (E21)
• The Intelligent Room is an integrated environment for multi-modal HCI. It has eyes and ears.
 – The room provides speech input.
 – The room has deep understanding of natural language utterances.
 – The room has a variety of machine vision systems that enable it to:
  • Track motion and maintain the position of people
  • Recognize gestures
  • Recognize body postures
  • Identify faces (eventually)
  • Track pointing devices (e.g. a laser pointer)
  • Select the optimal camera for remote viewers
  • Steer cameras to track the focus of attention
• Metaglue is a lightweight, distributed agent infrastructure for integrating and dynamically (un)connecting new HCI components. Metaglue is the brains of the room.
The E21 Maps Abstract Services into Plans
• Users request abstract services from the E21:
 – "I want to get a textual message to a system wizard"
• The E21 has many plans for how to render each abstract service:
 – "Locate a wizard, project on a wall near her"
 – "Locate a wizard, use a voice synthesizer and a speaker near her"
 – "Print the message and page the wizards to go to the printer"
• Each plan requires certain resources (and other abstract services):
 – Some resources are more valuable than others (higher cost).
 – Some resources are more useful for this plan than others (higher benefit).
 – The resources may be otherwise committed: they may be preempted, but at a high cost.
• The resource manager picks a set of resources which is (nearly) optimal.
Each Method Binds the Settings of the Control Parameters in a Different Way
[Diagram: a user requests an abstract service with certain control parameters; each service can be provided by several methods (Method 1 through Method n), each binding the control parameters differently and requiring different resources (Resource 1,1 through Resource 1,j); the binding of parameters has a value to the user (the user's utility function), and the resources used by a method have a cost (the resource cost function); the system selects the method which maximizes net benefit.]
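The selection rule reduces to maximizing utility minus cost over the candidate methods. A small sketch under assumed data structures; the utility and cost functions here are placeholders, not the E21 implementation:

```python
from typing import Callable, Dict, List, NamedTuple, Optional

class Method(NamedTuple):
    name: str
    parameter_binding: Dict[str, object]   # how this method sets the control parameters
    resources: List[str]                   # resources the method needs

def select_method(methods: List[Method],
                  user_utility: Callable[[Dict[str, object]], float],
                  resource_cost: Callable[[str], float]) -> Optional[Method]:
    """Pick the method whose net benefit (utility of binding minus resource cost) is largest."""
    best, best_net = None, float("-inf")
    for m in methods:
        net = user_utility(m.parameter_binding) - sum(resource_cost(r) for r in m.resources)
        if net > best_net:
            best, best_net = m, net
    return best
```

In this framing, access policies and preemption penalties can simply enter as extra terms in the cost function.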
Recovering From Failures
• The E21 renders services by translating them into plans involving physical resources.
 – Physical resources have known failure modes.
• Each plan step accomplishes sub-goal conditions needed by succeeding steps.
 – Each condition has some way of monitoring whether it has been accomplished.
 – These monitoring steps are also inserted into the plan.
• If a sub-goal fails to be accomplished, model-based diagnosis isolates and characterizes the failure.
• A recovery is chosen based on the diagnosis:
 – It might be as simple as "try it again; we had a network glitch".
 – It might be "try it again, but with a different selection of resources".
 – It might be as complex as "clean up and try a different plan".
Access Policies
[Same service/method/net-benefit diagram as above; access policies naturally fit within the model.]
Model-Based Troubleshooting for Trust Model Updating
Model-Based Diagnosis for Survivable Systems
• Extension of previous work on model-based diagnosis (Shrobe & Davis; Williams & de Kleer):
 – Previous work dealt with hardware failures.
 – Previous work ignored common-mode failures.
• The focus is on diagnosing failures of computations in order to assess the health of the underlying resources.
• Given:
 – The plan structure of the computation, describing expected behavior including QoS
 – An observation of actual behavior that deviates from expectations
• Produce:
 – Localization: which component(s) failed
 – Characterization: what they did wrong
 – Inferences about the compromise state of the computational resources involved
 – Inferences about what attacks enabled the compromise to occur
Ontology of the Diagnostic Task
• Computations utilize a set of resources (e.g. hosts, binary executable files, databases).
• Individual resources have vulnerabilities.
• Vulnerabilities enable attacks.
• An attack on an instance of a particular type of resource can cause that resource to enter a compromised state.
• A computation that utilizes a compromised resource may exhibit a misbehavior, i.e. it may behave in a manner other than would be predicted by its design.
• Misbehaviors are the symptoms that initiate diagnostic activity, leading to updated assessments of:
 – The compromised states of the resources used in the computation
 – The likelihood of attacks having succeeded
 – The likelihood that other resources have been compromised
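A sketch of this causal chain as data, with hypothetical type and field names chosen only for illustration:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Vulnerability:
    name: str
    enables_attacks: List[str]           # attacks this vulnerability makes possible

@dataclass
class Resource:
    name: str
    kind: str                            # e.g. "host", "binary", "database"
    vulnerabilities: List[Vulnerability] = field(default_factory=list)
    compromised_states: List[str] = field(default_factory=list)   # e.g. "parasitized"

@dataclass
class Computation:
    name: str
    uses: List[Resource] = field(default_factory=list)

def possible_causes(misbehaving: Computation) -> List[Tuple[str, str, str]]:
    """Trace a misbehavior back to (resource, vulnerability, attack) triples that could explain it."""
    return [(r.name, v.name, attack)
            for r in misbehaving.uses
            for v in r.vulnerabilities
            for attack in v.enables_attacks]
```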
The Space of Intrusion Detection
[Diagram: detection approaches span a spectrum from unsupervised learning from normal runs (statistical profiles, anomaly detection) through structural models/patterns of expected behavior (discrepancy from good) to supervised learning from attack runs and hand-coded structural models of attacks (match to bad); the corresponding observations range from suspicious anomalies and symptoms to outright violations. A symptom may indicate an attack or a compromise.]
Model-Based Troubleshooting: Constraint Suspension
[Worked example on a small circuit of adders and multipliers: suspending the constraint of one suspect component at a time, some suspensions yield a consistent diagnosis (e.g. the broken component takes inputs 25 and 15 and produces the observed output 35, or takes inputs 5 and 3 and produces 10), while others yield no consistent diagnosis (a conflict between the values 25 and 20).]
Multiple Faults and the General Diagnostic Engine (GDE)
• Each component is modeled by multi-directional constraints representing its normal behavior.
• As a value is propagated through a component model, it is labeled with the assumption that this component works.
 – The propagated label is the set union of the labels of the inputs to the model, plus a token for the current model.
• A conflict is detected at any place to which inconsistent values are propagated:
 – It is inconsistent to believe two inconsistent values at once.
 – The union of the labels of these values implies that you should believe both.
 – At least one element in this union must be false.
 – A nogood is the set union of the labels of the conflicting values.
• A diagnosis is a set of assumptions that forms a covering set of all nogoods (i.e. includes at least one assumption from each nogood).
• The goal is to find all minimal diagnoses.
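A compact sketch of the last step, computing the minimal covering sets (minimal hitting sets) of the nogoods. Each nogood is represented simply as a set of component-assumption labels; the labels in the example comment are hypothetical:

```python
from itertools import combinations
from typing import FrozenSet, List, Set

def minimal_diagnoses(nogoods: List[Set[str]]) -> List[FrozenSet[str]]:
    """All minimal sets of 'this component is broken' assumptions that cover every nogood,
    i.e. contain at least one member of each conflict set."""
    universe = sorted(set().union(*nogoods)) if nogoods else []
    found: List[FrozenSet[str]] = []
    for size in range(len(universe) + 1):
        for candidate in combinations(universe, size):
            cset = frozenset(candidate)
            # Skip candidates containing an already-found smaller diagnosis: not minimal.
            if any(prev <= cset for prev in found):
                continue
            if all(cset & ng for ng in nogoods):     # hits every conflict
                found.append(cset)
    return found

# Example with two conflicts over hypothetical components A1, A2, M1, M2, M3:
# minimal_diagnoses([{"A1", "M1", "M2"}, {"A1", "A2", "M1", "M3"}])
# yields the diagnoses {A1}, {M1}, {A2, M2}, {M2, M3}.
```

The brute-force enumeration is exponential; it is only meant to make the definition concrete.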
Model-Based Troubleshooting: GDE
[The same adder/multiplier example analyzed with GDE: from the conflicts, the diagnoses are that the blue or violet component is broken; that the green component is broken and the red one has a compensating fault; or that the green component is broken and the yellow one has a masking fault.]
Applying MBT to QoS Issues
[Timing example: a dataflow of five components, each with a nominal delay interval (Component 1: 1 to 3; Component 2: 2 to 4; Component 3: 3 to 4; Component 4: 5 to 10; Component 5: 1 to 2), so each signal carries a predicted time interval. The observed completion times (27 on one output, 6 on the other) conflict with the predictions. Diagnoses: blue broken; violet broken; red and yellow broken; red, green, and yellow broken. But broken how?]
Adding Failure Models
• In addition to modeling the normal behavior of each component, we can provide models of known abnormal behavior.
• Each model can have an associated probability.
• A "leak model" covering unknown failures/compromises covers the residual probability.
• The diagnostic task becomes finding the most likely set(s) of models (one model for each component) consistent with the observations.
• The search process is best-first search with joint probability as the metric.
[Example: Component 2 has three models: Normal (delay 2 to 4, probability 90%), Delayed (delay 4 to +inf, probability 9%), and Accelerated (delay -inf to 2, probability 1%).]
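A sketch of that best-first search. The `consistent` predicate stands in for the behavioral constraint propagator, which the slides do not specify; a fuller implementation would also check partial assignments so conflicts prune the search early:

```python
import heapq
import itertools
from typing import Callable, Dict, List, Tuple

# For each component, candidate models as (model_name, prior_probability), most likely first.
ModelTable = Dict[str, List[Tuple[str, float]]]

def most_likely_diagnoses(models: ModelTable,
                          consistent: Callable[[Dict[str, str]], bool],
                          limit: int = 5) -> List[Tuple[float, Dict[str, str]]]:
    """Enumerate complete model assignments in order of decreasing joint probability,
    keeping the first `limit` that are consistent with the observations."""
    names = list(models)
    tie = itertools.count()                    # tie-breaker so the heap never compares dicts
    heap = [(-1.0, 0, next(tie), {})]          # (negated joint probability, next component index, tie, partial assignment)
    results: List[Tuple[float, Dict[str, str]]] = []
    while heap and len(results) < limit:
        neg_p, i, _, assignment = heapq.heappop(heap)
        if i == len(names):
            if consistent(assignment):         # complete assignment: test it against the observations
                results.append((-neg_p, assignment))
            continue
        component = names[i]
        for model, prob in models[component]:
            heapq.heappush(heap, (neg_p * prob, i + 1, next(tie), {**assignment, component: model}))
    return results
```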
Applying Failure Models
[Worked example: components A, B, and C each have Normal, Fast, and Slow models with delay bounds and prior probabilities (B: Normal delay 2 to 4 with p = .9, Fast with p = .04, Slow delay 5 to 30 with p = .06; C: Normal delay 5 to 10 with p = .8, Fast with p = .03, Slow delay 11 to 30 with p = .07; A: probabilities .7, .1, .2). The predicted output range is 8 to 16, but 17 is observed. The consistent diagnoses, ranked by probability:
 – .04410: C is delayed
 – .00640: A slow, B masks (runs negative!)
 – .00630: A fast, C slower
 – .00196: B not too fast, C slow
 – .00042: A fast, B masks, C slow
 – .00024: A slow, B masks, C not masking fast]
Modeling Underlying Resources
• The model can be augmented with another level of detail showing the dependence of computations on resources.
• Each resource has models of its state of compromise. They can be abstract:
 – a node has cycle stealing
 – a network segment is being overloaded
• The modes of the resource models imply the modes of the computational models.
 – E.g. if a computation resides on a node that is losing cycles, then the computation model must be the delayed model.
[Diagram: Component 1 has behavioral models (Normal: delay 2 to 4; Delayed: delay 4 to +inf; Accelerated: delay -inf to 2) and is located on Node 17, which has compromise models (Normal: probability 90%; Parasite: probability 9%; Other: probability 1%).]
Moving to a Bayesian Framework
• The model has levels of detail specifying the computations, the underlying resources, and the mapping of computations onto resources.
• Each resource has models of its state of compromise.
• The modes of the resource models are linked to the modes of the computational models by conditional probabilities.
• The model forms a Bayesian network.
[Diagram: Component 1 (models Normal: delay 2 to 4; Delayed: delay 4 to +inf; Accelerated: delay -inf to 2) is located on Node 17 (models Normal: probability 90%; Parasite: probability 9%; Other: probability 1%); conditional probabilities (e.g. .4 and .3) link the node's modes to the component's modes.]
Computational Models are Coupled through Resource Models
[The same five-component timing example, now with the components located on nodes (Node 1, Node 2). Diagnoses: blue delayed; violet delayed; red delayed with yellow at negative time; red delayed with green at negative time; green delayed with yellow at negative time. Precluded: physicality requires red, green, and yellow to be all delayed or all accelerated.]
An Example System Description
[Diagram: five computations A through E run on Hosts 1 through 4. Each computation has behavioral modes (Normal/Fast/Slow, Normal/Slow/Slower, or Normal/Peak/Off-Peak) whose probabilities are conditioned on whether its host is Normal (N) or Hacked (H), e.g. Normal .50, Fast .25, Slow .25 given a normal host versus .05, .45, .50 given a hacked host. Each host has a prior probability of being hacked (.1, .15, .2, and .3 across the four hosts).]
Bayesian Networks
• Bayesian networks are a technique for representing complex problems involving evidential reasoning.
• They reduce the need to state an exponential number of conditional probabilities.
• A model involves nodes and links:
 – Nodes represent statistical variables.
 – Links represent conditional dependence between variables (i.e. causation).
 – Links not present represent independence.
• Bayesian solvers compute the joint probability of some nodes given the probability (or observation) of others.
[Example: a burglar-alarm network. P(Alarm | Quake, Burglar) is .97 if both, .65 for a quake alone, .55 for a burglar alone, and .03 for neither; P(No Alarm) is the complement in each case.]
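A small check of the idea on the burglar-alarm example, computing P(Burglar | Alarm) by enumeration. The alarm CPT values are the ones shown above; the priors for Burglar and Quake are made up for illustration, since the slide does not give them:

```python
# Assumed priors (not from the slide).
P_BURGLAR = 0.01
P_QUAKE = 0.02
P_ALARM = {  # P(alarm | quake, burglar)
    (True, True): 0.97, (True, False): 0.65,
    (False, True): 0.55, (False, False): 0.03,
}

def posterior_burglar_given_alarm() -> float:
    """P(Burglar = T | Alarm = T) by summing the joint distribution over Quake."""
    def joint(burglar: bool, quake: bool) -> float:
        pb = P_BURGLAR if burglar else 1 - P_BURGLAR
        pq = P_QUAKE if quake else 1 - P_QUAKE
        return pb * pq * P_ALARM[(quake, burglar)]
    numerator = sum(joint(True, q) for q in (True, False))
    denominator = numerator + sum(joint(False, q) for q in (True, False))
    return numerator / denominator

print(round(posterior_burglar_given_alarm(), 3))
```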
System Description as a Bayesian Network
• The model can be viewed as a two-tiered Bayesian network:
 – Resources with modes
 – Computations with modes
 – Conditional probabilities linking the modes
[The same example system-description diagram as above.]
System Description as an MBT Model
• The model can also be viewed as an MBT model with multiple models per device:
 – Each model has a behavioral description.
 – Except that the models have conditional probabilities.
[The same example system-description diagram as above.]
Integrating MBT and Bayesian Reasoning
• Start with each behavioral model in the "normal" state.
• Repeat: check the consistency of the current set of models.
 – If inconsistent:
  • Add a new node to the Bayesian network. This node represents the logical AND of the nodes in the conflict, and its truth value is pinned at FALSE.
  • Prune out all possible solutions that are a superset of the conflict set.
  • Pick another set of models from the remaining solutions.
 – If consistent, add it to the set of possible diagnoses.
• Continue until all inconsistent sets of models are found.
• Solve the Bayesian network.
[Example on the system above: a discrepancy observed at one output yields the conflict A = NORMAL, B = NORMAL, C = NORMAL; the least likely member of the conflict is switched to its most likely alternative, SLOW.]
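The loop above, sketched in Python. `find_conflict`, `next_candidate`, `add_and_node`, and `solve` are stand-ins for the constraint propagator and the Bayesian solver, whose interfaces the slides do not specify:

```python
from typing import Callable, Dict, List, Optional, Set, Tuple

Assignment = Dict[str, str]        # component -> behavioral model, e.g. {"A": "NORMAL"}
Conflict = Set[Tuple[str, str]]    # set of (component, model) pairs that cannot all hold

def diagnose(initial: Assignment,
             find_conflict: Callable[[Assignment], Optional[Conflict]],
             next_candidate: Callable[[List[Conflict], List[Assignment]], Optional[Assignment]],
             add_and_node: Callable[[Conflict], None],
             solve: Callable[[], Dict[str, Dict[str, float]]]):
    """Alternate constraint checking with Bayesian updating: every conflict becomes an
    AND node pinned to FALSE; every consistent assignment becomes a candidate diagnosis."""
    conflicts: List[Conflict] = []
    diagnoses: List[Assignment] = []
    candidate: Optional[Assignment] = initial        # start with every model "normal"
    while candidate is not None:
        conflict = find_conflict(candidate)
        if conflict is None:
            diagnoses.append(candidate)              # consistent: a possible diagnosis
        else:
            conflicts.append(conflict)
            add_and_node(conflict)                   # logical AND of the conflict, truth value pinned FALSE
        candidate = next_candidate(conflicts, diagnoses)
    posteriors = solve()                             # posterior mode probabilities for resources and computations
    return posteriors, diagnoses
```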
Adding the Conflict to the Bayesian Network
A node "NoGood 1" representing the conflict A = NORMAL, B = NORMAL, C = NORMAL is added to the network with its truth value pinned at FALSE. Its conditional probability table is a deterministic AND:
 A=N, B=N, C=N: P(NoGood 1 = T) = 1, P(F) = 0
 any other combination: P(T) = 0, P(F) = 1
[The example system-description diagram is repeated, showing where the conflict node attaches.]
Integrating MBT and Bayesian Reasoning (2)
• Repeat, finding all conflicts and adding them to the Bayesian network.
• Solve the network again:
 – The posterior probabilities of the underlying resource models tell you how likely each model is.
 – These probabilities should inform the trust model, lead to updated priors, and guide resource selection.
 – The posterior probabilities of the computational models tell you how likely each model is. This should guide recovery.
• All remaining non-conflicting combinations of models are possible diagnoses:
 – Create a conjunction node for each possible diagnosis and add the new node to the Bayesian network (call this a diagnosis node).
• Finding the most likely diagnoses: bias the selection of the next component model by the current model probabilities.
The Final Bayesian Network
[Diagram: the network now contains NoGood 1 (conflict: A = NORMAL, B = NORMAL, C = NORMAL) and NoGood 2 (conflict: A = NORMAL, B = NORMAL, C = SLOW), both with truth value pinned at FALSE, plus diagnosis nodes from Diagnosis-1 (A = SLOW, B = SLOW, C = NORMAL, D = NORMAL, E = PEAK) through Diagnosis-50. Solving it yields posterior mode probabilities for each computation (e.g. Slow .738 versus Normal .262 for one of them) and posterior hacked probabilities for the hosts: Host 1 .267, Host 2 .450, Host 3 .324, Host 4 .207.]
Final Model Probabilities
[Tables: for each host, the posterior probability of being hacked versus its prior (posteriors: Host 1 .267, Host 2 .450, Host 3 .324, Host 4 .207, against priors ranging from .10 to .30; the normal posteriors are the complements), and for each computation A through E, the posterior probability of each of its behavioral modes, as shown on the previous slide.]
Adding Attack Models
• An attack model specifies the set of attacks that are believed to be possible in the environment:
 – Each resource has a set of vulnerabilities.
 – Vulnerabilities enable attacks on that resource.
 – We map attacks x resource-type to behavioral modes of the resource.
 – This is given as a set of conditional probabilities: if this attack succeeded on a resource of this type, then the likelihood that the resource is in mode x is P.
 – This now forms a three-tier Bayesian network.
[Example: Host 1, of resource-type Unix-Family, has the vulnerability Buffer-Overflow, which enables the Overflow-Attack; a successful attack causes the host's behavioral modes with conditional probabilities (labeled .5 toward Normal and .7 toward Slow in the diagram).]
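One way to picture the extra tier: an attack node feeds the resource's mode distribution through a conditional probability table, just as the resource's mode feeds the computation's mode. A sketch with assumed names; the numbers are illustrative, not taken from the slides:

```python
from typing import Dict, Tuple

# P(resource mode | attack succeeded?) for a hypothetical Unix-family host
# that is vulnerable to a buffer-overflow attack.
RESOURCE_MODE_GIVEN_ATTACK: Dict[Tuple[bool, str], float] = {
    (True,  "normal"): 0.3, (True,  "slow"): 0.7,
    (False, "normal"): 0.9, (False, "slow"): 0.1,
}

def resource_mode_distribution(p_attack: float) -> Dict[str, float]:
    """Marginalize the attack tier to get the prior over the resource's behavioral modes."""
    return {
        mode: p_attack * RESOURCE_MODE_GIVEN_ATTACK[(True, mode)]
              + (1 - p_attack) * RESOURCE_MODE_GIVEN_ATTACK[(False, mode)]
        for mode in ("normal", "slow")
    }

# e.g. resource_mode_distribution(0.4) -> {'normal': 0.66, 'slow': 0.34}
```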
Three Tiered Model
Example Final Data
Effect of Attack Model
          A Priori   No Attack   Buffer Overflow   Packet Flood   Both
 Host 1                .291          .491              .668       .741
 Host 2      .15       .397          .543              .680       .770
 Host 3                .206          .202              .574       .476
 Host 4      .30       .298          .296              .576       .480
(The diagram also gives Buffer Overflow .4 and Packet Flood .5.)
Summary
• The diagnostic process goes from observations of computational behavior to underlying trust-model assessments.
• Three-tiered model:
 – Vulnerabilities and attacks
 – Compromised states of resources
 – Non-standard behavior of computations
• A new synthesis of Bayesian and model-based reasoning.
• Next steps:
 – A realistic ontology of attacks, compromise states, etc.
 – Resource selection in light of diagnosis
• Challenges:
 – Realistic attack models may swamp the Bayesian net computation.
 – How to handle unknown attacks.