Making Simple Decisions (Chapter 16)

Topics
• Decision making under uncertainty
  – Expected utility
  – Utility theory and rationality
  – Utility functions
  – Decision networks
  – Value of information

Uncertain Outcome of Actions
• Some actions may have uncertain outcomes
  – Action: spend $10 to buy a lottery ticket that pays $10,000 to the winner
  – Outcome: {win, not-win}
• Each outcome is associated with some merit (utility)
  – Win: gain $9,990
  – Not-win: lose $10
• There is a probability distribution over the outcomes of this action: (0.0001, 0.9999)
• Should I take this action?
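A quick back-of-the-envelope check in Python (a sketch, not part of the slides; it assumes that not buying the ticket is worth 0):

# Expected utility of buying the ticket; assumption: doing nothing has utility 0
p_win, p_lose = 0.0001, 0.9999
eu_buy = p_win * 9990 + p_lose * (-10)
print(eu_buy)  # about -9.0, so under MEU the lottery is not worth taking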

Expected Utility
• Random variable X with n values x1, …, xn and distribution (p1, …, pn)
  – X is the outcome of performing action A (i.e., the state reached after A is taken)
• Function U of X
  – U is a mapping from states to numerical utilities (values)
• The expected utility of performing action A is
  EU[A] = Σi=1,…,n P(xi | A) U(xi)
  – P(xi | A): probability of each outcome
  – U(xi): utility of each outcome
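A minimal Python sketch of this formula (the function name and the (probability, utility) representation of outcomes are mine, not from the slides):

# EU(A) = sum_i P(x_i | A) * U(x_i), with an action's outcomes given as (probability, utility) pairs
def expected_utility(outcomes):
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * u for p, u in outcomes)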

One State/One Action Example
• From state s0, action A1 leads to s1 with probability 0.2 (U = 100), s2 with 0.7 (U = 50), and s3 with 0.1 (U = 70)
• EU(A1) = 100 × 0.2 + 50 × 0.7 + 70 × 0.1 = 20 + 35 + 7 = 62

One State/Two Actions Example
• From state s0, action A1 leads to s1 with probability 0.2 (U = 100), s2 with 0.7 (U = 50), and s3 with 0.1 (U = 70)
• Action A2 leads to s2 with probability 0.2 (U = 50) and s4 with 0.8 (U = 80)
• EU(A1) = 62
• EU(A2) = 0.2 × 50 + 0.8 × 80 = 74
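The two examples above can be checked numerically with the same kind of sketch (state labels are kept only as comments):

# (probability, utility) pairs read off the slides
eu = {
    "A1": 0.2 * 100 + 0.7 * 50 + 0.1 * 70,  # s1, s2, s3 -> 62.0
    "A2": 0.2 * 50 + 0.8 * 80,              # s2, s4     -> 74.0
}
best = max(eu, key=eu.get)
print(best, eu[best])  # A2 74.0, the action a MEU agent would pick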

MEU Principle
• Decision theory: a rational agent should choose the action that maximizes the agent's expected utility
• Maximizing expected utility (MEU) is a normative criterion for rational choices of actions
• Must have a complete model of:
  – Actions
  – Utilities
  – States

Decision networks
• Extend Bayesian nets to handle actions and utilities
  – a.k.a. influence diagrams
• Make use of Bayesian net inference
• Useful application: value of information

Decision network representation
• Chance nodes: random variables, as in Bayesian nets
• Decision nodes: actions that the decision maker can take
• Utility/value nodes: the utility of the outcome state

Airport example

Airport example II

Evaluating decision networks
• Set the evidence variables for the current state.
• For each possible value of the decision node (assume just one decision node):
  – Set the decision node to that value.
  – Calculate the posterior probabilities for the parent nodes of the utility node, using BN inference.
  – Calculate the resulting expected utility for the action.
• Return the action with the highest expected utility.
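A schematic Python rendering of this loop (the helper names are hypothetical; `posterior` stands in for whatever Bayesian-net inference routine is available, with the current evidence already baked in):

def evaluate_decision_network(decision_values, outcomes, posterior, utility):
    # posterior(x, a): P(x | evidence, a) from BN inference
    # utility(x, a):   value of the utility node for outcome x under decision a
    eu = {a: sum(posterior(x, a) * utility(x, a) for x in outcomes)
          for a in decision_values}
    return max(eu, key=eu.get), eu   # best action plus all expected utilities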

Exercise: Umbrella network
• Decision node Umbrella: take / don't take
• Chance node Weather: P(rain) = 0.4
• Chance node Lug umbrella: P(lug | take) = 1.0, P(~lug | ~take) = 1.0
• Utility node Happiness:
  – U(lug, rain) = -25
  – U(lug, ~rain) = 0
  – U(~lug, rain) = -100
  – U(~lug, ~rain) = 100
• Chance node Forecast, P(f | w):
  f      w        P(f | w)
  sunny  rain     0.3
  rainy  rain     0.7
  sunny  no rain  0.8
  rainy  no rain  0.2
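A worked evaluation of this network when no forecast is observed (a sketch; it uses the fact that Lug umbrella is a deterministic copy of the decision, as the CPT above states):

p_rain = 0.4
U = {("lug", "rain"): -25, ("lug", "no rain"): 0,
     ("no lug", "rain"): -100, ("no lug", "no rain"): 100}

def eu(decision):
    lug = "lug" if decision == "take" else "no lug"   # P(lug | take) = P(~lug | ~take) = 1
    return p_rain * U[(lug, "rain")] + (1 - p_rain) * U[(lug, "no rain")]

print(eu("take"), eu("don't take"))  # -10.0 and 20.0, so the best action is not to take the umbrella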

Value of Perfect Information (VPI)
• How much is it worth to observe (with certainty) a random variable X?
• Suppose the agent's current knowledge is E. The value of the current best action α is:
  EU(α | E) = maxA Σi U(Resulti(A)) P(Resulti(A) | E, Do(A))
• The value of the new best action after observing the value of X is:
  EU(α' | E, X) = maxA Σi U(Resulti(A)) P(Resulti(A) | E, X, Do(A))
• But we don't know the value of X yet, so we have to sum over its possible values
• The value of perfect information for X is therefore:
  VPI(X) = ( Σk P(xk | E) EU(αxk | xk, E) ) - EU(α | E)
  – P(xk | E): probability of each value of X
  – EU(αxk | xk, E): expected utility of the best action given that value of X
  – EU(α | E): expected utility of the best action if we don't know X (i.e., currently)
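The same formula as a small sketch (hypothetical helpers: `meu(evidence)` is assumed to return the expected utility of the best action given that evidence, and `p_x_given_e` the distribution P(xk | E)):

def vpi(x_values, p_x_given_e, meu, evidence):
    # VPI(X) = sum_k P(x_k | E) * EU(alpha_xk | E, X=x_k)  -  EU(alpha | E)
    with_x = sum(p_x_given_e[xk] * meu(evidence + [("X", xk)]) for xk in x_values)
    return with_x - meu(evidence)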

VPI exercise: Umbrella network
• What's the value of knowing the weather forecast before leaving home?
• The network is the same as in the previous exercise:
  – Umbrella: take / don't take
  – Weather: P(rain) = 0.4
  – Lug umbrella: P(lug | take) = 1.0, P(~lug | ~take) = 1.0
  – Happiness: U(lug, rain) = -25, U(lug, ~rain) = 0, U(~lug, rain) = -100, U(~lug, ~rain) = 100
  – Forecast: P(sunny | rain) = 0.3, P(rainy | rain) = 0.7, P(sunny | ~rain) = 0.8, P(rainy | ~rain) = 0.2
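A worked sketch of this exercise (all numbers come from the network above; the posterior over Weather is obtained with Bayes' rule, and the arithmetic works out to a VPI of about 9):

p_rain = 0.4
U = {("lug", "rain"): -25, ("lug", "no rain"): 0,
     ("no lug", "rain"): -100, ("no lug", "no rain"): 100}
p_f_given_w = {("sunny", "rain"): 0.3, ("rainy", "rain"): 0.7,
               ("sunny", "no rain"): 0.8, ("rainy", "no rain"): 0.2}

def meu(p_r):
    # best of take vs. don't take when P(rain) = p_r
    eu_take = p_r * U[("lug", "rain")] + (1 - p_r) * U[("lug", "no rain")]
    eu_skip = p_r * U[("no lug", "rain")] + (1 - p_r) * U[("no lug", "no rain")]
    return max(eu_take, eu_skip)

vpi = -meu(p_rain)                              # subtract the EU of acting without the forecast
for f in ("sunny", "rainy"):
    p_f = p_f_given_w[(f, "rain")] * p_rain + p_f_given_w[(f, "no rain")] * (1 - p_rain)
    p_rain_given_f = p_f_given_w[(f, "rain")] * p_rain / p_f    # Bayes' rule
    vpi += p_f * meu(p_rain_given_f)
print(vpi)  # 0.6 * 60 + 0.4 * (-17.5) - 20 = about 9.0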

Information gathering agent
• Using VPI we can design an agent that gathers information (greedily)

function INFORMATION-GATHERING-AGENT(percept) returns an action
  persistent: D, a decision network
  integrate percept into D
  j ← the value that maximizes VPI(Ej) / Cost(Ej)    // or VPI(Ej) - Cost(Ej)
  if VPI(Ej) > Cost(Ej)
    then return REQUEST(Ej)
    else return the best action from D