
Chapter 9: Decision Analysis
What is Decision Analysis?
• Decision Analysis (DA, also known as Decision Theory) represents a rational approach to decision making.
Decision Theory Process
1. Identify possible future conditions, called states of nature
2. Develop a list of ALL possible alternatives, one of which may be to do nothing
3. Estimate the payoff associated with each alternative for every future condition
4. If possible, determine the likelihood of each possible future condition
5. Evaluate alternatives according to some decision criterion and select the best alternative
Decision Environments
• Certainty - environment in which relevant parameters have known values
• Risk (probabilistic decision problem) - environment in which future events occur with known probabilities
• Uncertainty (nonprobabilistic decision problem) - environment in which it is impossible to assess the likelihood of various future events
Payoff Table
For single-stage decisions (i.e., one uncertain event), the information on the expected outcomes and their payoffs can be summarized in a payoff table. This helps facilitate comparison in the decision-making process.

Possible future demand (present value in $ millions):

Alternatives      | Low | Moderate | High
Small facility    | $10 |   $10    | $10
Medium facility   |  $7 |   $12    | $12
Large facility    | -$4 |    $2    | $16

So which facility should be built?
• If future demand is certain, the choice is clear-cut.
• Otherwise, it depends on our method and on the probabilities of these outcomes (if probabilities are known)
Decision Making under Uncertainty
Maximin - Choose the alternative with the best of the worst possible payoffs (a guaranteed minimum)
  Worst payoffs: S, M, L = 10, 7, -4 -> build Small facility
Maximax - Choose the alternative with the best possible payoff
  Best payoffs: S, M, L = 10, 12, 16 -> build Large facility
Laplace - Choose the alternative with the best average payoff (implicitly assigns equal probabilities to all outcomes)
  S: .33(10) + .33(10) + .33(10) = 10.00
  M: .33(7) + .33(12) + .33(12) = 10.33 -> build Medium facility
  L: .33(-4) + .33(2) + .33(16) = 4.67
Minimax Regret - Choose the alternative that has the least of the worst regrets...
Opportunity Losses
• Opportunity losses (regrets) are the potential payoff lost by not picking the best choice for a given state of nature (e.g., selling all your Amazon stock in 1999 instead of holding on to it)
• For this example:
  - If demand is moderate, we would have been best off with the medium facility (payoff = $12). Had we built the small facility, our payoff would be only $10, so we have an opportunity loss of $2
  - Filling out the rest of the table shows that the Medium facility has the lowest of the worst regrets

Alternatives      | Low | Moderate | High | Worst
Small facility    |  $0 |    $2    |  $6  |  $6
Medium facility   |  $3 |    $0    |  $4  |  $4
Large facility    | $14 |   $10    |  $0  | $14
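The four uncertainty criteria above can be sketched in a few lines of code. This is a minimal illustration using the facility payoffs from the slides (payoffs in $ millions); the variable names are ours.

```python
# The facility payoff table under Low, Moderate, High demand ($ millions)
payoffs = {
    "Small":  [10, 10, 10],
    "Medium": [ 7, 12, 12],
    "Large":  [-4,  2, 16],
}

maximin = max(payoffs, key=lambda a: min(payoffs[a]))           # best worst-case
maximax = max(payoffs, key=lambda a: max(payoffs[a]))           # best best-case
laplace = max(payoffs, key=lambda a: sum(payoffs[a]) / len(payoffs[a]))

# Minimax regret: regret = (best payoff in that state) - (this payoff)
best_per_state = [max(col) for col in zip(*payoffs.values())]
regret = {a: [b - p for b, p in zip(best_per_state, payoffs[a])]
          for a in payoffs}
minimax_regret = min(regret, key=lambda a: max(regret[a]))

print(maximin, maximax, laplace, minimax_regret)
# -> Small Large Medium Medium
```

The regret dictionary reproduces the opportunity-loss table above, including the $14 worst regret for the large facility.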
How to Decide on a Decision Method
• Decision making under Certainty: select the alternative with the maximum payoff
• Decision making under Uncertainty: since we know nothing about the probabilities, we use one of the 4 methods described previously
  - Which method? It depends: those willing to take more risk for greater upside might choose Maximax. More cautious managers would probably use Maximin or Minimax Regret (slightly riskier, but usually considered better)
• Decision making under Risk: since we know (or can estimate) the probability of each outcome (i.e., we know the odds for Lotto, but we don't know the winning Lotto number), we can apply the Expected Monetary Value criterion, a.k.a. Expected Value (EV)
Decision Making under Risk
• Assume for the previous example that we can now estimate probabilities: Pr(Low) = .3, Pr(Med) = .5, and Pr(High) = .2. Remember that the probabilities of all outcomes sum to 1!
• We can modify the decision table to include probabilities, and then compute the EV for each decision alternative:
  EV(Small) = .3(10) + .5(10) + .2(10) = $10
  EV(Medium) = .3(7) + .5(12) + .2(12) = $10.5
  EV(Large) = .3(-4) + .5(2) + .2(16) = $3
• Choose the medium plant, as it has the highest EV
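The EV arithmetic above can be checked with a short sketch (payoffs in $ millions, probabilities from the slide):

```python
# Expected monetary value for each facility alternative
probs = [0.3, 0.5, 0.2]   # Pr(Low), Pr(Moderate), Pr(High)
payoffs = {"Small": [10, 10, 10], "Medium": [7, 12, 12], "Large": [-4, 2, 16]}

ev = {a: round(sum(p * x for p, x in zip(probs, payoffs[a])), 2)
      for a in payoffs}
best = max(ev, key=ev.get)
print(ev, best)  # {'Small': 10.0, 'Medium': 10.5, 'Large': 3.0} Medium
```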
Sensitivity to Risk
• The expected monetary value approach is most appropriate when the decision maker is risk neutral. This is typical within large corporations
  - Note: the EV is not an actual payoff. In the previous example, no state of nature yielded $10.5M for a medium facility!
  - Expected value can be seen as the long-run average payoff
• Aside: in reality, people exhibit complex behavior toward risk
  - Gambling is inherently a risk-seeking activity, but many people who gamble are otherwise risk-averse (e.g., buying insurance, choosing lower-yield but less risky investments)
Examples
• A small building contractor has experienced 2 years where demand > capacity. They must decide whether to expand next year. A decision table with the 3 options and 2 states of nature is shown below. Which decision should they make if they follow: A) Maximax B) Maximin C) Laplace D) Minimax Regret?

Next Year's Demand (profit in $K)
Alternative  | Low | High
Do nothing   | $50 | $60
Expand       | $20 | $80
Subcontract  | $40 | $70
Examples
• Now the contractor from the previous problem can estimate the probabilities of high and low demand. Assume Pr(H) = .7
  a) Determine the expected profit of each alternative. Which is best, and why?
  b) Modify the decision table to show the expected profit of each alternative

Next Year's Demand (profit in $K), P(L) = .3, P(H) = .7
Alternative  | Low | High | EV
Do nothing   | $50 | $60  |
Expand       | $20 | $80  |
Subcontract  | $40 | $70  |
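One way to check part (a), assuming P(H) = 0.7 and therefore P(L) = 0.3 (profits in $K, from the contractor's table):

```python
# Expected profit for each of the contractor's alternatives
p_low, p_high = 0.3, 0.7
table = {"Do nothing": (50, 60), "Expand": (20, 80), "Subcontract": (40, 70)}

ev = {a: round(p_low * lo + p_high * hi, 2) for a, (lo, hi) in table.items()}
print(ev)  # {'Do nothing': 57.0, 'Expand': 62.0, 'Subcontract': 61.0}
# Expand has the highest expected profit ($62K)
```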
More Complex Decisions
• The decision table works well when there is a single decision and a single unknown event
• Often the decision maker is faced with a more complex situation:
  1. Probabilities of outcomes depend on the alternative selected, and/or
  2. Multiple decisions must be made in a fixed sequential order, after multiple events
• In these cases, we use a decision tree
Format of a Decision Tree
[Diagram: from decision point 1, the chosen alternative (A or B) leads to a chance event with several states of nature; after each state, decision point 2 offers further choices (A'1 ... A'4, B'1, B'2), each branch ending in a payoff (Payoff 1-6).]
Here decision 1 is made before an event. Decision 2 is made after the event occurs (and its outcome is known).
How to Create and Evaluate a Decision Tree
1. Diagram all decisions and outcomes in their proper sequential (temporal) order
2. Fill in the probabilities and payoffs for each final outcome (the end branches)
3. Calculate EVs for the events
4. At decision nodes, take the EV of the best alternative
   - In the example below, we would opt to play the game as it has a greater EV. We can then ignore the EV for "Do nothing"
5. At the end we are left with a final EV and an optimal decision sequence ($2 is the EV; the decision sequence is to "play the game")
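The steps above amount to "rollback" (backward induction) on the tree. A minimal sketch follows; the node encoding and the toy game payoffs (+14/-10 at 50/50, giving the slide's EV of $2) are our illustrative assumptions.

```python
# A tree node is a payoff (number), a decision node ("D", [branches]),
# or a chance node ("C", [(probability, branch), ...]).

def rollback(node):
    """Return the expected value of a (sub)tree by backward induction."""
    if isinstance(node, (int, float)):
        return node                          # leaf: final payoff
    kind, branches = node
    if kind == "D":                          # decision: take the best branch
        return max(rollback(b) for b in branches)
    if kind == "C":                          # chance: probability-weighted EV
        return sum(p * rollback(b) for p, b in branches)
    raise ValueError(kind)

# Toy game: "do nothing" pays 0; "play game" is a 50/50 gamble (+14 / -10)
tree = ("D", [0, ("C", [(0.5, 14), (0.5, -10)])])
print(rollback(tree))  # 2.0 -> play the game
```

The same function evaluates multi-stage trees, since decision and chance nodes can nest to any depth.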
Example #1
• Sean sells shoes on commission (10% of the shoe's sale price). He's been doing this for years, so he has first pick of the shoppers and knows the chances of their buying something or not (and leaving him without a commission)
• He can choose to help 3 different shoppers:
  - Shopper 1 is considering purchasing an expensive ($300) pair of designer pumps, but has only a 20% chance of buying them
  - Shopper 2 is certain to buy a $40 pair of sandals
  - Shopper 3 is likely (50%) to buy a $100 pair of boots
• Construct a decision tree to determine which shopper Sean should help if he wishes to maximize his expected value ($$$)
• How does the problem change if we also consider that Shopper #1 is 50% likely to return any of her purchases later (and thus Sean would lose his commission)? (Shoppers #2 and #3 are less fickle)
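A hedged worked check for Sean's problem (commissions in dollars; shopper data from the slide):

```python
# Expected commission from helping each shopper
commission = 0.10
shoppers = {1: (0.20, 300), 2: (1.00, 40), 3: (0.50, 100)}  # (P(buy), price)

ev = {s: round(p * price * commission, 2) for s, (p, price) in shoppers.items()}
print(ev)  # {1: 6.0, 2: 4.0, 3: 5.0} -> help Shopper 1

# With a 50% chance that Shopper 1 returns her purchase (commission lost):
ev[1] = round(ev[1] * 0.5, 2)
print(max(ev, key=ev.get))  # 3 -> now help Shopper 3
```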
Example #2
• Applying decision analysis to everyday life. Decision: to study for a test, or not to study
  - Studying for the test decreases the probability of failing to P(F) = .1, but costs $10 in lost time
  - Not studying costs nothing, but P(F) = .5
  - Assume passing the test is worth $100 to you, and an F is worth nothing
• Construct a decision tree and figure out the best decision and its expected value
Example Continued
• It is still possible to receive an F after studying, and the payoff from this outcome (-$10) is worse than if we had failed without studying ($0)
• Likewise, the "best" outcome, at least as we've defined the problem, is to pass without studying ($100)
• However, as you will see from drawing the decision tree, the expected outcome from studying ($80) is better than from not studying ($50)
• Moral of the story: good decisions do not guarantee good outcomes, but good outcomes are much more probable with good decisions!
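The two expected values quoted above follow directly from the slide's assumed payoffs (pass = $100, fail = $0, studying costs $10):

```python
# EV of each study choice: (probability of passing) * 100 minus the cost
def ev_choice(p_fail, cost):
    return (1 - p_fail) * 100 + p_fail * 0 - cost

study     = ev_choice(p_fail=0.1, cost=10)   # 0.9 * 100 - 10 = 80
not_study = ev_choice(p_fail=0.5, cost=0)    # 0.5 * 100      = 50
print(study, not_study)  # studying has the higher EV
```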
Example of a Multiple-Decision Tree: Russian Hill Roulette
• When I get home, I need to park my car on Russian Hill. I can either try to find a legal parking space, go to a nearby garage (and pay $15), or park illegally
• I estimate that I have a 50% chance of finding a space in 10 minutes of searching. If I do, I'm happy. If not, I've experienced 10 minutes of frustration (which I would have paid $10 to avoid) and I still need to figure out what to do with my car: either park it in a garage or park illegally (at this point, I'm not going to circle for another 10 minutes)
• Assume I can always find an illegal parking space, but that if I get a ticket, it costs me, on average, $50
  - DPT only visits my neighborhood occasionally. After several mornings of observing whether illegally parked cars have tickets, I conclude the chances of getting a ticket are 1 in 4
• What should I do if I'm a rational, risk-neutral person?
Example: Setting up the Decision Tree
• How many potential decisions do I have to make?
  - Is there a defined order to the decisions?
  - Do I make the same number of decisions in every case?
• What are the states of nature that I may end up dealing with, and the possible outcomes for each?
  - Do I end up experiencing all states of nature with every alternative (at least with respect to my payoff)?
• What is my optimal sequence of decisions, and what is my expected value from them?
Example: Decision Tree
Notation note: yellow represents a decision, blue an event
Example: Setting up the Decision Tree
• Based on the payoffs and the probabilities given, my optimal strategy is to:
  1. First look for legal parking
  2. Then, if I have no luck finding legal parking, park illegally
• The expected value (EV) for this strategy is ($11.25), i.e., I expect to pay, on average, $11.25/night for parking!
• While there is no actual payoff of ($11.25) in the tree, if I follow this strategy on a nightly basis, I can expect a long-term payoff of ($11.25)/night
  - Some nights I'll be lucky and pay nothing, but other nights I'll pay out $10 or $60
  - Remember to include the additional ($10) on the final branches for "looked for parking with no luck." So the ($60) on the final branch represents the combined cost of the frustration of not finding legal parking AND of getting a ticket after parking illegally
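A short check of the ($11.25) figure, folding the tree back by hand with the slide's numbers (costs written as negative payoffs):

```python
# Russian Hill parking: probabilities and costs from the example
P_TICKET, P_FIND = 0.25, 0.5
GARAGE, FRUSTRATION, TICKET = -15, -10, -50

illegal = P_TICKET * TICKET                  # EV of parking illegally: -12.5
after_search = max(GARAGE, illegal)          # second decision: park illegally
look = P_FIND * 0 + (1 - P_FIND) * (FRUSTRATION + after_search)

best = max(look, GARAGE, illegal)
print(best)  # -11.25 -> look for legal parking first
```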
What Happens if Probabilities Change?
Referring to the Russian Hill parking example...
• Assume the city is trying to get more revenue from parking tickets and thus implements more frequent patrols!
  - The chance of a ticket for illegal parking increases to 1 in 3
• How does this affect my decision and expected payoff?
  - Modify the decision tree with the new probabilities and re-solve...
Example: Decision Tree with More DPT Patrols
Notation note: yellow represents a decision, blue an event
Small Probability Changes Can Shift the Decision Strategy!
• With these revised probabilities, my decision sequence changes to:
  1. First look for legal parking
  2. Then, if I have no luck finding parking, go to the pay garage
• My expected payoff decreases (expected costs increase) from ($11.25) to ($12.50)
• The effect of probability changes can be studied more formally with sensitivity analysis
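A small sensitivity sketch makes the flip point explicit: once searching has failed, illegal parking costs 50p in expectation versus a flat $15 for the garage, so the decision flips where 50p = 15 (the values below are from the example; the function name is ours).

```python
# After-search decision as a function of the ticket probability p
GARAGE, TICKET = -15, -50   # costs as negative payoffs

def after_search(p_ticket):
    """Best expected payoff once the legal-parking search has failed."""
    return max(GARAGE, p_ticket * TICKET)

for p in (0.25, 0.30, 1/3):
    choice = "garage" if GARAGE >= p * TICKET else "illegal"
    print(round(p, 3), round(after_search(p), 2), choice)
# The flip point is p = 15/50 = 0.30, where both options cost $15
```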
What if We Had Perfect Information?
• If we could perfectly predict the outcome of an event, we would be able to remove the risk
  - What if Sean knew in advance who would buy shoes?
  - What if I knew in advance whether DPT would be patrolling Russian Hill that night?
• The expected payoff with perfect information will always be at least as high!
• How much would this perfect information be worth?
  - i.e., ignoring ethical concerns, what's the most I should pay an insider at DPT for the patrol schedule?
Expected Value of Perfect Information (EVPI)
• What are the expected payoffs under certainty and under risk?
  EVPI = Expected outcome with perfect information - Expected outcome without perfect information
       = EVwPI - Max EMV
• How to find EVwPI? We can re-draw the decision tree to determine the expected value with perfect information:
  - Move the event before the decision
  - Although technically the event happens after, our advance knowledge of the outcome lets us make the decision as if the event had already happened
Simple EVPI Example: Pharma Investment Strategy
• Assume that you have $100K available for short-term investment. Although the market is bearish right now, you have heard about a potential new cancer drug that the FDA is about to approve or reject. This drug has been developed by 2 companies: a large, established pharmaceutical (EP) in partnership with a small biotech (BT) firm
  - If the FDA approves the drug, the value of both companies increases: BT stock would double in value, while EP stock goes up 40%
  - If the FDA rejects the drug, EP stock will drop 20% in value and BT will go bankrupt!
  - Experts put the probability of FDA approval at 40%
1. How should we invest our money if we don't have any additional knowledge? (Assume that not investing is an option, but we earn no additional return on our $100K)
2. What do we do if we have accurate advance knowledge of whether the FDA will approve or reject the drug?
3. How much is this advance knowledge worth?
Example: EMV - Act, then Learn
Notation note: yellow branches are decisions, blue are events.
Here the investment policy with the highest payoff is EP, investing in the established pharmaceutical, at $104K
Example: EV w/ PI - Learn, then Act
• Now assume that we have advance knowledge of the FDA's decision (we can't force the decision to be "approve", which is preferred over "reject", but we know what it will be before we have to invest)
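The pharma numbers can be checked as follows. One assumption we make explicit: the whole $100K goes into a single alternative (which matches the slide's $104K answer for EP). All figures are in $K.

```python
# Final wealth for each alternative: (if FDA approves, if FDA rejects)
P_APPROVE = 0.4
outcomes = {
    "EP":        (140, 80),      # +40% / -20% on 100
    "BT":        (200, 0),       # doubles / bankrupt
    "No invest": (100, 100),
}

# Act, then learn: best EMV without advance knowledge
emv = {a: P_APPROVE * up + (1 - P_APPROVE) * down
       for a, (up, down) in outcomes.items()}
max_emv = max(emv.values())       # EP: 0.4*140 + 0.6*80 = 104

# Learn, then act: pick the best alternative for each known FDA outcome
ev_wpi = (P_APPROVE * max(u for u, _ in outcomes.values())
          + (1 - P_APPROVE) * max(d for _, d in outcomes.values()))
evpi = ev_wpi - max_emv
print(max_emv, ev_wpi, evpi)  # 104.0 140.0 36.0
```

With perfect information you buy BT on "approve" and stay out of the market on "reject", so the advance knowledge is worth $36K here.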
EVPI is Dependent on Probabilities
Example: EVPI
• We can evaluate EVPI for a more complex decision tree as well, and we can also evaluate the value of PI for specific events
• Back to the multi-stage decision tree of the original Russian Hill Roulette example:
  - Assume P(ticket) = .25 and P(find parking) = .5
  - Now assume that I can get advance information from an acquaintance at DPT about whether they will patrol Russian Hill that night (I still don't know whether legal parking is available). How much is this information worth to me?
Example: Original Decision Tree, EMV
Notation note: yellow represents a decision, blue an event
Example: Revised Decision Tree, with PI for Parking Tickets
Notation note: yellow represents a decision, blue an event
Example: Value of Perfect Information Illustrated
• Decision strategy without PI on ticketing:
  1) Look for parking
  2) If unsuccessful, park illegally
  Expected payoff: ($11.25)
• Decision strategy with PI on ticketing:
  1) If they aren't ticketing, park illegally
  2) If they are ticketing, look for parking
  3) Then, if unsuccessful, pay for the garage
  Expected payoff: ($3.125)
• The value of perfect information on ticketing is:
  EVPI = EVwPI - EMV = (3.125) - (11.25) = $8.125
  For a month of parking, that's worth over $240!
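The two strategies above can be folded back numerically (costs as negative payoffs; values from the example):

```python
# EVPI on ticketing for the Russian Hill parking example
P_TICKET, P_FIND = 0.25, 0.5
GARAGE, FRUSTRATION, TICKET = -15, -10, -50

# Without PI: look for parking, then (if unlucky) park illegally
emv = P_FIND * 0 + (1 - P_FIND) * (FRUSTRATION + P_TICKET * TICKET)  # -11.25

# With PI on ticketing:
no_ticketing = 0                                                 # park illegally, no risk
ticketing = P_FIND * 0 + (1 - P_FIND) * (FRUSTRATION + GARAGE)   # look, then garage
ev_wpi = (1 - P_TICKET) * no_ticketing + P_TICKET * ticketing    # -3.125

print(ev_wpi - emv)  # 8.125
```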
2nd Example: More on PI
• We can also evaluate the value of perfect information on knowing both ticketing and parking availability
• For simplicity, I used a compound probability with 4 outcomes (OK, as they are independent) and picked the obvious decision for each
• So here EVPI = -1.875 - (-11.25) = $9.38
• In this case, total omniscience is not worth much more than just pre-determining whether DPT is on patrol
Additional Information on Perfect Information
• EVPI can be used as an indicator of whether we should attempt to gather more information before making a decision (assuming we can do so in time to render the decision!)
  - EVPI should not be used as an excuse to avoid making an active decision
    • Global warming example: for P(m) = .5, don't wait until it's too late to do anything but business-as-usual!
    • This often happens in vendor selection/supply chain implementations
• For most real-world decisions, we can rarely obtain perfect information. But we can often get imperfect information (marketing surveys, weather predictions, financial forecasting models)
  - Parking example: observing the cycle of DPT patrols might yield a predictive pattern of when they drive around Russian Hill
  - Imperfect information is inherently worth less than perfect information
• EVPI can be used as an upper bound when calculating the worth of imperfect information: we would never pay more for such research than the EVPI
More Practice with Decision Trees: HW/Test Problem
• HW problem and past midterm (Fall 2002) problem: Yonni is considering expanding his Yuppie Yoga franchise by building a studio in Russian Hill that will cost $6 million to build and operate. Customer demand is uncertain, however. There is a 60% probability it will be high, yielding him $10 million in revenues, but if it is low, he earns only $5 million in revenues from a half-full club. If demand is low, he does have the option of trying to raise it through a marketing campaign. If successful, this raises his club's revenues to $8 million. However, the campaign will cost $2 million and has only a 50% chance of success; if unsuccessful, club revenues remain unchanged. Also, Yonni can choose not to build (and his profit would be 0)
  - Draw a decision tree to show Yonni's decision process, showing all final payoffs and intermediate expected values. (Reminder: profit = revenues - costs. All costs that are relevant to the problem are mentioned explicitly!)
  - Assuming Yonni is a risk-neutral, rational decision maker, what is his sequence of decisions? What is the expected value of this decision tree?
  - Would his decision sequence change if the ad campaign cost only $1 million?
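One way to check your tree for this problem is a small rollback sketch (figures in $ millions; profit = revenues - costs, per the problem statement):

```python
# Yonni's tree: build -> demand event -> (if low) optional campaign decision
def build_ev(campaign_cost):
    low_no_campaign = 5 - 6                                   # half-full club: -1
    low_campaign = (0.5 * (8 - 6 - campaign_cost)            # campaign succeeds
                    + 0.5 * (5 - 6 - campaign_cost))         # campaign fails
    low = max(low_no_campaign, low_campaign)                 # best response to low demand
    return 0.6 * (10 - 6) + 0.4 * low

print(round(build_ev(2), 2))  # 2.0 -> build; if demand is low, skip the campaign
print(round(build_ev(1), 2))  # 2.2 -> build; if demand is low, run the campaign
```

Since both values beat the $0 of not building, Yonni builds either way, and the cheaper campaign does change his low-demand decision.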
More Examples: Problem from (Someone Else's) Midterm
• A PCB manufacturer needs to run tests on 2 separate components of a board, R & Q. These can be tested in either order, and if either test fails, the board is scrapped (the board is worthless if R works but Q doesn't, and vice versa)
• Given the costs and probabilities of failure, construct the decision tree and determine the expected cost. (You can assume that P(R pass) is independent of P(Q pass))
  - Also, you have to pay for a test even if the component fails
• What is the preferred test order? Construct a decision tree

Test | Cost  | P(failure)
R    | $6.50 | .15
Q    | $8.00 | .10
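A hedged sketch of the expected-cost comparison: you always pay for the first test, and pay for the second only if the first one passes (otherwise the board is already scrapped).

```python
# Expected total test cost for each ordering of the two PCB tests
costs = {"R": 6.50, "Q": 8.00}
p_pass = {"R": 0.85, "Q": 0.90}

def expected_cost(first, second):
    return costs[first] + p_pass[first] * costs[second]

print(expected_cost("R", "Q"))  # 6.50 + 0.85*8.00 = 13.30 -> test R first
print(expected_cost("Q", "R"))  # 8.00 + 0.90*6.50 = 13.85
```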