Decision Tree Learning
Brought to you by Chris Creswell

Why learn about decision trees?
• A practical way to get AI to adapt to the player – a simple form of user modeling
  – Enhances replayability
  – The player’s bot allies can be more effective
  – Opponent bots can learn the player’s tactics, so the player can’t repeat the same strategy over and over

What we’ll learn
• What is a decision tree
• How do we build a decision tree
• What has been done with decision trees in games
  – What else can we do with them

What is a decision tree
• Decision Tree Learning (DTL) is an inductive learning task, meaning it has the following objective: use a training set of examples to create a hypothesis that draws general conclusions

What is a decision tree – terms/concepts
• Attribute: a variable that we take into account in making a decision
• Target attribute: the attribute that we want to take on a certain value; we’ll decide based on it

What is a decision tree – an example

Example   Hour     Weather   Accident   Stall   Commute (target)
D1        8 AM     Sunny     No         No      Long
D2        8 AM     Cloudy    No         Yes     Long
D3        10 AM    Sunny     No         No      Short
D4        9 AM     Rainy     Yes        No      Long
D5        9 AM     Sunny     Yes        Yes     Long
D6        10 AM    Sunny     No         No      Short
D7        10 AM    Cloudy    No         No      Short
D8        9 AM     Rainy     No         No      Medium
D9        9 AM     Sunny     Yes        No      Long
D10       10 AM    Cloudy    Yes        Yes     Long
D11       10 AM    Rainy     No         No      Short
D12       8 AM     Cloudy    Yes        No      Long
D13       9 AM     Sunny     No         No      Medium

What is a decision tree – an example

Hour
├─ 10 AM → Stall
│    ├─ No → Short
│    └─ Yes → Long
├─ 8 AM → Long
└─ 9 AM → Accident
     ├─ No → Medium
     └─ Yes → Long

What is a decision tree – how to use it
• Given a set of circumstances (values of attributes), use it to traverse the tree from root to leaf
• The leaf node is a decision
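One way to picture this step concretely: the commute-time tree above fits in a few lines of Python if inner nodes are stored as nested dictionaries. This is only an illustrative sketch; the dict layout, the decide name, and the example circumstances are my own choices, not anything from the presentation.

    # The commute-time tree from the previous slide, as nested dicts:
    # an inner node maps an attribute name to {branch value: subtree},
    # and a leaf is just the decision string.
    commute_tree = {
        "Hour": {
            "8 AM": "Long",
            "9 AM": {"Accident": {"Yes": "Long", "No": "Medium"}},
            "10 AM": {"Stall": {"Yes": "Long", "No": "Short"}},
        }
    }

    def decide(tree, circumstances):
        """Traverse from root to leaf using the given attribute values."""
        while isinstance(tree, dict):
            attribute, branches = next(iter(tree.items()))
            tree = branches[circumstances[attribute]]
        return tree  # the leaf node is the decision

    # A 10 AM commute with a stalled car comes out as Long.
    print(decide(commute_tree, {"Hour": "10 AM", "Stall": "Yes"}))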

Why is this useful
• The hypothesis formed from the training set can be used to draw conclusions about sets of circumstances not present in the training set – it will generalize

How do we construct a decision tree?
• Guiding principle of inductive learning:
  – Occam’s razor – choose the simplest possible hypothesis that is consistent with the provided examples
• General idea: recursively classify the examples based on one of the attributes until all examples have been used
• Here’s the algorithm:

node LearnTree(examples, targetAttribute, attributes)
  examples is the training set
  targetAttribute is what to learn
  attributes is the set of available attributes
  returns a tree node
begin
  if all the examples have the same targetAttribute value
    return a leaf with that value
  else if the set of attributes is empty
    return a leaf with the most common targetAttribute value among examples
  else begin
    A = the “best” attribute among attributes, having a range of values v1, v2, …, vk
    Partition examples according to their value for A into sets S1, S2, …, Sk
    Create a decision node N with attribute A
    for i = 1 to k begin
      Attach a branch B to node N with test vi
      if Si has elements (is non-empty)
        Attach B to LearnTree(Si, targetAttribute, attributes – {A})
      else
        Attach B to a leaf node with the most common targetAttribute value
    end
    return decision node N
  end
end
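For readers who prefer running code to pseudo-code, here is a minimal Python rendering of the same recursion. It is a sketch, not code from the presentation: examples are assumed to be dicts of attribute → value, attributes is a set, and the “best attribute” step is left as a plug-in function, since that is exactly what ID3 (below) supplies.

    from collections import Counter

    def learn_tree(examples, target_attribute, attributes, choose_best):
        """Sketch of LearnTree. Each example is a dict of attribute -> value;
        attributes is a set; choose_best picks the attribute to split on."""
        target_values = [e[target_attribute] for e in examples]
        # All examples share the same target value: return a leaf with that value.
        if len(set(target_values)) == 1:
            return target_values[0]
        # No attributes left to split on: return a leaf with the most common value.
        if not attributes:
            return Counter(target_values).most_common(1)[0][0]
        # Otherwise create a decision node for the "best" attribute and recurse.
        best = choose_best(examples, target_attribute, attributes)
        node = {best: {}}
        # (Values of `best` that occur in no example would get a most-common-value
        # leaf in the pseudo-code; here we only branch on values that do occur.)
        for value in sorted(set(e[best] for e in examples)):
            subset = [e for e in examples if e[best] == value]
            node[best][value] = learn_tree(subset, target_attribute,
                                           attributes - {best}, choose_best)
        return node

Plugging in an information-gain based choose_best (sketched after the ID3 slides below) and the commute examples would grow the commute-time tree shown earlier.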

This is how we construct a decision tree
• This very simple pseudo-code basically implements the construction of a decision tree, except for one key thing that is abstracted away, which is…
• Key step in the algorithm: choosing the “best” attribute to classify on
• One algorithm for doing this is ID3 (used in Black and White)
  – We’ll get to the algorithm in a bit

This is how we construct a decision tree – pseudo-code walkthrough
• First, LearnTree is called with all examples, the targetAttribute, and all attributes to classify on
• It chooses the “best” (we’ll get to that) attribute to split on, creates a decision node for it, then recursively calls LearnTree for each partition of the examples

This is how we construct a decision tree – pseudo-code walkthrough
• Recursion stops when:
  – All examples have the same value
  – There are no more attributes
  – There are no more examples
• The first two need some explanation; the third one is trivial – all examples have been classified

This is how we construct a decision tree – pseudo-code walkthrough
• Recursion stops when all examples have the same value – when does this happen?
  – When the ancestor attributes and corresponding branch values, as well as the target attribute and its value, are the same across the remaining examples

This is how we construct a decision tree – pseudo-code walkthrough
• Recursion stops when there are no more attributes
  – This happens when the training set is inconsistent, e.g. there are 2 or more examples having the same values for all but the target attribute
  – The way our pseudo-code is written, it guesses when this happens: it picks the most popular target attribute value
  – This is a decision left up to the implementer
  – This is a weakness of the algorithm: it doesn’t handle “noise” in its training set well

This is how we construct a decision tree – pseudo-code walkthrough
• Let’s watch the algorithm in action…
• http://www.cs.ualberta.ca/~aixplore/learning/DecisionTrees/InterArticle/2-DecisionTree.html

ID3 algorithm
• Picks the best attribute to classify on in a call of LearnTree; does so by quantifying how useful an attribute will be with respect to the remaining examples
• How? Using Shannon’s information theory, pick the attribute that favors the best reduction in entropy

ID3 algorithm – Shannon’s Information Theory
• Choose the attribute that favors the best reduction in entropy
• Entropy quantifies the variation in a set of examples with respect to the target attribute values
• A set of examples with mostly the same target-attribute value has very low entropy (that’s good)
• A set of examples with many varying target-attribute values will have high entropy (bad)
• Ready? Here come some equations…

ID3: Shannon’s Information Theory
• In the following, S is the set of examples and Si is the subset of S whose examples have value Vi of the target attribute:

  Entropy(S) = − Σi (|Si| / |S|) · log2(|Si| / |S|), summing over the target-attribute values Vi

ID3: Shannon’s Information Theory
• The expected entropy of a candidate attribute A is the weighted sum of the subset entropies
• In the following, k is the size of the range of attribute A, and Si is now the subset of S where A takes its i-th value:

  ExpectedEntropy(S, A) = Σi=1..k (|Si| / |S|) · Entropy(Si)

ID3: Shannon’s Information Theory
• What we really want is to maximize information gain, defined:

  Gain(S, A) = Entropy(S) − ExpectedEntropy(S, A)

ID3: Shannon’s Information Theory
• Entropy of the commute time example:

  Entropy(S) = − (4/13) log2(4/13) − (2/13) log2(2/13) − (7/13) log2(7/13) ≈ 1.42

• The thirteens are because there are thirteen examples. The fours, twos, and sevens come from how many short, medium, and long commutes there are, respectively.

ID3: Shannon’s Information Theory

Attribute   Expected Entropy   Info Gain
Hour        0.65110            0.768449
Weather     1.28884            0.130719
Accident    0.92307            0.496479
Stall       1.17071            0.248842
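These numbers can be checked with a short script. The following is my own sketch (the function and variable names are arbitrary), using the thirteen commute examples from the earlier table; it reproduces the entropy of about 1.42 and the expected-entropy and gain figures above to within rounding.

    from collections import Counter
    from math import log2

    def entropy(examples, target):
        """Shannon entropy of the target-attribute values in a set of examples."""
        counts = Counter(e[target] for e in examples)
        total = sum(counts.values())
        return -sum(c / total * log2(c / total) for c in counts.values())

    def expected_entropy(examples, attribute, target):
        """Weighted sum of subset entropies after splitting on an attribute."""
        subsets = Counter(e[attribute] for e in examples)
        return sum(n / len(examples) *
                   entropy([e for e in examples if e[attribute] == v], target)
                   for v, n in subsets.items())

    def gain(examples, attribute, target):
        return entropy(examples, target) - expected_entropy(examples, attribute, target)

    # The thirteen commute examples (Hour, Weather, Accident, Stall, Commute).
    rows = [
        ("8 AM",  "Sunny",  "No",  "No",  "Long"),    # D1
        ("8 AM",  "Cloudy", "No",  "Yes", "Long"),    # D2
        ("10 AM", "Sunny",  "No",  "No",  "Short"),   # D3
        ("9 AM",  "Rainy",  "Yes", "No",  "Long"),    # D4
        ("9 AM",  "Sunny",  "Yes", "Yes", "Long"),    # D5
        ("10 AM", "Sunny",  "No",  "No",  "Short"),   # D6
        ("10 AM", "Cloudy", "No",  "No",  "Short"),   # D7
        ("9 AM",  "Rainy",  "No",  "No",  "Medium"),  # D8
        ("9 AM",  "Sunny",  "Yes", "No",  "Long"),    # D9
        ("10 AM", "Cloudy", "Yes", "Yes", "Long"),    # D10
        ("10 AM", "Rainy",  "No",  "No",  "Short"),   # D11
        ("8 AM",  "Cloudy", "Yes", "No",  "Long"),    # D12
        ("9 AM",  "Sunny",  "No",  "No",  "Medium"),  # D13
    ]
    names = ("Hour", "Weather", "Accident", "Stall", "Commute")
    examples = [dict(zip(names, r)) for r in rows]

    print(f"Entropy(S) = {entropy(examples, 'Commute'):.5f}")   # about 1.42
    for a in ("Hour", "Weather", "Accident", "Stall"):
        # matches the table above, to within rounding
        print(f"{a}: expected entropy {expected_entropy(examples, a, 'Commute'):.5f}, "
              f"gain {gain(examples, a, 'Commute'):.5f}")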

ID3: Drawbacks
• Does not guarantee the smallest possible decision tree
  – Selects the classifying attribute based on best expected information gain, which is not always right
• Not very good with continuous values; best with symbolic data
  – When given lots of distinct continuous values, ID3 will create very “bushy” trees – 1 or 2 levels deep, with lots and lots of leaves
  – We can make this less serious, but it’s still a drawback

Decision Trees in games
• First successful use of a decision tree was in “Black and White” (Lionhead Studios, 2001)
• http://www.gameai.com/blackandwhite.html
• “In Black & White you can be the god you want to be. Will you rule with a fair hand, making life better for your people? Or will you be evil and scare them into prayer and submission? No one can tell you which way to be. You, as a god, can play the game any way you choose.”

Decision Trees in games
• “And as a god, you get to own a Creature. Chosen by you from magical, special animals, your Creature will copy you, you will teach him and he will learn by himself. He will grow, ultimately to 30 metres, and can do anything you can do in the game. Your Creature can help the people or can kill and eat them. He can cast Miracles to bring rain to their crops or he can drown them in the sea. Your Creature is your physical manifestation in the world of Eden, He is whatever you want him to be… And the game also boasts a new level of artificial intelligence. Your Creature is almost a living, breathing. He learns, remembers and makes connections. His huge range of abilities and decisions is born of a groundbreakingly powerful and complex AI system.”

Decision Trees in games
• So you teach your creature by giving it feedback – it learns to perform actions that get it the highest feedback
• Problem: feedback is a continuous variable
• We have to make it discrete
• We do so using K-means clustering

Decision Trees in games
• In K-means clustering, we decide how many clusters we want to create, then use an algorithm to successively associate or dissociate instances with clusters until the associations stabilize around k clusters
• The author’s reference for this is a computer vision textbook – I wasn’t about to go buy it
• Not important to know the clustering algorithm

Decision Trees in games
• Example from B&W: should your creature attack a town?
• Examples:

Example   Allegiance   Defense   Tribe    Feedback
D1        Friendly     Weak      Celtic   -1.0
D2        Enemy        Weak      Celtic    0.4
D3        Friendly     Strong    Norse    -1.0
D4        Enemy        Strong    Norse    -0.2
D5        Friendly     Weak      Greek    -1.0
D6        Enemy        Medium    Greek     0.2
D7        Enemy        Strong    Greek    -0.4
D8        Enemy        Medium    Aztec     0.0
D9        Friendly     Weak      Aztec    -1.0
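As a rough illustration of the clustering step (my own Python, not code from B&W or the slides), here is a plain 1-D k-means over the feedback column above. To keep the toy run deterministic it is seeded with the centres the next slide quotes; a real implementation would pick initial centres randomly or with something like k-means++.

    def kmeans_1d(values, centres, iterations=20):
        """Plain Lloyd's algorithm on scalars: assign each value to the nearest
        centre, move each centre to the mean of its members, and repeat."""
        clusters = [[] for _ in centres]
        for _ in range(iterations):
            clusters = [[] for _ in centres]
            for v in values:
                nearest = min(range(len(centres)), key=lambda i: abs(v - centres[i]))
                clusters[nearest].append(v)
            new_centres = [sum(c) / len(c) if c else centres[i]
                           for i, c in enumerate(clusters)]
            if new_centres == centres:
                break
            centres = new_centres
        return centres, clusters

    feedback = [-1.0, 0.4, -1.0, -0.2, -1.0, 0.2, -0.4, 0.0, -1.0]  # D1..D9
    centres, clusters = kmeans_1d(feedback, [-1.0, -0.3, 0.1, 0.4])
    print([round(c, 2) for c in centres])  # [-1.0, -0.3, 0.1, 0.4]
    print(clusters)  # [[-1.0, -1.0, -1.0, -1.0], [-0.2, -0.4], [0.2, 0.0], [0.4]]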

Decision Trees in games
• If we ask for 4 clusters, K-means clustering will create clusters around -1.0, 0.4, 0.1, and -0.3. The memberships of these clusters will be {D1, D3, D5, D9}, {D2}, {D6, D8}, and {D4, D7}, respectively.
• The tree ID3 will create using these examples and clusters:

Decision Trees in games

Allegiance
├─ Friendly → -1.0
└─ Enemy → Defense
     ├─ Weak → 0.4
     ├─ Medium → 0.1
     └─ Strong → -0.3

Decision Trees in games
• So in this case, the tree the creature learned can be reduced to a nice compact logical expression:
• ((Allegiance = Enemy) AND (Defense = Weak)) OR ((Allegiance = Enemy) AND (Defense = Medium))
• This happens sometimes
• It makes the tree easier and more efficient to apply
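Reading such an expression off a tree is mechanical: collect every root-to-leaf path whose leaf you care about and OR the paths together. A small sketch, reusing the nested-dict tree representation from the earlier traversal example (again my own code, not from the presentation):

    def leaf_rules(tree, wanted, path=()):
        """Collect one conjunction of (attribute = value) tests per path that
        ends in a leaf satisfying `wanted`; OR-ing the conjunctions gives the
        compact expression."""
        if not isinstance(tree, dict):          # leaf
            return [path] if wanted(tree) else []
        attribute, branches = next(iter(tree.items()))
        rules = []
        for value, subtree in branches.items():
            rules += leaf_rules(subtree, wanted, path + ((attribute, value),))
        return rules

    # The learned B&W tree from the previous slide, as nested dicts.
    attack_tree = {
        "Allegiance": {
            "Friendly": -1.0,
            "Enemy": {"Defense": {"Weak": 0.4, "Medium": 0.1, "Strong": -0.3}},
        }
    }

    # Paths whose leaf gives non-negative feedback, i.e. worth attacking:
    for rule in leaf_rules(attack_tree, lambda leaf: leaf >= 0):
        print(" AND ".join(f"({a} = {v})" for a, v in rule))
    # (Allegiance = Enemy) AND (Defense = Weak)
    # (Allegiance = Enemy) AND (Defense = Medium)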

An Extension to ID3 to better handle continuous values
• Seems simple – use an inequality, right?
• Not that simple – we need to pick cut points
• Cut points are the boundaries we create for our inequalities; where do they go?
• Key insight: optimal cut points must always reside at boundary points
• Okay, so what are boundary points?

An Extension to ID3 to better handle continuous values
• If we sort the list of examples according to their values of the candidate attribute, a boundary point is a value in this list between 2 adjacent instances that have different values of the target attribute
• In the worst case, the number of boundary points is about equal to the number of instances
  – This happens if the target attribute oscillates back and forth between good and bad
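A sketch of how boundary points can be collected (my own code and toy data, not from the presentation): sort the examples by the continuous attribute and record a midpoint wherever two adjacent examples disagree on the target attribute.

    def boundary_points(examples, attribute, target):
        """Candidate cut points: midpoints between adjacent (sorted) examples
        whose target values differ. An inequality test would then be tried at
        each candidate. Adjacent duplicates of the attribute value are skipped."""
        ordered = sorted(examples, key=lambda e: e[attribute])
        cuts = []
        for a, b in zip(ordered, ordered[1:]):
            if a[target] != b[target] and a[attribute] != b[attribute]:
                cuts.append((a[attribute] + b[attribute]) / 2)
        return cuts

    # Toy data (not from the slides): commute length vs. departure time in minutes.
    toy = [
        {"leave": 455, "commute": "Short"},
        {"leave": 480, "commute": "Short"},
        {"leave": 510, "commute": "Long"},
        {"leave": 530, "commute": "Long"},
        {"leave": 600, "commute": "Short"},
    ]
    print(boundary_points(toy, "leave", "commute"))  # [495.0, 565.0]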

Example software on CD
• Show an example made using the software on the CD

Conclusions
• Decision Trees are an elegant way of learning – it is easy to expose their logic and understand what they have learned
• Decision Trees are not always the best way to learn – they have some weaknesses
• But they also have their own set of strengths

Conclusions
• Decision Trees work best for symbolic, discrete values
• They can be extended to work with continuous values
• B&W had to do some clustering of feedback values to use decision trees

Conclusions
• Up to now, the only use of Decision Trees in games has been in B&W
• What are they good for?
  – User modeling – teaching the computer how to react to the player; enhances replayability
  – Can be used to make bots that are the player’s allies more effective, as in B&W
  – Could also make enemies more intelligent – the player would be forced to come up with new strategies
• How else can they be used?
  – This is relatively unexplored territory, people – if you think you have a great idea, go for it