
Classification

Classification vs. Prediction
• Classification
– predicts categorical class labels (discrete or nominal)
– classifies data (constructs a model) based on the training set and the values (class labels) of a classifying attribute, and uses it to classify new data
• Prediction
– models continuous-valued functions, i.e., predicts unknown or missing values
• Typical applications
– Credit approval
– Target marketing
– Medical diagnosis
– Fraud detection

Classification—A Two-Step Process
• Model construction: describing a set of predetermined classes
– Each tuple/sample is assumed to belong to a predefined class, as determined by the class label attribute
– The set of tuples used for model construction is the training set
– The model is represented as classification rules, decision trees, or mathematical formulae
• Model usage: classifying future or unknown objects
– Estimate the accuracy of the model
• The known label of each test sample is compared with the classified result from the model
• The accuracy rate is the percentage of test-set samples that are correctly classified by the model
• The test set is independent of the training set; otherwise over-fitting will occur
– If the accuracy is acceptable, use the model to classify data tuples whose class labels are not known
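As a minimal sketch of this two-step process in Python (using scikit-learn and its bundled iris data purely for illustration; any training set with class labels would do):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Step 1: model construction on the training set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = DecisionTreeClassifier().fit(X_train, y_train)

# Step 2: model usage -- estimate accuracy on an independent test set,
# then classify unseen tuples with model.predict()
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))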

Process (1): Model Construction
[Diagram: training data are fed to a classification algorithm, which produces a classifier (model), e.g.
IF rank = 'professor' OR years > 6 THEN tenured = 'yes']

Process (2): Using the Model in Prediction
[Diagram: the classifier is first checked against testing data, then applied to unseen data, e.g. (Jeff, Professor, 4): Tenured?]

Supervised vs. Unsupervised Learning
• Supervised learning (classification)
– Supervision: the training data (observations, measurements, etc.) are accompanied by labels indicating the class of the observations
– New data are classified based on the training set
• Unsupervised learning (clustering)
– The class labels of the training data are unknown
– Given a set of measurements, observations, etc., the aim is to establish the existence of classes or clusters in the data

Decision Tree: Outline
• Decision tree representation
• ID3 learning algorithm
• Entropy, information gain
• Overfitting

Defining the Task
• Imagine we've got a set of data containing several types, or classes.
– E.g. information about customers, where the class is whether or not they buy anything.
• Can we predict, i.e. classify, whether a previously unseen customer will buy something?

An Example Decision Tree
[Diagram: a tree whose internal nodes test attributes (Attribute_n, Attribute_m, Attribute_k, Attribute_l), whose branches carry attribute values (v_n1, v_n2, ...), and whose leaves assign Class 1 or Class 2.]
We create a 'decision tree'. It acts like a function that can predict an output given an input.

Decision Trees
• The idea is to ask a series of questions, starting at the root, that will lead to a leaf node.
• The leaf node provides the classification.

Algorithm for Decision Tree Induction
• Basic algorithm
– The tree is constructed in a top-down, recursive, divide-and-conquer manner
– At the start, all the training examples are at the root
– Attributes are categorical (if continuous-valued, they are discretized in advance)
– Examples are partitioned recursively based on selected attributes
– Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain)
• Conditions for stopping partitioning
– All samples for a given node belong to the same class
– There are no remaining attributes for further partitioning (majority voting is employed for classifying the leaf)
– There are no samples left

Classification by Decision Tree Induction
Decision tree
- A flowchart-like tree structure
- Each internal node denotes a test on an attribute
- Each branch represents an outcome of the test
- Leaf nodes represent class labels or class distributions
Two phases of tree generation
- Tree construction
- at the start, all the training examples are at the root
- partition the examples recursively based on selected attributes
- Tree pruning
- identify and remove branches that reflect noise or outliers
Use of the decision tree: once the tree is built, it is used to classify an unknown sample

Decision Tree for PlayTennis
[Tree: Outlook = Sunny -> Humidity (High -> No, Normal -> Yes); Outlook = Overcast -> Yes; Outlook = Rain -> Wind (Strong -> No, Weak -> Yes)]

Decision Tree for PlayTennis
• Each internal node tests an attribute
• Each branch corresponds to an attribute value
• Each leaf node assigns a classification

Decision Tree for PlayTennis
Classifying the instance (Outlook = Sunny, Temperature = Hot, Humidity = High, Wind = Weak): following Outlook = Sunny and then Humidity = High through the tree gives PlayTennis = No.

Decision Trees
Consider these data: a number of examples of weather, for several days, with a classification 'PlayTennis'.

Decision Tree Algorithm
Building a decision tree:
1. Select an attribute
2. Create the subsets of the example data for each value of the attribute
3. For each subset
• if not all the elements of the subset belong to the same class, repeat steps 1-3 for the subset

Building Decision Trees
Let's start building the tree from scratch. We first need to decide which attribute to test first. Let's say we selected "humidity":
Humidity = high: D1, D2, D3, D4, D8, D12, D14
Humidity = normal: D5, D6, D7, D9, D10, D11, D13

Building Decision Trees
Now let's classify the first subset (D1, D2, D3, D4, D8, D12, D14) using the attribute "wind".

Building Decision Trees
Subset D1, D2, D3, D4, D8, D12, D14 classified by attribute "wind":
Humidity = high, wind = strong: D2, D14
Humidity = high, wind = weak: D1, D3, D4, D8
Humidity = normal: D5, D6, D7, D9, D10, D11, D13

Building Decision Trees
Now let's classify the subset D2, D14 using the attribute "outlook".

Building Decision Trees
Subset D2, D14 classified using attribute "outlook":
Sunny -> No, Rain -> No, Overcast -> Yes

Building Decision Trees
Now let's classify the subset D1, D3, D4, D8 using the attribute "outlook".

Building Decision Trees
Subset D1, D3, D4, D8 classified by "outlook":
Sunny -> No, Overcast -> Yes, Rain -> Yes

Building Decision Trees
Now classify the subset D5, D6, D7, D9, D10, D11, D13 (humidity = normal) using the attribute "outlook".

Building Decision Trees
Subset D5, D6, D7, D9, D10, D11, D13 classified by "outlook":
Sunny -> Yes, Overcast -> Yes, Rain -> D5, D6, D10 (still mixed)

Building Decision Trees
Finally, classify the subset D5, D6, D10 by "wind".

Building Decision Trees
Subset D5, D6, D10 classified by "wind": weak -> Yes, strong -> No. The tree is now complete.

Decision Trees and Logic
The decision tree can be expressed as a logical expression or as if-then-else sentences. The tree above predicts 'Yes' exactly when:
(humidity=high ∧ wind=strong ∧ outlook=overcast)
∨ (humidity=high ∧ wind=weak ∧ outlook=overcast)
∨ (humidity=normal ∧ outlook=sunny)
∨ (humidity=normal ∧ outlook=overcast)
∨ (humidity=normal ∧ outlook=rain ∧ wind=weak)

Using Decision Trees
Now let's classify an unseen example: <sunny, hot, normal, weak> = ?

Using Decision Trees
Classifying <sunny, hot, normal, weak>: follow humidity = normal, then outlook = sunny through the tree.

Using Decision Trees
Classification for <sunny, hot, normal, weak> = Yes

A Big Problem…
Here's another tree from the same training data that has a different attribute order. Which attribute should we choose for each branch?

Choosing Attributes
• We need a way of choosing the best attribute each time we add a node to the tree.
• Most commonly we use a measure called entropy.
• Entropy measures the degree of disorder in a set of objects.

Entropy
• In our system we have
– 9 positive examples
– 5 negative examples
• The entropy, E(S), of a set of examples is:
E(S) = - Σ (i = 1 to c) p_i log2 p_i
where c = the number of classes and p_i = the ratio of the number of examples of class i to the total number of examples.
• p+ = 9/14, p- = 5/14
• E = -(9/14) log2(9/14) - (5/14) log2(5/14) = 0.940
• In a homogeneous (totally ordered) system, the entropy is 0.
• In a totally heterogeneous system (totally disordered), all classes have equal numbers of instances; the entropy is 1.
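A minimal Python sketch of this formula (the function name is ours, not from the slides):

import math

def entropy(pos, neg):
    # E(S) = -sum(p_i log2 p_i); 0 log 0 is taken as 0
    total = pos + neg
    e = 0.0
    for count in (pos, neg):
        if count:
            p = count / total
            e -= p * math.log2(p)
    return e

print(entropy(9, 5))   # 0.940 for the 9+/5- system above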

Entropy
• We can evaluate each attribute for its entropy.
– E.g. evaluate the attribute "Temperature"
– Three values: 'Hot', 'Mild', 'Cool'
• So we have three subsets, one for each value of 'Temperature':
S_hot = {D1, D2, D3, D13}
S_mild = {D4, D8, D10, D11, D12, D14}
S_cool = {D5, D6, D7, D9}
We will now find E(S_hot), E(S_mild), E(S_cool).

Entropy
S_hot = {D1, D2, D3, D13}: 2 positive, 2 negative examples. Totally heterogeneous and disordered, therefore p+ = 0.5, p- = 0.5 and
E(S_hot) = -0.5 log2 0.5 - 0.5 log2 0.5 = 1.0
S_mild = {D4, D8, D10, D11, D12, D14}: 4 positive, 2 negative examples. Proportions of each class in this subset: p+ = 0.666, p- = 0.333 and
E(S_mild) = -0.666 log2 0.666 - 0.333 log2 0.333 = 0.918
S_cool = {D5, D6, D7, D9}: 3 positive, 1 negative example. Proportions of each class in this subset: p+ = 0.75, p- = 0.25 and
E(S_cool) = -0.75 log2 0.75 - 0.25 log2 0.25 = 0.811

Gain
Now we can compare the entropy of the system before we divided it into subsets using "Temperature" with the entropy of the system afterwards. This will tell us how good "Temperature" is as an attribute.
The entropy of the system after we use attribute "Temperature" is:
(|S_hot|/|S|) * E(S_hot) + (|S_mild|/|S|) * E(S_mild) + (|S_cool|/|S|) * E(S_cool)
= (4/14) * 1.0 + (6/14) * 0.918 + (4/14) * 0.811 = 0.9108
The difference between the entropy of the system before and after the split into subsets is called the gain:
Gain(S, Temperature) = E(before) - E(afterwards) = 0.940 - 0.9108 = 0.029
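Building on the entropy sketch above, the gain can be computed as follows (again an illustrative helper, with each subset given as a (positives, negatives) pair):

def information_gain(pos, neg, subsets):
    # gain = E(before) - weighted sum of the subset entropies
    total = pos + neg
    e_after = sum((p + n) / total * entropy(p, n) for p, n in subsets)
    return entropy(pos, neg) - e_after

# Gain(S, Temperature): 9+/5- split into hot (2+,2-), mild (4+,2-), cool (3+,1-)
print(information_gain(9, 5, [(2, 2), (4, 2), (3, 1)]))   # ~0.029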

Decreasing Entropy
[Diagram: 7 objects of a red class and 7 of a pink class, split by successive questions "Has a cross?" and "Has a ring?". From the initial state, where there is total disorder (E = 1.0), the first split gives subsets with E = -2/7 log2 2/7 - 5/7 log2 5/7, and in the final state all subsets contain a single class (E = 0.0).]

Tabulating the Possibilities
This shows the entropy calculations:

Attribute=value    |+|  |-|  E                                      E after dividing by A   Gain
Outlook=sunny      2    3    -2/5 log 2/5 - 3/5 log 3/5 = 0.9709    0.6935                  0.2465
Outlook=overcast   4    0    -4/4 log 4/4 - 0/4 log 0/4 = 0.0
Outlook=rain       3    2    -3/5 log 3/5 - 2/5 log 2/5 = 0.9709
Temp'=hot          2    2    -2/2 log 2/2 - 2/2 log 2/2 = 1.0       0.9108                  0.0292
Temp'=mild         4    2    -4/6 log 4/6 - 2/6 log 2/6 = 0.9183
Temp'=cool         3    1    -3/4 log 3/4 - 1/4 log 1/4 = 0.8112
etc.

Table continued…
…and this shows the gain calculations. E after dividing by A is the sum of the weighted values; Gain = (E before dividing by A) - (E after A):

E for each subset of A                   Weighted by proportion of total   E after A   Gain
-2/5 log 2/5 - 3/5 log 3/5 = 0.9709      x 5/14 = 0.34675                  0.6935      0.2465
-4/4 log 4/4 - 0/4 log 0/4 = 0.0         x 4/14 = 0.0
-3/5 log 3/5 - 2/5 log 2/5 = 0.9709      x 5/14 = 0.34675
-2/2 log 2/2 - 2/2 log 2/2 = 1.0         x 4/14 = 0.2857                   0.9108      0.0292
-4/6 log 4/6 - 2/6 log 2/6 = 0.9183      x 6/14 = 0.3935
-3/4 log 3/4 - 1/4 log 1/4 = 0.8112      x 4/14 = 0.2317

Gain
• We calculate the gain for all the attributes.
• Then we see which of them will bring more 'order' to the set of examples.
– Gain(S, Outlook) = 0.246
– Gain(S, Humidity) = 0.151
– Gain(S, Wind) = 0.048
– Gain(S, Temp') = 0.029
• The first node in the tree should be the one with the highest value, i.e. 'Outlook'.

ID3 (Decision Tree Algorithm: Quinlan 1979)
• ID3 was the first proper decision tree algorithm to use this mechanism.
Building a decision tree with the ID3 algorithm:
1. Select the attribute with the most gain
2. Create the subsets for each value of the attribute
3. For each subset
• if not all the elements of the subset belong to the same class, repeat steps 1-3 for the subset
Main hypothesis of ID3: the simplest tree that classifies the training examples will work best on future examples (Occam's Razor).

ID3 (Decision Tree Algorithm)
Function DecisionTreeLearner(Examples, TargetClass, Attributes):
• create a Root node for the tree
• if all Examples are positive, return the single-node tree Root, with label = Yes
• if all Examples are negative, return the single-node tree Root, with label = No
• if the Attributes list is empty,
– return the single-node tree Root, with label = most common value of TargetClass in Examples
• else
– A = the attribute from Attributes with the highest information gain with respect to Examples
– make A the decision attribute for Root
– for each possible value v of A:
• add a new tree branch below Root, corresponding to the test A = v
• let Examples_v be the subset of Examples that have value v for attribute A
• if Examples_v is empty then
– add a leaf node below this new branch with label = most common value of TargetClass in Examples
• else
– add the subtree DecisionTreeLearner(Examples_v, TargetClass, Attributes - {A})
• end if
• return Root
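A self-contained, runnable version of this pseudocode in Python (our own sketch, not Quinlan's implementation; examples are dicts of attribute -> value, and attributes are categorical):

import math
from collections import Counter

def set_entropy(examples, target):
    counts = Counter(e[target] for e in examples)
    n = len(examples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def gain(examples, attr, target):
    n = len(examples)
    e_after = 0.0
    for v in {e[attr] for e in examples}:
        subset = [e for e in examples if e[attr] == v]
        e_after += len(subset) / n * set_entropy(subset, target)
    return set_entropy(examples, target) - e_after

def id3(examples, attributes, target):
    labels = [e[target] for e in examples]
    if len(set(labels)) == 1:              # all one class: leaf
        return labels[0]
    if not attributes:                     # no attributes left: majority vote
        return Counter(labels).most_common(1)[0][0]
    best = max(attributes, key=lambda a: gain(examples, a, target))
    tree = {best: {}}
    for v in {e[best] for e in examples}:  # one branch per observed value
        subset = [e for e in examples if e[best] == v]
        rest = [a for a in attributes if a != best]
        tree[best][v] = id3(subset, rest, target)
    return tree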

The Problem of Overfitting
• Trees may grow to include irrelevant attributes.
• Noise may add spurious nodes to the tree.
• This can cause overfitting of the training data relative to test data.
Hypothesis H overfits the data if there exists an H' with greater error than H over the training examples, but less error than H over the entire distribution of instances.

Fixing Over-fitting
Two approaches to pruning:
Prepruning: stop growing the tree during training when it is determined that there is not enough data to make reliable choices.
Postpruning: grow the whole tree, but then remove the branches that do not contribute to good overall performance.

Rule Post-Pruning
• Prune (generalize) each rule by removing any preconditions (i.e., attribute tests) whose removal improves its accuracy over the validation set.
• Sort the pruned rules by accuracy, and consider them in this order when classifying subsequent instances.
• Example: IF (Outlook = Sunny) ∧ (Humidity = High) THEN PlayTennis = No
– Try removing the (Outlook = Sunny) condition or the (Humidity = High) condition, and select whichever pruning step leads to the biggest improvement in accuracy on the validation set (or else neither, if no improvement results).
• Converting to rules improves readability.

Advantages and Disadvantages of Decision Trees
• Advantages:
– Easy to understand
– Map nicely to production rules
– Suitable for categorical as well as numerical inputs
– No statistical assumptions about the distribution of attributes
– Generation, and application to classify unknown samples, is very fast
• Disadvantages:
– Output attributes must be categorical
– Unstable: slight variations in the training data may result in different attribute selections and hence different trees
– Numerical input attributes lead to complex trees, as attribute splits are usually binary

Assignment
Given the training data set for identifying whether a customer buys a computer or not, develop a decision tree using the ID3 technique.

Association Rules
• Example 1: a female shopper who buys a handbag is likely to buy shoes
• Example 2: when a male customer buys beer, he is likely to buy salted peanuts
• It is not very difficult to develop algorithms that will find these associations in a large database
– The problem is that such an algorithm will also uncover many other associations that are of very little value.

Association Rules
• It is necessary to introduce some measures to distinguish interesting associations from non-interesting ones
• Look for associations that have a lot of examples in the database: the support of an association rule
• It may be that a considerable group of people read all three magazines, but there is a much larger group that buys A & B but not C; the association is very weak here, although the support might be very high.

Associations…
• Confidence: the percentage of records for which C holds, within the group of records for which A & B hold
• Association rules are only useful in data mining if we already have a rough idea of what we are looking for.
• We will represent an association rule in the following way:
– MUSIC_MAG, HOUSE_MAG => CAR_MAG
– Somebody who reads both a music and a house magazine is also very likely to read a car magazine

Associations…
• Example: shopping basket analysis

Transaction   Chips   Rasbari   Samosa   Coke   Tea
T1            X       X
T2                    X         X
T3                    X         X              X

Example…
1. Find all frequent itemsets:
(a) 1-itemsets:
K = [{Chips} C=1, {Rasbari} C=3, {Samosa} C=2, {Tea} C=1]
(b) extend to 2-itemsets:
L = [{Chips, Rasbari} C=1, {Rasbari, Samosa} C=2, {Rasbari, Tea} C=1, {Samosa, Tea} C=1]
(c) extend to 3-itemsets:
M = [{Rasbari, Samosa, Tea} C=1]

Examples…
• Match with the requirements:
– Min. support is 2 (66%)
– (a) >> K1 = [{Rasbari}, {Samosa}]
– (b) >> L1 = [{Rasbari, Samosa}]
– (c) >> M1 = []
• Build all possible rules:
– (a) no rule
– (b) >> possible rules:
• Rasbari => Samosa
• Samosa => Rasbari
– (c) no rule
• Support: given the association rule X1, X2, …, Xn => Y, the support is the percentage of records for which X1, X2, …, Xn and Y all hold true.

Example…
• Calculate the confidence for (b):
– Confidence of [Rasbari => Samosa]:
• {Rasbari, Samosa} C=2 / {Rasbari} C=3 = 2/3 = 66%
– Confidence of [Samosa => Rasbari]:
• {Rasbari, Samosa} C=2 / {Samosa} C=2 = 2/2 = 100%
• Confidence: given the association rule X1, X2, …, Xn => Y, the confidence is the percentage of records for which Y holds, within the group of records for which X1, X2, …, Xn hold true.
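These support and confidence numbers can be reproduced with a few lines of Python (a sketch; the three transactions are taken from the basket table above):

transactions = [
    {"Chips", "Rasbari"},           # T1
    {"Rasbari", "Samosa"},          # T2
    {"Rasbari", "Samosa", "Tea"},   # T3
]

def support(itemset):
    # fraction of transactions containing every item of the itemset
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs):
    return support(lhs | rhs) / support(lhs)

print(support({"Rasbari", "Samosa"}))        # 2/3, i.e. 66%
print(confidence({"Rasbari"}, {"Samosa"}))   # 2/3, i.e. 66%
print(confidence({"Samosa"}, {"Rasbari"}))   # 2/2, i.e. 100%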

What Is Frequent Pattern Analysis?
• Frequent pattern: a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set
• First proposed by Agrawal, Imielinski, and Swami [AIS 93] in the context of frequent itemsets and association rule mining
• Motivation: finding inherent regularities in data
– What products were often purchased together? — Beer and diapers?!
– What are the subsequent purchases after buying a PC?
– What kinds of DNA are sensitive to this new drug?
– Can we automatically classify web documents?
• Applications
– Basket data analysis, cross-marketing, catalog design, sales campaign analysis, Web log (click stream) analysis, and DNA sequence analysis

Why Is Frequent Pattern Mining Important?
• Discloses an intrinsic and important property of data sets
• Forms the foundation for many essential data mining tasks
– Association, correlation, and causality analysis
– Sequential and structural (e.g., sub-graph) patterns
– Pattern analysis in spatiotemporal, multimedia, time-series, and stream data
– Classification: associative classification
– Cluster analysis: frequent-pattern-based clustering
– Data warehousing: iceberg cube and cube-gradient
– Semantic data compression: fascicles
– Broad applications

Basic Concepts: Frequent Patterns and Association Rules

Transaction-id   Items bought
10               A, B, D
20               A, C, D
30               A, D, E
40               B, E, F
50               B, C, D, E, F

• Itemset X = {x1, …, xk}
• Find all the rules X => Y with minimum support and confidence
– support, s: the probability that a transaction contains X ∪ Y
– confidence, c: the conditional probability that a transaction having X also contains Y
Let sup_min = 50%, conf_min = 50%. Then:
Frequent patterns: {A: 3, B: 3, D: 4, E: 3, AD: 3}
Association rules: A => D (60%, 100%), D => A (60%, 75%)

The A-Priori Algorithm
• Set the threshold for support rather high, to focus on a small number of best candidates.
• Observation: if a set of items X has support s, then each subset of X must also have support at least s. (If a pair {i, j} appears in, say, 1000 baskets, then we know there are at least 1000 baskets with item i and at least 1000 baskets with item j.)
• Algorithm:
1) Find the set of candidate items: those that appear in a sufficient number of baskets by themselves
2) Run the query on only the candidate items

Apriori Algorithm
[Flowchart: Begin -> initialise the candidate itemsets as single items in the database -> scan the database, count the frequency of the candidate itemsets, and decide the large itemsets based on the user-specified min_sup -> any new large itemsets? If yes, expand the large itemsets with one more item to generate new candidate itemsets and repeat; if no, stop.]

Apriori: A Candidate Generation-and-Test Approach
• Any subset of a frequent itemset must be frequent
– If {beer, diaper, nuts} is frequent, so is {beer, diaper}
– Every transaction having {beer, diaper, nuts} also contains {beer, diaper}
• Apriori pruning principle: if there is any itemset which is infrequent, its supersets should not be generated/tested!
• Performance studies show its efficiency and scalability

The Apriori Algorithm — An Example (min_support = 2)

Database TDB:
Tid   Items
10    A, C, D
20    B, C, E
30    A, B, C, E
40    B, E

1st scan -> C1: {A}: 2, {B}: 3, {C}: 3, {D}: 1, {E}: 3
L1: {A}: 2, {B}: 3, {C}: 3, {E}: 3
C2 (from L1): {A, B}, {A, C}, {A, E}, {B, C}, {B, E}, {C, E}
2nd scan -> C2 counts: {A, B}: 1, {A, C}: 2, {A, E}: 1, {B, C}: 2, {B, E}: 3, {C, E}: 2
L2: {A, C}: 2, {B, C}: 2, {B, E}: 3, {C, E}: 2
C3: {B, C, E}
3rd scan -> L3: {B, C, E}: 2

The Apriori Algorithm
• Pseudo-code:
Ck: candidate itemsets of size k
Lk: frequent itemsets of size k
L1 = {frequent items};
for (k = 1; Lk != ∅; k++) do begin
  Ck+1 = candidates generated from Lk;
  for each transaction t in database do
    increment the count of all candidates in Ck+1 that are contained in t;
  Lk+1 = candidates in Ck+1 with min_support;
end
return ∪k Lk;
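The pseudo-code translates fairly directly into Python; here is a compact, unoptimized sketch (the function and variable names are ours):

from itertools import combinations

def apriori(transactions, min_support):
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    # L1: frequent single items
    current = {frozenset([i]) for i in items
               if sum(i in t for t in transactions) >= min_support}
    frequent = set(current)
    k = 1
    while current:                          # loop until Lk is empty
        # generate C(k+1) by joining Lk with itself...
        candidates = {a | b for a in current for b in current
                      if len(a | b) == k + 1}
        # ...and prune candidates that have an infrequent k-subset
        candidates = {c for c in candidates
                      if all(frozenset(s) in current
                             for s in combinations(c, k))}
        # scan the database to count candidate supports
        current = {c for c in candidates
                   if sum(c <= t for t in transactions) >= min_support}
        frequent |= current
        k += 1
    return frequent

# The TDB example above with min_support = 2 yields {B, C, E} among others
tdb = [{'A', 'C', 'D'}, {'B', 'C', 'E'}, {'A', 'B', 'C', 'E'}, {'B', 'E'}]
print(sorted(map(sorted, apriori(tdb, 2))))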

Important Details of Apriori
• How to generate candidates?
– Step 1: self-joining Lk
– Step 2: pruning
• How to count the supports of candidates?
• Example of candidate generation
– L3 = {abc, abd, acd, ace, bcd}
– Self-joining: L3 * L3
• abcd from abc and abd
• acde from acd and ace
– Pruning:
• acde is removed because ade is not in L3
– C4 = {abcd}
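The self-join and prune steps on this example can be sketched as follows (itemsets kept as lexicographically sorted tuples; the helper names are ours):

from itertools import combinations

L3 = [('a','b','c'), ('a','b','d'), ('a','c','d'), ('a','c','e'), ('b','c','d')]

def self_join(Lk):
    out = set()
    for p in Lk:
        for q in Lk:
            if p[:-1] == q[:-1] and p[-1] < q[-1]:   # join on the first k-1 items
                out.add(p + (q[-1],))
    return out

def prune(candidates, Lk):
    Lk = set(Lk)
    return {c for c in candidates
            if all(s in Lk for s in combinations(c, len(c) - 1))}

C4 = self_join(L3)      # {abcd, acde}
print(prune(C4, L3))    # acde removed: ade is not in L3, so C4 = {abcd}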

Problems with A-priori Algorithms
• It is costly to handle a huge number of candidate sets. For example, if there are 10^4 large 1-itemsets, the Apriori algorithm will need to generate more than 10^7 candidate 2-itemsets. Moreover, for 100-itemsets, it must generate more than 2^100 ≈ 10^30 candidates in total.
• Candidate generation is the inherent cost of the A-priori algorithms, no matter what implementation technique is applied.
• To mine a large data set for long patterns, this algorithm is NOT a good idea.
• When the database is scanned to check Ck for creating Lk, a large number of transactions will be scanned even if they do not contain any k-itemset.

Mining Frequent Patterns Without Candidate Generation
• Grow long patterns from short ones using local frequent items
– "abc" is a frequent pattern
– Get all transactions having "abc": DB|abc
– If "d" is a local frequent item in DB|abc, then abcd is a frequent pattern

Construct FP-tree from a Transaction Database (min_support = 3)

TID   Items bought                  (Ordered) frequent items
100   f, a, c, d, g, i, m, p        f, c, a, m, p
200   a, b, c, f, l, m, o           f, c, a, b, m
300   b, f, h, j, o, w              f, b
400   b, c, k, s, p                 c, b, p
500   a, f, c, e, l, p, m, n        f, c, a, m, p

1. Scan the DB once, find the frequent 1-itemsets (single-item patterns)
2. Sort the frequent items in frequency-descending order: the f-list
3. Scan the DB again, construct the FP-tree

Header table (item: frequency): f: 4, c: 4, a: 3, b: 3, m: 3, p: 3
F-list = f-c-a-b-m-p
[Tree: root {} with branches f:4 -> (c:3 -> a:3 -> (m:2 -> p:2, b:1 -> m:1), b:1) and c:1 -> b:1 -> p:1]
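A minimal sketch of steps 1-3 (our own Node class and helper; ties between equally frequent items may be ordered differently than the slide's f-list, and any fixed order works):

from collections import Counter

class Node:
    def __init__(self, item):
        self.item, self.count, self.children = item, 0, {}

def build_fp_tree(transactions, min_support):
    # step 1: frequent single items; step 2: frequency-descending f-list
    freq = Counter(i for t in transactions for i in t)
    flist = [i for i, c in freq.most_common() if c >= min_support]
    # step 3: insert each reordered transaction into a counted prefix tree
    root = Node(None)
    for t in transactions:
        node = root
        for item in [i for i in flist if i in t]:
            node = node.children.setdefault(item, Node(item))
            node.count += 1
    return root, flist

tdb = [list('facdgimp'), list('abcflmo'), list('bfhjow'), list('bcksp'), list('afcelpmn')]
root, flist = build_fp_tree(tdb, 3)
print(flist)                       # the six items f, c, a, b, m, p
print(root.children['f'].count)    # 4: four transactions share the f prefix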

Benefits of the FP-tree Structure
• Completeness
– Preserves complete information for frequent pattern mining
– Never breaks a long pattern of any transaction
• Compactness
– Reduces irrelevant information: infrequent items are gone
– Items are in frequency-descending order: the more frequently occurring, the more likely to be shared
– Never larger than the original database (not counting node-links and the count fields)

Partition Patterns and Databases
• Frequent patterns can be partitioned into subsets according to the f-list
– F-list = f-c-a-b-m-p
– Patterns containing p
– Patterns having m but no p
– …
– Patterns having c but none of a, b, m, p
– Pattern f
• Completeness and non-redundancy

Find Patterns Having p From p's Conditional Database
• Start at the frequent-item header table in the FP-tree
• Traverse the FP-tree by following the link of each frequent item p
• Accumulate all of the transformed prefix paths of item p to form p's conditional pattern base

Conditional pattern bases:
item   cond. pattern base
c      f: 3
a      fc: 3
b      fca: 1, f: 1, c: 1
m      fca: 2, fcab: 1
p      fcam: 2, cb: 1

From Conditional Pattern-bases to Conditional FP-trees
• For each pattern-base
– Accumulate the count for each item in the base
– Construct the FP-tree for the frequent items of the pattern base
• m-conditional pattern base: fca: 2, fcab: 1
• m-conditional FP-tree: the single path {} -> f: 3 -> c: 3 -> a: 3
• All frequent patterns relating to m: m, fm, cm, am, fcm, fam, cam, fcam

FP-Growth vs. Apriori: Scalability With the Support Threshold
[Plot: run time versus support threshold on data set T25I20D10K.]

FP-Growth vs. Tree-Projection: Scalability with the Support Threshold
[Plot: run time versus support threshold on data set T25I20D100K.]

Why Is FP-Growth the Winner?
• Divide-and-conquer:
– decomposes both the mining task and the DB according to the frequent patterns obtained so far
– leads to focused search of smaller databases
• Other factors
– no candidate generation, no candidate test
– compressed database: the FP-tree structure
– no repeated scan of the entire database
– basic operations are counting local frequent items and building sub-FP-trees: no pattern search and matching

Artificial Neural Network: Outline
• Perceptrons
• Multi-layer networks
• Backpropagation
Some numbers:
• Neuron switching time: > 10^-3 secs
• Number of neurons in the human brain: ~10^11
• Connections (synapses) per neuron: ~10^4 to 10^5
• Face recognition: 0.1 secs
• High degree of parallel computation
• Distributed representations

Human Brain
• Computers and the Brain: A Contrast
– Arithmetic: 1 brain = 1/10 pocket calculator
– Vision: 1 brain = 1000 supercomputers
– Memory of arbitrary details: computer wins
– Memory of real-world facts: brain wins
– A computer must be programmed explicitly
– The brain can learn by experiencing the world

Definition
• "…Neural nets are basically mathematical models of information processing…"
• "…(neural nets) refer to machines that have a structure that, at some level, reflects what is known of the structure of the brain…"
• "A neural network is a massively parallel distributed processor…"

Properties of the Brain
• Architectural
– 80,000 neurons per square mm
– 10^11 neurons, 10^15 connections
– Most axons extend less than 1 mm (local connections)
• Operational
– Highly complex, nonlinear, parallel computer
– Operates at millisecond speeds

Interconnectedness
• Each neuron may have over a thousand synapses
• Some cells in the cerebral cortex may have 200,000 connections
• The total number of connections in the brain "network" is astronomical: greater than the number of particles in the known universe

Brain and Nervous System
• Around 100 billion neurons in the human brain
• Each of these is connected to many other neurons (typically 10,000 connections)
• Regions of the brain are (somewhat) specialised
• Some neurons connect to senses (input) and muscles (action)

Detail of a Neuron
[Diagram of a neuron.]

The Question
• Humans find these tasks relatively simple
• We learn by example
• The brain is responsible for our 'computing' power
• If a machine were constructed using the fundamental building blocks found in the brain, could it learn to do 'difficult' tasks???

Basic Ideas in Machine Learning
• Machine learning is focused on inductive learning of hypotheses from examples.
• Three main forms of learning:
– Supervised learning: examples are tagged with some "expert" information.
– Unsupervised learning: examples are placed into categories without guidance; instead, generic properties such as "similarity" are used.
– Reinforcement learning: examples are tested, and the results of those tests are used to drive learning.

Neural Network: Characteristics
• Highly parallel structure; hence a capability for fast computing
• Ability to learn and adapt to changing system parameters
• High degree of tolerance to damage in the connections
• Ability to learn through parallel and distributed processing

Neural Networks
• A neural network is composed of a number of nodes, or units, connected by links. Each link has a numeric weight associated with it.
• Each unit has a set of input links from other units, a set of output links to other units, a current activation level, and a means of computing the activation level at the next step in time.

Linear Threshold Unit (LTU)
Inputs x1, x2, …, xn (plus a fixed input x0 = 1) with weights w0, w1, …, wn feed an input unit computing the sum Σ (i = 0 to n) wi xi; the activation/output unit applies a threshold:
o(x) = 1 if Σ (i = 0 to n) wi xi > 0, -1 otherwise
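A minimal Python sketch of an LTU (the weights chosen here, realising a logical AND, are purely illustrative):

def ltu(weights, inputs):
    # weights[0] is the bias weight w0, paired with the fixed input x0 = 1
    s = weights[0] + sum(w * x for w, x in zip(weights[1:], inputs))
    return 1 if s > 0 else -1

print(ltu([-1.5, 1.0, 1.0], (1, 1)))   # 1: fires only when both inputs are 1
print(ltu([-1.5, 1.0, 1.0], (1, 0)))   # -1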

Layered Networks
• Single-layered
• Multi-layered
[Diagram: a two-layer, feed-forward network with two inputs I1, I2, two hidden nodes H3, H4 and one output node O5; weights w13, w14, w23, w24, w35, w45.]

Perceptrons
• A single-layered, feed-forward network can be taken as a perceptron.
[Diagram: inputs Ij connected by weights Wj,i to outputs Oi; a single perceptron has inputs Ij with weights Wj and one output O.]

Perceptron Learning Rule
wi = wi + Δwi, where Δwi = η (t - o) xi
• t = c(x) is the target value
• o is the perceptron output
• η is a small constant (e.g. 0.1) called the learning rate
• If the output is correct (t = o), the weights wi are not changed
• If the output is incorrect (t ≠ o), the weights wi are changed such that the output of the perceptron for the new weights is closer to t
>> Homework: the BACKPROPAGATION algorithm
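A small sketch of this rule training a two-input LTU on AND, with targets in {-1, +1} (the data set and parameter values are illustrative):

def ltu(w, x):
    s = w[0] + w[1] * x[0] + w[2] * x[1]   # w[0] is the bias weight
    return 1 if s > 0 else -1

data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
w = [0.0, 0.0, 0.0]
eta = 0.1                                  # learning rate

for epoch in range(20):
    for x, t in data:
        o = ltu(w, x)
        if o != t:                         # update only on mistakes
            w[0] += eta * (t - o)          # bias input x0 = 1
            w[1] += eta * (t - o) * x[0]
            w[2] += eta * (t - o) * x[1]

print([ltu(w, x) for x, _ in data])        # [-1, -1, -1, 1] after convergence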

Genetic Algorithm
• Derived its inspiration from biology
• The most fertile area for the exchange of views between biology and computer science is 'evolutionary computing'
• This area evolved from three more or less independent lines of development:
– Genetic algorithms
– Evolutionary programming
– Evolution strategies

GA…
• Investigators began to see a strong relationship between these areas, and at present genetic algorithms are considered to be among the most successful machine-learning techniques.
• In "The Origin of Species", Darwin described the theory of evolution, with 'natural selection' as the central notion.
– Each species has an overproduction of individuals, and in a tough struggle for life only those individuals that are best adapted to the environment survive.
• The long DNA molecules, consisting of only four building blocks, suggest that all the hereditary information of a human individual, or of any living creature, has been laid down in a language of only four letters (C, G, A and T in the language of genetics).

How large is the decision space?
• If we were to look at every alternative, what would we have to do? Of course, it depends…
• Think: enzymes
– Catalyze all reactions in the cell
– Biological enzymes are composed of amino acids
– There are 20 naturally-occurring amino acids
– Easily, enzymes are 1000 amino acids long
– 20^1000 = (2^1000)(10^1000) ≈ 10^1300
• A reference number, a benchmark: 10^80 = the number of atomic particles in the universe

How large is the decision space?
• Problem: design an icon in black & white. How many options?
– The icon is 32 x 32 = 1024 pixels
– Each pixel can be on or off, so there are 2^1024 options
– 2^1024 ≈ (2^20)^50 ≈ (10^6)^50 = 10^300
• Police faces
– 10 types of eyes
– 10 types of noses
– 10 types of eyebrows
– 10 types of head shape
– 10 types of mouth
– 10 types of ears
– but already we have 10^7 faces

GA…
• The collection of genetic instructions for a human is about 3 billion letters long
– Each individual inherits some characteristics of the father and some of the mother
– Individual differences between people, such as hair color and eye color, and also predisposition for diseases, are caused by differences in genetic coding
• Even twins are different in numerous aspects

Genetic Algorithm Components
• Selection
– determines how many and which individuals breed
– premature convergence sacrifices solution quality for speed
• Crossover
– select a random crossover point
– successfully exchange substructures
– 00000 x 11111 at point 2 yields 00111 and 11000
• Mutation
– random changes in the genetic material (bit pattern)
– for problems with billions of local optima, mutations help find the global optimum solution
• Evaluator function
– ranks the fitness of each individual in the population
– may be a simple function (a maximum) or a complex function
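A toy sketch wiring these four components together on the OneMax problem (fitness = number of 1-bits; all parameter values are illustrative):

import random

LENGTH, POP, GENS, MUT = 20, 30, 50, 0.01

def fitness(bits):                 # evaluator function
    return sum(bits)

def crossover(a, b):               # one random crossover point
    p = random.randrange(1, LENGTH)
    return a[:p] + b[p:], b[:p] + a[p:]

def mutate(bits):                  # random changes in the bit pattern
    return [1 - x if random.random() < MUT else x for x in bits]

def select(pop):                   # selection: fitter of two random individuals
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    nxt = []
    while len(nxt) < POP:
        c1, c2 = crossover(select(pop), select(pop))
        nxt += [mutate(c1), mutate(c2)]
    pop = nxt
print(max(fitness(x) for x in pop))   # approaches LENGTH = 20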

GA…
• The recipe for constructing a genetic algorithm for the solution of a problem:
– Write a good coding in terms of strings over a limited alphabet
– Invent an artificial environment in the computer where solutions can join each other
– Develop ways in which possible solutions can be combined: the father's and mother's strings are simply cut and, after exchanging, stuck together again ('crossover')
– Provide an initial population or solution set, and make the computer play evolution by removing bad solutions from each generation and replacing them with mutations of good solutions
– Stop when a family of successful solutions has been produced

Example

Genetic algorithms