
CS 590D: Data Mining — Prof. Chris Clifton — February 21, 2006: Clustering

Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary

What is Cluster Analysis?
• Finding groups of objects such that the objects in a group are similar (or related) to one another and different from (or unrelated to) the objects in other groups
• Intra-cluster distances are minimized; inter-cluster distances are maximized

What is Cluster Analysis?
• Cluster: a collection of data objects
– Similar to one another within the same cluster
– Dissimilar to the objects in other clusters
• Cluster analysis: grouping a set of data objects into clusters
• Clustering is unsupervised classification: no predefined classes
• Typical applications
– As a stand-alone tool to get insight into data distribution
– As a preprocessing step for other algorithms

General Applications of Clustering
• Pattern recognition
• Spatial data analysis
– Create thematic maps in GIS by clustering feature spaces
– Detect spatial clusters and explain them in spatial data mining
• Image processing
• Economic science (especially market research)
• WWW
– Document classification
– Clustering Weblog data to discover groups of similar access patterns

Examples of Clustering Applications
• Marketing: help marketers discover distinct groups in their customer bases, and then use this knowledge to develop targeted marketing programs
• Land use: identification of areas of similar land use in an earth observation database
• Insurance: identifying groups of motor insurance policy holders with a high average claim cost
• City planning: identifying groups of houses according to house type, value, and geographical location
• Earthquake studies: observed earthquake epicenters should be clustered along continent faults

Notion of a Cluster Can Be Ambiguous
• How many clusters? [figure: the same points plausibly grouped as two, four, or six clusters]

What Is Good Clustering?
• A good clustering method will produce high-quality clusters with
– high intra-class similarity
– low inter-class similarity
• The quality of a clustering result depends on both the similarity measure used by the method and its implementation
• The quality of a clustering method is also measured by its ability to discover some or all of the hidden patterns

Types of Clusterings
• A clustering is a set of clusters
• Important distinction between hierarchical and partitional sets of clusters
• Partitional clustering: a division of data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset
• Hierarchical clustering: a set of nested clusters organized as a hierarchical tree

Partitional Clustering [figure: original points vs. a partitional clustering]

Hierarchical Clustering [figure: traditional and non-traditional hierarchical clusterings with their dendrograms]

Other Distinctions Between Sets of Clusters
• Exclusive versus non-exclusive
– In non-exclusive clusterings, points may belong to multiple clusters
– Can represent multiple classes or 'border' points
• Fuzzy versus non-fuzzy
– In fuzzy clustering, a point belongs to every cluster with some weight between 0 and 1
– Weights must sum to 1
– Probabilistic clustering has similar characteristics
• Partial versus complete
– In some cases, we only want to cluster some of the data
• Heterogeneous versus homogeneous
– Clusters of widely different sizes, shapes, and densities

Types of Clusters
• Well-separated clusters
• Center-based clusters
• Contiguous clusters
• Density-based clusters
• Clusters described by a shared property or concept
• Clusters described by an objective function

Types of Clusters: Well-Separated
• Well-separated clusters: a cluster is a set of points such that any point in a cluster is closer (or more similar) to every other point in the cluster than to any point not in the cluster
[figure: 3 well-separated clusters]

Types of Clusters: Center-Based
• Center-based: a cluster is a set of objects such that an object in a cluster is closer (more similar) to the "center" of its cluster than to the center of any other cluster
• The center of a cluster is often a centroid, the average of all the points in the cluster, or a medoid, the most "representative" point of a cluster
[figure: 4 center-based clusters]

Types of Clusters: Contiguity-Based
• Contiguous cluster (nearest neighbor or transitive): a cluster is a set of points such that a point in a cluster is closer (or more similar) to one or more other points in the cluster than to any point not in the cluster
[figure: 8 contiguous clusters]

Types of Clusters: Density-Based
• Density-based: a cluster is a dense region of points, separated from other regions of high density by low-density regions
• Used when the clusters are irregular or intertwined, and when noise and outliers are present
[figure: 6 density-based clusters]

Types of Clusters: Conceptual Clusters
• Shared property or conceptual clusters: finds clusters that share some common property or represent a particular concept
[figure: 2 overlapping circles]

Types of Clusters: Objective Function
• Clusters defined by an objective function
– Finds clusters that minimize or maximize an objective function
– Enumerate all possible ways of dividing the points into clusters and evaluate the "goodness" of each potential set of clusters using the given objective function (NP-hard)
– Can have global or local objectives
• Hierarchical clustering algorithms typically have local objectives
• Partitional algorithms typically have global objectives
– A variation of the global objective function approach is to fit the data to a parameterized model
• Parameters for the model are determined from the data
• Mixture models assume that the data is a "mixture" of a number of statistical distributions

Types of Clusters: Objective Function (cont.)
• Map the clustering problem to a different domain and solve a related problem in that domain
– The proximity matrix defines a weighted graph, where the nodes are the points being clustered and the weighted edges represent the proximities between points
– Clustering is equivalent to breaking the graph into connected components, one for each cluster
– Want to minimize the edge weight between clusters and maximize the edge weight within clusters

Requirements of Clustering in Data Mining
• Scalability
• Ability to deal with different types of attributes
• Discovery of clusters with arbitrary shape
• Minimal requirements for domain knowledge to determine input parameters
• Ability to deal with noise and outliers
• Insensitivity to the order of input records
• High dimensionality
• Incorporation of user-specified constraints
• Interpretability and usability

Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary

Data Structures
• Data matrix (two modes): n objects × p variables, with entries x_if
• Dissimilarity matrix (one mode): an n × n table of pairwise dissimilarities d(i, j)

Measure the Quality of Clustering
• Dissimilarity/similarity metric: similarity is expressed in terms of a distance function, which is typically a metric: d(i, j)
• There is a separate "quality" function that measures the "goodness" of a cluster
• The definitions of distance functions are usually very different for interval-scaled, boolean, categorical, ordinal, and ratio variables
• Weights should be associated with different variables based on applications and data semantics
• It is hard to define "similar enough" or "good enough": the answer is typically highly subjective

Types of Data in Clustering Analysis
• Interval-scaled variables
• Binary variables
• Nominal, ordinal, and ratio variables
• Variables of mixed types

Interval-Valued Variables
• Standardize data
– Calculate the mean absolute deviation: s_f = (1/n)(|x_1f − m_f| + |x_2f − m_f| + … + |x_nf − m_f|), where m_f = (1/n)(x_1f + x_2f + … + x_nf)
– Calculate the standardized measurement (z-score): z_if = (x_if − m_f) / s_f
• Using the mean absolute deviation is more robust than using the standard deviation
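A minimal sketch of this standardization in Python/NumPy (the function name and the NumPy dependency are mine, not the deck's; it assumes no constant columns, so s_f > 0):

```python
import numpy as np

def standardize(X):
    """Per-variable z-scores using the mean absolute deviation:
    s_f = mean(|x_if - m_f|),  z_if = (x_if - m_f) / s_f."""
    X = np.asarray(X, dtype=float)
    m = X.mean(axis=0)               # m_f: the mean of each variable
    s = np.abs(X - m).mean(axis=0)   # mean absolute deviation (not the std dev)
    return (X - m) / s
```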

Similarity and Dissimilarity Between Objects
• Distances are normally used to measure the similarity or dissimilarity between two data objects
• A popular choice is the Minkowski distance: d(i, j) = (|x_i1 − x_j1|^q + |x_i2 − x_j2|^q + … + |x_ip − x_jp|^q)^(1/q), where i = (x_i1, x_i2, …, x_ip) and j = (x_j1, x_j2, …, x_jp) are two p-dimensional data objects and q is a positive integer
• If q = 1, d is the Manhattan distance

Similarity and Dissimilarity Between Objects (cont.)
• If q = 2, d is the Euclidean distance: d(i, j) = sqrt(|x_i1 − x_j1|² + |x_i2 − x_j2|² + … + |x_ip − x_jp|²)
– Properties
• d(i, j) ≥ 0
• d(i, i) = 0
• d(i, j) = d(j, i)
• d(i, j) ≤ d(i, k) + d(k, j)
• One can also use a weighted distance, the parametric Pearson product-moment correlation, or other dissimilarity measures
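The Minkowski family is easy to state in code; a small sketch (the function name is hypothetical):

```python
import numpy as np

def minkowski(x, y, q):
    """d(i, j) = (sum_f |x_f - y_f|^q)^(1/q)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float((np.abs(x - y) ** q).sum() ** (1.0 / q))

print(minkowski([1, 2, 3], [4, 6, 8], 1))  # Manhattan (q=1): 12.0
print(minkowski([1, 2, 3], [4, 6, 8], 2))  # Euclidean (q=2): ~7.07
```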

Binary Variables
• A contingency table for binary data: for objects i and j, let a = number of variables equal to 1 for both, b = number equal to 1 for i and 0 for j, c = number equal to 0 for i and 1 for j, and d = number equal to 0 for both
• Simple matching coefficient (invariant, if the binary variable is symmetric): d(i, j) = (b + c) / (a + b + c + d)
• Jaccard coefficient (noninvariant, if the binary variable is asymmetric): d(i, j) = (b + c) / (a + b + c)
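A sketch of both coefficients via the contingency counts (the function name is mine):

```python
def binary_dissimilarity(x, y, symmetric=True):
    """x, y are 0/1 sequences.
    symmetric=True  -> simple matching: (b + c) / (a + b + c + d)
    symmetric=False -> Jaccard:         (b + c) / (a + b + c), 0-0 matches ignored"""
    a = sum(1 for u, v in zip(x, y) if (u, v) == (1, 1))
    b = sum(1 for u, v in zip(x, y) if (u, v) == (1, 0))
    c = sum(1 for u, v in zip(x, y) if (u, v) == (0, 1))
    d = sum(1 for u, v in zip(x, y) if (u, v) == (0, 0))
    return (b + c) / (a + b + c + d) if symmetric else (b + c) / (a + b + c)
```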

Dissimilarity Between Binary Variables
• Example (patient records):
Name | Gender | Fever | Cough | Test-1 | Test-2 | Test-3 | Test-4
Jack | M | Y | N | P | N | N | N
Mary | F | Y | N | P | N | P | N
Jim | M | Y | P | N | N | N | N
– gender is a symmetric attribute
– the remaining attributes are asymmetric binary
– let the values Y and P be set to 1, and the value N be set to 0
– applying the Jaccard coefficient to the asymmetric attributes: d(Jack, Mary) = (0 + 1)/(2 + 0 + 1) = 0.33, d(Jack, Jim) = (1 + 1)/(1 + 1 + 1) = 0.67, d(Jim, Mary) = (1 + 2)/(1 + 1 + 2) = 0.75

Nominal Variables
• A generalization of the binary variable in that it can take more than 2 states, e.g., red, yellow, blue, green
• Method 1: simple matching — d(i, j) = (p − m) / p, where m is the number of matches and p the total number of variables
• Method 2: use a large number of binary variables — create a new binary variable for each of the M nominal states

Ordinal Variables
• An ordinal variable can be discrete or continuous
• Order is important, e.g., rank
• Can be treated like interval-scaled variables
– replace x_if by its rank r_if ∈ {1, …, M_f}
– map the range of each variable onto [0, 1] by replacing the rank of the i-th object in the f-th variable with z_if = (r_if − 1) / (M_f − 1)
– compute the dissimilarity using methods for interval-scaled variables

Ratio-Scaled Variables
• Ratio-scaled variable: a positive measurement on a nonlinear scale, approximately at exponential scale, such as Ae^{Bt} or Ae^{−Bt}
• Methods:
– treat them like interval-scaled variables — not a good choice! (why? the scale can be distorted)
– apply a logarithmic transformation: y_if = log(x_if)
– treat them as continuous ordinal data and treat their ranks as interval-scaled

Variables of Mixed Types
• A database may contain all six types of variables: symmetric binary, asymmetric binary, nominal, ordinal, interval, and ratio
• One may use a weighted formula to combine their effects: d(i, j) = Σ_f δ_ij^(f) d_ij^(f) / Σ_f δ_ij^(f)
– f is binary or nominal: d_ij^(f) = 0 if x_if = x_jf, otherwise d_ij^(f) = 1
– f is interval-based: use the normalized distance
– f is ordinal or ratio-scaled: compute the ranks r_if, set z_if = (r_if − 1) / (M_f − 1), and treat z_if as interval-scaled

CS 590D: Data Mining — Prof. Chris Clifton — February 23, 2006: Clustering

Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary

Partitioning Algorithms: Basic Concept
• Partitioning method: construct a partition of a database D of n objects into a set of k clusters
• Given a k, find a partition of k clusters that optimizes the chosen partitioning criterion
– Global optimal: exhaustively enumerate all partitions
– Heuristic methods: the k-means and k-medoids algorithms
– k-means (MacQueen '67): each cluster is represented by the center of the cluster
– k-medoids or PAM (Partitioning Around Medoids) (Kaufman & Rousseeuw '87): each cluster is represented by one of the objects in the cluster

The K-Means Clustering Method
• Given k, the k-means algorithm is implemented in four steps (see the sketch below):
– Partition objects into k nonempty subsets
– Compute seed points as the centroids of the clusters of the current partition (the centroid is the center, i.e., mean point, of the cluster)
– Assign each object to the cluster with the nearest seed point
– Go back to step 2; stop when there are no more new assignments
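A runnable sketch of these steps, assuming numeric NumPy data and randomly chosen initial centers (the function name and defaults are mine):

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    """Minimal k-means; returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    centroids = X[rng.choice(len(X), size=k, replace=False)]  # initial seed points
    for _ in range(max_iter):
        # assign each object to the cluster with the nearest seed point
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute each centroid as the mean of its cluster
        new = np.array([X[labels == c].mean(axis=0) if (labels == c).any()
                        else centroids[c] for c in range(k)])
        if np.allclose(new, centroids):   # stop: no more new assignments
            break
        centroids = new
    return centroids, labels
```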

The K-Means Clustering Method
[figure: with K = 2, arbitrarily choose K objects as initial cluster centers; assign each object to the most similar center; update the cluster means; reassign; repeat until stable]

Comments on the K-Means Method
• Strength: relatively efficient: O(tkn), where n is the number of objects, k the number of clusters, and t the number of iterations; normally k, t << n
• For comparison: PAM is O(k(n−k)²) per iteration; CLARA is O(ks² + k(n−k))
• Comment: often terminates at a local optimum; the global optimum may be found using techniques such as deterministic annealing and genetic algorithms
• Weaknesses
– Applicable only when a mean is defined; what about categorical data?
– Need to specify k, the number of clusters, in advance
– Unable to handle noisy data and outliers
– Not suitable for discovering clusters with non-convex shapes

Variations of the K-Means Method
• The variants of k-means differ in
– selection of the initial k means
– dissimilarity calculations
– strategies for calculating cluster means
• Handling categorical data: k-modes (Huang '98)
– Replaces the means of clusters with modes
– Uses new dissimilarity measures to deal with categorical objects
– Uses a frequency-based method to update the modes of clusters
– For a mixture of categorical and numerical data: the k-prototype method

What Is the Problem with the K-Means Method?
• The k-means algorithm is sensitive to outliers, since an object with an extremely large value may substantially distort the distribution of the data
• K-medoids: instead of taking the mean value of the objects in a cluster as a reference point, a medoid can be used, which is the most centrally located object in a cluster
[figure: the same data clustered around a mean vs. around a medoid]

Importance of Choosing Initial Centroids [figure]

Solutions to the Initial Centroids Problem
• Multiple runs
– Helps, but probability is not on your side
• Sample and use hierarchical clustering to determine initial centroids
• Select more than k initial centroids and then select among these initial centroids
– Select the most widely separated
• Postprocessing
• Bisecting K-means
– Not as susceptible to initialization issues

Limitations of K-means: Differing Density [figure: original points vs. K-means (3 clusters)]

Limitations of K-means: Non-globular Shapes [figure: original points vs. K-means (2 clusters)]

The K-Medoids Clustering Method
• Find representative objects, called medoids, in clusters
• PAM (Partitioning Around Medoids, 1987)
– Starts from an initial set of medoids and iteratively replaces one of the medoids by one of the non-medoids if it improves the total distance of the resulting clustering
– PAM works effectively for small data sets, but does not scale well to large data sets
• CLARA (Kaufmann & Rousseeuw, 1990)
• CLARANS (Ng & Han, 1994): randomized sampling
• Focusing + spatial data structure (Ester et al., 1995)

Typical K-Medoids Algorithm (PAM)
[figure: with K = 2, arbitrarily choose k objects as initial medoids and assign each remaining object to the nearest medoid (total cost = 20); then loop: randomly select a non-medoid object O_random, compute the total cost of swapping, and swap O and O_random if quality is improved (rejected trial shown with total cost = 26); repeat until no change]

PAM (Partitioning Around Medoids) (1987)
• PAM (Kaufman and Rousseeuw, 1987), built into S-Plus
• Uses real objects to represent the clusters
– Select k representative objects arbitrarily
– For each pair of a non-selected object h and a selected object i, calculate the total swapping cost TC_ih
– For each pair of i and h: if TC_ih < 0, i is replaced by h; then assign each non-selected object to the most similar representative object
– Repeat steps 2–3 until there is no change
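A naive sketch of this swap loop on a precomputed distance matrix (function names are mine; each candidate swap is evaluated by recomputing the total cost, which is what makes PAM expensive):

```python
import numpy as np

def pam_cost(D, medoids):
    """Total cost: each object contributes its distance to the nearest medoid."""
    return D[:, medoids].min(axis=1).sum()

def pam(D, k, seed=0):
    """Naive PAM on a precomputed n x n distance matrix D."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    medoids = list(rng.choice(n, size=k, replace=False))  # arbitrary initial medoids
    improved = True
    while improved:
        improved = False
        for mi in range(k):                 # selected object i = medoids[mi]
            for h in range(n):              # candidate non-medoid h
                if h in medoids:
                    continue
                trial = medoids[:mi] + [h] + medoids[mi + 1:]
                # TC_ih < 0 (total cost decreases) => replace i by h
                if pam_cost(D, trial) < pam_cost(D, medoids):
                    medoids, improved = trial, True
    labels = D[:, medoids].argmin(axis=1)   # most similar representative object
    return medoids, labels
```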

PAM Clustering: Total Swapping Cost
TC_ih = Σ_j C_jih
[figure: the four cases for the contribution C_jih of each object j when medoid i is swapped with non-medoid h, depending on whether j is reassigned to h, to another medoid t, or stays put]

What Is the Problem with PAM?
• PAM is more robust than k-means in the presence of noise and outliers, because a medoid is less influenced by outliers or other extreme values than a mean
• PAM works efficiently for small data sets but does not scale well to large data sets
– O(k(n−k)²) for each iteration, where n is the number of data points and k the number of clusters
→ Sampling-based method: CLARA (Clustering LARge Applications)

CLARA (Clustering LARge Applications) (1990)
• CLARA (Kaufmann and Rousseeuw, 1990)
– Built into statistical analysis packages, such as S+
• Draws multiple samples of the data set, applies PAM on each sample, and gives the best clustering as the output
• Strength: deals with larger data sets than PAM
• Weaknesses:
– Efficiency depends on the sample size
– A good clustering based on samples will not necessarily represent a good clustering of the whole data set if the sample is biased

CLARANS ("Randomized" CLARA) (1994)
• CLARANS (A Clustering Algorithm based on RANdomized Search) (Ng and Han '94)
• Draws a sample of neighbors dynamically
• The clustering process can be presented as searching a graph where every node is a potential solution, that is, a set of k medoids
• If a local optimum is found, CLARANS starts with a new randomly selected node in search of a new local optimum
• More efficient and scalable than both PAM and CLARA
• Focusing techniques and spatial access structures may further improve its performance (Ester et al. '95)

Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary

Hierarchical Clustering
• Uses a distance matrix as the clustering criterion. This method does not require the number of clusters k as an input, but needs a termination condition
[figure: agglomerative (AGNES) proceeds from step 0 to step 4, merging a, b into ab and c, d, e into cde, then everything into abcde; divisive (DIANA) runs the same steps in reverse]

AGNES (AGglomerative NESting)
• Introduced in Kaufmann and Rousseeuw (1990)
• Implemented in statistical analysis packages, e.g., S-Plus
• Uses the single-link method and the dissimilarity matrix
• Merges the nodes that have the least dissimilarity
• Proceeds in a non-descending fashion
• Eventually all nodes belong to the same cluster

Agglomerative Clustering Algorithm
• The more popular hierarchical clustering technique
• The basic algorithm is straightforward (a sketch follows below):
1. Compute the proximity matrix
2. Let each data point be a cluster
3. Repeat
4. Merge the two closest clusters
5. Update the proximity matrix
6. Until only a single cluster remains
• The key operation is the computation of the proximity of two clusters
– Different approaches to defining the distance between clusters distinguish the different algorithms
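A sketch of the basic algorithm using the MIN (single-link) definition of cluster proximity; D is assumed to be a precomputed distance matrix, and the function name is mine:

```python
import numpy as np

def single_link_agglomerative(D, k):
    """Merge the two closest clusters until k remain.
    Returns a list of clusters, each a set of point indices."""
    D = np.asarray(D, float)
    clusters = [{i} for i in range(len(D))]        # each point starts as a cluster
    while len(clusters) > k:                       # repeat until termination
        best = (None, None, np.inf)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single link: proximity = distance of the closest pair of points
                d = min(D[i, j] for i in clusters[a] for j in clusters[b])
                if d < best[2]:
                    best = (a, b, d)
        a, b, _ = best
        clusters[a] |= clusters[b]                 # merge the two closest clusters
        del clusters[b]                            # implicit proximity-matrix update
    return clusters
```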

Starting Situation
• Start with clusters of individual points and a proximity matrix
[figure: points p1…p5 and their proximity matrix]

Intermediate Situation
• After some merging steps, we have some clusters
[figure: clusters C1…C5 and their proximity matrix]

Intermediate Situation
• We want to merge the two closest clusters (C2 and C5) and update the proximity matrix
[figure]

After Merging
• The question is "How do we update the proximity matrix?"
[figure: the merged cluster C2 ∪ C5 and the unknown "?" entries in the proximity matrix]

How to Define Inter-Cluster Similarity
• MIN
• MAX
• Group average
• Distance between centroids
• Other methods driven by an objective function
– Ward's method uses squared error
[figure: proximity matrix for points p1…p5, repeated in the deck to illustrate each choice]

Hierarchical Clustering: Time and Space Requirements
• O(N²) space, since it uses the proximity matrix
– N is the number of points
• O(N³) time in many cases
– There are N steps, and at each step the N²-sized proximity matrix must be updated and searched
– Complexity can be reduced to O(N² log N) time for some approaches

Hierarchical Clustering: Problems and Limitations
• Once a decision is made to combine two clusters, it cannot be undone
• No objective function is directly minimized
• Different schemes have problems with one or more of the following:
– Sensitivity to noise and outliers
– Difficulty handling clusters of different sizes and convex shapes
– Breaking large clusters

A Dendrogram Shows How the Clusters Are Merged Hierarchically
• Decompose data objects into several levels of nested partitioning (a tree of clusters), called a dendrogram
• A clustering of the data objects is obtained by cutting the dendrogram at the desired level; then each connected component forms a cluster

DIANA (DIvisive ANAlysis)
• Introduced in Kaufmann and Rousseeuw (1990)
• Implemented in statistical analysis packages, e.g., S-Plus
• Inverse order of AGNES
• Eventually each node forms a cluster on its own

More on Hierarchical Clustering Methods
• Major weaknesses of agglomerative clustering methods
– Do not scale well: time complexity of at least O(n²), where n is the total number of objects
– Can never undo what was done previously
• Integration of hierarchical with distance-based clustering
– BIRCH (1996): uses a CF-tree and incrementally adjusts the quality of sub-clusters
– CURE (1998): selects well-scattered points from the cluster and then shrinks them towards the center of the cluster by a specified fraction
– CHAMELEON (1999): hierarchical clustering using dynamic modeling

BIRCH (1996)
• BIRCH: Balanced Iterative Reducing and Clustering using Hierarchies, by Zhang, Ramakrishnan, and Livny (SIGMOD '96)
• Incrementally constructs a CF (Clustering Feature) tree, a hierarchical data structure for multiphase clustering
– Phase 1: scan the DB to build an initial in-memory CF tree (a multi-level compression of the data that tries to preserve its inherent clustering structure)
– Phase 2: use an arbitrary clustering algorithm to cluster the leaf nodes of the CF-tree
• Scales linearly: finds a good clustering with a single scan and improves the quality with a few additional scans
• Weaknesses: handles only numeric data, and is sensitive to the order of the data records

Clustering Feature Vector
• Clustering Feature: CF = (N, LS, SS)
– N: number of data points
– LS: Σ_{i=1..N} X_i (linear sum of the points)
– SS: Σ_{i=1..N} X_i² (square sum of the points)
• Example: the points (3,4), (2,6), (4,5), (4,7), (3,8) give CF = (5, (16, 30), (54, 190))
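The CF vector is cheap to compute, and CFs are additive, which is what lets BIRCH update the tree incrementally; a sketch reproducing the slide's example (function names are mine):

```python
import numpy as np

def cf(points):
    """Clustering Feature of a set of points: CF = (N, LS, SS)."""
    P = np.asarray(points, float)
    return len(P), P.sum(axis=0), (P ** 2).sum(axis=0)

def cf_merge(cf1, cf2):
    """CFs are additive: merging two subclusters just adds their CFs."""
    return cf1[0] + cf2[0], cf1[1] + cf2[1], cf1[2] + cf2[2]

n, ls, ss = cf([(3, 4), (2, 6), (4, 5), (4, 7), (3, 8)])
print(n, ls, ss)   # 5 [16. 30.] [54. 190.]
```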

CF-Tree in BIRCH
• Clustering feature:
– A summary of the statistics for a given subcluster: the 0th, 1st, and 2nd moments of the subcluster from the statistical point of view
– Registers crucial measurements for computing clusters and utilizes storage efficiently
• A CF tree is a height-balanced tree that stores the clustering features for a hierarchical clustering
– A nonleaf node in the tree has descendants or "children"
– The nonleaf nodes store the sums of the CFs of their children
• A CF tree has two parameters
– Branching factor: specifies the maximum number of children
– Threshold: the maximum diameter of the sub-clusters stored at the leaf nodes

[figure: CF tree with branching factor B = 7 and leaf capacity L = 6 — the root and nonleaf nodes hold entries CF_i with child pointers; leaf nodes hold CF entries and are chained by prev/next pointers]

CURE (Clustering Using REpresentatives)
• CURE: proposed by Guha, Rastogi & Shim, 1998
– Stops the creation of a cluster hierarchy if a level consists of k clusters
– Uses multiple representative points to evaluate the distance between clusters; adjusts well to arbitrarily shaped clusters and avoids the single-link effect

Drawbacks of Distance-Based Methods
• Drawbacks of square-error-based clustering methods
– Consider only one point as representative of a cluster
– Good only for clusters that are convex-shaped and of similar size and density, and when k can be reasonably estimated

CURE: The Algorithm
• Draw a random sample s
• Partition the sample into p partitions of size s/p
• Partially cluster each partition into s/pq clusters
• Eliminate outliers
– By random sampling
– If a cluster grows too slowly, eliminate it
• Cluster the partial clusters
• Label the data on disk

Data Partitioning and Clustering [figure: s = 50, p = 2, s/p = 25, s/pq = 5]

CURE: Shrinking Representative Points
• Shrink the multiple representative points towards the gravity center by a fraction α
• Multiple representatives capture the shape of the cluster
[figure]

Clustering Categorical Data: ROCK
• ROCK: RObust Clustering using linKs, by S. Guha, R. Rastogi, and K. Shim (ICDE '99)
– Uses links to measure similarity/proximity
– Not distance-based
– Computational complexity: O(n² + n·m_m·m_a + n² log n), where m_a and m_m are the average and maximum numbers of neighbors
• Basic ideas:
– Similarity function and neighbors: let T1 = {1, 2, 3} and T2 = {3, 4, 5}; then Sim(T1, T2) = |T1 ∩ T2| / |T1 ∪ T2| = |{3}| / |{1, 2, 3, 4, 5}| = 1/5 = 0.2

CHAMELEON (Hierarchical Clustering Using Dynamic Modeling)
• CHAMELEON: by G. Karypis, E. H. Han, and V. Kumar '99
• Measures the similarity based on a dynamic model
– Two clusters are merged only if the interconnectivity and closeness (proximity) between the two clusters are high relative to the internal interconnectivity of the clusters and the closeness of items within the clusters
– CURE ignores information about the interconnectivity of the objects; ROCK ignores information about the closeness of two clusters
• A two-phase algorithm
1. Use a graph partitioning algorithm to cluster objects into a large number of relatively small sub-clusters
2. Use an agglomerative hierarchical clustering algorithm to find the genuine clusters by repeatedly combining these sub-clusters

Overall Framework of CHAMELEON
Data Set → Construct Sparse Graph → Partition the Graph → Merge Partitions → Final Clusters

Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary

Density-Based Clustering Methods
• Clustering based on density (a local cluster criterion), such as density-connected points
• Major features:
– Discover clusters of arbitrary shape
– Handle noise
– One scan
– Need density parameters as a termination condition
• Several interesting studies:
– DBSCAN: Ester et al. (KDD '96)
– OPTICS: Ankerst et al. (SIGMOD '99)
– DENCLUE: Hinneburg & Keim (KDD '98)
– CLIQUE: Agrawal et al. (SIGMOD '98)

Density Concepts
• Core object (CO): an object with at least MinPts objects within its Eps-neighborhood
• Directly density-reachable (DDR): x is a CO and y is in x's Eps-neighborhood
• Density-reachable: there exists a chain of DDR objects from x to y
• Density-based cluster: a maximal set of density-connected objects w.r.t. reachability

Density-Based Clustering: Background
• Two parameters:
– Eps: maximum radius of the neighbourhood
– MinPts: minimum number of points in an Eps-neighbourhood of that point
• N_Eps(p) = {q ∈ D | dist(p, q) ≤ Eps}
• Directly density-reachable: a point p is directly density-reachable from a point q w.r.t. Eps, MinPts if
– 1) p belongs to N_Eps(q)
– 2) core point condition: |N_Eps(q)| ≥ MinPts
[figure: MinPts = 5, Eps = 1 cm]

Density-Based Clustering: Background (II)
• Density-reachable:
– A point p is density-reachable from a point q w.r.t. Eps, MinPts if there is a chain of points p_1, …, p_n with p_1 = q and p_n = p such that p_{i+1} is directly density-reachable from p_i
• Density-connected:
– A point p is density-connected to a point q w.r.t. Eps, MinPts if there is a point o such that both p and q are density-reachable from o w.r.t. Eps and MinPts
[figure]

DBSCAN: Density-Based Spatial Clustering of Applications with Noise
• Relies on a density-based notion of cluster: a cluster is defined as a maximal set of density-connected points
• Discovers clusters of arbitrary shape in spatial databases with noise
[figure: core, border, and outlier points; Eps = 1 cm, MinPts = 5]

DBSCAN
• DBSCAN is a density-based algorithm
– Density = number of points within a specified radius (Eps)
– A point is a core point if it has more than a specified number of points (MinPts) within Eps; these are points in the interior of a cluster
– A border point has fewer than MinPts within Eps, but is in the neighborhood of a core point
– A noise point is any point that is neither a core point nor a border point

DBSCAN: Core, Border, and Noise Points [figure]

DBSCAN Algorithm
• Eliminate noise points
• Perform clustering on the remaining points
(a sketch follows below)
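A compact sketch of the whole procedure — mark core points, then grow clusters from them via density-reachability. The function name is mine, and a real implementation would use a spatial index instead of the full distance matrix:

```python
import numpy as np

def dbscan(X, eps, min_pts):
    """Minimal DBSCAN sketch. Returns labels; -1 marks noise points."""
    X = np.asarray(X, float)
    n = len(X)
    # Eps-neighborhoods: N_Eps(p) = {q : dist(p, q) <= Eps}
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    neighbors = [np.where(D[i] <= eps)[0] for i in range(n)]
    core = [len(nb) >= min_pts for nb in neighbors]
    labels = np.full(n, -1)
    cid = 0
    for p in range(n):
        if labels[p] != -1 or not core[p]:
            continue
        labels[p] = cid                        # start a new cluster at core point p
        frontier = list(neighbors[p])
        while frontier:
            q = frontier.pop()
            if labels[q] == -1:
                labels[q] = cid                # border or core point joins cluster
                if core[q]:
                    frontier.extend(neighbors[q])  # expand only through core points
        cid += 1
    return labels                              # points never reached stay -1 (noise)
```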

DBSCAN: Core, Border and Noise Points [figure: original points and point types — core, border, and noise — with Eps = 10, MinPts = 4]

When DBSCAN Works Well [figure: original points vs. resulting clusters]
• Resistant to noise
• Can handle clusters of different shapes and sizes

When DBSCAN Does NOT Work Well [figure: original points clustered with (MinPts = 4, Eps = 9.75) and (MinPts = 4, Eps = 9.92)]
• Varying densities
• High-dimensional data

DBSCAN: Determining Eps and MinPts
• The idea is that for points in a cluster, their k-th nearest neighbors are at roughly the same distance
• Noise points have their k-th nearest neighbor at a farther distance
• So, plot the sorted distance of every point to its k-th nearest neighbor
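A sketch of this k-dist plot (the function name is mine; assumes matplotlib is available). The "knee" of the curve is a common choice for Eps, with MinPts = k:

```python
import numpy as np
import matplotlib.pyplot as plt

def k_dist_plot(X, k=4):
    """Plot the sorted distance of every point to its k-th nearest neighbor."""
    X = np.asarray(X, float)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    kth = np.sort(D, axis=1)[:, k]      # column 0 is the point itself (distance 0)
    plt.plot(np.sort(kth))
    plt.xlabel("points sorted by k-dist")
    plt.ylabel(f"distance to {k}th nearest neighbor")
    plt.show()
```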

OPTICS: A Cluster-Ordering Method (1999)
• OPTICS: Ordering Points To Identify the Clustering Structure
– Ankerst, Breunig, Kriegel, and Sander (SIGMOD '99)
– Produces a special order of the database w.r.t. its density-based clustering structure
– This cluster ordering contains information equivalent to the density-based clusterings corresponding to a broad range of parameter settings
– Good for both automatic and interactive cluster analysis, including finding the intrinsic clustering structure
– Can be represented graphically or using visualization techniques

Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary

Cluster Validity
• For supervised classification we have a variety of measures to evaluate how good our model is
– Accuracy, precision, recall
• For cluster analysis, the analogous question is how to evaluate the "goodness" of the resulting clusters
• But "clusters are in the eye of the beholder"!
• Then why do we want to evaluate them?
– To avoid finding patterns in noise
– To compare clustering algorithms
– To compare two sets of clusters
– To compare two clusters

Clusters Found in Random Data [figure: random points clustered by DBSCAN, K-means, and complete link]

Different Aspects of Cluster Validation
1. Determining the clustering tendency of a set of data, i.e., distinguishing whether non-random structure actually exists in the data
2. Comparing the results of a cluster analysis to externally known results, e.g., to externally given class labels
3. Evaluating how well the results of a cluster analysis fit the data without reference to external information (use only the data)
4. Comparing the results of two different sets of cluster analyses to determine which is better
5. Determining the "correct" number of clusters
For 2, 3, and 4, we can further distinguish whether we want to evaluate the entire clustering or just individual clusters.

Measures of Cluster Validity
• Numerical measures that are applied to judge various aspects of cluster validity are classified into the following three types
– External index: used to measure the extent to which cluster labels match externally supplied class labels
• e.g., entropy
– Internal index: used to measure the goodness of a clustering structure without respect to external information
• e.g., sum of squared error (SSE)
– Relative index: used to compare two different clusterings or clusters
• Often an external or internal index is used for this function, e.g., SSE or entropy
• Sometimes these are referred to as criteria instead of indices
– However, sometimes criterion is the general strategy and index is the numerical measure that implements the criterion

Measuring Cluster Validity via Correlation
• Two matrices
– Proximity matrix
– "Incidence" matrix
• One row and one column for each data point
• An entry is 1 if the associated pair of points belongs to the same cluster
• An entry is 0 if the associated pair of points belongs to different clusters
• Compute the correlation between the two matrices
– Since the matrices are symmetric, only the correlation between n(n−1)/2 entries needs to be calculated
• High correlation indicates that points that belong to the same cluster are close to each other
• Not a good measure for some density- or contiguity-based clusters
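A sketch of this computation (the function name is mine). With a distance matrix as the proximity matrix, the correlation is negative for good clusterings, since same-cluster pairs should have small distances, matching the negative values on the next slide:

```python
import numpy as np

def incidence_proximity_correlation(X, labels):
    """Pearson correlation between the proximity (distance) and incidence
    matrices, using only the n(n-1)/2 upper-triangular entries."""
    X = np.asarray(X, float)
    labels = np.asarray(labels)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)     # proximity matrix
    inc = (labels[:, None] == labels[None, :]).astype(float)  # incidence matrix
    iu = np.triu_indices(len(X), k=1)       # matrices are symmetric: use upper half
    return np.corrcoef(D[iu], inc[iu])[0, 1]
```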

Measuring Cluster Validity via Correlation
• Correlation of the incidence and proximity matrices for the K-means clusterings of two data sets [figure]: Corr = −0.9235 vs. Corr = −0.5810

Using the Similarity Matrix for Cluster Validation
• Order the similarity matrix with respect to cluster labels and inspect visually [figure]

Using the Similarity Matrix for Cluster Validation
• Clusters in random data are not so crisp [figure: DBSCAN]

Using the Similarity Matrix for Cluster Validation
• Clusters in random data are not so crisp [figure: K-means]

Using the Similarity Matrix for Cluster Validation [figure: DBSCAN]

Internal Measures: SSE
• Clusters in more complicated figures aren't well separated
• Internal index: used to measure the goodness of a clustering structure without respect to external information
– SSE
• SSE is good for comparing two clusterings or two clusters (average SSE)
• Can also be used to estimate the number of clusters [figure: SSE vs. number of clusters]

Internal Measures: SSE
• SSE curve for a more complicated data set [figure: SSE of clusters found using K-means]

Framework for Cluster Validity
• Need a framework to interpret any measure
– For example, if our measure of evaluation has the value 10, is that good, fair, or poor?
• Statistics provide a framework for cluster validity
– The more "atypical" a clustering result is, the more likely it represents valid structure in the data
– Can compare the values of an index that result from random data or clusterings to those of a clustering result
• If the value of the index is unlikely, then the cluster results are valid
– These approaches are more complicated and harder to understand
• For comparing the results of two different sets of cluster analyses, a framework is less necessary
– However, there is the question of whether the difference between two index values is significant

Statistical Framework for SSE
• Example
– Compare an SSE of 0.005 against three clusters in random data
– The histogram shows the SSE of three clusters in 500 sets of random data points of size 100, distributed over the range 0.2–0.8 for the x and y values [figure]

Statistical Framework for Correlation
• Correlation of the incidence and proximity matrices for the K-means clusterings of the two data sets [figure]: Corr = −0.9235 vs. Corr = −0.5810

Internal Measures: Cohesion and Separation
• Cluster cohesion: measures how closely related the objects in a cluster are
– Example: SSE
• Cluster separation: measures how distinct or well-separated a cluster is from other clusters
• Example: squared error
– Cohesion is measured by the within-cluster sum of squares: WSS = Σ_i Σ_{x ∈ C_i} (x − m_i)²
– Separation is measured by the between-cluster sum of squares: BSS = Σ_i |C_i| (m − m_i)²
– where |C_i| is the size of cluster i, m_i its centroid, and m the overall mean

Internal Measures: Cohesion and Separation
• Example: SSE for the points 1, 2, 4, 5 on a line (overall mean m = 3); note BSS + WSS = constant
– K = 1 cluster: WSS = (1−3)² + (2−3)² + (4−3)² + (5−3)² = 10; BSS = 4 × (3−3)² = 0; total = 10
– K = 2 clusters {1, 2} and {4, 5} (centroids 1.5 and 4.5): WSS = (1−1.5)² + (2−1.5)² + (4−4.5)² + (5−4.5)² = 1; BSS = 2 × (3−1.5)² + 2 × (4.5−3)² = 9; total = 10
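A sketch that computes both terms and can be used to check the example above (the function name is mine):

```python
import numpy as np

def wss_bss(X, labels):
    """WSS (cohesion) and BSS (separation); WSS + BSS is constant for a data set."""
    X = np.asarray(X, float)
    labels = np.asarray(labels)
    m = X.mean(axis=0)                          # overall mean
    wss = bss = 0.0
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)                    # cluster centroid m_i
        wss += ((Xc - mc) ** 2).sum()           # within-cluster sum of squares
        bss += len(Xc) * ((mc - m) ** 2).sum()  # |C_i| * ||m_i - m||^2
    return wss, bss

print(wss_bss([[1], [2], [4], [5]], [0, 0, 1, 1]))  # (1.0, 9.0)
```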

Internal Measures: Cohesion and Separation
• A proximity-graph-based approach can also be used for cohesion and separation
– Cluster cohesion is the sum of the weights of all links within a cluster
– Cluster separation is the sum of the weights between nodes in the cluster and nodes outside the cluster
[figure: cohesion vs. separation]

Internal Measures: Silhouette Coefficient
• The silhouette coefficient combines ideas of both cohesion and separation, but for individual points, as well as clusters and clusterings
• For an individual point i
– Calculate a = average distance of i to the points in its cluster
– Calculate b = min(average distance of i to the points in another cluster)
– The silhouette coefficient for a point is then given by s = 1 − a/b if a < b (or s = b/a − 1 if a ≥ b, not the usual case)
– Typically between 0 and 1; the closer to 1 the better
• Can calculate the average silhouette width for a clustering
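A per-point sketch (the function name is mine; it assumes at least two clusters, and `s.mean()` gives the average silhouette width of the clustering):

```python
import numpy as np

def silhouette(X, labels):
    """Per-point silhouette: s = 1 - a/b if a < b, else b/a - 1."""
    X = np.asarray(X, float)
    labels = np.asarray(labels)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    s = np.zeros(len(X))
    for i in range(len(X)):
        same = labels == labels[i]
        same[i] = False
        a = D[i, same].mean() if same.any() else 0.0   # avg distance within own cluster
        b = min(D[i, labels == c].mean()               # min avg distance to another cluster
                for c in np.unique(labels) if c != labels[i])
        s[i] = 1 - a / b if a < b else b / a - 1
    return s
```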

External Measures of Cluster Validity: Entropy and Purity [figure/table]

Final Comment on Cluster Validity
"The validation of clustering structures is the most difficult and frustrating part of cluster analysis. Without a strong effort in this direction, cluster analysis will remain a black art accessible only to those true believers who have experience and great courage." — Jain and Dubes, Algorithms for Clustering Data

CS 590D: Data Mining — Prof. Chris Clifton — March 2, 2006: Clustering

Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary

Grid-Based Clustering Methods
• Use a multi-resolution grid data structure
• Several interesting methods
– STING (a STatistical INformation Grid approach) by Wang, Yang, and Muntz (1997)
– WaveCluster by Sheikholeslami, Chatterjee, and Zhang (VLDB '98): a multi-resolution clustering approach using the wavelet method
– CLIQUE: Agrawal et al. (SIGMOD '98)

STING: A Statistical Information Grid Approach
• Wang, Yang, and Muntz (VLDB '97)
• The spatial area is divided into rectangular cells
• There are several levels of cells corresponding to different levels of resolution

STING: A Statistical Information Grid Approach (2)
– Each cell at a high level is partitioned into a number of smaller cells at the next lower level
– Statistical info of each cell is calculated and stored beforehand and is used to answer queries
– Parameters of higher-level cells can be easily calculated from the parameters of lower-level cells
• count, mean, s (standard deviation), min, max
• type of distribution — normal, uniform, etc.
– Use a top-down approach to answer spatial data queries
– Start from a pre-selected layer — typically one with a small number of cells
– For each cell in the current level, compute the confidence interval

STING: A Statistical Information Grid Approach (3)
– Remove the irrelevant cells from further consideration
– When finished examining the current layer, proceed to the next lower level
– Repeat this process until the bottom layer is reached
– Advantages:
• Query-independent, easy to parallelize, incremental update
• O(K), where K is the number of grid cells at the lowest level
– Disadvantages:
• All cluster boundaries are either horizontal or vertical; no diagonal boundary is detected

WaveCluster (1998)
• Sheikholeslami, Chatterjee, and Zhang (VLDB '98)
• A multi-resolution clustering approach that applies a wavelet transform to the feature space
– A wavelet transform is a signal processing technique that decomposes a signal into different frequency sub-bands
• Both grid-based and density-based
• Input parameters:
– the number of grid cells for each dimension
– the wavelet, and the number of applications of the wavelet transform

What Is Wavelet? (1) [figure]

WaveCluster (1998)
• How to apply the wavelet transform to find clusters
– Summarize the data by imposing a multidimensional grid structure onto the data space
– These multidimensional spatial data objects are represented in an n-dimensional feature space
– Apply the wavelet transform on the feature space to find the dense regions in the feature space
– Apply the wavelet transform multiple times, which results in clusters at different scales, from fine to coarse

Wavelet Transform
• Decomposes a signal into different frequency sub-bands (can be applied to n-dimensional signals)
• Data are transformed so as to preserve the relative distance between objects at different levels of resolution
• Allows natural clusters to become more distinguishable

What Is Wavelet? (2) [figure]

Quantization [figure]

Transformation [figure]

WaveCluster (1998)
• Why is the wavelet transformation useful for clustering?
– Unsupervised clustering: it uses hat-shaped filters to emphasize regions where points cluster, while simultaneously suppressing weaker information on their boundaries
– Effective removal of outliers
– Multi-resolution
– Cost efficiency
• Major features:
– Complexity O(N)
– Detects arbitrarily shaped clusters at different scales
– Not sensitive to noise, not sensitive to input order
– Only applicable to low-dimensional data

CLIQUE (Clustering In QUEst)
• Agrawal, Gehrke, Gunopulos, and Raghavan (SIGMOD '98)
• Automatically identifies subspaces of a high-dimensional data space that allow better clustering than the original space
• CLIQUE can be considered both density-based and grid-based
– It partitions each dimension into the same number of equal-length intervals
– It partitions an m-dimensional data space into non-overlapping rectangular units
– A unit is dense if the fraction of total data points contained in the unit exceeds the input model parameter
– A cluster is a maximal set of connected dense units within a subspace

CLIQUE: The Major Steps
• Partition the data space and find the number of points that lie inside each cell of the partition (see the sketch below)
• Identify the subspaces that contain clusters, using the Apriori principle
• Identify clusters:
– Determine dense units in all subspaces of interest
– Determine connected dense units in all subspaces of interest
• Generate a minimal description for the clusters
– Determine the maximal regions that cover each cluster of connected dense units
– Determine a minimal cover for each cluster
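The first step, finding dense 1-D units, is easy to sketch (the function name is mine; the grid resolution and density threshold are the input parameters). Higher-dimensional dense units would then be generated Apriori-style by joining these:

```python
from collections import Counter
import numpy as np

def dense_units_1d(X, n_intervals, tau):
    """Partition each dimension into n_intervals equal-length intervals and
    keep the (dimension, interval) units whose fraction of the data exceeds
    the density threshold tau."""
    X = np.asarray(X, float)
    n, d = X.shape
    dense = []
    for dim in range(d):
        lo, hi = X[:, dim].min(), X[:, dim].max()
        # map each value to its interval index in [0, n_intervals - 1]
        bins = ((X[:, dim] - lo) / (hi - lo + 1e-12) * n_intervals).astype(int)
        bins = np.minimum(bins, n_intervals - 1)
        counts = Counter(bins.tolist())
        dense += [(dim, b) for b, cnt in counts.items() if cnt / n > tau]
    return dense  # candidate 2-D units are joined Apriori-style from these
```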

[figure: CLIQUE example — dense units found in the (age, salary) and (age, vacation) subspaces with threshold τ = 3 (salary in units of $10,000, vacation in weeks, age from 20 to 60), intersected to form a candidate dense region in (age, salary, vacation) space]

Strengths and Weaknesses of CLIQUE
• Strengths
– Automatically finds subspaces of the highest dimensionality such that high-density clusters exist in those subspaces
– Insensitive to the order of records in the input; does not presume some canonical data distribution
– Scales linearly with the size of the input and has good scalability as the number of dimensions in the data increases
• Weaknesses
– The accuracy of the clustering result may be degraded at the expense of the simplicity of the method

Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary

Model-Based Clustering Methods
• Attempt to optimize the fit between the data and some mathematical model
• Statistical and AI approaches
– Conceptual clustering
• A form of clustering in machine learning
• Produces a classification scheme for a set of unlabeled objects
• Finds a characteristic description for each concept (class)
– COBWEB (Fisher '87)
• A popular and simple method of incremental conceptual learning
• Creates a hierarchical clustering in the form of a classification tree
• Each node refers to a concept and contains a probabilistic description of that concept

COBWEB Clustering Method
[Figure: a classification tree]

More on Statistical-Based Clustering
• Limitations of COBWEB
  – The assumption that attributes are independent of each other is often too strong, because correlations may exist
  – Not suitable for clustering large databases: skewed trees and expensive probability distributions
• CLASSIT
  – An extension of COBWEB for incremental clustering of continuous data
  – Suffers from problems similar to COBWEB's
• AutoClass (Cheeseman and Stutz, 1996)
  – Uses Bayesian statistical analysis to estimate the number of clusters
  – Popular in industry
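AutoClass itself performs a Bayesian analysis over mixture models; as a hedged stand-in for "estimating the number of clusters statistically", the sketch below scores Gaussian mixtures with BIC and keeps the best-scoring model. The candidate range 1–6 and the synthetic data are assumptions, and BIC model selection is a simpler substitute for AutoClass's full Bayesian machinery.

```python
# A BIC-scored Gaussian mixture as a simple stand-in for statistical
# estimation of the cluster count (not AutoClass's actual method).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)),    # cluster 1
               rng.normal(5, 1, (200, 2))])   # cluster 2

models = [GaussianMixture(n_components=k, random_state=0).fit(X)
          for k in range(1, 7)]
best = min(models, key=lambda m: m.bic(X))    # lower BIC is better
print("estimated number of clusters:", best.n_components)
```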

Other Model-Based Clustering Methods
• Neural network approaches
  – Represent each cluster as an exemplar, acting as a "prototype" of the cluster
  – New objects are assigned to the cluster whose exemplar is the most similar, according to some distance measure
• Competitive learning
  – Involves a hierarchical architecture of several units (neurons)
  – Neurons compete in a "winner-takes-all" fashion for the object currently being presented

Self-Organizing Feature Maps (SOMs)
• Clustering is performed by having several units compete for the current object
• The unit whose weight vector is closest to the current object wins
• The winner and its neighbors learn by having their weights adjusted
• SOMs are believed to resemble processing that can occur in the brain
• Useful for visualizing high-dimensional data in 2- or 3-D space
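The competition-and-adjustment loop translates directly into code. Below is a minimal NumPy sketch of SOM training: each input picks a winning unit, and the winner's grid neighborhood moves toward the input with a decaying learning rate and neighborhood width. The grid size, schedules, and hyperparameters are illustrative choices, not canonical values.

```python
# A minimal 2-D SOM training sketch (hyperparameters are assumptions).
import numpy as np

def train_som(X, rows=8, cols=8, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    weights = rng.random((rows, cols, d))   # one weight vector per unit
    # Grid coordinates, used to measure neighborhood distances.
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    n_steps, t = epochs * len(X), 0
    for _ in range(epochs):
        for x in rng.permutation(X):
            # Competition: the unit closest to x wins.
            dists = np.linalg.norm(weights - x, axis=2)
            winner = np.unravel_index(np.argmin(dists), dists.shape)
            # Decay the learning rate and neighborhood width over time.
            frac = t / n_steps
            lr = lr0 * (1 - frac)
            sigma = sigma0 * (1 - frac) + 1e-3
            # Cooperation: the winner and its grid neighbors move to x.
            grid_d2 = ((grid - np.array(winner)) ** 2).sum(axis=-1)
            h = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
            t += 1
    return weights
```

Because neighboring grid units are pulled toward similar inputs, the trained map places similar objects on nearby units, which is what makes SOMs useful for 2-D visualization of high-dimensional data.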

Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary

What Is Outlier Discovery?
• What are outliers?
  – Objects that are considerably dissimilar from the remainder of the data
  – Example (sports): Michael Jordan, Wayne Gretzky, . . .
• Problem: find the top n outlier points
• Applications:
  – Credit card fraud detection
  – Telecom fraud detection
  – Customer segmentation
  – Medical analysis

Outlier Discovery: Statistical Approaches
• Assume a model of the underlying distribution that generates the data set (e.g., a normal distribution)
• Use discordancy tests, which depend on:
  – the data distribution
  – the distribution parameters (e.g., mean, variance)
  – the number of expected outliers
• Drawbacks:
  – Most tests are for a single attribute
  – In many cases, the data distribution may not be known
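Under an assumed normal model, the simplest discordancy check flags values far from the mean in standard-deviation units. The 3-sigma cutoff below is an illustrative choice; note that the test works on a single attribute, which is exactly the limitation noted above.

```python
# A minimal single-attribute discordancy check under an assumed
# normal model (the k = 3 cutoff is an illustrative choice).
import numpy as np

def zscore_outliers(x, k=3.0):
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return np.where(np.abs(z) > k)[0]   # indices of suspected outliers

# Usage: 100 normal values plus one injected extreme point.
data = np.concatenate([np.random.default_rng(1).normal(0, 1, 100), [8.0]])
print(zscore_outliers(data))            # flags the injected point
```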

CS 590 D: Data Mining
Prof. Chris Clifton
March 4, 2006
Clustering

Outlier Discovery: Distance-Based Approach
• Introduced to counter the main limitations of statistical methods:
  – We need multi-dimensional analysis without knowing the data distribution
• Distance-based outlier: a DB(p, D)-outlier is an object O in a dataset T such that at least a fraction p of the objects in T lie at a distance greater than D from O
• Algorithms for mining distance-based outliers:
  – Index-based algorithm
  – Nested-loop algorithm
  – Cell-based algorithm
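The DB(p, D) definition translates directly into a brute-force nested-loop test, sketched below. This is O(n²); the index-based and cell-based algorithms exist precisely to avoid that cost. The function name and default values of p and D are illustrative assumptions.

```python
# A brute-force nested-loop sketch of the DB(p, D)-outlier test
# (O(n^2); real implementations use index- or cell-based algorithms).
import numpy as np

def db_outliers(X, p=0.95, D=2.0):
    """Return indices of objects O such that at least a fraction p of
    the other objects lie at distance greater than D from O."""
    n = len(X)
    outliers = []
    for i in range(n):
        dists = np.linalg.norm(X - X[i], axis=1)
        far = np.count_nonzero(dists > D) / (n - 1)  # exclude O itself
        if far >= p:
            outliers.append(i)
    return outliers
```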

Outlier Discovery: Deviation-Based Approach
• Identifies outliers by examining the main characteristics of objects in a group
• Objects that "deviate" from this description are considered outliers
• Sequential exception technique
  – Simulates the way humans distinguish unusual objects from among a series of supposedly similar objects
• OLAP data cube technique
  – Uses data cubes to identify regions of anomalies in large multidimensional data

Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary

Problems and Challenges
• Considerable progress has been made in scalable clustering methods:
  – Partitioning: k-means, k-medoids, CLARANS
  – Hierarchical: BIRCH, CURE
  – Density-based: DBSCAN, CLIQUE, OPTICS, DENCLUE
  – Grid-based: STING, WaveCluster
  – Model-based: AutoClass, COBWEB
• Current clustering techniques do not address all the requirements adequately
• Constraint-based clustering analysis: constraints exist in the data space (e.g., bridges and highways) or in user queries

Constraint-Based Clustering Analysis
• Clustering analysis with fewer parameters but more user-desired constraints, e.g., an ATM allocation problem

Clustering With Obstacle Objects
[Figure: two panels comparing a clustering that does not take obstacles into account with one that does]

Summary
• Cluster analysis groups objects based on their similarity and has wide applications
• Measures of similarity can be computed for various types of data
• Clustering algorithms can be categorized into partitioning methods, hierarchical methods, density-based methods, grid-based methods, and model-based methods
• Outlier detection and analysis are very useful for fraud detection, etc., and can be performed by statistical, distance-based, or deviation-based approaches
• There are still many open research issues in cluster analysis, such as constraint-based clustering

References (1)
• R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan. Automatic subspace clustering of high dimensional data for data mining applications. SIGMOD'98.
• M. R. Anderberg. Cluster Analysis for Applications. Academic Press, 1973.
• M. Ankerst, M. Breunig, H.-P. Kriegel, and J. Sander. OPTICS: Ordering points to identify the clustering structure. SIGMOD'99.
• P. Arabie, L. J. Hubert, and G. De Soete. Clustering and Classification. World Scientific, 1996.
• M. Ester, H.-P. Kriegel, J. Sander, and X. Xu. A density-based algorithm for discovering clusters in large spatial databases. KDD'96.
• M. Ester, H.-P. Kriegel, and X. Xu. Knowledge discovery in large spatial databases: Focusing techniques for efficient class identification. SSD'95.
• D. Fisher. Knowledge acquisition via incremental conceptual clustering. Machine Learning, 2:139-172, 1987.
• D. Gibson, J. Kleinberg, and P. Raghavan. Clustering categorical data: An approach based on dynamical systems. VLDB'98.
• S. Guha, R. Rastogi, and K. Shim. CURE: An efficient clustering algorithm for large databases. SIGMOD'98.
• A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988.

References (2)
• L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: An Introduction to Cluster Analysis. John Wiley & Sons, 1990.
• E. Knorr and R. Ng. Algorithms for mining distance-based outliers in large datasets. VLDB'98.
• G. J. McLachlan and K. E. Basford. Mixture Models: Inference and Applications to Clustering. John Wiley & Sons, 1988.
• P. Michaud. Clustering techniques. Future Generation Computer Systems, 13, 1997.
• R. Ng and J. Han. Efficient and effective clustering methods for spatial data mining. VLDB'94.
• E. Schikuta. Grid clustering: An efficient hierarchical clustering method for very large data sets. Proc. 1996 Int. Conf. on Pattern Recognition, 101-105.
• G. Sheikholeslami, S. Chatterjee, and A. Zhang. WaveCluster: A multi-resolution clustering approach for very large spatial databases. VLDB'98.
• W. Wang, J. Yang, and R. Muntz. STING: A statistical information grid approach to spatial data mining. VLDB'97.
• T. Zhang, R. Ramakrishnan, and M. Livny. BIRCH: An efficient data clustering method for very large databases. SIGMOD'96.