
DATA WAREHOUSING AND DATA MINING S. Sudarshan Krithi Ramamritham IIT Bombay sudarsha@cse.iitb.ernet.in krithi@cse.iitb.ernet.in CSI'99
Course Overview z The course: what and how z 0. Introduction z I. Data Warehousing z II. Decision Support and OLAP z III. Data Mining z IV. Looking Ahead z Demos and Labs 2
0. Introduction z Data Warehousing, OLAP and data mining: what and why (now)? z Relation to OLTP z A case study z demos, labs 3
A producer wants to know…. Which are our lowest/highest margin customers? Who are my customers and what products are they buying? What is the most effective distribution channel? What product promotions have the biggest impact on revenue? Which customers are most likely to go to the competition? What impact will new products/services have on revenue and margins? 4
Data, Data everywhere yet. . . z I can’t find the data I need y data is scattered over the network y many versions, subtle differences z I can’t get the data I need y need an expert to get the data z I can’t understand the data I found y available data poorly documented z I can’t use the data I found y results are unexpected y data needs to be transformed from one form to other 5
What is a Data Warehouse? A single, complete and consistent store of data obtained from a variety of different sources made available to end users in a way they can understand and use in a business context. [Barry Devlin] 6
What are the users saying. . . z Data should be integrated across the enterprise z Summary data has a real value to the organization z Historical data holds the key to understanding data over time z What-if capabilities are required 7
What is Data Warehousing? Information Data A process of transforming data into information and making it available to users in a timely enough manner to make a difference [Forrester Research, April 1996] 8
Evolution z 60’s: Batch reports y hard to find and analyze information y inflexible and expensive, reprogram every new request z 70’s: Terminal-based DSS and EIS (executive information systems) y still inflexible, not integrated with desktop tools z 80’s: Desktop data access and analysis tools y query tools, spreadsheets, GUIs y easier to use, but only access operational databases z 90’s: Data warehousing with integrated OLAP engines and tools 9
Warehouses are Very Large Databases [Bar chart: percentage of respondents by warehouse size (5 GB, 5-9 GB, 10-19 GB, 20-49 GB, 50-99 GB, 100-249 GB, 250-499 GB, 500 GB-1 TB), initial vs. projected 2Q96. Source: META Group, Inc.] 10
Very Large Data Bases z Terabytes -- 10^12 bytes: Walmart -- 24 Terabytes z Petabytes -- 10^15 bytes: Geographic Information Systems z Exabytes -- 10^18 bytes: National Medical Records z Zettabytes -- 10^21 bytes: Weather images z Yottabytes -- 10^24 bytes: Intelligence Agency Videos 11
Data Warehousing -- It is a process z Technique for assembling and managing data from various sources for the purpose of answering business questions, thus making decisions that were not previously possible z A decision support database maintained separately from the organization’s operational database 12
Data Warehouse z A data warehouse is a ysubject-oriented yintegrated ytime-variant ynon-volatile collection of data that is used primarily in organizational decision making. -- Bill Inmon, Building the Data Warehouse, 1996 13
Explorers, Farmers and Tourists -- Tourists: Browse information harvested by farmers. Farmers: Harvest information from known access paths. Explorers: Seek out the unknown and previously unsuspected rewards hiding in the detailed data 14
Data Warehouse Architecture [Diagram: source systems (Relational Databases, ERP Systems, Purchased Data, Legacy Data) pass through Extraction and Cleansing and an Optimized Loader into the Data Warehouse Engine, described by a Metadata Repository and accessed by Analyze/Query tools] 15
Data Warehouse for Decision Support & OLAP z Putting Information technology to help the knowledge worker make faster and better decisions y. Which of my customers are most likely to go to the competition? y. What product promotions have the biggest impact on revenue? y. How did the share price of software companies correlate with profits over last 10 years? 16
Decision Support z Used to manage and control business z Data is historical or point-in-time z Optimized for inquiry rather than update z Use of the system is loosely defined and can be ad-hoc z Used by managers and end-users to understand the business and make judgements 17
Data Mining works with Warehouse Data z Data Warehousing provides the Enterprise with a memory z Data Mining provides the Enterprise with intelligence 18
We want to know. . . z Given a database of 100,000 names, which persons are the least likely to default on their credit cards? z Which types of transactions are likely to be fraudulent given the demographics and transactional history of a particular customer? z If I raise the price of my product by Rs. 2, what is the effect on my ROI? z If I offer only 2,500 airline miles as an incentive to purchase rather than 5,000, how many lost responses will result? z If I emphasize ease-of-use of the product as opposed to its technical capabilities, what will be the net effect on my revenues? z Which of my customers are likely to be the most loyal? Data Mining helps extract such information 19
Application Areas (Industry -- Application): Finance -- Credit Card Analysis; Insurance -- Claims, Fraud Analysis; Telecommunication -- Call record analysis; Transport -- Logistics management; Consumer goods -- Promotion analysis; Data Service providers -- Value added data; Utilities -- Power usage analysis 20
Data Mining in Use z The US Government uses Data Mining to track fraud z A Supermarket becomes an information broker z Basketball teams use it to track game strategy z Cross Selling z Warranty Claims Routing z Holding on to Good Customers z Weeding out Bad Customers 21
What makes data mining possible? z. Advances in the following areas are making data mining deployable: ydata warehousing ybetter and more data (i. e. , operational, behavioral, and demographic) ythe emergence of easily deployed data mining tools and ythe advent of new data mining techniques. • -- Gartner Group 22
Why Separate Data Warehouse? z Performance y Op dbs designed & tuned for known txs & workloads. y Complex OLAP queries would degrade perf. for op txs. y Special data organization, access & implementation methods needed for multidimensional views & queries. z Function y Missing data: Decision support requires historical data, which op dbs do not typically maintain. y Data consolidation: Decision support requires consolidation (aggregation, summarization) of data from many heterogeneous sources: op dbs, external sources. y Data quality: Different sources typically use inconsistent data representations, codes, and formats which have to be reconciled. 23
What are Operational Systems? z They are OLTP systems z Run mission critical applications z Need to work with stringent performance requirements for routine tasks z Used to run a business! 24
RDBMS used for OLTP z. Database Systems have been used traditionally for OLTP yclerical data processing tasks ydetailed, up to date data ystructured repetitive tasks yread/update a few records yisolation, recovery and integrity are critical 25
Operational Systems z Run the business in real time z Based on up-to-the-second data z Optimized to handle large numbers of simple read/write transactions z Optimized for fast response to predefined transactions z Used by people who deal with customers, products -- clerks, salespeople etc. z They are increasingly used by customers 26
Examples of Operational Data (Data -- Industry -- Usage -- Technology -- Size): Customer File -- All -- Track Customer Details -- Legacy applications, flat files, mainframes -- Small-medium; Account Balance -- Finance -- Control account activities -- Legacy applications, hierarchical databases, mainframe -- Large; Point-of-Sale data -- Retail -- Generate bills, manage stock -- ERP, Client/Server, relational databases -- Very Large; Call Record -- Telecommunications -- Billing -- Legacy application, hierarchical database, mainframe -- Very Large; Production Record -- Manufacturing -- Control Production Volumes -- ERP, relational databases, AS/400 -- Medium 27
So, what’s different? CSI'99
Application-Orientation vs. Subject-Orientation [Diagram: the application-oriented operational database is organized around applications such as Loans, Credit Card, Trust and Savings; the subject-oriented data warehouse is organized around subjects such as Customer, Vendor, Product and Activity] 29
OLTP vs. Data Warehouse z OLTP systems are tuned for known transactions and workloads while workload is not known a priori in a data warehouse z Special data organization, access methods and implementation methods are needed to support data warehouse queries (typically multidimensional queries) ye. g. , average amount spent on phone calls between 9 AM-5 PM in Pune during the month of December 30
OLTP vs Data Warehouse z OLTP y. Application Oriented y. Used to run business y. Detailed data y. Current up to date y. Isolated Data y. Repetitive access y. Clerical User z Warehouse (DSS) y. Subject Oriented y. Used to analyze business y. Summarized and refined y. Snapshot data y. Integrated Data y. Ad-hoc access y. Knowledge User (Manager) 31
OLTP vs Data Warehouse z OLTP y Performance Sensitive y Few Records accessed at a time (tens) y Read/Update Access y No data redundancy y Database Size 100 MB -100 GB z Data Warehouse y Performance relaxed y Large volumes accessed at a time(millions) y Mostly Read (Batch Update) y Redundancy present y Database Size 100 GB - few terabytes 32
OLTP vs Data Warehouse z OLTP y. Transaction throughput is the performance metric y. Thousands of users y. Managed in entirety z Data Warehouse y. Query throughput is the performance metric y. Hundreds of users y. Managed by subsets 33
To summarize. . . z OLTP Systems are used to “run” a business z The Data Warehouse helps to “optimize” the business 34
Why Now? z. Data is being produced z. ERP provides clean data z. The computing power is available z. The computing power is affordable z. The competitive pressures are strong z. Commercial products are available 35
Myths surrounding OLAP Servers and Data Marts z Data marts and OLAP servers are departmental solutions supporting a handful of users z Million dollar massively parallel hardware is needed to deliver fast time for complex queries z OLAP servers require massive and unwieldy indices z Complex OLAP queries clog the network with data z Data warehouses must be at least 100 GB to be effective – Source -- Arbor Software Home Page 36
Wal*Mart Case Study z. Founded by Sam Walton z. One of the largest supermarket chains in the US z. Wal*Mart: 2000+ retail stores z. SAM's Clubs: 100+ wholesale stores x. This case study is from Felipe Carino’s (NCR Teradata) presentation made at the Stanford Database Seminar 37
Old Retail Paradigm z Wal*Mart: y. Inventory Management y. Merchandise Accounts Payable y. Purchasing y. Supplier Promotions: National, Region, Store Level z Suppliers: y. Accept Orders y. Promote Products y. Provide special Incentives y. Monitor and Track The Incentives y. Bill and Collect Receivables y. Estimate Retailer Demands 38
New (Just-In-Time) Retail Paradigm z No more deals z Shelf-Pass Through (POS Application) y One Unit Price x. Suppliers paid once a week on ACTUAL items sold y Wal*Mart Manager x. Daily Inventory Restock x. Suppliers (sometimes Same Day) ship to Wal*Mart z Warehouse-Pass Through y Stock some Large Items x. Delivery may come from supplier y Distribution Center x. Supplier’s merchandise unloaded directly onto Wal*Mart Trucks 39
Wal*Mart System z NCR 5100M, 96 Nodes; 24 TB Raw Disk; 700-1000 Pentium CPUs z Number of Rows: > 5 Billion z Historical Data: 65 weeks (5 Quarters) z New Daily Volume: Current Apps: 75 Million; New Apps: 100 Million+ z Number of Users: Thousands z Number of Queries: 60,000 per week 40
Course Overview z 0. Introduction z I. Data Warehousing z II. Decision Support and OLAP z III. Data Mining z IV. Looking Ahead z Demos and Labs 41
I. Data Warehouses: Architecture, Design & Construction z DW Architecture z Loading, refreshing z Structuring/Modeling z DWs and Data Marts z Query Processing z demos, labs 42
Data Warehouse Architecture [Diagram: source systems (Relational Databases, ERP Systems, Purchased Data, Legacy Data) pass through Extraction and Cleansing and an Optimized Loader into the Data Warehouse Engine, described by a Metadata Repository and accessed by Analyze/Query tools] 43
Components of the Warehouse z. Data Extraction and Loading z. The Warehouse z. Analyze and Query -- OLAP Tools z. Metadata z. Data Mining tools 44
Loading the Warehouse Cleaning the data before it is loaded CSI'99
Source Data Operational/ Source Data Sequential Legacy Relational External z. Typically host based, legacy applications y. Customized applications, COBOL, 3 GL, 4 GL z. Point of Contact Devices y. POS, ATM, Call switches z. External Sources y. Nielsen’s, Acxiom, CMIE, Vendors, Partners 46
Data Quality - The Reality z. Tempting to think creating a data warehouse is simply extracting operational data and entering into a data warehouse z. Nothing could be farther from the truth z. Warehouse data comes from disparate questionable sources 47
Data Quality - The Reality z Legacy systems no longer documented z Outside sources with questionable quality procedures z Production systems with no built in integrity checks and no integration y. Operational systems are usually designed to solve a specific business problem and are rarely developed to a corporate plan x“And get it done quickly, we do not have time to worry about corporate standards. . . ” 48
Data Integration Across Sources [Diagram: Savings, Loans, Trust and Credit card systems illustrating typical integration problems -- same data under different names, different data under the same name, data found in one source and nowhere else, and different keys for the same data] 49
Data Transformation Example: the same information is represented differently in each source application and must be mapped to a single form in the Data Warehouse. Encoding: appl A -- m, f; appl B -- 1, 0; appl C -- x, y; appl D -- male, female. Unit (pipeline length): appl A -- cm; appl B -- in; appl C -- feet; appl D -- yds. Field name (balance): appl A -- balance; appl B -- bal; appl C -- currbal; appl D -- balcurr 50
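As a rough illustration of the kind of mapping this slide describes, the transformation step might normalize the encoding and the field name from each source into one warehouse representation. A minimal SQL sketch; the staging and warehouse table names and columns (stg_appl_a, stg_appl_b, dw_customer) are hypothetical, not from the tutorial:

-- appl A encodes gender as m/f and calls the field "balance";
-- appl B encodes it as 1/0 and calls the field "bal".
INSERT INTO dw_customer (cust_id, gender, balance)
SELECT cust_id,
       CASE m_f WHEN 'm' THEN 'male' WHEN 'f' THEN 'female' END,   -- appl A encoding
       balance                                                      -- appl A field name
FROM   stg_appl_a
UNION ALL
SELECT cust_id,
       CASE one_zero WHEN 1 THEN 'male' WHEN 0 THEN 'female' END,   -- appl B encoding
       bal                                                          -- appl B field name
FROM   stg_appl_b;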
Data Integrity Problems z Same person, different spellings y. Agarwal, Agrawal, Aggarwal etc. . . z Multiple ways to denote company name y. Persistent Systems, PSPL, Persistent Pvt. LTD. z Use of different names ymumbai, bombay z Different account numbers generated by different applications for the same customer z Required fields left blank z Invalid product codes collected at point of sale ymanual entry leads to mistakes y“in case of a problem use 9999999” 51
Data Transformation Terms z. Extracting z. Conditioning z. Scrubbing z. Merging z. Householding z. Enrichment z. Scoring z. Loading z. Validating z. Delta Updating 52
Data Transformation Terms z Extracting y. Capture of data from operational source in “as is” status y. Sources for data generally in legacy mainframes in VSAM, IMS, IDMS, DB2; more data today in relational databases on Unix z Conditioning y. The conversion of data types from the source to the target data store (warehouse) -- always a relational database 53
Data Transformation Terms z. Householding y. Identifying all members of a household (living at the same address) y. Ensures only one mail is sent to a household y. Can result in substantial savings: 1 lakh catalogues at Rs. 50 each costs Rs. 50 lakhs. A 2% savings would save Rs. 1 lakh. 54
Data Transformation Terms z. Enrichment y. Bring data from external sources to augment/enrich operational data. Data sources include Dunn and Bradstreet, A. C. Nielsen, CMIE, IMRA etc. . . z. Scoring ycomputation of a probability of an event. e. g. . . , chance that a customer will defect to AT&T from MCI, chance that a customer is likely to buy a new product 55
Loads z. After extracting, scrubbing, cleaning, validating etc. need to load the data into the warehouse z. Issues y huge volumes of data to be loaded y small time window available when warehouse can be taken off line (usually nights) y when to build index and summary tables y allow system administrators to monitor, cancel, resume, change load rates y Recover gracefully -- restart after failure from where you were and without loss of data integrity 56
Load Techniques z. Use SQL to append or insert new data yrecord at a time interface ywill lead to random disk I/O’s z. Use batch load utility 57
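A minimal sketch of the two load paths, against a hypothetical sales_fact table. The record-at-a-time path issues one SQL statement (and random I/O) per row; the batch path hands a flat extract file to a bulk loader. PostgreSQL's COPY is shown only as one concrete example of a bulk interface; Oracle SQL*Loader, Red Brick's TMU and similar utilities play the same role.

-- Record-at-a-time append: simple, but one statement and random I/O per row
INSERT INTO sales_fact (sale_date, custno, prodno, amount)
VALUES (DATE '1999-03-01', 1042, 17, 250.00);

-- Batch load (PostgreSQL-style COPY shown as an illustration):
-- reads the whole extract file in one bulk operation
COPY sales_fact (sale_date, custno, prodno, amount)
FROM '/extracts/sales_19990301.csv' WITH (FORMAT csv);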
Load Taxonomy z. Incremental versus Full loads z. Online versus Offline loads 58
Refresh z. Propagate updates on source data to the warehouse z. Issues: ywhen to refresh yhow to refresh -- refresh techniques 59
When to Refresh? z periodically (e. g. , every night, every week) or after significant events z on every update: not warranted unless warehouse data require current data (up to the minute stock quotes) z refresh policy set by administrator based on user needs and traffic z possibly different policies for different sources 60
Refresh Techniques z. Full Extract from base tables yread entire source table: too expensive ymaybe the only choice for legacy systems 61
How To Detect Changes z. Create a snapshot log table to record ids of updated rows of source data and timestamp z. Detect changes by: y. Defining after row triggers to update snapshot log when source table changes y. Using regular transaction log to detect changes to source data 62
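A sketch of the snapshot-log approach with hypothetical names (an operational account table with key acct_no); Oracle-style trigger syntax is assumed purely for concreteness. The after-row trigger records the key and timestamp of every changed row, and the refresh job reads only those rows.

-- Snapshot log: one row per changed source row
CREATE TABLE account_snapshot_log (
  acct_no     INTEGER,
  change_time DATE
);

-- After-row trigger on the operational table
CREATE OR REPLACE TRIGGER account_change_log
AFTER INSERT OR UPDATE ON account
FOR EACH ROW
BEGIN
  INSERT INTO account_snapshot_log VALUES (:NEW.acct_no, SYSDATE);
END;

-- Refresh job: pull only rows changed since the last refresh
SELECT a.* FROM account a
WHERE a.acct_no IN (SELECT acct_no FROM account_snapshot_log
                    WHERE change_time > DATE '1999-03-01');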
Data Extraction and Cleansing z. Extract data from existing operational and legacy data z. Issues: y. Sources of data for the warehouse y. Data quality at the sources y. Merging different data sources y. Data Transformation y. How to propagate updates (on the sources) to the warehouse y. Terabytes of data to be loaded 63
Scrubbing Data z Sophisticated transformation tools. z Used for cleaning the quality of data z Clean data is vital for the success of the warehouse z Example y. Seshadri, Sheshadri, Seshadri S. , Srinivasan Seshadri, etc. are the same person 64
Scrubbing Tools z. Apertus -- Enterprise/Integrator z. Vality -- IPE z. Postal Soft 65
Structuring/Modeling Issues CSI'99
Data -- Heart of the Data Warehouse z. Heart of the data warehouse is the data itself! z. Single version of the truth z. Corporate memory z. Data is organized in a way that represents business -- subject orientation 67
Data Warehouse Structure z. Subject Orientation -- customer, product, policy, account etc. . . A subject may be implemented as a set of related tables. E. g. , customer may be five tables 68
Data Warehouse Structure (time is part of the key of each table): ybase customer (1985-87) xcustid, from date, to date, name, phone, dob ybase customer (1988-90) xcustid, from date, to date, name, credit rating, employer ycustomer activity (1986-89) -- monthly summary ycustomer activity detail (1987-89) xcustid, activity date, amount, clerk id, order no ycustomer activity detail (1990-91) xcustid, activity date, amount, line item no, order no 69
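A minimal DDL sketch of one of the tables above (column types are illustrative assumptions), showing how the from/to dates make time part of the key: each row describes a customer over an interval, so the primary key combines the customer id with the interval start.

CREATE TABLE base_customer_1988_90 (
  custid        INTEGER,
  from_date     DATE,
  to_date       DATE,
  name          VARCHAR(60),
  credit_rating CHAR(2),
  employer      VARCHAR(60),
  PRIMARY KEY (custid, from_date)   -- time is part of the key
);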
Data Granularity in Warehouse z. Summarized data stored yreduce storage costs yreduce cpu usage yincreases performance since smaller number of records to be processed ydesign around traditional high level reporting needs ytradeoff with volume of data to be stored and detailed usage of data 70
Granularity in Warehouse z. Can not answer some questions with summarized data y. Did Anand call Seshadri last month? Not possible to answer if total duration of calls by Anand over a month is only maintained and individual call details are not. z. Detailed data too voluminous 71
Granularity in Warehouse z. Tradeoff is to have dual level of granularity y. Store summary data on disks x 95% of DSS processing done against this data y. Store detail on tapes x 5% of DSS processing against this data 72
Vertical Partitioning: split the account table (Acct. No, Name, Balance, Date Opened, Interest Rate, Address) into a frequently accessed table (Acct. No, Balance) and a rarely accessed table (Acct. No, Name, Date Opened, Interest Rate, Address). The frequently accessed table is smaller and so needs less I/O 73
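A DDL sketch of that split (types and table names are illustrative): the hot columns go into a narrow table that is cheap to scan, and the full row is recovered by a join on the account number only when it is actually needed.

-- Frequently accessed, narrow table
CREATE TABLE account_hot (
  acct_no  INTEGER PRIMARY KEY,
  balance  DECIMAL(12,2)
);

-- Rarely accessed attributes
CREATE TABLE account_cold (
  acct_no       INTEGER PRIMARY KEY,
  name          VARCHAR(60),
  date_opened   DATE,
  interest_rate DECIMAL(5,2),
  address       VARCHAR(200)
);

-- Full row, reassembled only when required
SELECT h.acct_no, h.balance, c.name, c.address
FROM   account_hot h JOIN account_cold c ON h.acct_no = c.acct_no;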
Derived Data z. Introduction of derived (calculated data) may often help z. Have seen this in the context of dual levels of granularity z. Can keep auxiliary views and indexes to speed up query processing 74
Schema Design z. Database organization ymust look like business ymust be recognizable by business user yapproachable by business user y. Must be simple z. Schema Types y. Star Schema y. Fact Constellation Schema y. Snowflake schema 75
Dimension Tables z. Dimension tables y. Define business in terms already familiar to users y. Wide rows with lots of descriptive text y. Small tables (about a million rows) y. Joined to fact table by a foreign key yheavily indexed ytypical dimensions xtime periods, geographic region (markets, cities), products, customers, salesperson, etc. 76
Fact Table z. Central table ymostly raw numeric items ynarrow rows, a few columns at most ylarge number of rows (millions to a billion) y. Access via dimensions 77
Star Schema z A single fact table and, for each dimension, one dimension table z Does not capture hierarchies directly [Diagram: fact table (date, custno, prodno, cityname, . . . ) joined to the Time, Cust, Prod and City dimension tables] 78
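A minimal star-schema DDL sketch matching the diagram; table and column names are illustrative, not taken from the tutorial. Each fact row carries one foreign key per dimension, and all descriptive attributes (including the region of a city) stay denormalized in the dimension tables.

CREATE TABLE time_dim (date_key INTEGER PRIMARY KEY, cal_date DATE, month INTEGER, year INTEGER);
CREATE TABLE cust_dim (custno   INTEGER PRIMARY KEY, name VARCHAR(60), segment VARCHAR(20));
CREATE TABLE prod_dim (prodno   INTEGER PRIMARY KEY, prodname VARCHAR(60), category VARCHAR(30));
CREATE TABLE city_dim (cityname VARCHAR(40) PRIMARY KEY, state VARCHAR(40), region VARCHAR(20));

CREATE TABLE sales_fact (
  date_key INTEGER     REFERENCES time_dim,
  custno   INTEGER     REFERENCES cust_dim,
  prodno   INTEGER     REFERENCES prod_dim,
  cityname VARCHAR(40) REFERENCES city_dim,
  dollars  DECIMAL(12,2),
  units    INTEGER
);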
Snowflake schema z Represents the dimensional hierarchy directly by normalizing the dimension tables z Easy to maintain and saves storage [Diagram: fact table (date, custno, prodno, cityname, . . . ) joined to Time, Cust, Prod and City dimension tables, with City further normalized into a Region table] 79
Fact Constellation z. Fact Constellation y. Multiple fact tables that share many dimension tables y. Booking and Checkout may share many dimension tables in the hotel industry [Diagram: Booking and Checkout fact tables sharing the Hotels, Travel Agents, Customer, Promotion and Room Type dimensions] 80
De-normalization z. Normalization in a data warehouse may lead to lots of small tables z. Can lead to excessive I/O’s since many tables have to be accessed z. De-normalization is the answer especially since updates are rare 81
Creating Arrays z Many times each occurrence of a sequence of data is in a different physical location z Beneficial to collect all occurrences together and store as an array in a single row z Makes sense only if there is a stable number of occurrences which are accessed together z In a data warehouse, such situations arise naturally due to time based orientation ycan create an array by month 82
Selective Redundancy z. Description of an item can be stored redundantly with order table -- most often item description is also accessed with order table z. Updates have to be careful 83
Partitioning z Breaking data into several physical units that can be handled separately z Not a question of whether to do it in data warehouses but how to do it z Granularity and partitioning are key to effective implementation of a warehouse 84
Why Partition? z. Flexibility in managing data z. Smaller physical units allow yeasy restructuring yfree indexing ysequential scans if needed yeasy reorganization yeasy recovery yeasy monitoring 85
Criterion for Partitioning z. Typically partitioned by ydate yline of business ygeography yorganizational unit yany combination of above 86
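At the DBMS level, partitioning a fact table by date might look like the following sketch (Oracle-style range-partitioning DDL is assumed; the calls_fact table and its columns are hypothetical). The next slide discusses whether to do this at the application or the DBMS level.

CREATE TABLE calls_fact (
  call_date DATE,
  custno    NUMBER,
  duration  NUMBER,
  charge    NUMBER(10,2)
)
PARTITION BY RANGE (call_date) (
  PARTITION calls_1998   VALUES LESS THAN (TO_DATE('1999-01-01','YYYY-MM-DD')),
  PARTITION calls_1999q1 VALUES LESS THAN (TO_DATE('1999-04-01','YYYY-MM-DD')),
  PARTITION calls_1999q2 VALUES LESS THAN (TO_DATE('1999-07-01','YYYY-MM-DD'))
);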
Where to Partition? z. Application level or DBMS level z. Makes sense to partition at application level y. Allows different definition for each year x. Important since warehouse spans many years and as business evolves definition changes y. Allows data to be moved between processing complexes easily 87
Data Warehouse vs. Data Marts What comes first CSI'99
From the Data Warehouse to Data Marts [Diagram: pyramid of data -- the detailed, normalized, organizationally structured data warehouse at the base (more data), departmentally structured data marts above it, and individually structured data with history at the top (less data, more information)] 89
Data Warehouse and Data Marts [Diagram: departmentally structured, lightly summarized data marts (accessed via OLAP) are built from the organizationally structured data warehouse, which holds the atomic, detailed data] 90
Characteristics of the Departmental Data Mart z OLAP z Small z Flexible z Customized by Department z Source is departmentally structured data warehouse 91
Techniques for Creating a Departmental Data Mart (e.g., Sales, Finance, Mktg.) z. OLAP z. Subset z. Summarized z. Superset z. Indexed z. Arrayed 92
Data Mart Centric [Diagram: data sources feed individual data marts, which are then integrated into a data warehouse] 93
Problems with Data Mart Centric Solution If you end up creating multiple warehouses, integrating them is a problem 94
True Warehouse [Diagram: data sources feed a single data warehouse, from which the data marts are derived] 95
Query Processing z Indexing z Pre computed views/aggregates z SQL extensions 96
Indexing Techniques z. Exploiting indexes to reduce scanning of data is of crucial importance z. Bitmap Indexes z. Join Indexes z. Other Issues y. Text indexing y. Parallelizing and sequencing of index builds and incremental updates 97
Indexing Techniques z. Bitmap index: y. A collection of bitmaps -- one for each distinct value of the column y. Each bitmap has N bits where N is the number of rows in the table y. A bit corresponding to a value v for a row r is set if and only if r has the value v for the indexed attribute 98
BitMap Indexes z An alternative representation of the RID-list z Especially advantageous for low-cardinality domains z Represent each row of a table by a bit and the table as a bit vector z There is a distinct bit vector Bv for each value v of the domain z Example: the attribute sex has values M and F. A table of 100 million people needs 2 lists of 100 million bits 99
Bitmap Index: for the customer table below (gender, vote), the gender = ‘F’ bitmap, the vote = ‘Y’ bitmap, and their AND are: (M, Y) -> 0, 1, 0; (F, Y) -> 1, 1, 1; (F, N) -> 1, 0, 0; (M, N) -> 0, 0, 0; (F, Y) -> 1, 1, 1; (F, N) -> 1, 0, 0. Query: select * from customer where gender = ‘F’ and vote = ‘Y’ 100
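Where the DBMS supports bitmap indexes (Oracle's syntax is shown as one example, not necessarily the product the slide had in mind), the bitmaps above would be created and used roughly as follows; the optimizer answers the query by ANDing the two bit vectors before touching the table.

CREATE BITMAP INDEX customer_gender_bix ON customer (gender);
CREATE BITMAP INDEX customer_vote_bix   ON customer (vote);

-- Answered by ANDing the gender = 'F' bitmap with the vote = 'Y' bitmap
SELECT * FROM customer WHERE gender = 'F' AND vote = 'Y';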
Bit Map Index [Diagram: base Customers table with a Region index and a Rating index; the query “Customers where Region = W and Rating = M” is answered by ANDing the Region = W bitmap with the Rating = M bitmap] 101
BitMap Indexes z Comparison, join and aggregation operations are reduced to bit arithmetic with dramatic improvement in processing time z Significant reduction in space and I/O (30:1) z Adapted for higher cardinality domains as well z Compression (e.g., run-length encoding) exploited z Products that support bitmaps: Model 204, TargetIndex (Red Brick), IQ (Sybase), Oracle 7.3 102
Join Indexes z Pre-computed joins z A join index between a fact table and a dimension table correlates a dimension tuple with the fact tuples that have the same value on the common dimensional attribute ye. g. , a join index on city dimension of calls fact table ycorrelates for each city the calls (in the calls table) from that city 103
Join Indexes z. Join indexes can also span multiple dimension tables ye. g. , a join index on city and time dimension of calls fact table 104
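One concrete form of this idea is a bitmap join index in the style later offered by Oracle (shown purely as an illustration, not as the mechanism the slides describe): the index on the fact table is keyed by a dimension attribute, so the fact-dimension join is effectively pre-computed. The calls/city_dim tables and keys are hypothetical.

-- For each city name, a bitmap over the rows of the calls fact table
CREATE BITMAP INDEX calls_city_bjix
ON calls (city_dim.cityname)
FROM calls, city_dim
WHERE calls.city_key = city_dim.city_key;

-- A query restricting on city can now use the pre-joined bitmaps
SELECT SUM(charge)
FROM   calls, city_dim
WHERE  calls.city_key = city_dim.city_key AND city_dim.cityname = 'Pune';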
Star Join Processing z. Use join indexes to join the dimension and fact tables one at a time [Diagram: Calls joined with Time gives C+T, then with Location gives C+T+L, then with Plan gives C+T+L+P] 105
Optimized Star Join Processing [Diagram: apply selections to the Time, Location and Plan dimensions first, form the virtual cross product of T, L and P, and then join it with the Calls fact table] 106
Bitmapped Join Processing [Diagram: the bitmaps of Calls rows obtained from the Time, Location and Plan join indexes (e.g., 1 0 1, 0 0 1, 1 1 0) are ANDed to find the qualifying fact rows] 107
Intelligent Scan z. Piggyback multiple scans of a relation (Redbrick) ypiggybacking also done if second scan starts a little while after the first scan 108
Parallel Query Processing z. Three forms of parallelism y. Independent y. Pipelined y. Partitioned and “partition and replicate” z. Deterrents to parallelism ystartup ycommunication 109
Parallel Query Processing z Partitioned Data y. Parallel scans y. Yields I/O parallelism z Parallel algorithms for relational operators y. Joins, Aggregates, Sort z Parallel Utilities y. Load, Archive, Update, Parse, Checkpoint, Recovery z Parallel Query Optimization 110
Pre-computed Aggregates z. Keep aggregated data for efficiency (pre-computed queries) z. Questions y. Which aggregates to compute? y. How to update aggregates? y. How to use pre-computed aggregates in queries? 111
Pre-computed Aggregates z. Aggregated table can be maintained by the ywarehouse server ymiddle tier yclient applications z. Pre-computed aggregates -- special case of materialized views -- same questions and issues remain 112
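A sketch of a pre-computed aggregate kept as a materialized view, using Oracle-style syntax and the hypothetical sales_fact/time_dim tables from the star-schema sketch earlier; with query rewrite enabled, the server can answer matching aggregate queries from the summary instead of the detail.

CREATE MATERIALIZED VIEW monthly_sales
  ENABLE QUERY REWRITE
AS
SELECT t.year, t.month, f.prodno,
       SUM(f.dollars) AS dollars,
       COUNT(*)       AS cnt
FROM   sales_fact f, time_dim t
WHERE  f.date_key = t.date_key
GROUP  BY t.year, t.month, f.prodno;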
SQL Extensions z. Extended family of aggregate functions yrank (top 10 customers) ypercentile (top 30% of customers) ymedian, mode y. Object Relational Systems allow addition of new aggregate functions 113
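With the window functions that later became standard SQL (and were not widely available in 1999 engines), the first two bullets can be written directly. A hedged sketch against a hypothetical customer_sales summary table:

-- Top 10 customers by revenue
SELECT custno, revenue
FROM  (SELECT custno, revenue,
              RANK() OVER (ORDER BY revenue DESC) AS rnk
       FROM customer_sales) ranked
WHERE rnk <= 10;

-- Top 30% of customers, via deciles
SELECT custno, revenue
FROM  (SELECT custno, revenue,
              NTILE(10) OVER (ORDER BY revenue DESC) AS decile
       FROM customer_sales) tiled
WHERE decile <= 3;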
SQL Extensions z. Reporting features yrunning total, cumulative totals z. Cube operator ygroup by on all subsets of a set of attributes (month, city) yredundant scan and sorting of data can be avoided 114
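The cube operator and running totals also later acquired standard syntax; a sketch, again over the hypothetical sales_fact star from earlier:

-- Group by all subsets of (month, city): (month, city), (month), (city), ()
SELECT t.month, f.cityname, SUM(f.dollars) AS dollars
FROM   sales_fact f, time_dim t
WHERE  f.date_key = t.date_key
GROUP  BY CUBE (t.month, f.cityname);

-- Running (cumulative) total of monthly sales
SELECT t.month, SUM(f.dollars) AS dollars,
       SUM(SUM(f.dollars)) OVER (ORDER BY t.month) AS running_dollars
FROM   sales_fact f, time_dim t
WHERE  f.date_key = t.date_key
GROUP  BY t.month;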
Red Brick has Extended set of Aggregates z Select month, dollars, cume(dollars) as run_dollars, weight, cume(weight) as run_weights from sales, market, product, period t where year = 1993 and product like ‘Columbian%’ and city like ‘San Fr%’ order by t.perkey 115
RISQL (Red Brick Systems) Extensions z Aggregates y. CUME y. MOVINGAVG y. MOVINGSUM y. RANK y. TERTILE y. RATIOTOREPORT z Calculating Row Subtotals y. BREAK BY z Sophisticated Date Time Support y. DATEDIFF z Using SubQueries in calculations 116
Using SubQueries in Calculations select product, dollars as jun97_sales, (select sum(s1.dollars) from market m1, product p1, period t1, sales s1 where p1.product = product and t1.year = period.year and m1.city = market.city) as total97_sales, 100 * dollars / (select sum(s1.dollars) from market m1, product p1, period t1, sales s1 where p1.product = product and t1.year = period.year and m1.city = market.city) as percent_of_yr from market, product, period, sales where year = 1997 and month = ‘June’ and city like ‘Ahmed%’ order by product; 117
Course Overview z The course: what and how z 0. Introduction z I. Data Warehousing z II. Decision Support and OLAP z III. Data Mining z IV. Looking Ahead z Demos and Labs 118
II. On-Line Analytical Processing (OLAP) Making Decision Support Possible CSI'99
Limitations of SQL “A Freshman in Business needs a Ph. D. in SQL” -- Ralph Kimball 120
Typical OLAP Queries z Write a multi-table join to compare sales for each product line YTD this year vs. last year. z Repeat the above process to find the top 5 product contributors to margin. z Repeat the above process to find the sales of a product line to new vs. existing customers. z Repeat the above process to find the customers that have had negative sales growth. 121
What Is OLAP? z Online Analytical Processing - coined by E. F. Codd in a 1994 paper contracted by Arbor Software* z Generally synonymous with earlier terms such as Decision Support, Business Intelligence, Executive Information System z OLAP = Multidimensional Database z MOLAP: Multidimensional OLAP (Arbor Essbase, Oracle Express) z ROLAP: Relational OLAP (Informix MetaCube, Microstrategy DSS Agent) * Reference: http://www.arborsoft.com/essbase/wht_ppr/coddTOC.html 122
The OLAP Market z Rapid growth in the enterprise market y 1995: $700 Million y 1997: $2.1 Billion z Significant consolidation activity among major DBMS vendors y 10/94: Sybase acquires ExpressWay y 7/95: Oracle acquires Express y 11/95: Informix acquires Metacube y 1/97: Arbor partners up with IBM y 10/96: Microsoft acquires Panorama z Result: OLAP shifted from small vertical niche to mainstream DBMS category 123
Strengths of OLAP z It is a powerful visualization paradigm z It provides fast, interactive response times z It is good for analyzing time series z It can be useful to find some clusters and outliers z Many vendors offer OLAP tools 124
OLAP Is FASMI z. Fast z. Analysis z. Shared z. Multidimensional z. Information Nigel Pendse, Richard Creeth - The OLAP Report 125
Multi-dimensional Data z “Hey…I sold $100 M worth of goods” z Dimensions: Product, Region, Time z Hierarchical summarization paths: Product (Industry -> Category -> Product), Region (Country -> Region -> City -> Office), Time (Year -> Quarter -> Month/Week -> Day) [Diagram: a 3-D cube with products (Juice, Cola, Milk, Cream, Toothpaste, Soap), regions (N, S, W) and dates on its axes] 126
Data Cube Lattice z Cube lattice: ABC at the top; AB, AC, BC; A, B, C; none at the bottom z Can materialize some groupbys, compute others on demand z Question: which groupbys to materialize? z Question: what indices to create z Question: how to organize data (chunks, etc) 127
Visualizing Neighbors is simpler 128
A Visual Operation: Pivot (Rotate) [Diagram: a cube with Product (Juice, Cola, Milk, Cream), Region (NY, LA, SF) and Date (3/1-3/4) dimensions and sales values in the cells; pivoting rotates which dimensions appear on the rows and columns] 129
“Slicing and Dicing” [Diagram: a cube with Product (Household, Telecomm, Video, Audio), Regions (Europe, Far East, India) and Sales Channel (Retail, Direct, Special Sales) dimensions; the Telecomm slice is highlighted] 130
Roll-up and Drill Down z Roll Up: move to a higher level of aggregation z Drill-Down: move to low-level details z Example hierarchy: Sales Channel -> Region -> Country -> State -> Location Address -> Sales Representative 131
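In SQL terms, rolling up along such a hierarchy corresponds to GROUP BY ROLLUP (standardized after this tutorial's time frame); a short sketch over a hypothetical sales table that carries the hierarchy levels as columns:

-- Subtotals at every level: (channel, region, country), (channel, region), (channel), grand total
SELECT channel, region, country, SUM(dollars) AS dollars
FROM   sales
GROUP  BY ROLLUP (channel, region, country);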
Nature of OLAP Analysis z Aggregation -- (total sales, percent-to-total) z Comparison -- Budget vs. Expenses z Ranking -- Top 10, quartile analysis z Access to detailed and aggregate data z Complex criteria specification z Visualization 132
Organizationally Structured Data z Different departments (e.g., marketing, sales, finance, manufacturing) look at the same detailed data in different ways. Without the detailed, organizationally structured data as a foundation, there is no reconcilability of data 133
Multidimensional Spreadsheets z Analysts need spreadsheets that support ypivot tables (cross-tabs) ydrill-down and roll-up yslice and dice ysort yselections yderived attributes z Popular in retail domain 134
OLAP - Data Cube z Idea: analysts need to group data in many different ways yeg. Sales(region, product, prodtype, prodstyle, date, saleamount) ysaleamount is a measure attribute, rest are dimension attributes ygroupby every subset of the other attributes xmaterialize (precompute and store) groupbys to give online response y. Also: hierarchies on attributes: date -> weekday, date -> month -> quarter -> year 135
SQL Extensions z. Front-end tools require y. Extended Family of Aggregate Functions xrank, median, mode y. Reporting Features xrunning totals, cumulative totals y. Results of multiple group by xtotal sales by month and total sales by product y. Data Cube 136
Relational OLAP: 3 Tier DSS z Database Layer (Data Warehouse): store atomic data in an industry standard RDBMS z Application Logic Layer (ROLAP Engine): generate SQL execution plans in the ROLAP engine to obtain OLAP functionality z Presentation Layer (Decision Support Client): obtain multidimensional reports from the DSS Client 137
MD-OLAP: 2 Tier DSS z Database + Application Logic Layer (MDDB Engine): store atomic data in a proprietary data structure (MDDB), pre-calculate as many outcomes as possible, obtain OLAP functionality via proprietary algorithms running against this data z Presentation Layer (Decision Support Client): obtain multidimensional reports from the DSS Client 138
Typical OLAP Problems: the Data Explosion Syndrome [Chart: the number of aggregations grows explosively with the number of dimensions (assuming 4 levels in each dimension). Source: Microsoft TechEd ’98] 139
Metadata Repository z Administrative metadata y source databases and their contents y gateway descriptions y warehouse schema, view & derived data definitions y dimensions, hierarchies y pre-defined queries and reports y data mart locations and contents y data partitions y data extraction, cleansing, transformation rules, defaults y data refresh and purging rules y user profiles, user groups y security: user authorization, access control 140
Metadata Repository. . 2 z Business data ybusiness terms and definitions yownership of data ycharging policies z operational metadata ydata lineage: history of migrated data and sequence of transformations applied ycurrency of data: active, archived, purged ymonitoring information: warehouse usage statistics, error reports, audit trails. 141
Recipe for a Successful Warehouse CSI'99
For a Successful Warehouse From Larry Greenfield, http://pwp.starnetinc.com/larryg/index.html z From day one establish that warehousing is a joint user/builder project z Establish that maintaining data quality will be an ONGOING joint user/builder responsibility z Train the users one step at a time z Consider doing a high level corporate data model in no more than three weeks 143
For a Successful Warehouse z Look closely at the data extracting, cleaning, and loading tools z Implement a user accessible automated directory to information stored in the warehouse z Determine a plan to test the integrity of the data in the warehouse z From the start get warehouse users in the habit of 'testing' complex queries 144
For a Successful Warehouse z Coordinate system roll-out with network administration personnel z When in a bind, ask others who have done the same thing for advice z Be on the lookout for small, but strategic, projects z Market and sell your data warehousing systems 145
Data Warehouse Pitfalls z You are going to spend much time extracting, cleaning, and loading data z Despite best efforts at project management, data warehousing project scope will increase z You are going to find problems with systems feeding the data warehouse z You will find the need to store data not being captured by any existing system z You will need to validate data not being validated by transaction processing systems 146
Data Warehouse Pitfalls z Some transaction processing systems feeding the warehousing system will not contain detail z Many warehouse end users will be trained and never or seldom apply their training z After end users receive query and report tools, requests for IS written reports may increase z Your warehouse users will develop conflicting business rules z Large scale data warehousing can become an exercise in data homogenizing 147
Data Warehouse Pitfalls z 'Overhead' can eat up great amounts of disk space z The time it takes to load the warehouse will expand to the amount of the time in the available window. . . and then some z Assigning security cannot be done with a transaction processing system mindset z You are building a HIGH maintenance system z You will fail if you concentrate on resource optimization to the neglect of project, data, and customer management issues and an understanding of what adds value to the customer 148
DW and OLAP Research Issues z Data cleaning y focus on data inconsistencies, not schema differences y data mining techniques z Physical Design y design of summary tables, partitions, indexes y tradeoffs in use of different indexes z Query processing y selecting appropriate summary tables y dynamic optimization with feedback y acid test for query optimization: cost estimation, use of transformations, search strategies y partitioning query processing between OLAP server and backend server. 149
DW and OLAP Research Issues. . 2 z Warehouse Management ydetecting runaway queries yresource management yincremental refresh techniques ycomputing summary tables during load yfailure recovery during load and refresh yprocess management: scheduling queries, load and refresh y. Query processing, caching yuse of workflow technology for process management 150
Products, References, Useful Links CSI'99
Reporting Tools z Andyne Computing -- GQL z Brio -- BrioQuery z Business Objects -- Business Objects z Cognos -- Impromptu z Information Builders Inc. -- Focus for Windows z Oracle -- Discoverer 2000 z Platinum Technology -- SQL*Assist, ProReports z PowerSoft -- InfoMaker z SAS Institute -- SAS/Assist z Software AG -- Esperant z Sterling Software -- VISION: Data 152
OLAP and Executive Information Systems z Andyne Computing -- Pablo z Microsoft -- Plato z Arbor Software -- Essbase z Oracle -- Express z Cognos -- PowerPlay z Pilot -- LightShip z Comshare -- Commander OLAP z Planning Sciences -- Gentium z Holistic Systems -- Holos z Platinum Technology -- ProdeaBeacon, Forest & Trees z Information Advantage -- AXSYS, WebOLAP z Informix -- Metacube z Microstrategies -- DSS/Agent z SAS Institute -- SAS/EIS, OLAP++ z Speedware -- Media 153
Other Warehouse Related Products z. Data extract, clean, transform, refresh y. CA-Ingres replicator y. Carleton Passport y. Prism Warehouse Manager y. SAS Access y. Sybase Replication Server y. Platinum Inforefiner, Infopump 154
Extraction and Transformation Tools z Carleton Corporation -- Passport z Evolutionary Technologies Inc. -- Extract z Informatica -- OpenBridge z Information Builders Inc. -- EDA Copy Manager z Platinum Technology -- InfoRefiner z Prism Solutions -- Prism Warehouse Manager z Red Brick Systems -- DecisionScape Formation 155
Scrubbing Tools z. Apertus -- Enterprise/Integrator z. Vality -- IPE z. Postal Soft 156
Warehouse Products z Computer Associates -- CA-Ingres z Hewlett-Packard -- Allbase/SQL z Informix -- Informix, Informix XPS z Microsoft -- SQL Server z Oracle -- Oracle 7, Oracle Parallel Server z Red Brick -- Red Brick Warehouse z SAS Institute -- SAS z Software AG -- ADABAS z Sybase -- SQL Server, IQ, MPP 157
Warehouse Server Products z. Oracle 8 z. Informix y. Online Dynamic Server y. XPS --Extended Parallel Server y. Universal Server for object relational applications z. Sybase y. Adaptive Server 11. 5 y. Sybase MPP y. Sybase IQ 158
Warehouse Server Products z. Red Brick Warehouse z. Tandem Nonstop z. IBM y. DB2 MVS y. Universal Server y. DB2/400 z. Teradata 159
Other Warehouse Related Products z. Connectivity to Sources y. Apertus y. Information Builders EDA/SQL y. Platimum Infohub y. SAS Connect y. IBM Data Joiner y. Oracle Open Connect y. Informix Express Gateway 160
Other Warehouse Related Products z. Query/Reporting Environments y. Brio/Query y. Cognos Impromptu y. Informix Viewpoint y. CA Visual Express y. Business Objects y. Platinum Forest and Trees 161
4 GL's, GUI Builders, and PC Databases z. Information Builders -- Focus z. Lotus -- Approach z. Microsoft -- Access, Visual Basic z. MITI -- SQR/Workbench z. PowerSoft -- PowerBuilder z. SAS Institute -- SAS/AF 162
Data Mining Products z. DataMind -- neurOagent z. Information Discovery -- IDIS z. SAS Institute -- SAS/Neuronets 163
Data Warehouse z W. H. Inmon, Building the Data Warehouse, Second Edition, John Wiley and Sons, 1996 z W. H. Inmon, J. D. Welch, Katherine L. Glassey, Managing the Data Warehouse, John Wiley and Sons, 1997 z Barry Devlin, Data Warehouse from Architecture to Implementation, Addison Wesley Longman, Inc 1997 164
Data Warehouse z W. H. Inmon, John A. Zachman, Jonathan G. Geiger, Data Stores, Data Warehousing and the Zachman Framework, McGraw Hill Series on Data Warehousing and Data Management, 1997 z Ralph Kimball, The Data Warehouse Toolkit, John Wiley and Sons, 1996 165
OLAP and DSS z Erik Thomsen, OLAP Solutions, John Wiley and Sons 1997 z Microsoft Tech. Ed Transparencies from Microsoft Tech. Ed 98 z Essbase Product Literature z Oracle Express Product Literature z Microsoft Plato Web Site z Microstrategy Web Site 166
Data Mining z Michael J. A. Berry and Gordon Linoff, Data Mining Techniques, John Wiley and Sons 1997 z Peter Adriaans and Dolf Zantinge, Data Mining, Addison Wesley Longman Ltd. 1996 z KDD Conferences 167
Other Tutorials z Donovan Schneider, Data Warehousing Tutorial, Tutorial at International Conference for Management of Data (SIGMOD 1996) and International Conference on Very Large Data Bases 97 z Umeshwar Dayal and Surajit Chaudhuri, Data Warehousing Tutorial at International Conference on Very Large Data Bases 1996 z Anand Deshpande and S. Seshadri, Tutorial on Datawarehousing and Data Mining, CSI-97 168
Useful URLs z Ralph Kimball’s home page yhttp://www.rkimball.com z Larry Greenfield’s Data Warehouse Information Center yhttp://pwp.starnetinc.com/larryg/ z Data Warehousing Institute yhttp://www.dw-institute.com/ z OLAP Council yhttp://www.olapcouncil.com/ 169