

  • Number of slides: 83

Lecture 4: Information Retrieval and Web Mining
http://www.cs.kent.edu/~jin/advdatabases.html

Outline
• Information Retrieval
  - Chapter 19 (Database System Concepts)
• Web Mining (Mining the Web, Soumen Chakrabarti)
• PageRank
  - One of the key techniques that contributed to Google's initial success

Chapter 19: Information Retrieval
• Relevance Ranking Using Terms
• Relevance Using Hyperlinks
• Synonyms, Homonyms, and Ontologies
• Indexing of Documents
• Measuring Retrieval Effectiveness
• Information Retrieval and Structured Data

Information Retrieval Systems
• Information retrieval (IR) systems use a simpler data model than database systems
  - Information is organized as a collection of documents
  - Documents are unstructured and have no schema
• Information retrieval locates relevant documents on the basis of user input such as keywords or example documents
  - E.g., find documents containing the words "database systems"
• Can be used even on textual descriptions provided with non-textual data such as images
• Web search engines are the most familiar example of IR systems

Information Retrieval Systems (Cont.)
• Differences from database systems
  - IR systems don't deal with transactional updates (including concurrency control and recovery)
  - Database systems deal with structured data, with schemas that define the data organization
  - IR systems deal with some querying issues not generally addressed by database systems
    - Approximate searching by keywords
    - Ranking of retrieved answers by estimated degree of relevance

Keyword Search
• In full text retrieval, all the words in each document are considered to be keywords
  - We use the word "term" to refer to the words in a document
• Information-retrieval systems typically allow query expressions formed using keywords and the logical connectives and, or, and not
  - Ands are implicit, even if not explicitly specified
• Ranking of documents on the basis of estimated relevance to a query is critical
  - Relevance ranking is based on factors such as
    - Term frequency: frequency of occurrence of a query keyword in a document
    - Inverse document frequency: how many documents the query keyword occurs in
      - The fewer documents a keyword occurs in, the more weight it carries
    - Hyperlinks to documents: a document with more links to it is considered more important

Relevance Ranking Using Terms
• TF-IDF (term frequency / inverse document frequency) ranking:
  - Let n(d) = number of terms in document d
  - n(d, t) = number of occurrences of term t in document d
  - Relevance of a document d to a term t:
      TF(d, t) = log(1 + n(d, t) / n(d))
    - The log factor avoids giving excessive weight to frequent terms
  - Relevance of a document d to a query Q:
      r(d, Q) = Σ_{t ∈ Q} TF(d, t) · IDF(t)
    where IDF(t) = 1/n(t), and n(t) is the number of documents that contain term t
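
The TF-IDF scoring above can be sketched in a few lines. This is a minimal illustration of the lecture's formulas, not a production ranker; the toy documents and query are made up for the example.

```python
import math
from collections import Counter

def tf(doc_terms, term):
    # TF(d, t) = log(1 + n(d, t) / n(d)), damping frequent terms
    counts = Counter(doc_terms)
    return math.log(1 + counts[term] / len(doc_terms))

def relevance(doc_terms, query_terms, docs):
    # r(d, Q) = sum over query terms of TF(d, t) * IDF(t), with IDF(t) = 1/n(t)
    score = 0.0
    for t in query_terms:
        n_t = sum(1 for d in docs if t in d)  # number of documents containing t
        if n_t:
            score += tf(doc_terms, t) / n_t
    return score

docs = [
    ["database", "systems", "use", "schemas"],
    ["information", "retrieval", "systems", "rank", "documents"],
    ["web", "search", "engines", "rank", "pages"],
]
query = ["rank", "documents"]
scores = [relevance(d, query, docs) for d in docs]
# The second document contains both query terms, so it scores highest.
```

Documents would then be returned in decreasing order of `scores`.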

Relevance Ranking Using Terms (Cont.)
• Most systems add refinements to the above model
  - Words that occur in the title, author list, section headings, etc. are given greater importance
  - Words whose first occurrence is late in the document are given lower importance
  - Very common words such as "a", "an", "the", "it", etc. are eliminated
    - Called stop words
  - Proximity: if the query keywords occur close together in the document, the document ranks higher than if they occur far apart
• Documents are returned in decreasing order of relevance score
  - Usually only the top few documents are returned, not all

Review
• What is an IR system?
• What is the key difference between an IR system and a traditional relational database system?
• What is keyword search?
• What are the main factors considered in keyword search?
  - How to estimate/rank the relevance of a document?
  - What is TF-IDF ranking?

Similarity Based Retrieval
• Similarity-based retrieval: retrieve documents similar to a given document
  - Similarity may be defined on the basis of common words
    - E.g., find the k terms in A with the highest TF(d, t)/n(t) and use these terms to find the relevance of other documents
• Relevance feedback: similarity can be used to refine the answer set of a keyword query
  - The user selects a few relevant documents from those retrieved by the keyword query, and the system finds other documents similar to these
• Vector space model: define an n-dimensional space, where n is the number of words in the document set
  - The vector for document d goes from the origin to the point whose ith coordinate is TF(d, t)/n(t)
  - The cosine of the angle between the vectors of two documents is used as a measure of their similarity
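
The cosine measure of the vector space model can be sketched directly. The three toy vectors below are invented for the example; their coordinates play the role of the TF(d, t)/n(t) weights.

```python
import math

def cosine(u, v):
    # cosine of the angle between two document vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Toy 4-term vocabulary
d1 = [2.0, 1.0, 0.0, 0.0]
d2 = [1.0, 1.0, 0.0, 0.0]  # shares terms with d1 -> high similarity
d3 = [0.0, 0.0, 3.0, 1.0]  # no terms in common with d1 -> similarity 0
```

Identical directions give cosine 1; documents with no terms in common give cosine 0.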

Relevance Using Hyperlinks
• The number of documents relevant to a query can be enormous if only term frequencies are taken into account
• Using term frequencies makes "spamming" easy
  - E.g., a travel agency can add many occurrences of the word "travel" to its page to make its rank very high
• Most of the time people are looking for pages from popular sites
• Idea: use the popularity of a Web site (e.g., how many people visit it) to rank site pages that match given keywords
• Problem: it is hard to find the actual popularity of a site
  - How?

Relevance Using Hyperlinks (Cont.)
• Solution: use the number of hyperlinks to a site as a measure of its popularity or prestige
  - Count only one hyperlink from each site (why?)
  - The popularity measure is for the site, not for individual pages
    - But most hyperlinks point to the root of a site
    - Also, the concept of a "site" is difficult to define, since a URL prefix like cs.kent.edu contains many unrelated pages of varying popularity
• Refinements
  - When computing prestige based on links to a site, give more weight to links from sites that themselves have higher prestige
    - The definition is circular
    - Set up and solve a system of simultaneous linear equations
  - The above idea is the basis of Google's PageRank ranking mechanism

Relevance Using Hyperlinks (Cont.)
• Connections to social-networking theories that rank the prestige of people
  - E.g., the president of the U.S.A. has high prestige since many people know him
  - Someone known by multiple prestigious people has high prestige
• Hub- and authority-based ranking
  - A hub is a page that stores links to many pages (on a topic)
  - An authority is a page that contains actual information on a topic
  - Each page gets a hub prestige based on the prestige of the authorities it points to
  - Each page gets an authority prestige based on the prestige of the hubs that point to it
  - Again, the prestige definitions are cyclic, and can be obtained by solving linear equations
  - Use authority prestige when ranking answers to a query

Review
• What is an IR system?
• What is the key difference between an IR system and a traditional relational database system?
• What is keyword search?
• What are the main factors considered in keyword search?
  - How to estimate/rank the relevance of a document?
  - What is TF-IDF ranking?
• Methods for similarity-based search
• Relevance using hyperlinks

Synonyms and Homonyms
• Synonyms
  - E.g., document: "motorcycle repair"; query: "motorcycle maintenance"
    - Need to realize that "maintenance" and "repair" are synonyms
  - The system can extend the query to "motorcycle and (repair or maintenance)"
• Homonyms
  - E.g., "object" has different meanings as a noun and as a verb
  - Meanings can be disambiguated (to some extent) from the context
• Extending queries automatically using synonyms can be problematic
  - Need to understand the intended meaning in order to infer synonyms
    - Or verify synonyms with the user
  - Synonyms may have other meanings as well

Concept-Based Querying
• Approach
  - For each word, determine the concept it represents from context
  - Use one or more ontologies:
    - A hierarchical structure showing relationships between concepts
    - E.g., the ISA relationship that we saw in the E-R model
• This approach can be used to standardize terminology in a specific field
• Ontologies can link multiple languages
• Foundation of the Semantic Web (not covered here)

Indexing of Documents
• An inverted index maps each keyword Ki to the set of documents Si that contain the keyword
  - Documents are identified by identifiers
• An inverted index may record
  - Keyword locations within documents, to allow proximity-based ranking
  - Counts of the number of occurrences of each keyword, to compute TF
• and operation: finds documents that contain all of K1, K2, ..., Kn
  - Intersection S1 ∩ S2 ∩ ... ∩ Sn
• or operation: finds documents that contain at least one of K1, K2, ..., Kn
  - Union S1 ∪ S2 ∪ ... ∪ Sn
• Each Si is kept sorted to allow efficient intersection/union by merging
  - "not" can also be implemented efficiently by merging sorted lists
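
The inverted index and the sorted-list merge for the and operation can be sketched as below. The three sample documents are invented for the illustration.

```python
def intersect(p1, p2):
    # merge two sorted posting lists in O(|p1| + |p2|) time
    i = j = 0
    out = []
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            out.append(p1[i]); i += 1; j += 1
        elif p1[i] < p2[j]:
            i += 1
        else:
            j += 1
    return out

def build_index(docs):
    # map keyword -> sorted list of doc ids (sorted because doc ids increase)
    index = {}
    for doc_id, text in enumerate(docs):
        for word in set(text.split()):
            index.setdefault(word, []).append(doc_id)
    return index

docs = ["web search engines", "database systems", "web database mining"]
index = build_index(docs)
hits = intersect(index["web"], index["database"])  # and-query: web AND database
```

Union ("or") and difference ("not") follow the same merging pattern over the sorted lists.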

Word-Level Inverted File
(figure: a lexicon pointing into posting lists)

Measuring Retrieval Effectiveness
• Information-retrieval systems save space by using index structures that support only approximate retrieval. This may result in:
  - False negative (false drop): some relevant documents may not be retrieved
  - False positive: some irrelevant documents may be retrieved
  - For many applications a good index should not permit any false drops, but may permit a few false positives
• Relevant performance metrics:
  - Precision: what percentage of the retrieved documents are relevant to the query
  - Recall: what percentage of the documents relevant to the query were retrieved
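
The two metrics reduce to simple set arithmetic; a minimal sketch with made-up document ids:

```python
def precision_recall(retrieved, relevant):
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)  # relevant documents actually retrieved
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    return precision, recall

# 2 of the 4 retrieved docs are relevant (precision 1/2);
# 2 of the 3 relevant docs were retrieved (recall 2/3)
p, r = precision_recall(retrieved=[1, 2, 3, 4], relevant=[2, 4, 5])
```

Retrieving more documents can only raise recall, while precision typically drops, which is the tradeoff described on the next slide.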

Measuring Retrieval Effectiveness (Cont.)
• Recall vs. precision tradeoff:
  - Recall can be increased by retrieving many documents (down to a low level of relevance ranking), but many irrelevant documents would be fetched, reducing precision
• Measures of retrieval effectiveness:
  - Recall as a function of the number of documents fetched, or
  - Precision as a function of recall
    - Equivalently, as a function of the number of documents fetched
  - E.g., "precision of 75% at a recall of 50%, and 60% at a recall of 75%"
• Problem: determining which documents are actually relevant, and which are not

Outline
• Information Retrieval
  - Chapter 19 (Database System Concepts)
• Web Mining
  - What is web mining?
  - Structure of the WWW
  - Searching the Web
  - Web directories
  - Web mining topics
• PageRank
  - One of the key techniques that helped Google succeed

What is Web Mining?
• Discovering useful information from the World-Wide Web and its usage patterns
• Applications
  - Web search, e.g., Google, Yahoo, ...
  - Vertical search, e.g., FatLens, Become, ...
  - Recommendations, e.g., Amazon.com
  - Advertising, e.g., Google, Yahoo
  - Web site design, e.g., landing page optimization

How does it differ from "classical" Data Mining?
• The web is not a relation
  - Textual information and linkage structure
• Usage data is huge and growing rapidly
  - Google's usage logs are bigger than their web crawl
  - Data generated per day is comparable to the largest conventional data warehouses
• Ability to react in real time to usage patterns
  - No human in the loop

The World-Wide Web
• Huge
• Distributed content creation and linking (no coordination)
• Structured databases, unstructured text, semistructured data
• Content includes truth, lies, obsolete information, contradictions, ...
• Our modern-day Library of Alexandria

Size of the Web
• Number of pages
  - Technically, infinite
    - Because of dynamically generated content
    - Lots of duplication (30-40%)
  - The best estimate of "unique" static HTML pages comes from search-engine claims
    - Google = 8 billion, Yahoo = 20 billion
    - Lots of marketing hype
• Number of unique web sites
  - The Netcraft survey says 76 million sites (http://news.netcraft.com/archives/web_server_survey.html)

The web as a graph
• Pages = nodes, hyperlinks = edges
  - Ignore content
  - Directed graph
• High linkage
  - 8-10 links/page on average
  - Power-law degree distribution

Power-law degree distribution
(figure; source: Broder et al., 2000)

Power-laws galore
• In-degrees
• Out-degrees
• Number of pages per site
• Number of visitors
• Let's take a closer look at structure
  - Broder et al. (2000) studied a crawl of 200M pages and other smaller crawls
  - Bow-tie structure
    - Not a "small world"

Bow-tie Structure
(figure; source: Broder et al., 2000)

Searching the Web
(figure: content aggregators sit between the Web and content consumers)

Ads vs. search results
(figure)

Ads vs. search results
• Search advertising is the revenue model
  - A multi-billion-dollar industry
  - Advertisers pay for clicks on their ads
• Interesting problems
  - How to pick the top 10 results for a search from 2,230,000 matching pages?
  - What ads to show for a search?
  - If I'm an advertiser, which search terms should I bid on, and how much should I bid?

Sidebar: What's in a name?
• Geico sued Google, contending that it owned the trademark "Geico"
  - Thus, ads for the keyword "geico" couldn't be sold to others
• Court ruling: search engines can sell keywords, including trademarks
• No court ruling yet on whether the ad itself can use the trademarked word(s)

The Long Tail
(figure; source: Chris Anderson, 2004)

The Long Tail
• Shelf space is a scarce commodity for traditional retailers
  - Also: TV networks, movie theaters, ...
• The web enables near-zero-cost dissemination of information about products
• More choices necessitate better filters
  - Recommendation engines (e.g., Amazon)
  - How Into Thin Air made Touching the Void a bestseller

Web search basics
(figure: a web crawler feeds an indexer, which builds the indexes and ad indexes used to serve user searches)

Search engine components
• Spider (a.k.a. crawler/robot): builds the corpus
  - Collects web pages recursively
    - For each known URL, fetch the page, parse it, and extract new URLs
    - Repeat
  - Additional pages come from direct submissions and other sources
• The indexer: creates inverted indexes
  - Various policies w.r.t. which words are indexed, capitalization, support for Unicode, stemming, support for phrases, etc.
• Query processor: serves query results
  - Front end: query reformulation, word stemming, capitalization, optimization of Booleans, etc.
  - Back end: finds matching documents and ranks them

Web Search Engines
• Web crawlers are programs that locate and gather information on the Web
  - Recursively follow hyperlinks present in known documents to find other documents
    - Starting from a seed set of documents
  - Fetched documents
    - Are handed over to an indexing system
    - Can be discarded after indexing, or stored as a cached copy
• Crawling the entire Web would take a very long time
  - Search engines typically cover only a part of the Web, not all of it
  - A single crawl can take months to perform
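
The "follow hyperlinks from a seed set" loop can be sketched as a breadth-first traversal. `TOY_WEB` and the `fetch` parameter are hypothetical stand-ins for real page downloading and link extraction.

```python
from collections import deque

# A toy "web": URL -> list of out-links. A real crawler would download
# and parse each page instead of looking links up in a dict.
TOY_WEB = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a", "d"],
    "d": [],
}

def crawl(seeds, fetch=TOY_WEB.get, limit=100):
    # 'seen' plays the role of the crawl database of known links;
    # 'frontier' holds links waiting to be crawled
    seen, frontier = set(seeds), deque(seeds)
    order = []
    while frontier and len(order) < limit:
        url = frontier.popleft()
        order.append(url)
        for link in fetch(url) or []:
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return order

pages = crawl(["a"])
```

A real crawler runs many such loops in parallel across machines, sharing the `seen` set, as the next slide describes.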

Web Crawling (Cont.)
• Crawling is done by multiple processes on multiple machines, running in parallel
  - The set of links to be crawled is stored in a database
  - New links found in crawled pages are added to this set, to be crawled later
• The indexing process also runs on multiple machines
  - It creates a new copy of the index instead of modifying the old index
  - The old index is used to answer queries
  - After a crawl is "completed", the new index becomes the "old" index
• Multiple machines are used to answer queries
  - Indices may be kept in memory
  - Queries may be routed to different machines for load balancing

Directories
• Storing related documents together in a library facilitates browsing
  - Users can see not only the requested document but also related ones
• Browsing is facilitated by a classification system that organizes logically related documents together
• The organization is hierarchical: a classification hierarchy

A Classification Hierarchy For A Library System
(figure)

Classification DAG
• Documents can reside in multiple places in a hierarchy in an information retrieval system, since physical location is not important
• The classification hierarchy is thus a directed acyclic graph (DAG)

A Classification DAG For A Library Information Retrieval System
(figure)

Web Directories
• A Web directory is just a classification directory on Web pages
  - E.g., Yahoo! Directory, the Open Directory Project
  - Issues:
    - What should the directory hierarchy be?
    - Given a document, which nodes of the directory are categories relevant to the document?
  - Often done manually
    - Classification of documents into a hierarchy may be done based on term similarity

Web Mining topics
• Crawling the web
• Web graph analysis
• Structured data extraction
• Classification and vertical search
• Collaborative filtering
• Web advertising and optimization
• Mining web logs
• Systems issues

Extracting structured data
(example: http://www.fatlens.com)

Extracting Structured Data
(example: http://www.simplyhired.com)

Information Retrieval and Structured Data
• Information retrieval systems originally treated documents as a collection of words
• Information extraction systems infer structure from documents, e.g.:
  - Extraction of house attributes (size, address, number of bedrooms, etc.) from a text advertisement
  - Extraction of the topic and the people named from a news article
• Relations or XML structures are used to store the extracted data
  - The system seeks connections among the data to answer queries
  - Question answering systems

PageRank
• Intuition: solve the recursive equation "a page is important if important pages link to it"
• In high-falutin' terms: importance = the principal eigenvector of the stochastic matrix of the Web
  - A few fixups are needed

Stochastic Matrix of the Web
• Enumerate the pages
• Page i corresponds to row and column i
• M[i, j] = 1/n if page j links to n pages, one of which is page i; 0 if j does not link to i
  - M[i, j] is the probability that we'll be at page i next if we are now at page j
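
Building M from a link graph is mechanical: each page's out-links determine one column. A minimal sketch, using the lecture's three-page Yahoo/Amazon/Microsoft web:

```python
def stochastic_matrix(out_links, pages):
    # M[i][j] = 1/n if page j has n out-links, one of which is page i; else 0
    n_pages = len(pages)
    idx = {p: k for k, p in enumerate(pages)}
    M = [[0.0] * n_pages for _ in range(n_pages)]
    for j, src in enumerate(pages):
        targets = out_links[src]
        for t in targets:
            M[idx[t]][j] = 1.0 / len(targets)
    return M

# Yahoo links to itself and Amazon; Amazon to Yahoo and Microsoft;
# Microsoft to Amazon (the "Web in 1839" example)
links = {"y": ["y", "a"], "a": ["y", "m"], "m": ["a"]}
M = stochastic_matrix(links, ["y", "a", "m"])
```

Every column of M sums to 1, which is what makes it column-stochastic.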

Example
• Suppose page j links to 3 pages, one of which is page i; then M[i, j] = 1/3
(figure: an edge from node j to node i, labeled 1/3)

Random Walks on the Web
• Suppose v is a vector whose ith component is the probability that we are at page i at a certain time
• If we follow a link from i at random, the probability distribution for the page we are at next is given by the vector M v

Random Walks --- (2)
• Starting from any vector v, the limit M(M(...M(M v)...)) is the distribution of page visits during a random walk
• Intuition: pages are important in proportion to how often a random walker would visit them
• The math: limiting distribution = principal eigenvector of M = PageRank

Example: The Web in 1839
• Yahoo links to itself and Amazon; Amazon links to Yahoo and Microsoft; Microsoft links to Amazon

              y    a    m
        y   1/2  1/2    0
    M = a   1/2    0    1
        m     0  1/2    0

Simulating a Random Walk
• Start with the vector v = [1, 1, ..., 1], representing the idea that each Web page is given one unit of importance
• Repeatedly apply the matrix M to v, allowing the importance to flow like a random walk
• The limit exists, but about 50 iterations is sufficient to estimate the final distribution

Example
• Equations v = M v:
    y = y/2 + a/2
    a = y/2 + m
    m = a/2

    y     1    1    5/4    9/8   ...   6/5
    a  =  1   3/2    1    11/8   ...   6/5
    m     1   1/2   3/4    1/2   ...   3/5
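
The simulation above is just repeated matrix-vector multiplication; a minimal sketch that reproduces the table:

```python
def step(M, v):
    # one random-walk step: v <- M v
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

M = [[0.5, 0.5, 0.0],   # y <- y/2 + a/2
     [0.5, 0.0, 1.0],   # a <- y/2 + m
     [0.0, 0.5, 0.0]]   # m <- a/2

v = [1.0, 1.0, 1.0]     # one unit of importance per page
for _ in range(50):     # ~50 iterations suffice on this example
    v = step(M, v)
# v approaches (6/5, 6/5, 3/5)
```

Because M is column-stochastic, the total importance y + a + m stays at 3 throughout the iteration.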

Solving The Equations
• Because there are no constant terms, these 3 equations in 3 unknowns do not have a unique solution
• Add in the fact that y + a + m = 3 to solve
• In Web-sized examples, we cannot solve by Gaussian elimination; we need to use relaxation (= iterative solution)

Real-World Problems
• Some pages are "dead ends" (they have no links out)
  - Such a page causes importance to leak out
• Other (groups of) pages are spider traps (all out-links are within the group)
  - Eventually spider traps absorb all importance

Microsoft Becomes Dead End
• Microsoft now has no out-links, so its column of M is all zeros

              y    a    m
        y   1/2  1/2    0
    M = a   1/2    0    0
        m     0  1/2    0

Example
• Equations v = M v:
    y = y/2 + a/2
    a = y/2
    m = a/2

    y     1    1    3/4   5/8   ...   0
    a  =  1   1/2   1/2   3/8   ...   0
    m     1   1/2   1/4   1/4   ...   0

M'soft Becomes Spider Trap
• Microsoft now links only to itself

              y    a    m
        y   1/2  1/2    0
    M = a   1/2    0    0
        m     0  1/2    1

Example
• Equations v = M v:
    y = y/2 + a/2
    a = y/2
    m = a/2 + m

    y     1    1    3/4   5/8   ...   0
    a  =  1   1/2   1/2   3/8   ...   0
    m     1   3/2   7/4    2    ...   3

Google Solution to Traps, Etc.
• "Tax" each page a fixed percentage at each iteration
• Add the same constant to all pages
• This models a random walk with a fixed probability of going to a random place next

Example: Previous with 20% Tax
• Equations v = 0.8(M v) + 0.2:
    y = 0.8(y/2 + a/2) + 0.2
    a = 0.8(y/2) + 0.2
    m = 0.8(a/2 + m) + 0.2

    y     1   1.00   0.84   0.776   ...    7/11
    a  =  1   0.60   0.60   0.536   ...    5/11
    m     1   1.40   1.56   1.688   ...   21/11
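
The taxed iteration is the same power iteration with a damping step; a minimal sketch that reproduces the limits above on the spider-trap example:

```python
def taxed_step(M, v, beta=0.8, tax=0.2):
    # v <- beta * (M v) + tax, with the same constant added to every page
    n = len(v)
    return [beta * sum(M[i][j] * v[j] for j in range(n)) + tax for i in range(n)]

M = [[0.5, 0.5, 0.0],
     [0.5, 0.0, 0.0],
     [0.0, 0.5, 1.0]]   # Microsoft is a spider trap (it links only to itself)

v = [1.0, 1.0, 1.0]
for _ in range(100):
    v = taxed_step(M, v)
# v approaches (7/11, 5/11, 21/11): the trap no longer absorbs everything
```

The factor beta = 0.8 makes each step a contraction, so the iteration now converges regardless of traps.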

General Case
• In this example, because there are no dead ends, the total importance remains at 3
• In examples with dead ends, some importance leaks out, but the total remains finite

Solving the Equations
• Because there are constant terms, we can expect to solve small examples by Gaussian elimination
• Web-sized examples still need to be solved by relaxation

Speeding Convergence
• Newton-like prediction of where the components of the principal eigenvector are heading
• Take advantage of locality in the Web
• Each technique can reduce the number of iterations by 50%
  - Important --- PageRank takes time!

Predicting Component Values
• Three consecutive values for the importance of a page suggest where the limit might be
(figure: successive values 1.0, 0.7, 0.6 lead to a guess of 0.55 for the next round)

Exploiting Substructure
• Pages from particular domains, hosts, or paths, like stanford.edu or www-db.stanford.edu/~ullman, tend to have a higher density of links
• Initialize PageRank using ranks within your local cluster, then rank the clusters themselves

Strategy
• Compute local PageRanks (in parallel?)
• Use the local weights to establish intercluster weights on edges
• Compute PageRank on the graph of clusters
• The initial rank of a page is the product of its local rank and the rank of its cluster
• "Clusters" are appropriately sized regions with a common domain or lower-level detail

In Pictures
(figure: local ranks, intercluster weights, and ranks of clusters combine into the initial eigenvector)

Hubs and Authorities
• Mutually recursive definition:
  - A hub links to many authorities
  - An authority is linked to by many hubs
• Authorities turn out to be places where information can be found
  - Example: course home pages
• Hubs tell where the authorities are
  - Example: a CS department's course-listing page

Transition Matrix A
• H&A uses a matrix A, with A[i, j] = 1 if page i links to page j, and 0 if not
• A^T, the transpose of A, is similar to the PageRank matrix M, but A^T has 1's where M has fractions

Example
• Yahoo links to all three pages; Amazon links to Yahoo and Microsoft; Microsoft links to Amazon

              y  a  m
        y     1  1  1
    A = a     1  0  1
        m     0  1  0

Using Matrix A for H&A
• Powers of A and A^T diverge in the size of their elements, so we need scale factors
• Let h and a be vectors measuring the "hubbiness" and authority of each page
• Equations: h = λA a; a = μA^T h
  - Hubbiness = scaled sum of the authorities of successor pages (out-links)
  - Authority = scaled sum of the hubbiness of predecessor pages (in-links)

Consequences of Basic Equations
• From h = λA a and a = μA^T h we can derive:
  - h = λμ A A^T h
  - a = λμ A^T A a
• Compute h and a by iteration, assuming initially that each page has one unit of hubbiness and one unit of authority
  - Pick an appropriate value of λμ
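
The mutual iteration, with rescaling so the largest component stays at 1, can be sketched as below on the lecture's three-page example. The choice of 100 iterations is arbitrary; far fewer suffice here.

```python
def hits(A, iters=100):
    n = len(A)
    h = [1.0] * n                # one unit of hubbiness per page
    a = [1.0] * n                # one unit of authority per page
    for _ in range(iters):
        # a = scaled A^T h : authority from the hubs that point at you
        a = [sum(A[i][j] * h[i] for i in range(n)) for j in range(n)]
        a = [x / max(a) for x in a]
        # h = scaled A a : hubbiness from the authorities you point at
        h = [sum(A[i][j] * a[j] for j in range(n)) for i in range(n)]
        h = [x / max(h) for x in h]
    return h, a

A = [[1, 1, 1],   # Yahoo links to all three pages
     [1, 0, 1],   # Amazon links to Yahoo and Microsoft
     [0, 1, 0]]   # Microsoft links to Amazon
h, a = hits(A)
# h approaches roughly (1, 0.73, 0.27)
```

Rescaling by the maximum each round plays the role of the unknown λμ, keeping the vectors bounded while the directions converge.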

Example

         1 1 1           1 1 0            3 2 1             2 1 2
    A =  1 0 1    A^T =  1 0 1    AA^T =  2 2 0    A^T A =  1 2 1
         0 1 0           1 1 0            1 0 1             2 1 2

    a(yahoo)      1    5    24    114          1 + √3
    a(amazon)  =  1    4    18     84    ...     2
    a(m'soft)     1    5    24    114          1 + √3

    h(yahoo)      1    6    28    132          1.000
    h(amazon)  =  1    4    20     96    ...   0.732
    h(m'soft)     1    2     8     36          0.268

Solving the Equations
• Solution of even small examples is tricky, because the value of λμ is one of the unknowns
  - Each equation, like y = λμ(3y + 2a + m), lets us solve for λμ in terms of y, a, m; equate each expression for λμ
• As for PageRank, we need to solve big examples by relaxation

Details for h --- (1)
    y = λμ(3y + 2a + m)
    a = λμ(2y + 2a)
    m = λμ(y + m)
• Solve for λμ:
    λμ = y/(3y + 2a + m) = a/(2y + 2a) = m/(y + m)

Details for h --- (2)
• Assume y = 1:
    λμ = 1/(3 + 2a + m) = a/(2 + 2a) = m/(1 + m)
• Cross-multiply the second and third: a + am = 2m + 2am, or a = 2m/(1 - m)
• Cross-multiply the first and third: 1 + m = 3m + 2am + m², or a = (1 - 2m - m²)/2m

Details for h --- (3)
• Equate the formulas for a:
    a = 2m/(1 - m) = (1 - 2m - m²)/2m
• Cross-multiply: 1 - 2m - m² - m + 2m² + m³ = 4m²
• Solve for m: m ≈ 0.268
• Solve for a: a = 2m/(1 - m) ≈ 0.732
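
The cubic from the cross-multiplication simplifies to m³ - 3m² - 3m + 1 = 0, whose root in (0, 1) is m = 2 - √3; a quick numeric check of the derivation:

```python
import math

# 1 - 2m - m^2 - m + 2m^2 + m^3 = 4m^2 rearranges to:
f = lambda m: m**3 - 3*m**2 - 3*m + 1

m = 2 - math.sqrt(3)   # the root in (0, 1), approximately 0.268
a = 2 * m / (1 - m)    # = sqrt(3) - 1, approximately 0.732
```

These closed forms match the limits of the h iteration in the earlier example.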

Solving H&A in Practice
• Iterate as for PageRank; don't try to solve the equations
• But keep the components within bounds
  - Example: scale to keep the largest component of the vector at 1
• Trick: start with h = [1, 1, ..., 1]; multiply by A^T to get the first a; scale, then multiply by A to get the next h, ...

H&A Versus PageRank
• If you talk to someone from IBM, they will tell you "IBM invented PageRank"
  - What they mean is that H&A was invented by Jon Kleinberg when he was at IBM
• But these are not the same
• H&A has been used, e.g., to analyze important research papers; it does not appear to be a substitute for PageRank