- Number of slides: 37
CS 430: Information Discovery
Lecture 20: Web Search 2
Course Administration
• Outstanding queries on Assignment 2 have been answered.
• Wording change made to Assignment 3: output need not be in Web format.
Effective Information Retrieval
1. Comprehensive metadata with Boolean retrieval (e.g., monograph catalog). Can be excellent for well-understood categories of material, but requires expensive metadata, which is rarely available.
2. Full-text indexing with ranked retrieval (e.g., news articles). Excellent for relatively homogeneous material, but requires available full text.
Neither of these methods is very effective when applied directly to the Web.
New Concepts in Web Searching
• The goal of search is redefined.
• The concept of relevance is changed.
• Contextual information is used as an integral part of the search.
• Browsing is tightly connected to searching.
Indexing Goals: Precision
Short queries applied to very large numbers of items lead to large numbers of hits.
• The goal is that the first 10-100 hits presented should satisfy the user's information need; this requires ranking hits in an order that fits the user's requirements.
• Recall is not an important criterion, so completeness of the index is not an important factor.
• Comprehensive crawling is unnecessary.
Concept of Relevance
Relevance, as conventionally defined, is binary (relevant or not relevant). It is usually estimated by the similarity between the terms in the query and each document.
Importance measures documents by their likelihood of being useful to a variety of users. It is usually estimated by some measure of popularity.
Web search engines rank documents by a combination of relevance and importance. The goal is to present the user with the most important of the relevant documents.
Ranking Options
1. Paid advertisers
2. Manually created classification
3. Vector space ranking with corrections for document length
4. Extra weighting for specific fields, e.g., title, anchors, etc.
5. Popularity, e.g., PageRank
The balance between 3, 4, and 5 is not made public.
Browsing and Searching
Searching is followed by browsing.
Browsing the hit list:
• helpful summary records (snippets)
• removal of duplicates
• grouping of results from a single site
Browsing the Web pages themselves:
• direct links from the snippets to the pages
• a cached copy, with the search terms highlighted
• translation in the same format
Browsing and Searching
Query: Cornell sports
LII: Law about. . . Sports. . . sports law: an overview. Sports Law encompasses a multitude areas of law brought together in unique ways. Issues. . . vocation. Amateur Sports. . .
www.law.cornell.edu/topics/sports.html
Query: NCAA Tarkanian
LII: Law about. . . Sports. . . purposes. See NCAA v. Tarkanian, 109 US 454 (1988). State action status may also be a factor in mandatory drug testing rules. On. . .
www.law.cornell.edu/topics/sports.html
Contextual Information
Information about a document:
• Content (terms, formatting, etc.)
• Metadata (externally created following rules)
• Context (citations and links, reviews, annotations, etc.)
Context has many uses:
• Selecting documents to index
• Retrieval clues (e.g., href text)
• Ranking
Effective Information Retrieval (continued)
3. Full-text indexing with contextual information and ranked retrieval (e.g., Google, Teoma). Excellent for mixed textual information with rich structure.
4. Contextual information with non-textual materials and ranked retrieval (e.g., Google image retrieval). Promising, but still experimental.
Scalability
[Chart: the growth of the web, 1994-2000, plotted on a logarithmic scale from 1 to 10,000,000.]
Scalability
Web search services are centralized systems.
• Over the past 9 years, Moore's Law has enabled the services to keep pace with the growth of the web and the number of users, while adding extra function.
• Will this continue?
• Possible areas for concern are: staff costs, telecommunications costs, disk access rates.
Cost Example (Google)
Staff: 85 people; 50% technical; 14 with a Ph.D. in Computer Science.
Equipment: 2,500 Linux machines; 80 terabytes of spinning disks; 30 new machines installed daily.
Reported by Larry Page, Google, March 2000. At that time, Google was handling 5.5 million searches per day, with an increase rate of 20% per month.
By fall 2002, Google had grown to over 400 people.
Scalability: Staff
Programming: Have very well-trained staff. Isolate complex code. Most coding is single image.
System maintenance: Organize for minimal staff (e.g., automated log analysis; do not fix broken computers).
Customer service: Automate everything possible, but complaints, large collections, etc. require staff.
Scalability: Performance
Very large numbers of commodity computers. Algorithms and data structures scale linearly.
• Storage
  - Scales with the size of the Web
  - Compression/decompression
• System
  - Crawling, indexing, and sorting run simultaneously
• Searching
  - Bounded by disk I/O
Bibliometrics
Techniques that use citation analysis to measure the similarity of journal articles or their importance.
Bibliographic coupling: two papers that cite many of the same papers.
Co-citation: two papers that were cited by many of the same papers.
Impact factor (of a journal): the frequency with which the average article in a journal has been cited in a particular year or period.
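The two similarity measures above can be computed directly from a citation list. A minimal sketch, using a small hypothetical set of papers (the ids and citations are invented for illustration):

```python
# Sketch: bibliographic coupling and co-citation counts from a
# citation list mapping each paper to the papers it cites.
# All papers and citations here are hypothetical.
cites = {
    "A": {"X", "Y", "Z"},
    "B": {"X", "Y"},      # A and B share two references -> coupling 2
    "C": {"A", "B"},
    "D": {"A", "B"},      # A and B are cited together by C and D -> co-citation 2
}

def coupling(p, q):
    """Bibliographic coupling: number of references shared by p and q."""
    return len(cites[p] & cites[q])

def cocitation(p, q):
    """Co-citation: number of papers that cite both p and q."""
    return sum(1 for refs in cites.values() if p in refs and q in refs)

print(coupling("A", "B"))    # 2
print(cocitation("A", "B"))  # 2
```

Both measures are symmetric, and each captures a different notion of relatedness: coupling looks backward at shared references, co-citation looks forward at shared citers.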
Citation Graph
[Diagram: a citation graph, with edges labeled "cites" and "is cited by".]
Note that journal citations always refer to earlier work.
Graphical Analysis of Hyperlinks on the Web
[Diagram of six linked pages: a page that links to many other pages is a hub; a page that many pages link to is an authority.]
PageRank Algorithm
Used to estimate the importance of documents.
Concept: The rank of a web page is higher if many pages link to it. Links from highly ranked pages are given greater weight than links from less highly ranked pages.
Intuitive Model (Basic Concept)
A user:
1. Starts at a random page on the web.
2. Selects a random hyperlink from the current page and jumps to the corresponding page.
3. Repeats Step 2 a very large number of times.
Pages are ranked according to the relative frequency with which they are visited.
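The random-surfer model can be simulated directly. A minimal sketch on an invented three-page graph (chosen to be strongly connected, since without damping the walk must never reach a dead end):

```python
import random

# Simulate the intuitive model: repeatedly follow a random outlink
# and count visits. The 3-page graph below is hypothetical.
links = {1: [2, 3], 2: [3], 3: [1]}

random.seed(42)
steps = 100_000
visits = {p: 0 for p in links}
page = 1
for _ in range(steps):
    page = random.choice(links[page])   # jump via a random hyperlink
    visits[page] += 1

# Relative visit frequency estimates each page's rank.
ranks = {p: v / steps for p, v in visits.items()}
print(ranks)
```

For this graph the visit frequencies settle near 0.4, 0.2, 0.4: page 3 is reached from both other pages, while page 2 is reachable only from page 1 (and only half the time).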
Matrix Representation

                        Citing page (from)
                    P1   P2   P3   P4   P5   P6   Number
Cited page (to) P1   .    .    .    .    1    .      1
                P2   1    .    1    .    .    .      2
                P3   1    1    .    1    .    .      3
                P4   1    1    .    .    1    1      4
                P5   1    .    .    .    .    .      1
                P6   .    .    .    .    1    .      1
         Number      4    2    1    1    3    1
Basic Algorithm: Normalize by Number of Links from Page

                         Citing page
                    P1    P2    P3    P4    P5    P6
Cited page      P1   .     .     .     .   0.33    .
                P2  0.25   .   1.00    .     .     .
                P3  0.25  0.50   .   1.00    .     .
                P4  0.25  0.50   .     .   0.33  1.00
                P5  0.25   .     .     .     .     .
                P6   .     .     .     .   0.33    .
         Number      4     2     1     1     3     1

Normalized link matrix B: each column is divided by the number of links out of the citing page.
Basic Algorithm: Weighting of Pages
Initially all pages have weight 1:
w1 = (1, 1, 1, 1, 1, 1)T
Recalculate weights:
w2 = Bw1 = (0.33, 1.25, 1.75, 2.08, 0.25, 0.33)T
Basic Algorithm: Iterate wk = Bwk-1

 w1    w2    w3    w4   . . . converges to . . .    w
  1   0.33  0.08  0.03           ->               0.00
  1   1.25  1.83  2.80           ->               2.39
  1   1.75  2.79  2.06           ->               2.39
  1   2.08  1.12  1.05           ->               1.19
  1   0.25  0.08  0.02           ->               0.00
  1   0.33  0.08  0.03           ->               0.00
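The iteration above can be reproduced with a short power-iteration sketch. The link structure below is inferred from the normalized matrix and the weight sequences on these slides, so treat it as a reconstruction rather than a quoted source:

```python
# Power iteration for the basic 6-page PageRank example (no damping).
# Link structure reconstructed from the slides:
#   P1 -> P2,P3,P4,P5   P2 -> P3,P4      P3 -> P2
#   P4 -> P3            P5 -> P1,P4,P6   P6 -> P4
links = {1: [2, 3, 4, 5], 2: [3, 4], 3: [2], 4: [3], 5: [1, 4, 6], 6: [4]}
n = 6

# Normalized link matrix B: B[to][frm] = 1 / outdegree(frm)
B = [[0.0] * n for _ in range(n)]
for frm, outs in links.items():
    for to in outs:
        B[to - 1][frm - 1] = 1.0 / len(outs)

w = [1.0] * n                       # initial weights: all pages equal
for _ in range(100):                # iterate w_k = B w_{k-1}
    w = [sum(B[i][j] * w[j] for j in range(n)) for i in range(n)]

print([round(x, 2) for x in w])     # -> [0.0, 2.4, 2.4, 1.2, 0.0, 0.0]
```

All of the weight ends up in pages P2, P3, P4, matching the converged column on this slide; the next slide explains why.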
Graphical Analysis of Hyperlinks on the Web
[Diagram of the six-page example: there is no link out of the set {2, 3, 4}.]
Google PageRank with Damping
A user:
1. Starts at a random page on the web.
2a. With probability p, selects any random page and jumps to it.
2b. With probability 1-p, selects a random hyperlink from the current page and jumps to the corresponding page.
3. Repeats Steps 2a and 2b a very large number of times.
Pages are ranked according to the relative frequency with which they are visited.
The PageRank Iteration
The basic method iterates using the normalized link matrix B:
wk = Bwk-1
This w is the principal eigenvector of B.
Google iterates using a damping factor. The method iterates using a matrix B', where:
B' = dN + (1-d)B
N is the matrix with every element equal to 1/n. d is a constant found by experiment.
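The damped iteration is the same power iteration, applied to B' instead of B. A sketch using the same reconstructed 6-page example; the value d = 0.15 is an assumed choice, not one given on the slides:

```python
# Damped PageRank iteration: w_k = B' w_{k-1}, with B' = d*N + (1-d)*B,
# where N[i][j] = 1/n. Same reconstructed 6-page example as before.
links = {1: [2, 3, 4, 5], 2: [3, 4], 3: [2], 4: [3], 5: [1, 4, 6], 6: [4]}
n, d = 6, 0.15                      # d is an assumed damping constant

B = [[0.0] * n for _ in range(n)]
for frm, outs in links.items():
    for to in outs:
        B[to - 1][frm - 1] = 1.0 / len(outs)

# With probability d jump to a random page; otherwise follow a link.
Bp = [[d / n + (1 - d) * B[i][j] for j in range(n)] for i in range(n)]

w = [1.0] * n
for _ in range(100):
    w = [sum(Bp[i][j] * w[j] for j in range(n)) for i in range(n)]

print([round(x, 2) for x in w])
```

Unlike the undamped case, every page now keeps a positive weight (the random jump leaks some weight out of the sink set {2, 3, 4}), while the total weight stays at n because every column of B' still sums to 1.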
Google: PageRank
The Google PageRank algorithm is usually written with the following notation:
If page A has pages T1, . . . , Tn pointing to it:
- d: damping factor
- C(A): number of links out of A
PR(A) = (1 - d) + d (PR(T1)/C(T1) + . . . + PR(Tn)/C(Tn))
Iterate until the values of PR converge.
Information Retrieval Using PageRank: Simple Method
Consider all hits (i.e., all document vectors that share at least one term with the query vector) as equal. Display the hits ranked by PageRank.
The disadvantage of this method is that it gives no attention to how closely a document matches a query.
Reference Pattern Ranking Using Dynamic Document Sets
PageRank calculates document ranks for the entire (fixed) set of documents. The calculations are made periodically (e.g., monthly), and the document ranks are the same for all queries.
Concept of dynamic document sets: reference patterns among documents that are related to a specific query convey more information than patterns calculated across entire document collections. With dynamic document sets, reference patterns are calculated for a set of documents that are selected based on each individual query.
Reference Pattern Ranking Using Dynamic Document Sets
Teoma Dynamic Ranking Algorithm (used in Ask Jeeves):
1. Search using conventional term weighting. Rank the hits using similarity between query and documents.
2. Select the highest-ranking hits (e.g., the top 5,000 hits).
3. Carry out PageRank or a similar algorithm on this set of hits. This creates a set of document ranks that are specific to this query.
4. Display the results ranked in the order of the reference patterns calculated.
Combining Term Weighting with Reference Pattern Ranking
Combined Method:
1. Find all documents that share a term with the query vector.
2. The similarity, using conventional term weighting, between the query and document j is sj.
3. The rank of document j using PageRank or other reference pattern ranking is pj.
4. Calculate a combined rank cj = λsj + (1 - λ)pj, where λ is a constant.
5. Display the hits ranked by cj.
This method is used in several commercial systems, but the details have not been published.
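The combined rank is a simple linear blend. A minimal sketch, assuming both scores are normalized to [0, 1]; the document ids, score values, and λ = 0.7 are all hypothetical:

```python
# Combined ranking sketch: c_j = lam*s_j + (1 - lam)*p_j.
# All scores below are hypothetical and assumed normalized to [0, 1].
lam = 0.7                 # weighting constant (lambda), assumed value
docs = {                  # doc id: (similarity s_j, reference-pattern rank p_j)
    "d1": (0.9, 0.2),
    "d2": (0.5, 0.8),
    "d3": (0.7, 0.6),
}

combined = {j: lam * s + (1 - lam) * p for j, (s, p) in docs.items()}
hits = sorted(combined, key=combined.get, reverse=True)
print(hits)               # -> ['d1', 'd3', 'd2']
```

Note the effect of λ: with λ = 0.7 the term-weighting score dominates, so d1 (high similarity, low popularity) still outranks d2 (the reverse); a smaller λ would flip that ordering.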
Cornell Note
Jon Kleinberg of Cornell Computer Science has carried out extensive research in this area, both theoretical work and practical development of new algorithms. In particular, he has studied hubs (documents that refer to many others) and authorities (documents that are referenced by many others).
Google API

Selective Searching

Google News