Web characteristics, broadly defined
Thanks to C. Manning, P. Raghavan, H. Schutze
What we have covered
- What is IR
- Evaluation
- Tokenization and properties of text
- Web crawling
- Vector methods
- Measures of similarity
- Indexing
- Inverted files

Today: Web characteristics
- Web vs classic IR
- Web advertising
- Web as a graph – size
- SEO
- Web spam
- Dup detection
A Typical Web Search Engine (diagram of components: users, interface, query engine, index, indexer, crawler, the Web)
What Is the World Wide Web?
- The world wide web (web) is a network of information resources, most of which are published by humans.
  - World's largest publishing mechanism
- The web relies on three mechanisms to make these resources readily available to the widest possible audience:
  - A uniform naming scheme for locating resources on the web (e.g., URIs) – find it
  - Protocols for access to named resources over the web (e.g., HTTP) – how to get there
  - Hypertext for easy navigation among resources (e.g., HTML) – where to go from there
- The web is an opt-out, not an opt-in, system
  - Your information is available to all; you have to protect it.
  - DMCA (Digital Millennium Copyright Act) – applies only to those who sign on to it.
Internet vs. Web
- Internet: the more general term
  - Includes the physical aspect of the underlying networks and mechanisms such as email, FTP, HTTP, …
- Web: associated with information stored on the Internet
  - Also refers to a broader class of networks, e.g., a web of English literature
- Both the Internet and the web are networks
Networks vs Graphs
Examples?
(figure: map of the old internet network)
Internet Technologies: Web Standards
- Internet Engineering Task Force (IETF)
  - http://www.ietf.org/
  - Founded 1986
  - Request For Comments (RFC) at http://www.ietf.org/rfc.html
- World Wide Web Consortium (W3C)
  - http://www.w3.org
  - Founded 1994 by Tim Berners-Lee
  - Publishes technical reports and recommendations
Internet Technologies: Web Design Principles
- Interoperability: Web languages and protocols must be compatible with one another, independent of hardware and software.
- Evolution: The Web must be able to accommodate future technologies.
  - Encourages simplicity, modularity and extensibility.
- Decentralization: Facilitates scalability and robustness.
Languages of the WWW
- Markup languages
  - A markup language combines text and extra information about the text. The extra information, for example about the text's structure or presentation, is expressed using markup, which is intermingled with the primary text.
  - The best-known markup language in modern use is HTML (Hypertext Markup Language), one of the foundations of the World Wide Web.
  - Historically, markup was (and is) used in the publishing industry in the communication of printed work between authors, editors, and printers.
Without search engines the web wouldn't scale (there would be no web)
1. No incentive to create content unless it can be easily found – other finding methods haven't kept pace (taxonomies, bookmarks, etc.)
2. The web is both a technology artifact and a social environment
   - "The Web has become the 'new normal' in the American way of life; those who don't go online constitute an ever-shrinking minority." [Pew Foundation report, January 2005]
3. Search engines make aggregation of interest possible:
   - Create incentives for very specialized niche players
     - Economical – specialized stores, providers, etc.
     - Social – narrow interests, specialized communities, etc.
4. The acceptance of search interaction makes "unlimited selection" stores possible: Amazon, Netflix, etc.
5. Search turned out to be the best mechanism for advertising on the web, a $15+ B industry (2011)
   - Growing very fast, but the entire US advertising industry is $250 B – huge room to grow
   - Sponsored search marketing is about $10 B
Classical IR vs. Web IR
Basic assumptions of Classical Information Retrieval
- Corpus: fixed document collection
- Goal: retrieve documents with information content that is relevant to the user's information need
- Searcher: an information scientist or a search professional trained in making logical queries
Classic IR Goal
- Classic relevance
  - For each query Q and stored document D in a given corpus, assume there exists a relevance Score(Q, D)
    - Score is averaged over users U and contexts C
  - Optimize Score(Q, D) as opposed to Score(Q, D, U, C)
  - That is, usually:
    - Context ignored
    - Individuals ignored
    - Corpus predetermined
- Bad assumptions in the web context
Web Information Retrieval
Basic assumptions of Web Information Retrieval
- Corpus: constantly changing; created by amateurs and professionals
- Goal: retrieve summaries of relevant information quickly, with links to the original site
  - High precision! Recall not important
- Searcher: amateurs; no professional training and little or no concern about query quality
The coarse-level dynamics (diagram: content creators → content aggregators → content consumers, with flows labelled subscription, feeds, crawls, editorial, advertisement, and transaction)
Brief (non-technical) history
- Early keyword-based engines: Altavista, Excite, Infoseek, Inktomi, ca. 1995–1997
- Paid placement ranking: Goto.com (morphed into Overture.com → Yahoo!)
  - Your search ranking depended on how much you paid
  - Auction for keywords: casino was expensive!
Brief (non-technical) history
- 1998+: Link-based ranking pioneered by Google
  - Blew away all early engines save Inktomi
  - Great user experience in search of a business model
  - Meanwhile Goto/Overture's annual revenues were nearing $1 billion
- Result: Google added paid-placement "ads" to the side, independent of search results
  - Yahoo follows suit, acquiring Overture (for paid placement) and Inktomi (for search)
Query results over the years (screenshots)
- 2009: ads vs. algorithmic results
- 2010: ads vs. algorithmic results
- 2013: algorithmic results
- 2014: algorithmic results
- 2016: algorithmic results
- 2017: algorithmic results
Query String for Google 2013
http://www.google.com/search?q=lee+giles
- The "?" in the URL marks the start of the query component
- q=lee+giles means the query is lee AND giles
- This is the basic query, reduced from the full one; the query above is "lee giles"
- What do Bing and DuckDuckGo do?
- For more, see the Wikipedia article on query strings
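To make the decomposition concrete, here is a minimal Python sketch that splits the query component after the "?" into its parameters (the URL is the illustrative one above; real engine URLs carry many more parameters):

```python
from urllib.parse import urlparse, parse_qs

# Illustrative search URL from the slide; real engines append many more parameters.
url = "http://www.google.com/search?q=lee+giles"

parsed = urlparse(url)
print(parsed.path)     # '/search'
print(parsed.query)    # 'q=lee+giles'  -- everything after the '?'

# parse_qs splits the query component into parameters and decodes '+' as a space.
params = parse_qs(parsed.query)
print(params["q"])     # ['lee giles']  -> the terms 'lee' AND 'giles'
```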
Ads vs. search results
- Google has maintained that ads (based on vendors bidding for keywords) do not affect vendors' rankings in search results
(screenshot: search = miele)
- Google has maintained that ads (based on vendors bidding for keywords) do not affect vendors' rankings in search results (screenshot continued)
Search 2016 (screenshot)
Ads vs. search results
- Other vendors (Yahoo, Bing) have made similar statements from time to time
  - Any of them can change anytime
- Here we focus primarily on search results independent of paid placement ads
  - Although the latter is a fascinating technical subject in itself
Pay Per Click (PPC) Search Engine Ranking
- PPC ads appear as "sponsored listings"
- Companies bid on the price they are willing to pay "per click"
- Typically have very good tracking tools and statistics
- Ability to control ad text
- Can set budgets and spending limits
- Google AdWords and Overture are the two leaders
- Google: $67 B income from advertising
- Most expensive ad words
- Research field:
  - computational advertising
  - algorithmic advertising
Web search basics (diagram: user, web spider, search, indexer, the Web, indexes, ad indexes)
User Needs
Need [Brod02, RL04]:
- Informational – want to learn about something (~40% / 65%)
  - e.g., low hemoglobin
- Navigational – want to go to that page (~25% / 15%)
  - e.g., United Airlines
- Transactional – want to do something (web-mediated) (~35% / 20%)
  - Access a service: Seattle weather
  - Downloads: Mars surface images
  - Shop: Canon S410
- Gray areas
  - Car rental Brasil
  - Find a good hub
  - Exploratory search: "see what's there"
Web search users
- Make ill-defined queries
  - Short: AV 2001: 2.54 terms avg, 80% < 3 words; AV 1998: 2.35 terms avg, 88% < 3 words [Silv98]
  - Imprecise terms
  - Sub-optimal syntax (most queries without operators)
  - Low effort
- Wide variance in
  - needs
  - expectations
  - knowledge
  - bandwidth
- Specific behavior
  - 85% look over one result screen only (mostly above the fold)
  - 78% of queries are not modified (one query/session)
  - Follow links – "the scent of information" ...
Query Distribution
Power law: few popular broad queries, many rare specific queries
How far do people look for results? (Source: iprospect.com, WhitePaper_2006_SearchEngineUserBehavior.pdf)
Users' empirical evaluation of results
- Quality of pages varies widely
  - Relevance is not enough
  - Other desirable qualities (non-IR!)
    - Content: trustworthy, new info, non-duplicates, well maintained
    - Web readability: display correctly & fast
    - No annoyances: pop-ups, etc.
- Precision vs. recall
  - On the web, recall seldom matters
  - What matters: precision at 1? precision above the fold?
- Comprehensiveness – must be able to deal with obscure queries
  - Recall matters when the number of matches is very small
- User perceptions may be unscientific, but are significant over a large aggregate
Users' empirical evaluation of engines
- Relevance and validity of results
- UI – simple, no clutter, error tolerant
- Trust – results are objective
- Coverage of topics for polysemic queries (e.g., mole, bank)
- Pre/post-process tools provided
  - Mitigate user errors (auto spell check, syntax errors, …)
  - Explicit: search within results, more like this, refine ...
  - Anticipative: related searches
- Deal with idiosyncrasies
  - Web-specific vocabulary: impact on stemming, spell-check, etc.
  - Web addresses typed in the search box
  - …
The Web corpus
The Web is a large dynamic directed graph
- No design/coordination
- Distributed content creation, linking, democratization of publishing
- Content includes truth, lies, obsolete information, contradictions, …
- Unstructured (text, html, …), semi-structured (XML, annotated photos), structured (databases), …
- Scale much larger than previous text corpora … but corporate records are catching up
- Growth – slowed down from the initial "volume doubling every few months" but still expanding
- Content can be dynamically generated
Web Today
- The Web consists of hundreds of billions of pages (try the Google query 'the')
  - Potentially infinite if dynamic pages are considered
- Considered one of the biggest information revolutions in recent human history
- One of the largest graphs around
- Full of information
- Trends
Web page
The simple Web graph
- A graph G = (V, E) is defined by
  - a set V of vertices (nodes)
  - a set E of edges (links) = pairs of nodes
- The Web page graph (directed)
  - V is the set of static public pages
  - E is the set of static hyperlinks
- Many more graphs can be defined
  - the host graph
  - the co-citation graph
  - the temporal graph, etc.
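A minimal sketch, assuming a toy set of pages rather than a real crawl, of how the page graph G = (V, E) can be held as adjacency lists (one forward, one reverse):

```python
from collections import defaultdict

# Toy edge list: one (source, target) pair per hyperlink between static pages.
edges = [
    ("a.html", "b.html"),
    ("a.html", "c.html"),
    ("b.html", "c.html"),
    ("c.html", "a.html"),
]

out_links = defaultdict(set)   # adjacency list: page -> pages it links to
in_links = defaultdict(set)    # reverse list:   page -> pages linking to it
for src, dst in edges:
    out_links[src].add(dst)
    in_links[dst].add(src)

print(len(out_links["a.html"]))   # out-degree of a.html = 2
print(len(in_links["c.html"]))    # in-degree of c.html = 2
```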
Which pages do we care about if we want to measure the web graph?
- Avoid "dynamic" pages
  - catalogs
  - pages generated by queries
  - pages generated by cgi-scripts (the Nostradamus effect)
- Only interested in "static" web pages
The Web: dynamic content
- A page without a static html version
  - e.g., current status of flight AA 129, current availability of rooms at a hotel
- Usually assembled at the time of a request from a browser
  - Sometimes the URL has a '?' character in it; the '?' precedes the actual query
(diagram: browser → application server → back-end databases, query "AA 129")
The Static Public Web – example: http://clgiles.ist.psu.edu
- Static
  - not the result of a cgi-bin script
  - no "?" in the URL
  - doesn't change very often
  - etc.
- Public
  - no password required
  - no robots.txt exclusion
  - no "noindex" meta tag
  - etc.
- These rules can still be fooled
  - "Dynamic pages" appear static: browseable catalogs (hierarchy built from a DB)
  - Spider traps – infinite URL descent: www.x.com/home/home/…/home.html
  - Spammer games
Why do we care about the Web graph?
- Is it the largest human artifact ever created?
- Exploit the Web structure for
  - crawlers
  - search and link analysis ranking
  - spam detection
  - community discovery
  - classification/organization
  - business, politics, society applications
- Predict the Web's future
  - mathematical models
  - algorithm analysis
  - sociological understanding
- New business opportunities
- New politics
The first question: what is the size of the Web?
- Surprisingly hard to answer
- Naïve solution: keep crawling until the whole graph has been explored
- Extremely simple but wrong solution: crawling is complicated because the web is complicated
  - spamming
  - duplicates
  - mirrors
- A simple example of a complication: soft 404s (a heuristic sketch follows this slide)
  - When a page does not exist, the server is supposed to return error code 404
  - Many servers do not return an error code, but keep the visitor on the site or simply send them to the home page
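A hedged sketch of one common soft-404 heuristic, assuming the third-party `requests` library and network access; the function name and the size-comparison threshold are illustrative choices, not a standard:

```python
import uuid
import requests
from urllib.parse import urlparse

def looks_like_soft_404(url, timeout=10):
    """Heuristic: probe the host with a path that should not exist."""
    resp = requests.get(url, timeout=timeout)
    if resp.status_code == 404:
        return True                      # honest 404: nothing to index

    parts = urlparse(url)
    probe_url = f"{parts.scheme}://{parts.netloc}/{uuid.uuid4().hex}.html"
    probe = requests.get(probe_url, timeout=timeout)
    if probe.status_code == 404:
        return False                     # server reports missing pages correctly

    # Server answers 200 even for garbage paths: compare response sizes as a
    # crude similarity signal (real systems compare page content).
    return abs(len(resp.content) - len(probe.content)) < 0.1 * max(len(probe.content), 1)
```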
A sampling approach
- Sample pages uniformly at random
- Compute the percentage of the pages that belong to a search engine repository (search engine coverage)
- Estimate the size of the Web
Problems:
- how do you sample a page uniformly at random?
- how do you test whether a page is indexed by a search engine?
Sampling pages [LG99, HHMN00]
- Create IP addresses uniformly at random
  - problems with virtual hosting, spamming
- Starting from a subset of pages, perform a random walk on the graph; after "enough" steps you should end up at a random page (a sketch of the walk follows this slide)
  - near-uniform sampling
- Testing search engine containment [BB98]
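A minimal sketch of the random-walk idea on an in-memory adjacency list; a real walk runs over the live web and needs corrections for degree bias, so this only illustrates the mechanics:

```python
import random

def random_walk_sample(out_links, start, steps=10_000):
    """Walk the link graph; after 'enough' steps the current page is roughly random."""
    page = start
    for _ in range(steps):
        neighbors = out_links.get(page)
        if not neighbors:            # dead end: restart from the seed page
            page = start
            continue
        page = random.choice(sorted(neighbors))
    return page

out_links = {"a": {"b", "c"}, "b": {"c"}, "c": {"a"}}
print(random_walk_sample(out_links, start="a"))
```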
Measuring the Web
- It is clear that the Web that we see is what the crawler discovers
- We need large crawls in order to make meaningful measurements
- The measurements are still biased by
  - the crawling policy
  - size limitations of the crawl
  - perturbations of the "natural" process of birth and death of nodes and links
Measures on the Web graph [BKMRRSTW00]
- Degree distributions
- The global picture
  - what does the Web look like from afar?
  - reachability
  - connected components
- Community structure – the finer picture
In-degree distribution (plot): power-law distribution with exponent 2.1
Out-degree distribution (plot): power-law distribution with exponent 2.7
The good news
- The fact that the exponent is greater than 2 implies that the expected value of the degree is a constant (not growing with n)
- Therefore, the expected number of edges is linear in the number of nodes n
- This is good news, since we cannot handle anything much more than linear
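A short worked statement of why an exponent above 2 keeps the mean degree bounded (standard power-law reasoning, not taken verbatim from the slides):

```latex
% If the degree distribution is P(k) \propto k^{-\gamma} with \gamma > 2, then
\[
  \mathbb{E}[k] \;=\; \sum_{k \ge 1} k\,P(k)
  \;\propto\; \sum_{k \ge 1} k^{\,1-\gamma} \;<\; \infty
  \qquad \text{since } 1-\gamma < -1 .
\]
% The mean degree is therefore a constant independent of the number of nodes n,
% so the expected number of edges is O(n).
```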
Connected components – definitions
- Weakly connected component (WCC): a set of nodes such that from any node you can reach any other node via an undirected path
- Strongly connected component (SCC): a set of nodes such that from any node you can reach any other node via a directed path
(figure: example WCC vs. SCC)
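A minimal sketch contrasting the two definitions on a toy directed graph, assuming the networkx library is available (an assumption, not part of the slides):

```python
import networkx as nx

# a -> b -> c -> a forms a cycle; c -> d has no path back.
G = nx.DiGraph([("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")])

wcc = list(nx.weakly_connected_components(G))    # edge direction ignored
scc = list(nx.strongly_connected_components(G))  # directed paths both ways required

print(wcc)   # [{'a', 'b', 'c', 'd'}]            one WCC containing everything
print(scc)   # e.g. [{'d'}, {'a', 'b', 'c'}]     'd' is not on any cycle
```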
The bow-tie structure of the Web
SCC and WCC distribution
- The SCC and WCC sizes follow a power-law distribution
- The second-largest SCC is significantly smaller
The inner structure of the bow-tie [LMST05]
- What do the individual components of the bow-tie look like?
  - They obey the same power laws in the degree distributions
The inner structure of the bow-tie
- Is it the case that the bow-tie repeats itself in each of the components (self-similarity)?
  - It would look nice, but this does not seem to be the case
  - no large WCC, many small ones
The daisy structure?
- Large connected core, and highly fragmented IN and OUT components
- Unfortunately, we do not have a large crawl to verify this hypothesis
A different kind of self-similarity [DKCRST01]
- Consider Thematically Unified Clusters (TUCs): pages grouped by
  - keyword searches
  - web location (intranets)
  - geography
  - hostgraph
  - random collections
- All such TUCs exhibit a bow-tie structure!
Self-similarity
- The Web consists of a collection of self-similar structures that form a backbone of the SCC
Dynamic content
- Most dynamic content is ignored by web spiders
  - Many reasons, including malicious spider traps
- Some dynamic content (e.g., news stories from subscriptions) is sometimes delivered as dynamic content
  - Application-specific spidering
- Spiders commonly view web pages just as Lynx (a text browser) would
- Note: even "static" pages are typically assembled on the fly (e.g., headers are common)
The web: size
- What is being measured?
  - number of hosts
  - number of (static) html pages
  - volume of data
- Number of hosts – Netcraft survey
  - http://news.netcraft.com/archives/web_server_survey.html
  - Monthly report on how many web hosts & servers are out there
- Number of pages – numerous estimates
Netcraft Web Server Survey (series of charts): http://news.netcraft.com/archives/web_server_survey.html
The web: evolution
- All of these numbers keep changing
- Relatively few scientific studies of the evolution of the web [Fetterly et al., 2003]
  - http://research.microsoft.com/research/sv/svpubs/p97-fetterly.pdf
- Sometimes possible to extrapolate from small samples (fractal models) [Dill et al., 2001]
  - http://www.vldb.org/conf/2001/P069.pdf
Rate of change
- [Cho00] 720 K pages from 270 popular sites sampled daily from Feb 17 – Jun 14, 1999
  - Any changes: 40% weekly, 23% daily
- [Fett02] Massive study: 151 M pages checked over a few months
  - Significant changes – 7% weekly
  - Small changes – 25% weekly
- [Ntul04] 154 large sites re-crawled from scratch weekly
  - 8% new pages/week
  - 8% die
  - 5% new content
  - 25% new links/week
Static pages: rate of change
- Fetterly et al. study (2002): several views of the data, 150 million pages over 11 weekly crawls
  - Bucketed into 85 groups by extent of change
Other characteristics
- Significant duplication
  - Syntactic – 30%–40% (near) duplicates [Brod97, Shiv99b, etc.]
  - Semantic – ???
- High linkage
  - More than 8 links/page on average
- Complex graph topology
  - Not a small world; bow-tie structure [Brod00]
- Spam
  - Billions of pages
Web evolution
Spam vs Search Engine Optimization (SEO)
SERPs
- Search engine results pages (SERPs) are very influential
- How can they be manipulated?
- How can you come up first?
(screenshot: Google SERP for the query "lee giles")
The trouble with paid placement…
- It costs money. What's the alternative?
- Search Engine Optimization (SEO):
  - "Tuning" your web page to rank highly in the search results for select keywords
  - Alternative to paying for placement
  - Thus, intrinsically a marketing function
- Performed by companies, webmasters and consultants ("search engine optimizers") for their clients
- Some perfectly legitimate, some very shady
- Some frowned upon by search engines
Simplest forms
- First-generation engines relied heavily on tf-idf
  - The top-ranked pages for the query maui resort were the ones containing the most maui's and resort's
- SEOs responded with dense repetitions of chosen terms
  - e.g., maui resort
  - Often, the repetitions would be in the same color as the background of the web page
    - Repeated terms got indexed by crawlers
    - But not visible to humans on browsers
- Pure word density cannot be trusted as an IR signal
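A minimal sketch of why raw term density is easy to game: under plain term frequency, a stuffed page trivially outscores a genuine one (the two example strings are made up for illustration):

```python
from collections import Counter

def term_frequency(text, term):
    words = text.lower().split()
    return Counter(words)[term] / max(len(words), 1)

genuine = "maui resort guide with photos prices and reviews for maui hotels"
stuffed = "maui resort " * 50      # dense repetition, possibly hidden via page styling

for term in ("maui", "resort"):
    print(term, term_frequency(genuine, term), term_frequency(stuffed, term))
```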
Search Engine Spam: Objective
- Success of commercial Web sites depends on the number of visitors that find the site while searching for a particular product.
- 85% of searchers look at only the first page of results
- A new business sector – search engine optimization

M. Henzinger, R. Motwani, and C. Silverstein. Challenges in web search engines. International Joint Conference on Artificial Intelligence, 2003.
Drost, I. and Scheffer, T. Thwarting the Nigritude Ultramarine: Learning to Identify Link Spam. 16th European Conference on Machine Learning, Porto, 2005.
What's SEO?
SEO = Search Engine Optimization
- Refers to the process of "optimizing" both on-page and off-page ranking factors in order to achieve high search engine rankings for targeted search terms.
- Refers to the "industry" that revolves around obtaining high rankings in the search engines for desirable keyword search terms as a means of increasing relevant traffic to a given website.
- Refers to an individual or company that optimizes websites for its clientele.
- Has a number of related meanings, and usually refers to an individual/firm that focuses on optimizing for "organic" search engine rankings.
What's SEO based on?
Features we know are used in web page ranking:
- Page content
- Page metadata
- Anchor text
- Links
- User behavior
- Others?
Search Engine Spam: Spamdexing
- Spamdexing (also known as search spam, search engine spam, web spam or search engine poisoning) is the deliberate manipulation of search engine indexes
- aka black hat SEO
- Porn led the way – '96
Search Engine Spamdexing Methods
- Content based
- Link spam
- Cloaking
- Mirror sites
- URL redirection
Content spamming
- Keyword stuffing
  - Calculated placement of keywords within a page to raise the keyword count, variety, and density of the page.
  - Truncation so that massive dictionary lists cannot be indexed on a single webpage.
- Hidden or invisible text
- Meta-tag stuffing
  - Out of date
- Doorway pages
  - "Gateway" or doorway pages created with very little content, stuffed instead with very similar keywords and phrases.
  - BMW caught
- Scraper sites
  - Amalgamation of content taken from other sites; still works
- Article spinning
  - Rewriting existing articles, as opposed to merely scraping content from other sites; undertaken by hired writers or automated using a thesaurus database or a neural network.
Variants of keyword stuffing
- Misleading meta-tags, excessive repetition
- Hidden text with colors, style sheet tricks, etc.
Meta-tags = "… London hotels, hotel, holiday inn, hilton, discount, booking, reservation, sex, mp3, britney spears, viagra, …"
Link Spamming: Techniques
- Link farms: densely connected arrays of pages. Farm pages propagate their PageRank to the target, e.g., by a funnel-shaped architecture that points directly or indirectly towards the target page. To camouflage link farms, tools fill in inconspicuous content, e.g., by copying news bulletins.
- Link exchange services: listings of (often unrelated) hyperlinks. To be listed, businesses have to provide a back link that enhances the PageRank of the exchange service.
- Guestbooks, discussion boards, and weblogs: automatic tools post large numbers of messages to many sites; each message contains a hyperlink to the target website.
Cloaking
- Serve fake content to the search engine spider
- DNS cloaking: switch IP address; impersonate
(diagram: cloaking – "Is this a search engine spider?" Yes → spam page, No → real document)
Google Bombing != Google Hacking
- http://en.wikipedia.org/wiki/Google_bomb
- A Google bomb or Google wash is an attempt to influence the ranking of a given site in results returned by the Google search engine.
- Due to the way that Google's PageRank algorithm works, a website will be ranked higher if the sites that link to that page all use consistent anchor text.
- A Google bomb is when links are placed on several sites across the internet by a number of people so that a particular keyword search combination leads to a specific site, which is then deluged/swamped with hits (it may even crash the site).
Google Bomb – old
Query: french military victories
Others?
Link Spamming: Defenses
- Manual identification of spam pages and farms to create a blacklist.
- Automatic classification of pages using machine learning techniques.
- BadRank algorithm: the "bad rank" is initialized to a high value for blacklisted pages and propagates to all referring pages (with a damping factor), thus penalizing pages that refer to spam.
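A hedged toy sketch of the BadRank idea described above: badness is seeded on a blacklist and flows backwards along hyperlinks with a damping factor, so pages that link to spam inherit some of its bad rank. This is an illustrative iteration, not any engine's actual implementation.

```python
def badrank(out_links, blacklist, d=0.85, iterations=20):
    pages = set(out_links) | {t for ts in out_links.values() for t in ts}
    seed = {p: (1.0 if p in blacklist else 0.0) for p in pages}
    bad = dict(seed)
    for _ in range(iterations):
        new = {}
        for p in pages:
            targets = out_links.get(p, set())
            # p's badness = its own seed value + damped average badness of pages it points to
            spread = sum(bad[t] for t in targets) / len(targets) if targets else 0.0
            new[p] = (1 - d) * seed[p] + d * spread
        bad = new
    return bad

out_links = {"clean": {"good"}, "good": {"spam"}, "spam": {"spam2"}, "spam2": set()}
print(badrank(out_links, blacklist={"spam", "spam2"}))   # 'good' is penalized by association
```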
Intelligent SEO
- Figure out how search engines do their ranking
- Inductive science:
  - make intelligent changes
  - figure out what happens
  - repeat
So what is the search engine ranking algorithm?
- Top secret! Only select employees of the actual search engines know for certain
- Reverse engineering, research and experiments give some idea of the major factors and approximate weight assignments
- Constant changing, tweaking and updating is done to the algorithm
- The websites and documents being searched are also constantly changing
- Varies by search engine – some give more weight to on-page factors, some to link popularity
SEO Expert
Search engine optimization vs. spam
- Motives
  - Commercial, political, religious, lobbies
  - Promotion funded by advertising budget
- Operators
  - Contractors (search engine optimizers) for lobbies, companies
  - Web masters
  - Hosting services
- Forums
  - e.g., WebmasterWorld (www.webmasterworld.com)
    - Search-engine-specific tricks
    - Discussions about academic papers
The spam industry
SEO contests
- Now part of some search engine classes!
The war against spam
- Quality signals – prefer authoritative pages based on:
  - votes from authors (linkage signals)
  - votes from users (usage signals)
- Policing of URL submissions
  - anti-robot test
- Limits on meta-keywords
- Robust link analysis
  - ignore statistically implausible linkage (or text)
  - use link analysis to detect spammers (guilt by association)
- Spam recognition by machine learning (a classifier sketch follows this slide)
  - training set based on known spam
- Family-friendly filters
  - linguistic analysis, general classification techniques, etc.
  - for images: flesh-tone detectors, source text analysis, etc.
- Editorial intervention
  - blacklists
  - top queries audited
  - complaints addressed
  - suspect pattern detection
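A minimal sketch of the machine-learning bullet above, assuming scikit-learn is available; the four training snippets and their labels are invented purely for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 1 = known spam, 0 = legitimate page text.
train_pages = [
    "cheap viagra casino mp3 britney spears discount booking",
    "buy cheap replica watches cheap cheap cheap",
    "department of computer science research publications",
    "weather forecast for seattle this weekend",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_pages, labels)

print(model.predict(["discount casino mp3 booking cheap"]))   # expected: [1], i.e. spam
```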
The war against spam – what Google does
- Google changes their algorithms regularly
  - Panda and Penguin updates
  - Uses machine learning and AI methods
  - SEO doesn't seem to use this
- Suing Google doesn't work
  - Google wins the SearchKing lawsuit
- Defines what counts as a high-quality site
  - 23 features
- Google penalty
  - Refers to a negative impact on a website's search ranking based on updates to Google's search algorithms
  - A penalty can be an unfortunate by-product or an intentional penalization of various black-hat SEO techniques
Future of spamdexing
- Who's the smartest tech-wise: search engines vs. SEOs?
- Web search engines have policies on SEO practices they tolerate/block
  - http://www.bing.com/toolbox/webmaster
  - http://www.google.com/intl/en/webmasters/
- Adversarial IR: the unending (technical) battle between SEOs and web search engines
  - Constant evolution
  - Can it ever be solved?
  - Research: http://airweb.cse.lehigh.edu/
- Many SEO companies will suffer
Reported "organic" (white hat) optimization techniques
- Register with search engines
- Research keywords related to your business
- Identify competitors, utilize benchmarking techniques and identify the level of competition
- Utilize descriptive title tags for each page
- Ensure that your text is HTML text and not image text
- Use text links when possible
- Use appropriate keywords in your content and internal hyperlinks (don't overdo it!)
- Obtain inbound links from related websites
- Use sitemaps
- Monitor your search engine rankings and, more importantly, your website traffic statistics and sales/leads produced
- Educate yourself about search engine marketing or consult a search engine optimization firm or SEO expert
- Use the Google guide to high-quality websites
Duplicate detection
- Get rid of duplicates; save space and time
- Product or idea evolution (near duplicates)
- Check for stolen information, plagiarism, etc.
Duplicate documents
- The web is full of duplicated content
  - Estimates at 40%
- Strict duplicate detection = exact match
  - Not as common
- But many, many cases of near duplicates
  - e.g., the last-modified date is the only difference between two copies of a page
Duplicate/Near-Duplicate Detection
- Duplication: exact match can be detected with fingerprints
  - hash value
- Near-duplication: approximate match
  - Compute syntactic similarity with an edit-distance measure
  - Use a similarity threshold to detect near-duplicates
    - e.g., similarity > 80% => documents are "near duplicates"
    - Not transitive, though sometimes used transitively
Computing Similarity
- Features:
  - Segments of a document (natural or artificial breakpoints)
  - Shingles (word N-grams)
    - a rose is a rose is a rose → a_rose_is_a_rose, rose_is_a_rose_is, is_a_rose_is_a, a_rose_is_a_rose
- Similarity measure between two docs (= sets of shingles):
  - Set intersection
  - Specifically, Jaccard: size_of_intersection / size_of_union
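A minimal sketch of word shingles and the Jaccard measure on the example phrase (k = 5 matches the 5-word shingles shown above):

```python
def shingles(text, k=5):
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 1.0

doc_a = "a rose is a rose is a rose"
doc_b = "a rose is a rose is a flower"
print(jaccard(shingles(doc_a), shingles(doc_b)))   # 0.75: 3 shared shingles, 4 in the union
```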
Shingles + Set Intersection
- Computing the exact set intersection of shingles between all pairs of documents is expensive/intractable
  - Approximate using a cleverly chosen subset of shingles from each document (a sketch)
  - Estimate (size_of_intersection / size_of_union) based on the short sketches
(diagram: Doc A → shingle set A → sketch A; Doc B → shingle set B → sketch B; Jaccard)
Sketch of a document
- Create a "sketch vector" (of size ~200) for each document
  - Documents that share ≥ t (say 80%) of corresponding vector elements are near duplicates
- For doc D, sketchD[i] is computed as follows:
  - Let f map all shingles in the universe to 0..2^m (e.g., f = fingerprinting)
  - Let p_i be a random permutation on 0..2^m
  - Pick MIN { p_i(f(s)) } over all shingles s in D
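A hedged sketch of the sketch-vector construction: each "random permutation" is simulated here with a salted hash, and only the minimum value per permutation is kept. The fraction of agreeing positions estimates the Jaccard coefficient; the shingle sets below are the ones from the rose example.

```python
import hashlib

def minhash_sketch(shingle_set, num_hashes=200):
    # One salted hash per simulated permutation; keep the minimum value over all shingles.
    sketch = []
    for i in range(num_hashes):
        salt = str(i).encode()
        sketch.append(min(
            int(hashlib.md5(salt + s.encode()).hexdigest(), 16)
            for s in shingle_set
        ))
    return sketch

def estimated_jaccard(sketch_a, sketch_b):
    # Fraction of sketch positions that agree approximates the true Jaccard coefficient.
    return sum(x == y for x, y in zip(sketch_a, sketch_b)) / len(sketch_a)

a = {"a rose is a rose", "rose is a rose is", "is a rose is a"}
b = {"a rose is a rose", "rose is a rose is", "is a rose is a", "a rose is a flower"}
print(estimated_jaccard(minhash_sketch(a), minhash_sketch(b)))   # close to the exact 0.75
```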
Comparing Signatures
- Signature matrix S
  - rows = hash functions
  - columns = documents
  - entries = signatures
- Can compute the pair-wise similarity of any pair of signature columns
All signature pairs
- We now have an extremely efficient method for estimating the Jaccard coefficient for a single pair of documents
- But we still have to estimate N^2 coefficients, where N is the number of web pages
  - Still slow
- One solution: locality sensitive hashing (LSH)
- Another solution: sorting (Henzinger 2006)
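A hedged sketch of the LSH banding trick over minhash sketches: documents are bucketed by bands of their sketch, and only documents sharing a bucket become candidate pairs, avoiding the full N^2 comparison (the band and row sizes are illustrative choices):

```python
from collections import defaultdict

def lsh_candidates(sketches, bands=20, rows=10):
    """sketches: dict doc_id -> minhash sketch of length bands * rows."""
    buckets = defaultdict(set)
    for doc_id, sketch in sketches.items():
        for b in range(bands):
            band = tuple(sketch[b * rows:(b + 1) * rows])
            buckets[(b, hash(band))].add(doc_id)
    pairs = set()
    for docs in buckets.values():
        docs = sorted(docs)
        for i in range(len(docs)):
            for j in range(i + 1, len(docs)):
                pairs.add((docs[i], docs[j]))
    return pairs

sketches = {
    "doc1": [1] * 200,
    "doc2": [1] * 200,           # identical sketch -> shares every bucket with doc1
    "doc3": list(range(200)),    # different sketch -> no shared bucket
}
print(lsh_candidates(sketches))  # {('doc1', 'doc2')}
```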
What we covered
- Web vs. classic IR
- What users do
- Web advertising
- Web as a graph
  - applications
  - structure and size
- SEO / web spam
  - white vs. black hat
- Dup detection