

• Number of slides: 46

Social Spam
Kristina Lerman, University of Southern California
CS 599: Social Media Analysis, University of Southern California

Manipulation of social media
• Spam: use of electronic messaging systems to send unsolicited bulk messages indiscriminately, for financial gain
  – Malware: the page hosts malicious software or attempts to exploit a user's browser
  – Phishing: pages that attempt to solicit a user's account credentials
  – Scam: websites advertising pharmaceuticals, software, adult content, and a multitude of other solicitations
• Deception

Motivations for spam
• Abusers drive traffic to a web site
  – Malicious sites: phishing, malware, selling products
  – Compromised accounts are then sold to other spammers
• "Click fraud"
  – Gain financially from showing ads to visitors

What is the cost of spam?
• Are users harmed by click fraud?
  – The advertiser gains, because real users click on ads
  – The intermediary gains fees from the advertiser
  – The spammer gains its cut from the clicked ads
  – The user gains, since she learns about products from ads
• So is no harm done?

What is the cost of spam?
• Costs to consumers
  – Information pollution: good content is hard to find
  – Search engines and folksonomies direct traffic in the wrong directions
  – Users end up with less relevant resources
• Costs to content producers
  – Less revenue for producers of relevant content
• Costs to search engines
  – Must develop algorithms to combat spam
Everybody pays the cost of information pollution

Combatting spam
• Social media spam is successful
  – 8% of URLs posted on Twitter are spam [2010]
  – Much higher click-through rates than email
• Strategies are designed to make spam more costly to spammers
• Search engine spam
  – Algorithms to combat rank manipulation, e.g., link farms
  – Blacklists of suspected malware and phishing (e.g., Google's SafeBrowsing API)
• Email spam
  – Filters on servers and clients
  – Blacklists: IP, domain and URL
• Social spam?

Social Spam Detection
Benjamin Markines, Ciro Cattuto, Filippo Menczer
Presented by Yue Cai

Introduction
• Web 2.0: social annotation; user-driven simplicity and an open-ended nature
• Folksonomy: a set of triples (u, r, t), where user u annotates resource r with tag t
• Problem: social spam; malicious users exploit collaborative tagging

Focus of paper
• Six features of social spam in collaborative tagging systems, limited to a social bookmarking system (delicious.com)
• Show that each feature has predictive power
• Evaluate various supervised learning algorithms using these features

Background
• Why? Financial gain
• How? Create content (generated by NLP or plagiarized), place ads, and use misleading tagging on social sites to attract traffic ("Gossip Search Engine")
• Outcome? Pollution of the web environment

Levels of spam
• Content of tagged resources: subjective
• Posts: associate resources with tags
  – Spam creates artificial links between resources and unrelated tags
  – For questionable content, how a user annotates it reveals intent
• User account: flag users as spammers
  – BibSonomy's broad brush: exceedingly strict

TagSpam
• Spammers may use tags and tag combinations that are statistically unlikely in legitimate posts
• Pr(t): probability that a given tag t is associated with spam, estimated from the users who use tag t
• For a post: aggregate Pr(t) over its tags
• Time complexity: constant time for any post
• Cold-start problem: needs a body of labeled annotations to bootstrap the tag probabilities
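A minimal sketch of the TagSpam idea, assuming Pr(t) is estimated as the fraction of users who use tag t that are labeled spammers, and that a post's score is the mean over its tags; the function names and the default for unseen tags are assumptions, not the paper's exact formulation:

```python
from collections import defaultdict

def tag_spam_probabilities(posts, spam_users):
    """Estimate Pr(t): fraction of the users who use tag t that are labeled spammers.
    posts: iterable of (user, resource, tag) triples; spam_users: set of labeled spammers."""
    users_with_tag = defaultdict(set)
    for user, _resource, tag in posts:
        users_with_tag[tag].add(user)
    return {t: len(us & spam_users) / len(us) for t, us in users_with_tag.items()}

def tag_spam_score(tags, pr, default=0.5):
    """Score a post as the mean spam probability of its tags (constant work per post)."""
    if not tags:
        return default
    return sum(pr.get(t, default) for t in tags) / len(tags)
```

The cold-start problem is visible here: `tag_spam_probabilities` needs labeled spam users before any post can be scored.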

TagBlur
• Spam posts associate spam resources with popular tags that are often semantically unrelated to each other
• Semantic similarity of tags: based on prior work
• For a post: aggregate pairwise dissimilarity, where Z is the number of tag pairs in T(u, r) and ε is an attuning constant
• Time complexity: quadratic in the number of tags per post, treated as constant time
• Needs precomputed similarity for any two tags
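A sketch of a TagBlur-style score under the assumption that a precomputed tag-tag similarity table in [0, 1] is available; the 1/(σ+ε) shape with a baseline subtracted so that identical tags score 0 follows the slide's Z and ε notation, but the exact normalization is an assumption:

```python
from itertools import combinations

def tag_blur(tags, sim, eps=0.1):
    """Average semantic 'blur' over the tag pairs of a post: high when tags are unrelated.
    sim: precomputed tag-tag similarity dict (assumption); eps: attuning constant."""
    pairs = list(combinations(sorted(set(tags)), 2))
    if not pairs:
        return 0.0
    z = len(pairs)  # Z: number of tag pairs in T(u, r)
    def dissim(t1, t2):
        s = sim.get((t1, t2), sim.get((t2, t1), 0.0))
        # 1/(s+eps) is large for unrelated tags; subtract the s=1 baseline
        return 1.0 / (s + eps) - 1.0 / (1.0 + eps)
    return sum(dissim(t1, t2) for t1, t2 in pairs) / z
```

The quadratic cost in the number of tags is visible in the pair enumeration; since posts rarely carry more than a handful of tags, the slides treat it as constant.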

DomFp
• Spam web pages tend to have similar document structure
• Estimate the likelihood of r being spam by structural similarity with known spam pages
• Fingerprint: string containing all HTML 4.0 elements, with order preserved
  – K fingerprints of spam pages, each with its frequency Pr(k)
  – Shingles method for comparing fingerprints
• Time complexity: grows linearly with the size of the labeled spam collection
• Needs to crawl each resource and precompute spam fingerprint probabilities
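A sketch of the fingerprint-and-shingles idea: extract the ordered sequence of HTML element names as the page fingerprint, then compare two fingerprints by the Jaccard similarity of their k-shingles. The shingle length k and the Jaccard comparison are standard choices for the shingles method, but the paper's exact parameters are not given on the slide:

```python
from html.parser import HTMLParser

class TagSequence(HTMLParser):
    """Collect the ordered sequence of HTML element names (the page 'fingerprint')."""
    def __init__(self):
        super().__init__()
        self.tags = []
    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

def fingerprint(html):
    p = TagSequence()
    p.feed(html)
    return p.tags

def shingle_similarity(fp1, fp2, k=4):
    """Jaccard similarity of the k-shingles of two tag sequences (shingles method)."""
    def shingles(fp):
        return {tuple(fp[i:i + k]) for i in range(max(1, len(fp) - k + 1))}
    s1, s2 = shingles(fp1), shingles(fp2)
    return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 0.0
```

Scoring a new resource would then compare its fingerprint against the K stored spam fingerprints, which is where the linear cost in the labeled collection comes from.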

Plagiarism
• Spammers often copy original content from all over the Web
• Estimate the likelihood that the content of r is not genuine
• Submit a random sequence of 10 words from the page to the Yahoo API and count the results
• Most expensive feature: requires a page download, subject to query limits
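The probe itself is easy to sketch: pick a random run of 10 consecutive words to submit as an exact-phrase query (many hits suggest copied content). The actual search-API call is omitted here; the function name and the injectable RNG are assumptions for testability:

```python
import random

def probe_phrase(text, n=10, rng=None):
    """Pick a random run of n consecutive words from the page text.
    The caller would submit this, quoted, to a web search API and
    count the exact-phrase hits (API call omitted in this sketch)."""
    words = text.split()
    if len(words) <= n:
        return " ".join(words)
    rng = rng or random.Random()
    start = rng.randrange(len(words) - n + 1)
    return " ".join(words[start:start + n])
```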

NumAds
• Spammers create pages for serving ads
• g(r): number of times googlesyndication.com appears in page r
• Needs a complete download of the web page

ValidLinks
• Many spam resources may be taken offline when detected
• A high portion of the links posted by a spam user become invalid after some time
• Expensive: sends an HTTP HEAD request for each resource
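Both features reduce to small sketches. NumAds counts occurrences of the ad-serving domain in the page source; ValidLinks checks what fraction of a user's URLs still resolve, with the HTTP HEAD check injected as a callable so the sketch needs no network (the function names are assumptions):

```python
def num_ads(page_html):
    """g(r): count of 'googlesyndication.com' occurrences in the page source."""
    return page_html.lower().count("googlesyndication.com")

def valid_links_fraction(urls, is_alive):
    """Fraction of a user's bookmarked URLs that still resolve.
    is_alive: callable that would issue an HTTP HEAD request per URL
    (injected here so the sketch stays testable offline)."""
    if not urls:
        return 1.0
    return sum(1 for u in urls if is_alive(u)) / len(urls)
```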

Evaluation
• Public dataset by BibSonomy.org: annotations of 27,000 users, 25,000 of whom are spammers
• Training dataset: 500 users, half spammers, half legitimate users
• Another dataset of the same size used to precompute features like TagSpam, TagBlur and DomFp
• Aggregation of features at the user level:
  – TagSpam, TagBlur: post level
  – DomFp, Plagiarism, NumAds: resource level
  – A simple average works most effectively across all features

Each feature has predictive power
• Each feature: contingency matrix n(l, f)
• TagSpam works the best

Classification
• Effect of feature selection (SVM):
  – A modest improvement in accuracy and a decrease in false positive rate by using both TagSpam and TagBlur
  – Performance is hindered by the addition of the ValidLinks feature (not linearly separable)
• All classifiers perform very well, with accuracy over 96% and false positive rate below 5%

Conclusion
• The features are strong
  – Used singly: 96% accuracy, 5% false positives
  – Combined: 98% accuracy, 2% false positives
• The TagBlur feature looks promising: its reliance on tag-tag similarity can be kept up to date, while the others rely on resource content or a search engine and so are less reliable
• Bootstrapping is still an open issue: features like TagSpam and DomFp need spam labels
• Open question: whether unsupervised features like ValidLinks and Plagiarism are still needed

Questions?

@spam: The Underground on 140 Characters or Less
Chris Grier, Kurt Thomas, Vern Paxson, Michael Zhang
Presented by Renjie Zhao

Focus of the Paper
• Categorization and measurement of Twitter spam
  – Spammers' strategies, accounts and tools
  – How good are they? (Much better than junk email)
• Identification of spam campaigns
  – URL clustering
  – Extraction of distinct spam behaviors and targets
• Performance of URL blacklists against Twitter spam
  – Temporal effectiveness (lead/lag)
  – Spammers' counter-measures

Preparation
• Data collection
  – Tapping into Twitter's Streaming API: 7 million tweets per day
  – Over the course of one month (January 2010 – February 2010)
  – Total: 200 million tweets gathered
• Spam identification
  – Focus on tweets with URLs (25 million URLs)
  – Check URLs against 3 blacklists: Google Safebrowsing API, URIBL, Joewein
  – Result: 2 million URLs are flagged as spam
  – Checked by manual inspection

Spam Breakdown
• Call outs: "Win an iTouch AND a $150 Apple gift card @victim! http://spam.com"
• Retweets: "RT @scammer: check out the iPads there having a giveaway http://spam.com"
• Tweet hijacking: "http://spam.com RT @barackobama A great battle is ahead of us"
• Trend setting: "Buy more followers! http://spam.com #fwlr"
• Trend hijacking: "Help donate to #haiti relief: http://spam.com"

Clickthrough Analysis
• According to clickthrough data for 245,000 URLs:
  – Only 2.3% have traffic
  – They had over 1.6 million visitors
• Clickthrough rate
  – For a given spam URL, CR = <# of clicks> / <# of URL's exposures>
  – Result: 0.13% of spam tweets generate a visit (compared to junk email's CR of 0.0003%–0.0006%)
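The clickthrough rate is simple arithmetic, but the comparison is the point: the measured 0.13% is two to three orders of magnitude above email spam's 0.0003%–0.0006%. A tiny sketch (the function name is an assumption):

```python
def clickthrough_rate(clicks, exposures):
    """CR = <# of clicks> / <# of URL's exposures>."""
    return clicks / exposures if exposures else 0.0

# 13 visits per 10,000 exposures reproduces the paper's 0.13% Twitter rate,
# versus roughly 3 visits per 1,000,000 exposures for email spam.
twitter_cr = clickthrough_rate(13, 10_000)
email_cr = clickthrough_rate(3, 1_000_000)
```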

Spam Accounts
• 2 tests to identify career spamming accounts
  – χ2 test on timestamps: consistency with a uniform distribution
  – Tweet entropy: whether content is repeated throughout the tweets
• Result: in a sample of 43,000 spam accounts, 16% are identified as career spammers
  – What about the remaining 84%?
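Both tests can be sketched with the standard library. The χ2 statistic here is computed against uniform expected counts over, say, hour-of-day bins (bin choice is an assumption); the entropy test uses Shannon entropy over exact tweet texts, which is a simplification of however the paper measured content repetition:

```python
import math
from collections import Counter

def chi2_uniform(counts):
    """Chi-squared statistic of observed bin counts against a uniform distribution.
    A large value means posting times are far from uniform (e.g., scheduled posting)."""
    n, k = sum(counts), len(counts)
    expected = n / k
    return sum((c - expected) ** 2 / expected for c in counts)

def tweet_entropy(tweets):
    """Shannon entropy of tweet texts: near 0 when the same content is repeated."""
    freq = Counter(tweets)
    n = len(tweets)
    return -sum((c / n) * math.log2(c / n) for c in freq.values())
```

An account posting identical tweets at identical times scores 0 on both measures; a human account scores high on both.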

Spam Accounts
• Compromised (non-career) spamming accounts
  – Phishing sites: 86% of 20,000 victims passed the career-spammer tests
  – Malware botnet: Koobface

Spam Campaigns
• Multiple spamming accounts may cooperate to advertise a spam website
• URL clustering
  – Define a spam campaign as a binary feature vector c = {0, 1}^n
  – For two accounts i and j, if ci ∩ cj ≠ Ø, then i and j are clustered
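Since accounts sharing any spam URL belong to the same campaign, the clustering is a connected-components computation; this sketch uses union-find keyed on shared URLs (the function name and the dict-of-sets input format are assumptions):

```python
def cluster_campaigns(account_urls):
    """Group accounts into campaigns: accounts whose URL sets intersect
    (ci ∩ cj ≠ Ø) land in the same cluster, transitively.
    account_urls: dict mapping account -> set of spam URLs it tweeted."""
    parent = {}
    def find(x):
        while parent.setdefault(x, x) != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    url_owner = {}
    for acct, urls in account_urls.items():
        find(acct)  # register the account even if it shares nothing
        for u in urls:
            if u in url_owner:
                union(acct, url_owner[u])
            else:
                url_owner[u] = acct
    clusters = {}
    for acct in account_urls:
        clusters.setdefault(find(acct), set()).add(acct)
    return list(clusters.values())
```

Transitivity matters: if account A shares a URL with B, and B with C, all three form one campaign even though A and C share nothing directly.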

Spam Campaigns
• Phishing for followers
  – A pyramid scheme
  – Most spammers are compromised users advertising the service
• Personalized mentions: twitprize.com
  – Unique, victim-specific landing pages shortened with tinyurl
  – Most relevant tweets are just RTs or mentions

Spam Campaigns
• Buying retweets: retweet.it
  – Usually employed by spammers to spread malware and scams
  – Most accounts are career spammers (by the χ2 test)
• Distributing malware
  – 'Free' software, drive-by downloads
  – Use multiple hops of redirects to mask landing pages

URL Blacklists
• Currently (2010), Twitter relies on the Google Safebrowsing API to block malicious URLs
  – Blacklists usually lag behind spam tweets
  – No retroactive blocking!

Evading URL Blacklists
• URL shortening services (bit.ly, goo.gl, ow.ly) mask the spam.com landing page
• What about domain-wise blacklists?
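The evasion works because a domain blacklist sees only the shortener's domain, not the landing page; a defender has to follow the redirect chain first. A sketch, with the per-URL redirect lookup injected as a callable so no network is needed (function name and cap on hops are assumptions):

```python
def resolve_chain(url, head, max_hops=10):
    """Follow a chain of shorteners to the landing page, which is what a
    domain blacklist should actually be checked against.
    head: callable mapping a URL to its redirect target, or None if it is
    the final page (in practice an HTTP HEAD request)."""
    seen = [url]
    for _ in range(max_hops):
        nxt = head(url)
        if nxt is None or nxt in seen:  # done, or a redirect loop
            break
        seen.append(nxt)
        url = nxt
    return seen
```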

Conclusion
• 8% of URLs on Twitter are spam
• 16% of spam accounts are automated bots
• Spam clickthrough rate = 0.13%
• Spammers may coordinate thousands of accounts in a campaign
• URL blacklists don't work very well
  – because of delayed response
  – unable to resolve shortened URLs
• Advice
  – Dig deeper into redirect chains
  – Retroactive blacklisting to increase spammers' costs

Follow-ups
• More research on spammers' behaviors
• Twitter added a feature for users to report spam
• 'BotMaker' launched in August

Entropy-based Classification of 'Retweeting' Activity [Ghosh et al.]
• Question
  – Given the time series of 'retweeting' activity on some user-generated content or tweet, how do we meaningfully categorize it as organic or spam?
• Contributions
  – Use information theory-based features to categorize tweeting activity
    • Time interval entropy
    • User entropy

Dynamics of Retweeting Activity
(i) Popular news website (nytimes), (ii) Popular celebrity (billgates), (iii) Politician (silva_marina) vs. (iv) An aspiring artist (youngdizzy), (v) Post by a fan site (AnnieBeiber), (vi) Advertisement using social media (onstrategy)

Measuring time interval and user diversity
• Measure the time interval Δti between consecutive retweets
• Count distinct tweeting users

Time Interval Diversity
(i) Frequency of time intervals of duration Δti: many different time intervals vs. few time intervals observed
(ii) Time interval entropy

User Diversity
• Frequency of retweets by distinct user fi: many different users retweet a few times each vs. few users retweet many times each
• User entropy
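The two axes of the entropy plane can be sketched together: given the retweet events of one tweet, compute Shannon entropy over the distribution of inter-retweet intervals and over the distribution of retweets per user. Exact-value interval binning is a simplification of whatever discretization the paper uses:

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of a distribution given as raw counts."""
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

def retweet_entropies(events):
    """events: list of (timestamp, user) retweets of a single tweet.
    Returns (time-interval entropy, user entropy). Automated or campaign
    retweeting tends to score low on one or both axes; organic activity
    from many independent users at irregular times scores high on both."""
    times = sorted(t for t, _ in events)
    intervals = [b - a for a, b in zip(times, times[1:])]
    h_time = entropy(list(Counter(intervals).values()))
    h_user = entropy(list(Counter(u for _, u in events).values()))
    return h_time, h_user
```

A bot retweeting its own post every 10 seconds lands at the origin of the plane; a story retweeted once each by many users at irregular times lands in the high/high corner.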

Bloggers and News Websites: Dynamics of Retweeting Activity
(i) Popular news website (nytimes), (ii) Popular celebrity (billgates)

Campaigners: Dynamics of Retweeting Activity
(iii) Politician (silva_marina), (vi) Animal rights activist (nokillanimalist)

Performers and their fans: Dynamics of Retweeting Activity
(iv) An aspiring artist (youngdizzy), (v) Post by a fan site (AnnieBeiber)

Advertisers and spammers
(vii) Advertisement using social media (onstrategy), (viii) Account eventually suspended by Twitter (EasyCash435), (ix) Advertisement by a Japanese user (nikotono)

Validation
• Manually annotated accounts shown in the entropy plane, with regions labeled news and blogs, bot activity, advertisements & spam, and campaigns
• Examples: nytimes, billgates, silva_marina, AnnieBieber, EasyCash435, onstrategy, animalist, DonnaCCasteel

Conclusion
• Novel information-theoretic approach to activity recognition
  – Content independent
  – Scalable and efficient
  – Robust to sampling
• Results
  – Sophisticated tools exist for marketing and spamming
  – Twitter is exploited for promotional and spam-like activities
  – Able to identify distinct classes of dynamic activity on Twitter and the associated content
  – Separation of popular from unpopular content
• Applications: spam detection, trend identification, trust management, user modeling, social search, content classification