Assignments
• Basic idea is to choose a topic of your own, or to take a study found in the literature
• Report is in two parts
– Description of the problem and review of relevant literature (not just the study you are going to replicate, but related things too)
– Description and discussion of your own results
• First part (1000–1500 words) due on Friday 25 April
• Second part (1500–2000 words) due on Friday 9 May
• No overlap allowed with LELA 30122 projects
– Though you are free to use that list of topics for inspiration
– See the LELA 30122 WebCT page, “project report”

Church et al. 1991
K. Church, W. Gale, P. Hanks & D. Hindle (1991) “Using Statistics in Lexical Analysis”, in U. Zernik (ed.) Lexical Acquisition: Exploiting On-line Resources to Build a Lexicon. Hillsdale, NJ: Lawrence Erlbaum, pp. 115–164.

Background
• Corpora were becoming more widespread and bigger
• Computers becoming more powerful
• But tools for handling them still relatively primitive
• Use of corpora for lexicology
• Written for the First International Workshop on Lexical Acquisition, Detroit 1989
• In fact there was no “Second IWLA”
• But this paper (and others in the collection) became much cited and well known

The problem
• Assuming a lexicographer has at their disposal a reference corpus of considerable size, …
• A typical concordance listing only works well with
– words with just two or three major sense divisions
– preferably well distinct
– and generating only a pageful of hits
• Even then, the information you are interested in may not be in the immediate vicinity

The solution
• Information Retrieval faces a comparable problem (overwhelming data), and suggests a solution:
1. Choose an appropriate statistic to highlight information “hidden” in the corpus
2. Preprocess the corpus to highlight properties of interest
3. Select an appropriate unit of text to constrain the information extracted

Mutual Information
• MI: a measure of similarity
• Compares the joint probability of observing two words together with the probabilities of observing them independently (chance): I(x; y) = log₂( P(x, y) / (P(x) P(y)) )
• If there is a genuine association, I(x; y) >> 0
• If no association, P(x, y) ≈ P(x)P(y), so I(x; y) ≈ 0
• If complementary distribution, I(x; y) << 0
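
The MI statistic can be estimated directly from corpus counts. Below is a minimal Python sketch (illustrative code, not from the paper; all names are my own) that scores an adjacent word pair by comparing its observed bigram probability with chance:

    import math
    from collections import Counter

    def mutual_information(tokens, x, y):
        # I(x; y) = log2( P(x,y) / (P(x) P(y)) ): P(x,y) is estimated from
        # adjacent-bigram counts, P(x) and P(y) from unigram counts.
        n = len(tokens)
        unigrams = Counter(tokens)
        bigrams = Counter(zip(tokens, tokens[1:]))
        p_xy = bigrams[(x, y)] / n
        p_x, p_y = unigrams[x] / n, unigrams[y] / n
        if p_xy == 0:
            return float("-inf")  # pair never observed together
        return math.log2(p_xy / (p_x * p_y))

    words = "a strong northerly wind brought another strong northerly gust".split()
    print(mutual_information(words, "strong", "northerly"))  # >> 0: genuine association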

Top ten scoring pairs of strong y and powerful y
• Data from the AP corpus, N = 44.3 million words

Mutual Information
• Can be used to demonstrate a strong association
• Counts can be based on immediate neighbourhood, as in the previous slide, or on co-occurrence within a window (to left or right or both), or within the same sentence, paragraph, etc. (a windowed count is sketched below)
• MI shows strongly associated word pairs, but cannot show the difference between, e.g., strong and powerful
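
A windowed count of the kind just described might look like this sketch (illustrative code, not from the paper), which gathers every word within k positions of a target word:

    from collections import Counter

    def window_cooccurrences(tokens, target, k=2):
        # Count every word within k positions of `target`, on both
        # sides, instead of only the immediate neighbour.
        counts = Counter()
        for i, tok in enumerate(tokens):
            if tok == target:
                lo, hi = max(0, i - k), min(len(tokens), i + k + 1)
                counts.update(tokens[lo:i] + tokens[i + 1:hi])
        return counts

    words = "he made a strong case for a strong and stable economy".split()
    print(window_cooccurrences(words, "strong", k=2).most_common(3))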

t-test
• A measure of dissimilarity
• How to explain the relative strength of collocations such as
– strong tea ~ powerful tea
– powerful car ~ strong car
• The less usual combination is either rejected, or has a marked contrastive meaning
• Use the example of {strong|powerful} support, because tea is rather infrequent in the AP corpus

{strong|powerful} support
• MI can’t help: very difficult to get a value for I(powerful; support) << 0 because of the size of the corpus
– Say x and y both occur about 10 times per 1 million words in a corpus
– P(x) = P(y) = 10⁻⁵, and chance P(x)P(y) = 10⁻¹⁰
– I(powerful; support) << 0 means P(x, y) << 10⁻¹⁰
– i.e. much less than 1 in 10,000,000,000
– Hard to say with confidence

Rephrase the question
• Can’t ask “what doesn’t collocate with powerful?”
• Also, can’t show that powerful support is less likely than chance: in fact it isn’t
– I(powerful; support) = 1.74
– i.e. about 3× greater than chance (2^1.74 ≈ 3.3)!
• Instead, compare which words are more likely to appear after strong than after powerful
• Show that strong support is relatively more likely than powerful support

t-test
• Null hypothesis (H₀)
– H₀ says that there is no significant difference between the scores
• H₀ can be rejected if
– the difference is at least 1.65 standard deviations
– 95% confidence
– i.e. the difference is real

t-test
• Comparison of powerful support with chance is not significant: t = 0.99 (less than 1 standard deviation!)
• But if we compare powerful support with strong support, t = –13
• Strongly suggests there is a difference
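
For comparing two collocation frequencies f1 and f2, a standard simplification treats the variance of a rare event as approximately its count, giving t ≈ (f1 - f2) / sqrt(f1 + f2). A minimal sketch with hypothetical counts (the real AP figures are in the paper):

    import math

    def t_compare(f1, f2):
        # Approximate t for comparing two collocation frequencies:
        # for rare events the variance is close to the count, so
        # t ~ (f1 - f2) / sqrt(f1 + f2).
        return (f1 - f2) / math.sqrt(f1 + f2)

    # Hypothetical counts of "support" after each adjective:
    f_strong, f_powerful = 175, 22
    print(round(t_compare(f_strong, f_powerful), 2))  # large positive t favours "strong support"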

• MI and t-score show different things

How is this useful?
• Helps lexicographers recognize significant patterns
• Especially useful for learners’ dictionaries, to make explicit the difference in distribution between near synonyms
• e.g. what is the difference between a strong nation and a powerful nation?
– strong as in strong defense, strong economy, strong growth
– powerful as in powerful posts, powerful figure, powerful presidency

Taking advantage of POS tags
• Looking at context in terms of POS rather than lexical items may be more informative
• Example: how can we distinguish to as an infinitive marker from to as a preposition?
• Look at words which immediately precede to
– able to, began to, … vs back to, according to, …
• t-score can show that they have a different distribution (see the sketch below)
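
On any tagged corpus this comparison is a few lines of code. A sketch using NLTK’s Brown corpus, assuming the Brown tagset, where infinitival to is tagged TO and prepositional to is tagged IN (the paper itself worked with the AP corpus):

    from collections import Counter
    from nltk.corpus import brown   # requires: nltk.download("brown")

    tagged = list(brown.tagged_words())
    before_inf, before_prep = Counter(), Counter()

    for (w1, _), (w2, t2) in zip(tagged, tagged[1:]):
        if w2.lower() == "to":
            if t2 == "TO":        # infinitive marker
                before_inf[w1.lower()] += 1
            elif t2 == "IN":      # preposition
                before_prep[w1.lower()] += 1

    print(before_inf.most_common(5))   # words like "able", "began" should rank high
    print(before_prep.most_common(5))  # words like "back", "according"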

• Similar investigation with subordinate conjunction that (fact that, say that, that the, that he) and demonstrative pronoun that (that of, that is, in that, to that)
• Look at both the preceding and the following word
• Distribution is so distinctive that this process can help us spot tagging errors

subordinate conjunction vs demonstrative pronoun

t        that/cs   that/dt   w
14.19    227       2         so/cs
-12.25   1         151       of/in

If your corpus is parsed
• Looking for word sequences can be limiting
• More useful if you can extract things like subjects and objects of verbs
• (Can be done to some extent by specifying POS tags within a window, but that’s very noisy)
• Assuming you can easily extract, e.g., subjects, verbs, and objects … (a parsing sketch follows)
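
Lacking a pre-parsed corpus, a dependency parser can stand in. A sketch using spaCy (my choice of tool, not the paper’s) to collect the verbs that take boat as their subject:

    from collections import Counter
    import spacy

    nlp = spacy.load("en_core_web_sm")   # assumes the small English model is installed

    def verbs_with_subject(texts, noun):
        # Collect head verbs whose dependency-parsed subject is `noun`.
        counts = Counter()
        for doc in nlp.pipe(texts):
            for tok in doc:
                if tok.dep_ == "nsubj" and tok.lemma_ == noun:
                    counts[tok.head.lemma_] += 1
        return counts

    stories = ["The boat sank in the storm.",
               "A small boat capsized near the pier."]
    print(verbs_with_subject(stories, "boat"))   # e.g. sink, capsize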

What kinds of things do boats do?

What is an appropriate unit of text?
• Mostly we have looked at neighbouring words, or words within a defined context
• Bigger discourse units can also provide useful information
• e.g. taking the entire text as the unit:
– How do stories that mention food differ from stories that mention water? (see the sketch below)
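
Taking the whole story as the unit only changes what gets counted. A sketch (illustrative, with toy documents) that pools the vocabulary of every story mentioning a given cue word:

    from collections import Counter

    def story_profile(stories, cue):
        # Pool the words of every story that mentions `cue`:
        # the whole text, not a window, is the counting unit.
        counts = Counter()
        for story in stories:
            tokens = story.lower().replace(".", "").split()
            if cue in tokens:
                counts.update(tokens)
        return counts

    stories = ["Drought left the village without water for drinking or irrigation.",
               "Aid agencies shipped food and grain to the famine region."]
    water, food = story_profile(stories, "water"), story_profile(stories, "food")
    print(water.most_common(3), food.most_common(3))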

• More subtle distinctions can be brought out in this way
• What’s the difference between a boat and a ship?
• Notice how immediately neighbouring words won’t necessarily tell much of a story
• But words found in stories that mention boats/ships help to characterize the difference in distribution, and give a clue as to the difference in meaning
• Notice that a human lexicographer still has to interpret the data

Word-sense disambiguation
• The article also shows how you can distinguish two senses of bank
– Identify words which occur in the same text as bank and river on the one hand, and bank and money on the other
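
A crude version of this idea is easy to sketch: take a few cue words from each column of the table below and score an occurrence of bank by overlap (illustrative code; the paper ranks co-occurring words rather than building a classifier):

    RIVER_CUES = {"river", "water", "boat", "fisherman", "village"}   # from the table below
    MONEY_CUES = {"money", "funds", "loans", "cash", "deposits"}

    def bank_sense(text):
        # Score an occurrence of "bank" by which cue set its text
        # shares more words with; ties default to the money sense.
        tokens = set(text.lower().split())
        if len(tokens & RIVER_CUES) > len(tokens & MONEY_CUES):
            return "bank (river)"
        return "bank (money)"

    print(bank_sense("the fisherman moored his boat by the river bank"))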

bank (river) vs bank (money)

t       bank&river   bank&money   w
6.63    45           4            river
4.90    28           13           River
4.01    20           13           water
3.57    16           11           feet
3.46    23           39           miles
3.44    21           32           near
3.27    12           5            boat
3.06    14           16           south
2.83    8            1            fisherman
2.76    21           49           along
2.74    11           12           border
2.72    17           35           area
2.71    9            6            village
2.70    7            0            drinking
2.66    16           32           across
2.58    9            7            east
2.53    7            2            century
…       10           13           missing

t        bank&river   bank&money   w
-15.95   6            467          money
-10.70   2            199          Bank
-10.60   0            134          funds
-10.46   0            131          billion
-10.13   0            124          Washington
-9.43    0            110          Federal
-9.03    1            134          cash
-8.79    1            129          interest
-8.79    0            98           financial
-8.38    1            121          Corp
-8.17    0            87           loans
-7.57    0            77           loan
-7.44    0            75           amount
-7.38    1            102          fund
-7.31    1            101          William
-7.25    0            72           company
…        …            …            account
…        …            …            deposits

Bank vs bank

t       Bank   bank   w
35.02   1324   24     Gaza
34.03   1301   36     Palestinian
33.60   1316   48     Israeli
33.18   1206   26     Strip
32.98   1204   29     Palestinians
32.68   1339   72     Israel
31.56   4116   1284   Bank
31.13   1151   47     occupied
30.79   1104   40     Arab
27.97   867    21     territories

t        Bank   bank   w
-36.48   1284   3362   bank
-10.93   900    1161   money
-10.43   624    859    federal
-9.59    586    786    company
-8.47    282    430    accounts
-8.26    544    693    central
-8.21    408    554    cash
-7.74    675    816    business
-7.54    546    676    loans
…        52     140    robbery