
Improved Word Alignments Using the Web as a Corpus
International Conference RANLP 2007 (Recent Advances in Natural Language Processing)
Preslav Nakov, University of California, Berkeley
Svetlin Nakov, Sofia University "St. Kliment Ohridski"
Elena Paskaleva, Bulgarian Academy of Sciences
RANLP 2007 – September 27-29, 2007, Borovets, Bulgaria

Statistical Machine Translation (SMT)
• 1988 – IBM models 1, 2, 3, 4 and 5
  - Start with a bilingual parallel sentence-aligned corpus
  - Learn translation probabilities of individual words
• 2004 – PHARAOH model
  - Learn translation probabilities for phrases
  - Alignment template approach – extracts translation phrases from word alignments
• Improved word alignments in sentences improve translation quality!

Word Alignments
• The word alignment problem
  - Given a bilingual parallel sentence-aligned corpus, align the words in each sentence with the corresponding words in its translation
• Example English sentence
  Try our same day delivery of fresh flowers, roses, and unique gift baskets.
• Example Bulgarian sentence
  Опитайте нашите свежи цветя, рози и уникални кошници с подаръци с доставка на същия ден.

Word Alignments – Example
English: try our same day delivery of fresh flowers roses and unique gift baskets
Bulgarian: опитайте нашите свежи цветя рози и уникални кошници с подаръци с доставка на същия ден
(the original slide shows alignment links between the corresponding words)

Our Method
• Use a combination of
  - Orthographic similarity measure
  - Semantic similarity measure
  - Competitive linking
• Orthographic similarity measure
  - Modified weighted minimum edit distance
• Semantic similarity measure
  - Analyzes the co-occurring words in the local contexts of the target words, using the Web as a corpus

Orthographic Similarity
• Minimum Edit Distance Ratio (MEDR)
  - MED(s1, s2) = the minimum number of INSERT / REPLACE / DELETE operations for transforming s1 into s2
  - MEDR(s1, s2) = 1 - MED(s1, s2) / max(|s1|, |s2|)
• Longest Common Subsequence Ratio (LCSR)
  - LCS(s1, s2) = the longest common subsequence of s1 and s2
  - LCSR(s1, s2) = |LCS(s1, s2)| / max(|s1|, |s2|)
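A minimal sketch of the two orthographic measures, assuming the standard length-normalized definitions given above (the helper names are not from the slides):

```python
def med(s1: str, s2: str) -> int:
    """Minimum edit distance with unit-cost insert/replace/delete."""
    m, n = len(s1), len(s2)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s1[i - 1] == s2[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # delete
                          d[i][j - 1] + 1,         # insert
                          d[i - 1][j - 1] + cost)  # replace / match
    return d[m][n]

def lcs_len(s1: str, s2: str) -> int:
    """Length of the longest common subsequence."""
    m, n = len(s1), len(s2)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:
                d[i][j] = d[i - 1][j - 1] + 1
            else:
                d[i][j] = max(d[i - 1][j], d[i][j - 1])
    return d[m][n]

def medr(s1: str, s2: str) -> float:
    return 1.0 - med(s1, s2) / max(len(s1), len(s2))

def lcsr(s1: str, s2: str) -> float:
    return lcs_len(s1, s2) / max(len(s1), len(s2))
```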

Orthographic Similarity
• Modified Minimum Edit Distance Ratio (MMEDR) for Bulgarian / Russian
  1. Normalize the strings
  2. Assign weights to the edit operations
• Normalizing the strings
  - Hand-crafted rules
  - Strip the Russian letters "ь" and "ъ"
  - Remove the Russian "й" at word endings
  - Remove the Bulgarian definite article (e.g. "ът", "ят" at word endings)

Orthographic Similarity
• Assigning weights to the edit operations
  - 0.5-0.9 for vowel-to-vowel substitutions, e.g. 0.5 for е → о
  - 0.5-0.9 for some consonant-consonant replacements, e.g. с → з
  - 1.0 for all other edit operations
• Example: Bulgarian първият and Russian первый (first)
  - Normalization produces първи and перви, thus MMED = 0.5 (weight 0.5 for the ъ → е substitution)
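A hedged sketch of the MMEDR idea: the normalization rules below cover only the three cases mentioned on the previous slide, and the substitution-weight table is a hypothetical illustration of the 0.5-0.9 / 1.0 weighting scheme, not the authors' full hand-crafted rule set:

```python
import re

def normalize(word: str, lang: str) -> str:
    """Illustrative normalization; the authors' rule set is larger."""
    word = word.lower()
    if lang == "ru":
        word = word.replace("ь", "").replace("ъ", "")
        word = re.sub("й$", "", word)          # drop final "й"
    elif lang == "bg":
        word = re.sub("(ът|ят)$", "", word)    # strip the definite article
    return word

# Hypothetical weight table for "cheap" substitutions
SUB_WEIGHTS = {("е", "о"): 0.5, ("ъ", "е"): 0.5, ("с", "з"): 0.6}

def sub_cost(a: str, b: str) -> float:
    if a == b:
        return 0.0
    return SUB_WEIGHTS.get((a, b), SUB_WEIGHTS.get((b, a), 1.0))

def mmed(s1: str, s2: str) -> float:
    """Weighted minimum edit distance (insert/delete cost 1.0)."""
    m, n = len(s1), len(s2)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = float(i)
    for j in range(n + 1):
        d[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1.0,
                          d[i][j - 1] + 1.0,
                          d[i - 1][j - 1] + sub_cost(s1[i - 1], s2[j - 1]))
    return d[m][n]

def mmedr(bg_word: str, ru_word: str) -> float:
    s1, s2 = normalize(bg_word, "bg"), normalize(ru_word, "ru")
    return 1.0 - mmed(s1, s2) / max(len(s1), len(s2))
```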

Semantic Similarity
• What is local context?
  - A few words before and after the target word
    "Same day delivery of fresh flowers, roses, and unique gift baskets from our online boutique. Flower delivery online by local florists for birthday flowers."
• The words in the local context of a given word are semantically related to it
• Need to exclude the stop words: prepositions, pronouns, conjunctions, etc.
  - Stop words appear in all contexts
• Need a sufficiently big corpus
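A minimal sketch of collecting such a bag of context words around a target word while skipping stop words; the window size and the tiny stop-word list are illustrative assumptions:

```python
from collections import Counter
import re

STOP_WORDS = {"of", "and", "for", "by", "from", "our", "the", "a", "an"}  # illustrative

def context_counts(text: str, target: str, window: int = 3) -> Counter:
    """Count non-stop-words appearing within `window` tokens of `target`."""
    tokens = re.findall(r"[a-zа-я]+", text.lower())
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            neighbours = tokens[max(0, i - window): i] + tokens[i + 1: i + 1 + window]
            counts.update(w for w in neighbours if w not in STOP_WORDS)
    return counts

# Example: context of "delivery" in the snippet from the slide
snippet = ("Same day delivery of fresh flowers, roses, and unique gift baskets "
           "from our online boutique. Flower delivery online by local florists "
           "for birthday flowers.")
print(context_counts(snippet, "delivery"))
```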

Semantic Similarity
• The Web as a corpus
  - The Web can be used as a corpus to extract the local context of a given word
  - The Web is the largest possible corpus
  - It contains big corpora in any language
  - Searching for a word in Google can return up to 1,000 text excerpts
  - The target word is returned along with its local context: a few words before and after it
  - The target language can be specified

Semantic Similarity
• The Web as a corpus
• Example: Google query for "flower"
  Flowers, Plants, Gift Baskets - 1-800-FLOWERS.COM - Your Florist... Flowers, balloons, plants, gift baskets, gourmet food, and teddy bears presented by 1-800-FLOWERS.COM, Your Florist of Choice for over 30 years.
  Margarita Flowers - Delivers in Bulgaria for you! - gifts, flowers, roses... Wide selection of BOUQUETS, FLORAL ARRANGEMENTS, CHRISTMAS DECORATIONS, PLANTS, CAKES and GIFTS appropriate for various occasions. CREDIT cards acceptable.
  Flowers, plants, roses, & gifts. Flowers delivery with fewer... Flowers, roses, plants and gift delivery. Order flowers from ProFlowers once, and you will never use flowers delivery from florists again.

Semantic Similarity
• Measuring semantic similarity
  - For two given words, their local contexts are extracted from the Web
  - A set of words and their frequencies
  - Lemmatization is applied
  - Semantic similarity is measured as the similarity between these local contexts
  - Local contexts are represented as frequency vectors over a given set of words
  - The cosine between the frequency vectors in Euclidean space is calculated

Semantic Similarity
• Example of context word frequencies

  word: flower          word: computer
  word      count       word        count
  fresh     217         Internet    291
  order     204         PC          286
  rose      183         technology  252
  delivery  165         order       185
  gift      124         new         174
  welcome    98         Web         159
  red        87         site        146
  ...                   ...

Semantic Similarity
• Example of frequency vectors

  v1: flower                     v2: computer
  #     word       freq.         #     word       freq.
  0     alias      3             0     alias      7
  1     alligator  2             1     alligator  0
  2     amateur    0             2     amateur    8
  3     apple      5             3     apple      133
  ...   ...        ...           ...   ...        ...
  4999  zap        0             4999  zap        3
  5000  zoo        6             5000  zoo        0

• Similarity = cosine(v1, v2)
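A minimal sketch of the cosine computation over sparse context-frequency vectors; the toy counts below reuse a few dimensions from the example above:

```python
import math
from collections import Counter

def cosine(c1: Counter, c2: Counter) -> float:
    """Cosine between two sparse frequency vectors stored as Counters."""
    dot = sum(c1[w] * c2[w] for w in c1.keys() & c2.keys())
    norm1 = math.sqrt(sum(v * v for v in c1.values()))
    norm2 = math.sqrt(sum(v * v for v in c2.values()))
    if norm1 == 0 or norm2 == 0:
        return 0.0
    return dot / (norm1 * norm2)

# Toy vectors built from a few of the slide's example dimensions
flower   = Counter({"alias": 3, "alligator": 2, "apple": 5, "zoo": 6})
computer = Counter({"alias": 7, "amateur": 8, "apple": 133, "zap": 3})
print(cosine(flower, computer))
```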

Cross-Lingual Semantic Similarity
• We are given two words in different languages L1 and L2
• We have a bilingual glossary G of translation pairs {p ∈ L1, q ∈ L2}
• Measuring cross-lingual similarity:
  1. We extract the local contexts of the target words from the Web: C1 ∈ L1 and C2 ∈ L2
  2. We translate the context C1 into L2 using the glossary G, obtaining C1*
  3. We measure the similarity between C1* and C2
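A minimal sketch of these three steps, assuming the glossary is a simple one-to-one dictionary from L1 words to L2 words (a real glossary may list several translations per entry):

```python
import math
from collections import Counter

def cosine(c1: Counter, c2: Counter) -> float:
    dot = sum(c1[w] * c2[w] for w in c1.keys() & c2.keys())
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def translate_context(context: Counter, glossary: dict) -> Counter:
    """Map L1 context words to L2 via the glossary G; untranslatable words are dropped."""
    translated = Counter()
    for word, freq in context.items():
        if word in glossary:
            translated[glossary[word]] += freq
    return translated

def cross_lingual_similarity(c1: Counter, c2: Counter, glossary: dict) -> float:
    """Translate the L1 context C1 into C1*, then take the cosine with the L2 context C2."""
    return cosine(translate_context(c1, glossary), c2)
```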

Competitive Linking
• What is competitive linking?
  - A one-to-one bidirectional word alignment algorithm
  - A greedy "best first" approach
  - Links the most probable pair first, removes it, and repeats for the rest
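A minimal sketch of greedy competitive linking; `score` stands in for the combined orthographic/semantic similarity, and the optional `min_sim` threshold (not mentioned on this slide) is an added convenience that leaves very dissimilar words unaligned:

```python
def competitive_linking(src_words, tgt_words, score, min_sim=0.0):
    """Greedy one-to-one linking: repeatedly link the highest-scoring
    unlinked (source, target) pair, remove both words, and repeat."""
    pairs = sorted(((score(s, t), i, j)
                    for i, s in enumerate(src_words)
                    for j, t in enumerate(tgt_words)), reverse=True)
    used_src, used_tgt, links = set(), set(), []
    for sim, i, j in pairs:
        if sim >= min_sim and i not in used_src and j not in used_tgt:
            links.append((src_words[i], tgt_words[j], sim))
            used_src.add(i)
            used_tgt.add(j)
    return links

# Toy usage with an arbitrary similarity function
print(competitive_linking(
    ["cat", "dog"], ["куче", "котка"],
    lambda s, t: 1.0 if (s, t) in {("cat", "котка"), ("dog", "куче")} else 0.1))
```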

Applying Competitive Linking
1. Make all words lowercase
2. Remove punctuation
3. Remove the stop words: prepositions, pronouns, conjunctions, etc.
   - We don't align them
4. Align the most similar pair of words
   - Using the orthographic similarity combined with the semantic similarity
5. Remove the aligned words
6. Align the rest of the sentence (repeat steps 4-5)

Our Method – Example
• Bulgarian sentence
  Процесът на създаването на такива рефлекси е по-сложен, но същността им е еднаква.
• Russian sentence
  Процесс создания таких рефлексов сложнее, но существо то же.

Our Method – Example
1. Remove the stop words
   - Bulgarian: на, такива, е, но, им, е
   - Russian: таких, но, то
2. Align рефлекси and рефлексов (semantic similarity = 0.989)
3. Align по-сложен and сложнее (orthographic similarity = 0.750)
4. Align процесът and процесс (orthographic similarity = 0.714)
5. Align създаването and создания (orthographic similarity = 0.544)
6. Align същността and существо (similarity = 0.536)
7. Not aligned: еднаква

Our Method – Example
Bulgarian: процесът на създаването на такива рефлекси е по-сложен но същността им е еднаква
Russian: процесс создания таких рефлексов сложнее но существо то же
(the original slide shows the alignment links between the word pairs listed on the previous slide)

Evaluation
• We evaluated the following algorithms
  - BASELINE: the traditional alignment algorithm (IBM model 4)
  - LCSR, MEDR, MMEDR: orthographic similarity algorithms
  - WEB-ONLY: the semantic similarity algorithm
  - WEB-AVG: the average of WEB-ONLY and MMEDR
  - WEB-MAX: the maximum of WEB-ONLY and MMEDR
  - WEB-CUT: 1 if MMEDR(s1, s2) ≥ α (0 < α < 1), and WEB-ONLY(s1, s2) otherwise
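A small sketch of the three combination schemes, written over precomputed scores (orth = MMEDR(s1, s2), web = WEB-ONLY(s1, s2), both computed as sketched earlier); α = 0.62 is the cut-off reported later in the evaluation:

```python
ALPHA = 0.62  # cut-off used in the manual evaluation of WEB-CUT

def web_avg(orth: float, web: float) -> float:
    return (web + orth) / 2.0

def web_max(orth: float, web: float) -> float:
    return max(web, orth)

def web_cut(orth: float, web: float, alpha: float = ALPHA) -> float:
    return 1.0 if orth >= alpha else web

print(web_cut(0.75, 0.40))  # -> 1.0 (orthographically similar enough)
print(web_cut(0.30, 0.40))  # -> 0.4 (fall back to the semantic score)
```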

Testing Data and Experiments
• Testing data set
  - A corpus of 5,827 parallel sentences
  - Training set: 4,827 sentences
  - Tuning set: 500 sentences
  - Testing set: 500 sentences
• Experiments
  - Manual evaluation of WEB-CUT
  - AER for competitive linking
  - Translation quality: BLEU / NIST

Manual Evaluation of WEB-CUT
• Aligned the texts of the testing data set
  - Used competitive linking and WEB-CUT with α = 0.62
  - Obtained 14,246 distinct word pairs
• Manually evaluated the aligned pairs as:
  - Correct
  - Rough (considered incorrect)
  - Wrong (considered incorrect)
• Calculated precision and recall
  - For the case MMEDR < 0.62

Manual Evaluation of WEB-CUT
• Precision-recall curve (chart in the original slide)

Evaluation of Alignment Error Rate
• Gold standard for alignment
  - For the first 100 sentences
  - Created manually by a linguist
  - Stop words and punctuation were removed
• Evaluated the alignment error rate (AER) for competitive linking
  - Evaluated for all the algorithms: LCSR, MEDR, MMEDR, WEB-ONLY, WEB-AVG, WEB-MAX and WEB-CUT
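The slides use AER without defining it; the sketch below follows the standard Och & Ney (2003) formulation, simplified to a single gold link set (sure = possible links), in which case AER equals one minus the F1 of the proposed links:

```python
def aer(proposed: set, gold: set) -> float:
    """Alignment Error Rate for a single gold link set (sure = possible = gold)."""
    if not proposed and not gold:
        return 0.0
    overlap = len(proposed & gold)
    return 1.0 - 2.0 * overlap / (len(proposed) + len(gold))

# Example: links are (source_position, target_position) pairs
gold = {(0, 0), (2, 1), (5, 3), (7, 4)}
proposed = {(0, 0), (2, 1), (7, 5)}
print(round(aer(proposed, gold), 3))  # 1 - 2*2/(3+4) ≈ 0.429
```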

Evaluation of Alignment Error Rate
• AER for competitive linking (chart in the original slide)

Evaluation of Translation Quality
• Built a Russian → Bulgarian statistical machine translation (SMT) system
  - Extracted from the training set the distinct word pairs aligned with competitive linking
  - Added them twice as additional "sentence" pairs to the training corpus
  - Trained a log-linear model for SMT with standard feature functions
  - Used minimum error rate training on the tuning set
  - Evaluated the BLEU and NIST scores on the testing set

Evaluation of Translation Quality
• Translation quality: BLEU (chart in the original slide)

Evaluation of Translation Quality
• Translation quality: NIST (chart in the original slide)

Resources
• We used the following resources:
  - Bulgarian-Russian parallel corpus: 5,827 sentences
  - Bilingual Bulgarian / Russian glossary: 3,794 translation word pairs
  - A list of 599 Bulgarian / 508 Russian stop words
  - Bulgarian lemma dictionary: 1,000,000 wordforms and 70,000 lemmata
  - Russian lemma dictionary: 1,500,000 wordforms and 100,000 lemmata

Conclusion and Future Work
• Conclusion
  - Semantic similarity extracted from the Web can improve statistical machine translation
  - For similar languages like Bulgarian and Russian, orthographic similarity is useful
• Future Work
  - Improve MMED with automatically learned rules
  - Improve the semantic similarity algorithm
  - Filter parasitic words like "site", "click", etc.
  - Replace competitive linking with maximum-weight bipartite matching

Improved Word Alignments Using the Web as a Corpus
Questions?