

  • Number of slides: 15

Translation Model Parameters
(adapted from notes from Philipp Koehn & Mary Hearne)
24th March 2011
Dr. Declan Groves, CNGL, DCU
dgroves@computing.dcu.ie

Translation Model

Lexical (Word) Translation
How to translate a word? Dictionary look-up: Haus → house, building, home, household, shell. Multiple translations, some more frequent than others. How do we determine probabilities for possible candidate translations? Collect statistics from a parallel corpus:

Translation of Haus    Count
house                  8,000
building               1,600
home                     200
household                150
shell                     50

Estimate Translation Probabilities
Translation of Haus    Count
house                  8,000
building               1,600
home                     200
household                150
shell                     50
Total                 10,000

Use relative frequencies to estimate probabilities:

P(e | Haus) = 0.8     if e = house
              0.16    if e = building
              0.02    if e = home
              0.015   if e = household
              0.005   if e = shell
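The relative-frequency estimate above is a one-liner in code; a minimal Python sketch using the counts from the slide's table:

```python
# Estimate lexical translation probabilities t(e | Haus) by relative
# frequency, using the counts from the slide's table.
counts = {"house": 8000, "building": 1600, "home": 200,
          "household": 150, "shell": 50}
total = sum(counts.values())                   # 10,000
t = {e: c / total for e, c in counts.items()}  # P(e | Haus)
print(t["house"], t["building"], t["home"])
```

The probabilities sum to 1 by construction, since every count is divided by the same total.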

Alignment
das (1) → the (1)
Haus (2) → house (2)
ist (3) → is (3)
klein (4) → small (4)
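One common way to represent such an alignment in code is as a list of (source position, target position) pairs; a small illustrative sketch:

```python
# The monotone alignment from the slide, as 1-based
# (source position, target position) pairs.
source = ["das", "Haus", "ist", "klein"]
target = ["the", "house", "is", "small"]
alignment = [(1, 1), (2, 2), (3, 3), (4, 4)]

# Recover the aligned word pairs from the position pairs.
pairs = [(source[i - 1], target[j - 1]) for i, j in alignment]
print(pairs)
```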

Reordering
klein ist das Haus → the house is small
(the alignment crosses: the word order differs between the two languages)

One-to-many, one-to-none
das Haus ist klitzeklein → the house is very small
(one-to-many: klitzeklein generates "very small")
das Haus ist klein → house is small
(one-to-none: das generates no English word)

Inserting words
NULL das Haus ist klein → the house is just small
("just" has no German counterpart: we pretend it is generated by an invisible NULL word)

Translation Process as String Re-Writing
SMT Translation Model takes these alignment characteristics into account (IBM Model 3):

John did not slap the green witch
  → John not slap the green witch               (FERTILITY: one-to-many, many-to-none)
  → John no daba una bofetada la verde bruja    (TRANSLATION)
  → John no daba una bofetada a la verde bruja  (INSERTION)
  → John no daba una bofetada a la bruja verde  (DISTORTION: reordering)

Translation Model Parameters (1/3)
Translation Model takes these characteristics into account, modelling them using different parameters.
t: Lexical / word-to-word translation parameters, e.g. t(building|Haus) … t(house|Haus), i.e. what is the probability that "Haus" will produce the English word house/building whenever "Haus" appears?
n: Fertility parameters, e.g. n(1|klitzeklein), n(2|klitzeklein), …, i.e. what is the probability that "klitzeklein" will produce exactly 1, 2, … English words?

Translation Model Parameters (2/3)
d: Distortion parameters, e.g. d(3|2), d(2|2), i.e. what is the probability that the German word in position 2 of the German sentence will generate an English word that ends up in position 3 (or 2) of the English translation?
An enhanced distortion scheme takes into account the lengths of the German and English sentences. d(3|2, 4, 6): same as d(3|2), except we also specify that the given German string has 4 words and the given English string has 6 words.
We also have word-translation parameters corresponding to insertions: t(just|NULL) = ?, i.e. what is the probability that the English word "just" is inserted into the English string?
Insertion strategy: pretend that each German sentence begins with the invisible word NULL.
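As a data structure, the enhanced distortion parameters are just a table keyed by target position, source position, and the two sentence lengths; a minimal sketch (the numeric values below are invented for illustration only):

```python
# Hypothetical distortion table d(j | i, l_f, l_e): the probability that
# the source word in position i of an l_f-word source sentence generates
# a target word in position j of an l_e-word target sentence.
# The probability values are made up for illustration.
d = {
    (3, 2, 4, 6): 0.4,  # d(3 | 2, 4, 6)
    (2, 2, 4, 6): 0.3,  # d(2 | 2, 4, 6)
}
print(d[(3, 2, 4, 6)])
```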

Translation Model Parameters: Insertion
p: set a single parameter p1 and use it as follows:
  • Assign fertilities to each word in the German string
  • At this point we are ready to start translating these German words into English words
  • As each word is translated, we insert an English word into the target string with probability p1
The probability p0 of not inserting an extra word is given as: p0 = 1 − p1
What about distortion parameters for inserted words? It is overly simplistic to say that NULL will generate a word at position X rather than somewhere else: insertions are unpredictable. Instead:
  • Generate all English words predicted by actually occurring German words (i.e. not NULL)
  • Position these English words according to the distortion parameters
  • Then generate possible insertion words and position them in the spaces left over; i.e. if there are 3 NULL-generated words and 3 left-over slots, then there are 3! = 6 ways of inserting them, each of which we assign an equal probability of 1/6
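The arithmetic in the insertion scheme can be checked directly; a small sketch (the value of p1 is an assumption, since the slide leaves it unspecified):

```python
import math

p1 = 0.02                 # assumed insertion probability (illustrative)
p0 = 1 - p1               # probability of not inserting an extra word
null_generated = 3        # the slide's example: 3 NULL-generated words
slots = 3                 # 3 left-over slots after distortion

# Every way of placing the NULL-generated words into the left-over slots
# is considered equally likely: k words into k slots gives k! orderings.
arrangements = math.factorial(null_generated)  # 3! = 6
prob_each = 1 / arrangements                   # 1/6 per arrangement
print(p0, arrangements, prob_each)
```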

Summary of Translation Model Parameters
FERTILITY    n    Table plotting source words against fertilities
TRANSLATION  t    Table plotting source words against target words
INSERTION    p1   Single number indicating the probability of insertion
DISTORTION   d    Table plotting source string positions against target string positions

Learning Translation Models
How can we automatically acquire parameter values for t, n, d and p from data?
If we had a set of source language strings (e.g. German) and, for each of those strings, a sequence of step-by-step rewritings into English… problem solved! But we are fairly unlikely to have this type of data.
If we had a set of word alignments, we could estimate the parameters of our generative translation model; if we had the parameters, we could estimate the alignments. A chicken-and-egg problem: how can we collect estimates from non-aligned data?
The Expectation Maximization (EM) algorithm: we can gather information incrementally, each new piece helping us build the next.

Expectation Maximization Algorithm
Incomplete data: if we had complete data, we could estimate the model; if we had a model, we could fill in the gaps in the data, i.e. if we had a rough idea about which words correspond, then we could use this knowledge to infer more data.
EM in a nutshell:
  • Initialise model parameters (e.g. uniformly)
  • Assign probabilities to the missing data
  • Estimate model parameters from the completed data
  • Iterate
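To make the nutshell concrete, here is a minimal EM sketch for the lexical translation parameters t(e|f) alone, in the style of IBM Model 1 (fertility, distortion and insertion are left out; the toy corpus and iteration count are assumptions for illustration):

```python
from collections import defaultdict

# Toy German-English parallel corpus (assumed for illustration).
corpus = [
    (["das", "Haus"], ["the", "house"]),
    (["das", "Buch"], ["the", "book"]),
    (["ein", "Buch"], ["a", "book"]),
]

# 1. Initialise model parameters t(e|f) uniformly.
f_vocab = {f for fs, _ in corpus for f in fs}
e_vocab = {e for _, es in corpus for e in es}
t = {(e, f): 1.0 / len(e_vocab) for f in f_vocab for e in e_vocab}

for _ in range(30):
    count = defaultdict(float)   # expected counts c(e, f)
    total = defaultdict(float)   # expected counts c(f)
    # 2. E-step: assign probabilities to the hidden alignments and
    #    collect expected counts from the "completed" data.
    for fs, es in corpus:
        for e in es:
            norm = sum(t[(e, f)] for f in fs)
            for f in fs:
                frac = t[(e, f)] / norm
                count[(e, f)] += frac
                total[f] += frac
    # 3. M-step: re-estimate t(e|f) by relative frequency of the
    #    expected counts.  4. Iterate.
    for (e, f) in t:
        t[(e, f)] = count[(e, f)] / total[f] if total[f] else 0.0

print(round(t[("house", "Haus")], 3), round(t[("the", "das")], 3))
```

On this corpus the hidden alignments disambiguate each other: "the" co-occurs with both "das" and "Haus", but "house" pins down "Haus", so the probability mass for "the" drifts to "das" over the iterations.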