Overview of the KBP 2012 Slot-Filling Tasks


  • Number of slides: 16

Overview of the KBP 2012 Slot-Filling Tasks

Hoa Trang Dang (National Institute of Standards and Technology), Javier Artiles (Rakuten Institute of Technology), James Mayfield (Johns Hopkins University), Joe Ellis, Xuansong Li, Kira Griffitt, Stephanie Strassel, Jonathan Wright (Linguistic Data Consortium)

Slot-Filling Tasks
• Goal: Augment a reference knowledge base (KB) with info about target entities as found in a diverse collection of documents
• Reference KB: Oct 2008 Wikipedia snapshot. Each KB node corresponds to a Wikipedia page and contains:
  ▫ Infobox
  ▫ Wiki_text (free text not in infobox)
• English source documents:
  ▫ 2.3M news docs (1.2M docs in 2011)
  ▫ 1.5M Web and other docs (0.5M docs in 2011)
• [Spanish source documents]
• Diagnostic task: Slot Filler Validation

Slots derived from Wikipedia infobox

Person: per:alternate_names, per:date_of_birth, per:age, per:country_of_birth, per:stateorprovince_of_birth, per:city_of_birth, per:date_of_death, per:country_of_death, per:stateorprovince_of_death, per:city_of_death, per:cause_of_death, per:countries_of_residence, per:statesorprovinces_of_residence, per:cities_of_residence, per:schools_attended, per:title, per:member_of, per:employee_of, per:religion, per:spouse, per:children, per:parents, per:siblings, per:other_family, per:charges

Organization: org:alternate_names, org:political_religious_affiliation, org:top_members_employees, org:number_of_employees, org:members, org:member_of, org:subsidiaries, org:parents, org:founded_by, org:date_founded, org:date_dissolved, org:country_of_headquarters, org:stateorprovince_of_headquarters, org:city_of_headquarters, org:shareholders, org:website

Slot-Filling Task Requirements
• Task: given a target entity and predefined slots for each entity type (PER, ORG), return all new slot fillers for that entity that can be found in the source documents, and a supporting document for each filler
• Non-redundant
  ▫ Don't return a slot filler if it's already in the KB
  ▫ Don't return more than one instance of a slot filler
• Exact boundaries of the filler string, as found in the supporting document
  ▫ Text is complete (e.g., "John Doe" rather than "John")
  ▫ No extraneous text (e.g., "John Doe" rather than "John Doe's house")
• Evaluation based on the TREC-QA pooling methodology, combining
  ▫ Candidate slot fillers from non-exhaustive manual search
  ▫ Candidate slot fillers from fully automatic systems
• The answer "key" is incomplete; coverage depends on the number, quality, and diversity of contributing systems
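The non-redundancy requirement above can be sketched in a few lines. This is a minimal illustration, not the official validator: it treats case-insensitive string equality as "same filler", whereas real assessment groups string variants into equivalence classes.

```python
# Sketch of redundancy filtering (hypothetical data structures):
# drop candidates already in the KB, and duplicate instances within a run.
def deduplicate_fillers(kb_fillers, candidates):
    """kb_fillers: set of filler strings already in the KB.
    candidates: list of (docid, filler_string) pairs from extraction."""
    seen = set(f.lower() for f in kb_fillers)
    kept = []
    for docid, filler in candidates:
        key = filler.lower()
        if key in seen:   # redundant with the KB or an earlier candidate
            continue
        seen.add(key)
        kept.append((docid, filler))
    return kept

kept = deduplicate_fillers(
    {"Barack Obama"},
    [("doc1", "Michelle Obama"), ("doc2", "barack obama"), ("doc3", "Michelle Obama")],
)
# kept == [("doc1", "Michelle Obama")]
```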

Differences from KBP 2011
• Offsets provided for target entity mention in query
• Increased number of submissions (up to 5)
• Require normalization of slot fillers that are dates ("yesterday" -> "2012-11-04")
• Request each proposed slot filler to include
  ▫ A confidence value
  ▫ Offsets for justification (usually a sentence)
  ▫ Offsets for the raw (unnormalized) slot filler in the document
• Move toward more precise justifications
  ▫ Improved usability (for humans) in end applications
  ▫ Improved training data for systems
• Offsets and confidence values did not affect official scores
  ▫ But confidence values were used to rank and truncate extremely lengthy submissions
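The date-normalization requirement can be sketched as follows, assuming the document's publication date is available; the function name and the set of relative expressions handled are illustrative (real systems use a temporal tagger such as SUTime).

```python
# Minimal sketch of normalizing a relative date expression to ISO format,
# given the document's publication date (illustrative, not a full normalizer).
from datetime import date, timedelta

def normalize_date(raw, doc_date):
    raw = raw.strip().lower()
    if raw == "today":
        return doc_date.isoformat()
    if raw == "yesterday":
        return (doc_date - timedelta(days=1)).isoformat()
    if raw == "tomorrow":
        return (doc_date + timedelta(days=1)).isoformat()
    return raw  # absolute or underspecified dates need a real temporal parser

print(normalize_date("yesterday", date(2012, 11, 5)))  # 2012-11-04
```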

Slot-Filling Evaluation
• Pool responses from submitted runs and from manual search ->
  ▫ Set of [docid, answer-string] pairs for each target entity and slot
• Assessment:
  ▫ Each pair judged as one of correct, redundant, inexact, or wrong (credit given only for correct responses)
  ▫ Correct pairs grouped into equivalence classes (entities); each single-valued slot has at most one equivalence class for a given target entity
• Scoring:
  ▫ Recall: number of correct equivalence classes returned / number of known equivalence classes
  ▫ Precision: number of correct equivalence classes returned / number of [docid, answer-string] pairs returned
  ▫ F1 = 2*P*R/(P+R)
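The equivalence-class scoring can be sketched directly from the definitions above. This is an illustrative toy, not the official scorer, and the data structures (the pair-to-class map in particular) are assumptions.

```python
# Toy equivalence-class scorer following the slide's definitions.
def score(returned_pairs, correct_pairs, known_classes, pair_to_class):
    """returned_pairs: all [docid, answer-string] pairs a system returned.
    correct_pairs: the subset of pooled pairs assessed as correct.
    pair_to_class: maps each correct pair to its equivalence class id."""
    classes_returned = {pair_to_class[p] for p in returned_pairs if p in correct_pairs}
    recall = len(classes_returned) / known_classes
    precision = len(classes_returned) / len(returned_pairs)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy check: 2 known classes; system returns one correct and one wrong pair.
correct = {("d1", "NYC"), ("d2", "New York")}
classes = {("d1", "NYC"): "c1", ("d2", "New York"): "c1"}
p, r, f1 = score([("d1", "NYC"), ("d3", "Boston")], correct, 2, classes)
# p == 0.5, r == 0.5, f1 == 0.5
```

Note that both ("d1", "NYC") and ("d2", "New York") map to the same class "c1", so returning both would raise precision credit for only one class while counting two returned pairs.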

Slot-Filling Participants

Team: Organization
ADVIS_UIC *: University of Illinois at Chicago
GDUFS *: Guangdong University of Foreign Studies
IIRG: University College Dublin
lsv: Saarland University
NLPComp: The Hong Kong Polytechnic University
NYU: New York University
papelo *: NEC Laboratories America
PRIS: Beijing University of Posts and Telecommunications
Siel_12: International Institute of Information Technology, Hyderabad
sweat2012 *: Chinese Academy of Sciences
TALP_UPC *: Technical University of Catalonia, UPC

* first-time slot-filling team

Top 6 KBP 2012 Slot-Filling teams

Top 4 KBP 2012 Slot-Filling teams

Slot-Filling Approaches
• IIRG: (+ling, -ML)
  ▫ Stanford CoreNLP for POS, NER, parse
  ▫ Sentence retrieval by exact match with named mention of target entity
  ▫ Rule-based pattern matching and keyword matching to identify slot fillers
• lsv: (-ling, +ML)
  ▫ Shallow approach: no parse or coref
  ▫ Query expansion via Wikipedia redirect links
  ▫ SVM and Freebase for distant supervision
• NYU: (+ling, +ML)
  ▫ POS, parse, NER, time expression tagging, coref
  ▫ Query expansion via small set of handcrafted rules, Wikipedia redirect links
  ▫ MaxEnt and Freebase for distant supervision
  ▫ Combination of: hand-coded rules, patterns generated by bootstrapping and then manually reviewed, and a classifier trained by distant supervision
• PRIS: (+ling, +ML)
  ▫ Stanford CoreNLP for POS, NER, SUTime, parse, coref
  ▫ Query expansion via small set of handcrafted rules, coref'd names
  ▫ AdaBoost for finding new extraction patterns (word sequence patterns and dependency path patterns)
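Several of the teams above (lsv, NYU) use Freebase for distant supervision. The core idea can be sketched in a few lines: any sentence that mentions both arguments of a known KB fact is taken as a (noisy) positive training example for that slot. The fact list and sentences below are illustrative, and the exact-substring matching is a crude stand-in for real entity mention matching.

```python
# Distant supervision sketch: label sentences using known KB facts
# (illustrative data; real systems match entity mentions, not raw substrings).
kb_facts = [("Barack Obama", "per:spouse", "Michelle Obama")]

def label_sentences(sentences, facts):
    examples = []
    for sent in sentences:
        for subj, slot, obj in facts:
            if subj in sent and obj in sent:
                # noisy positive: the sentence may or may not express the relation
                examples.append((sent, subj, slot, obj))
    return examples

train = label_sentences(
    ["Barack Obama is married to Michelle Obama.", "Michelle Obama spoke today."],
    kb_facts,
)
# train holds one labeled example for per:spouse
```

These noisy examples are then used to train a slot classifier (SVM for lsv, MaxEnt for NYU).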

Distribution of slots in answer key

Slot productivity

[Table: most productive slots in the answer key by year; total fillers: 1057 (2010), 953 (2011), 1569 (2012). per:title (14%, 21%, 14%) and org:top_members_employees (12%, 12%, 11%) top all three years; per:employee_of, per:member_of, per:children, per:alternate_names, per:cities_of_residence, org:alternate_names, and org:subsidiaries each account for roughly 3-7%.]

Slot Filler Validation (SFV)
• Goals
  ▫ Improve precision of full slot-filling systems (without reducing recall)
  ▫ Allow teams without a full slot-filling system to participate, focusing on answer validation rather than document retrieval
• SFV input:
  ▫ All input to the slot-filling task
  ▫ Submission files from all Slot Filling runs, containing candidate slot fillers
  ▫ No information about "past performance" of each slot-filling system
• SFV output:
  ▫ Binary classification (Correct / Incorrect) of each candidate slot filler
• Evaluation:
  ▫ Filter out "Incorrect" slot fillers from each run and score; compare to the score for the original run
• Submissions: 1 team (Blender_CUNY)
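The SFV evaluation loop (filter, then rescore) can be sketched as follows. The fillers, gold set, and validator decision are all illustrative; the point is that dropping only Incorrect fillers raises precision without touching recall.

```python
# SFV-style filtering sketch: remove fillers the validator labels Incorrect,
# then compare precision before and after (toy data, not the official scorer).
def precision(run, gold):
    if not run:
        return 0.0
    return sum(1 for f in run if f in gold) / len(run)

gold = {"Chicago", "Honolulu"}                      # assessed-correct fillers
run = ["Chicago", "Honolulu", "Springfield"]        # original run: P = 2/3
validated = [f for f in run if f != "Springfield"]  # SFV drops one Incorrect filler
# precision rises to 1.0 and no correct filler is lost (recall preserved)
```

A validator that wrongly discards correct fillers would instead trade recall for precision, which is exactly what the "without reducing recall" goal above guards against.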

Filtering candidate slot fillers
[Bar chart: F1 (scale 0 to 0.6) for each submitted run, original vs. filtered by SFV]

Answer Justification
• Goals
  ▫ Improve training data for systems: narrow down the location of answer patterns
  ▫ Reduce assessment effort (for correct answers with correct justifications)
  ▫ Improve usability (for humans) in end applications
• Task guidelines:
  ▫ For each slot filler, provide start and end offsets for the sentence or clause that provides justification for the relation. For example, for the query per:spouse of "Michelle Obama" and the sentence "He is married to Michelle Obama" ("He" referring to Barack Obama, mentioned earlier in the document), the filler … should be "Barack Obama", the offsets for the filler must point to "He", and the offsets for the justification must point to "He is married to Michelle Obama"
• Slight mismatch with LDC assessment guidelines (which require the antecedent of relevant pronouns in the justification; otherwise judged as inexact)
  ▫ Need additional discussion/refinement of guidelines
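The offset bookkeeping for the per:spouse example above can be sketched like this. The document string is illustrative, and inclusive end offsets are an assumption; the key point is that the filler offsets target the pronoun mention while the normalized filler string names the antecedent.

```python
# Offset sketch for the per:spouse example (illustrative document text).
doc = "Barack Obama arrived Monday. He is married to Michelle Obama."

justification = "He is married to Michelle Obama"
j_start = doc.index(justification)
j_end = j_start + len(justification) - 1   # inclusive end offset (assumption)
f_start = doc.index("He", j_start)         # filler mention is the pronoun "He"
f_end = f_start + len("He") - 1
normalized_filler = "Barack Obama"         # string reported to the scorer
```

Under the LDC guidelines noted above, this justification span alone would be judged inexact, since the antecedent "Barack Obama" lies outside it.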

LDC Data, Annotation, and Assessment