Where does it break?
or: Why Semantic Web research is not just "Computer Science as usual"
Frank van Harmelen, AI Department, Vrije Universiteit Amsterdam
But first:
- "the Semantic Web forces us to rethink the foundations of many subfields of Computer Science"
- "the challenge of the Semantic Web continues to be that it breaks many often silently held and shared assumptions underlying decades of research"
- "I will try to identify silently held assumptions which are no longer true on the Semantic Web, prompting a radical rethink of many past results"
Oh no, not more "vision"…
[comic strip: executives decide to "invent some sort of doohicky that everyone wants to buy", declare the visionary leadership work done, and ask how long the technical part will take]
Don't worry, there will be lots of technical content.
Grand Topics…
- What are the science challenges in the Semantic Web?
- Which implicit traditional assumptions break?
- Illustrated with 4 such "traditional assumptions"
- And also: which Semantic Web?
Before we go on: Which Semantic Web are we talking about?
Which Semantic Web?
- Version 1: "Semantic Web as Web of Data" (TBL)
- Recipe: expose databases on the web, use RDF, integrate
- Meta-data from expressing DB schema semantics in machine-interpretable ways
- Enables integration and unexpected re-use
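A minimal sketch of the Version-1 recipe: rows from a relational table are exposed as RDF triples. This is an illustration only; it assumes the rdflib library, and the person table, the example.org namespace and the property names are all invented.

```python
# Hypothetical sketch: expose rows of a relational table as RDF triples.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")

# Pretend these rows came from a SQL query on a 'person' table.
rows = [{"id": 1, "name": "Alice", "city": "Amsterdam"}]

g = Graph()
for row in rows:
    person = URIRef(EX["person/%d" % row["id"]])
    g.add((person, RDF.type, EX.Person))           # row becomes a typed resource
    g.add((person, EX.name, Literal(row["name"])))
    g.add((person, EX.city, Literal(row["city"])))

print(g.serialize(format="turtle"))  # machine-interpretable, ready for integration
```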
Which Semantic Web?
- Version 2: "Enrichment of the current Web"
- Recipe: annotate, classify, index
- Meta-data from automatically produced markup: named-entity recognition, concept extraction, tagging, etc.
- Enables personalisation, search, browse, …
Which Semantic Web?
- Version 1: "Semantic Web as Web of Data"
- Version 2: "Enrichment of the current Web"
- Different use-cases, different techniques, different users
Semantic Web: Science or technology?
Semantic Web as Technology:
- better search & browse
- personalisation
- semantic linking
- semantic web services
- …
Semantic Web as Science?
4 examples of "where does it break?"
- old assumptions that no longer hold
- old approaches that no longer work
4 examples of "where does it break?"
1. Traditional complexity measures
Who cares about decidability?
- Decidability ≈ completeness: the guarantee to find an answer, or to tell you it doesn't exist, given enough run-time & memory
- Sources of incompleteness:
  - incompleteness of the input data
  - insufficient run-time to wait for the answer
- Completeness is unachievable in practice anyway, regardless of the completeness of the algorithm
Who cares about undecidability?
- Undecidability ≠ always guaranteed not to find an answer
- Undecidability = not always guaranteed to find an answer
- Undecidability may be harmless in many cases; in all cases that matter
Who cares about complexity?
Traditional complexity measures are:
- worst-case: the worst case may be exponentially rare
- asymptotic
- ignoring constants
What to do instead?
- Practical observations on RDF Schema: compute the full closure of O(10^5) statements (see the sketch below)
- Practical observations on OWL: NEXPTIME in theory, but fine on many practical cases
- Do more experimental performance profiles with realistic data
- Think hard about "average case" complexity…
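To make "computing the full closure" concrete, here is a minimal sketch of forward-chaining two RDFS rules to a fixpoint. It is not the implementation behind the slide's numbers: it covers only subclass transitivity (rdfs11) and type propagation (rdfs9), and a real RDFS reasoner implements the full rule set far more efficiently.

```python
# Minimal sketch: forward-chain two RDFS rules to a fixpoint.
SUBCLASS = "rdfs:subClassOf"
TYPE = "rdf:type"

def rdfs_closure(triples):
    """Apply rdfs9 and rdfs11 until no new triples are derived."""
    closure = set(triples)
    while True:
        sub = {(s, o) for (s, p, o) in closure if p == SUBCLASS}
        typ = {(s, o) for (s, p, o) in closure if p == TYPE}
        new = set()
        # rdfs11: (A subClassOf B), (B subClassOf C) => (A subClassOf C)
        new |= {(a, SUBCLASS, c) for (a, b) in sub for (b2, c) in sub if b == b2}
        # rdfs9: (x type A), (A subClassOf B) => (x type B)
        new |= {(x, TYPE, b) for (x, a) in typ for (a2, b) in sub if a == a2}
        if new <= closure:  # fixpoint reached: nothing new was derived
            return closure
        closure |= new

triples = {
    ("ex:Cat", SUBCLASS, "ex:Mammal"),
    ("ex:Mammal", SUBCLASS, "ex:Animal"),
    ("ex:tom", TYPE, "ex:Cat"),
}
print(sorted(rdfs_closure(triples)))  # includes ("ex:tom", TYPE, "ex:Animal")
```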
4 examples of "where does it break?"
1. Traditional complexity measures
2. Hard in theory, easy in practice
Example: Reasoning with Inconsistent Knowledge
(joint work with Zhisheng Huang & Annette ten Teije)
Knowledge will be inconsistent, because of:
- mistreatment of defaults
- homonyms
- migration from another formalism
- integration of multiple sources
New formal notions are needed
- New notions for the answer to a query φ over an ontology T:
  - Accepted: φ follows, ¬φ does not
  - Rejected: ¬φ follows, φ does not
  - Overdetermined: both φ and ¬φ follow
  - Undetermined: neither φ nor ¬φ follows
- Soundness: only classically justified results
Basic Idea
1. Start from the query.
2. Incrementally select larger parts of the ontology that are "relevant" to the query (via a selection function), until:
   i. you have an ontology subpart that is small enough to be consistent and large enough to answer the query, or
   ii. the selected subpart is already inconsistent before it can answer the query.
General Framework
[diagram: nested selections s(T, φ, 0) ⊆ s(T, φ, 1) ⊆ s(T, φ, 2) ⊆ … growing outwards from the query within the ontology T]
More precisely:
Use a selection function s(T, φ, k), with s(T, φ, k) ⊆ s(T, φ, k+1).
1. Start with k = 0: does s(T, φ, 0) |≈ φ or s(T, φ, 0) |≈ ¬φ?
2. Increase k until s(T, φ, k) |≈ φ or s(T, φ, k) |≈ ¬φ.
3. Abort when:
   - undetermined at maximal k
   - overdetermined at some k
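The strategy is easy to state as code. Below is a minimal sketch of the extension loop; `select`, `entails` and `negate` are placeholder names of mine (in practice `entails` would call a DL reasoner, and one concrete `select` is sketched after the next slide).

```python
# Sketch of the linear-extension strategy over a possibly inconsistent
# ontology T. All function names are illustrative placeholders.
def linear_extension(T, query, select, entails, negate, max_k):
    for k in range(max_k + 1):
        subset = select(T, query, k)          # s(T, query, k)
        pos = entails(subset, query)          # subset |~ query ?
        neg = entails(subset, negate(query))  # subset |~ not-query ?
        if pos and neg:
            return "overdetermined"  # selected subpart is inconsistent
        if pos:
            return "accepted"
        if neg:
            return "rejected"
        if set(subset) == set(T):    # selection cannot grow any further
            break
    return "undetermined"
```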
Nice general framework, but…
- Which selection function s(T, φ, k) to use?
- Simple option: syntactic distance (sketched below)
  - put all formulae in clausal form: a1 ∨ a2 ∨ … ∨ an
  - distance 1 if some clausal letters overlap: a1 ∨ X ∨ … ∨ an and b1 ∨ … ∨ X ∨ … ∨ bn
  - distance k if a chain of k overlapping clauses is needed: a1 ∨ X ∨ … ∨ X1 ∨ an; b1 ∨ X1 ∨ … ∨ X2 ∨ bn; …; c1 ∨ Xk ∨ … ∨ X ∨ cn
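A toy version of this syntactic selection: model each clause as a set of symbols, start from clauses that share a symbol with the query, and widen by one overlap per step. Again a sketch under my own naming, not the authors' code.

```python
# Toy syntactic selection function: s(T, query, k) = all clauses reachable
# from the query's symbols via at most k symbol-overlap steps.
def select(T, query_symbols, k):
    selected = {c for c in T if c & query_symbols}
    for _ in range(k):
        frontier = set().union(*selected) if selected else set()
        selected |= {c for c in T if c & frontier}
    return selected

T = [frozenset({"a", "x"}), frozenset({"x", "b"}), frozenset({"b", "c"})]
print(select(T, {"a"}, 0))  # clauses mentioning 'a'
print(select(T, {"a"}, 1))  # plus clauses one overlap away
```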
Evaluation
- Ontologies:
  - Transport: 450 concepts
  - Communication: 200 concepts
  - Madcow: 55 concepts
- Selection functions:
  - symbol-relevance: axioms overlap by 1 symbol
  - concept-relevance: axioms overlap by 1 concept
- Queries: a random set of subsumption queries, Concept1 ⊑ Concept2?
Evaluation: Lessons
- Concept-relevance turns out to be a high-quality sound approximation (> 90% recall, 100% precision)
Works surprisingly well
- On our benchmarks, almost all answers are "intuitive"
- Not well understood why
- Theory doesn't predict that this is easy:
  - paraconsistent logic
  - relevance logic
  - multi-valued logic
- Hypothesis: due to the "local structure of knowledge"?
4 examples of "where does it break?"
1. Traditional complexity measures
2. Hard in theory, easy in practice
3. The context-specific nature of knowledge
Opinion poll
- Left: the meaning of a sentence is determined only by the sentence itself, and is influenced neither by the surrounding sentences nor by the situation in which the sentence is used.
- Right: the meaning of a sentence is not determined only by the sentence itself, but is also influenced by the surrounding sentences and by the situation in which the sentence is used.
Opinion poll (left? right?)
Don't you see what I mean?
Example: Ontology mapping with community support
(joint work with Zharko Aleksovski & Michel Klein)
The general idea
[diagram: source and target vocabularies are anchored into a body of background knowledge; inference over the background knowledge yields the mapping]
Example 1 [figure]
Example 2 [figure]
Experimental results
- Source & target: flat lists of ± 1400 ICU terms each
- Anchoring: substring matching + simple Germanic morphology (toy sketch below)
- Background knowledge: DICE (2300 concepts in DL)
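To make "anchoring" concrete, here is a toy sketch: a term is anchored to every background concept whose label occurs inside it after naive normalisation. The function and variable names and the example terms are mine; the real experiments additionally used simple Germanic morphology (e.g. decompounding), which is omitted here.

```python
# Toy substring anchoring: link each source/target term to background
# concepts whose label is a substring of the term.
def normalise(s):
    return s.lower().strip()

def anchor(terms, background_labels):
    return {
        term: [c for c in background_labels if normalise(c) in normalise(term)]
        for term in terms
    }

icu_terms = ["Acute respiratory failure", "respiratory insufficiency"]
dice_labels = ["respiratory failure", "respiration", "failure"]
print(anchor(icu_terms, dice_labels))
```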
New results:
- More background knowledge makes mappings better:
  - DICE (2300 concepts)
  - MeSH (22000 concepts)
  - ICD-10 (11000 concepts)
- Monotonic improvement of quality
- Linear increase of cost
So…
- The OLVG & AMC terms get their meaning from the context in which they are used
- Different background knowledge would have resulted in different mappings
- Their semantics is not context-free
- See also: S-MATCH by Trento
4 examples of "where does it break?"
1. Traditional complexity measures
2. Hard in theory, easy in practice
3. The context-specific nature of knowledge
4. Logic vs. statistics
Logic vs. statistics
- DB schemas & integration: only logic, no statistics
- AI has both logic and statistics, but kept completely disjoint
- Find combinations of the two worlds?
  - Statistics in the logic?
  - Statistics to control the logic?
  - Statistics to define the semantics of the logic?
Statistics in the logic? Fuzzy DL (Umberto Straccia)
- (TalksByFrank ⊑ InterestingTalks) ≥ 0.7
- (Turkey : EuropeanCountry) ≤ 0.2
- YoungPerson = Person ⊓ ∃age.Young, where Young has degree 1 up to 10 yr and degree 0 from 30 yr
- VeryYoungPerson = Person ⊓ ∃age.very(Young)
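The slide's plots survive only as axis labels (1, 0, 10 yr, 30 yr), which suggest a membership function that is 1 up to 10 years and reaches 0 at 30. A plausible reconstruction, with the standard fuzzy-logic choices of a linear ramp and of interpreting very as squaring (both my assumptions, not stated on the slide):

```latex
\mathrm{Young}(x) =
\begin{cases}
1 & \text{if } x \le 10\,\text{yr}\\[4pt]
\dfrac{30 - x}{20} & \text{if } 10\,\text{yr} < x < 30\,\text{yr}\\[4pt]
0 & \text{if } x \ge 30\,\text{yr}
\end{cases}
\qquad
\mathrm{very}(\mathrm{Young})(x) = \mathrm{Young}(x)^{2}
```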
Statistics to control the logic?
- Query: A ⊑ B?
- B = B1 ⊓ B2 ⊓ B3
- So: which of A ⊑ B1, A ⊑ B2, A ⊑ B3 should we check?
[diagram: A inside the overlapping regions B1, B2, B3]
Statistics to control the logic? (work by Riste Gligorov)
- Use "Google distance" to decide which conjuncts are reasonable to focus on
- Google distance ≈ symmetric conditional probability of co-occurrence ≈ estimate of semantic distance ≈ estimate of "contribution" to A ⊑ B1 ⊓ B2 ⊓ B3
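For reference, the usual formalisation of "Google distance" is Cilibrasi and Vitányi's Normalized Google Distance; the slide does not spell the formula out, so this is background rather than the speaker's definition:

```latex
\mathrm{NGD}(x, y) =
\frac{\max\{\log f(x),\, \log f(y)\} - \log f(x, y)}
     {\log N - \min\{\log f(x),\, \log f(y)\}}
```

where f(x) is the number of pages containing the term x, f(x, y) the number of pages containing both x and y, and N the total number of pages indexed.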
Statistics to define semantics? (work by Karl Aberer)
- Many peers have many mappings on many terms to many other peers
- A mapping is good if the results of the "whispering game" are truthful
- Punish mappings that contribute to bad whispering results
- The network will converge to a set of good (or at least consistent) mappings
Statistics to define semantics?
- Meaning of terms = relations to other terms
- Determined by a stochastic process
- Meaning ≈ stable state of a self-organising system
- Statistics = getting the system to a meaning-defining stable state
- Logic = description of such a stable state
- Note: meaning is still a binary, classical truth-value
- Note: the same system may have multiple stable states…
4 examples of "where does it break?"
Old assumptions that no longer hold, old approaches that no longer work:
1. Traditional complexity measures don't work: completeness, decidability, complexity
2. Sometimes "hard in theory, easy in practice": Q/A over inconsistent ontologies is easy, but why?
3. Meaning is dependent on context: meaning determined by background knowledge
4. Logic versus statistics: statistics in the logic, statistics to control the logic, statistics to determine semantics
Final comments
- These 4 "broken assumptions/old methods" were just examples; there are many more (e.g. Hayes & Halpin on identity, equality and reference)
- Notice that they are interlinked, e.g. hard theory/easy practice & complexity; meaning in context & logic/statistics
- Working on these will not be SemWeb work per se, but:
  - it will be inspired by SemWeb challenges
  - it will help the SemWeb effort (either V1 or V2)
Have fun with the puzzles!