Ontology-based approaches to providing a semantic infrastructure for Linked Data Mark Bide Godfrey Rust Rightscom Limited British Library Linked Data Workshop – 27 May 2010
Agenda
• Who are we? – a brief introduction
• What are the issues as we see them?
• “Joined-up semantics”: two use cases
Who are we?
• Godfrey Rust
▫ Director/Chief Data Architect, Rightscom
▫ 30 years of data modelling/management in the content industry
▫ Builder of the National Discography (a long time ago)
• Mark Bide
▫ Director/Senior Consultant, Rightscom
▫ Executive Director of EDItEUR
▫ A publisher (a long time ago)
• Rightscom
▫ A specialist London-based media consultancy
▫ Specialists in the management of content online
▫ Particular expertise in issues of identity and metadata management
Linked Data: the issues as we see them – Mark Bide
The “research challenges” of Linked Data
From: Bizer, Heath & Berners-Lee (2009), “Linked Data – The Story So Far”, International Journal on Semantic Web and Information Systems (Special Issue)
• User interfaces – how to present data from heterogeneous sources to users
• Application architectures – scalability of “on the fly” link traversal
• Schema mapping and data fusion
• Link maintenance – unmaintained URIs
• Licensing – automating access and use management
• Trust, quality and relevance – representation of provenance and trustworthiness
• Privacy – integration of personal data from multiple sources
Schema mapping…
• Data needs to be “integrated in a meaningful way before it is displayed to users”
▫ Requires “mapping of terms from different vocabularies”
• W3C recommendations “define basic terminology…[but are] too coarse-grained to properly transfer data between schemata”
▫ Structural and semantic heterogeneity
• Requirement: “languages for more fine-grained mappings”
▫ Including the capability to manage partial mappings, to cover cases where data sources mix terminology
…and data fusion
• “The process of integrating multiple data items representing the same real-world object into a single consistent and clean representation”
• “Main challenge”: the resolution of data conflicts
▫ Different values for the same property
▫ [although is this really the “main challenge”?]
• All of these issues are perhaps less acute for human-mediated, metadata-driven discovery processes
▫ But metadata isn’t only about human-mediated discovery
The challenge: data integration
• “The ultimate goal of Linked Data is to be able to use the Web like a single global database.” [Bizer, Heath & Berners-Lee]
• The challenges of integration are the same whether looking at silos of information within an enterprise or more widely at silos of information between enterprises
▫ The key problem lies in the data…
▫ …not in the software (and even less in the hardware)
• The “main challenge” in system integration is invariably consistency of the semantics of different systems
• You would not attempt to create a single enterprise database by simple amalgamation
Linked Data: a new label for an old idea?
• All databases are Linked Data
▫ Linking primary to foreign keys
• The <indecs> definition of metadata [2000]
▫ “An item of metadata is a relationship which someone claims to exist between two referents” [Rust & Bide]
• Semantic Web tools (RDF, OWL etc) provide a level of sophistication for (meta)data management which has been missing in relational models
▫ Make the links explicit: “first class objects”
▫ Overcome the restrictions of fixed, predefined objects/entities
But these tools take us only so far
• RDF and OWL deal only with infrastructure and logical relations
▫ …not meaning
• The equivalent of a database software platform
▫ Oracle, SQL
▫ Nothing to say about the data that populates the database
• Existing web namespaces (eg dc:, foaf:) are of some value
▫ …but there are challenges
A simple example: dc:creator
• Beethoven’s Fifth Symphony
▫ dc:creator: Ludwig van Beethoven
▫ dc:publisher: Bärenreiter
• A recording of Beethoven’s Fifth Symphony
▫ dc:creator: Ludwig van Beethoven (or dc:contributor?)
▫ dc:creator: Herbert von Karajan (or dc:contributor?)
▫ dc:creator: Berlin Philharmonic (or dc:contributor?)
▫ dc:publisher: Deutsche Grammophon (or…what?)
▫ dc:publisher: Bärenreiter (or…what?)
• Crowd-sourced solutions?
▫ Gracenote demonstrates both the potential and the challenges…
• Ultimately, meaning is always contextual, not universal
▫ Managing semantics is at the root of successful integration, whether at enterprise or network level
The semantic challenge in general
• Different names (and codes/languages) for the same value (your "Author" = my "Creator")
• Different specializations ("Editor", "Contributing Editor", "Managing Editor", "Copy Editor", "Sub Editor", "Guest Editor", "Editor-in-chief", "Film Editor", "Magazine Editor", "Series Editor" etc)
• Different ways of expressing the same concept ("Edit", "Editor", "Edited By", "Has Editor", "Edition", "Editing")
• Approximate matches (dc:creator or dc:contributor?)
• Different structures and levels of indirection (your "Composer.Name=Beethoven" = my "Name [Link] of Composer [Link] of Work [Link] in Recording")
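As an aside, the pivot idea behind mappings like these can be sketched in a few lines of Python. Everything here – the scheme contents, the pivot concepts, and the `translate` helper – is invented for illustration:

```python
# Toy illustration: two invented schemes use different labels for
# overlapping roles. Mapping them via a shared pivot concept avoids
# writing a separate crosswalk for every pair of schemes.

# scheme -> {local term: shared pivot concept}
SCHEME_A = {"Author": "creator", "Editor-in-chief": "editor"}
SCHEME_B = {"Creator": "creator", "Managing Editor": "editor"}

def translate(term, source, target):
    """Translate a term from one scheme to another via the pivot concept.

    Returns the first target term whose pivot matches, or None when the
    target scheme has no equivalent (a 'semantic gap')."""
    pivot = source.get(term)
    if pivot is None:
        return None
    for target_term, target_pivot in target.items():
        if target_pivot == pivot:
            return target_term
    return None

print(translate("Author", SCHEME_A, SCHEME_B))  # Creator
```

Note that, as the slide warns, this only finds approximate matches: "Managing Editor" and "Editor-in-chief" collapse onto the same pivot, losing their specialization.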
The semantic challenge…continued
• How do you deal with semantic gaps?
▫ “Your schema has nothing that even vaguely matches something in mine”
• …before we start on issues of
▫ Authority – who said that this was so?
▫ Time – is this still true?
▫ Place – where is this statement valid?
▫ …etc
• These challenges are the commonplace challenges of one-to-one schema mapping in an enterprise…
• …and they don’t disappear with Linked Data – they simply get more severe because of scale
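The Authority/Time/Place questions amount to qualifying every statement with its context. A minimal sketch of what a qualified claim might look like – the field names are illustrative, not drawn from any cited standard:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Claim:
    # A metadata statement plus the context needed to judge it:
    # who asserted it, and when/where it holds.
    subject: str
    relation: str
    value: str
    asserted_by: Optional[str] = None   # Authority: who said this was so?
    valid_until: Optional[int] = None   # Time: is this still true?
    territory: Optional[str] = None     # Place: where is this valid?

claim = Claim("id:123", "HasPublisher", "id:456",
              asserted_by="catalogue-A", valid_until=2012, territory="UK")

def still_valid(c: Claim, year: int) -> bool:
    """A claim with no expiry is treated as open-ended."""
    return c.valid_until is None or year <= c.valid_until
```

Plain triples have nowhere to put these qualifiers, which is one reason the later slides argue for treating links as first-class entities.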
The Web of Data as a single database?
• An “enterprise data model” for the web is not a real solution
▫ Could never be imposed
▫ Would never satisfy everyone’s requirements
• Must allow for the use of many existing standards – and for many new ones
▫ Many different metadata standards are already in use even within communities: MARC (and its many variations); DC; FRBR; RDA…
▫ …and the whole point of Linked Data is that it provides mechanisms to integrate (link) between different domains
Towards some answers?
• Demonstrate how “joined-up semantics” may be achieved within Linked Data: first by mapping different vocabularies into a contextual ontology
▫ vmf (Vocabulary Mapping Framework)
• and then by using semantic web tools within an enterprise
▫ coati
Joined-up semantics: two use cases – Godfrey Rust
‘Joined-up semantics’
Semantic standards in Linked Data form a stack:
▫ RDF – triple syntax
▫ OWL/RDFS – logical semantics
▫ Data model structure – ?
▫ Intended meaning – ?
Identities can be linked without the meanings connecting properly. Joined-up semantics is when the meanings of concepts are well matched throughout the chain of links.
Missing semantics in Linked Data – data structure
Metadata is much more complicated than a group of simple triples like this:
id:123 IsSomethingToDoWith id:456
It needs a richer structure to make good sense of it. In other systems we use “objects”, “tables” or “schemas” to give us structure – so how do you find some common structure in a sea of linked triples (beyond the logical semantics of RDF/OWL)?
Missing semantics in Linked Data – intended meaning
The key to meaning in linked data lies in the intended meanings of the relators and allowed values:
id:123 HasFormat Film
Meaning is granular and contextual. People will never all use the same metadata standards – in fact we are becoming more, not less, diverse in our choice of schemes and vocabularies. Lack of structure + multiplicity of meaning = the semantic challenge (in a single enterprise or in a community).
Towards joined-up semantics – two use cases
The use cases show: (1) meanings mapped through an ontology (vmf); (2) data transformed into a common semantic structure (coati). Together they demonstrate:
• A rich semantic model for linked data, with a small number of core elements (structure) and an ontology (meaning).
• A way to automatically transform data from any existing schema or model (triples or not) in or out of this common triple structure, which can then be used for any purpose.
The same model (COA) is used in both. Both are in development – coati is being implemented with its first client; vmf successfully completed “proof of concept” in Dec 2009.
Use case 1: the Vocabulary Mapping Framework
Background: in 2005 the BL and EDItEUR supported the “RDA/ONIX Framework for Resource Categorization”, a joint initiative across those two library and publishing metadata standards. It was published in 2006 and successfully used as a basis for developing RDA categories. A more ambitious follow-up was envisaged. This emerged as the Vocabulary Mapping Framework (vmf) in 2009, backed by representatives of many major content metadata schemes. The VMF is contained in the VMF matrix, an RDF/OWL ontology.
VMF goal
Automatically compute the “best fit” mappings between any two pre-defined vocabularies.
(Diagram: vocab 1, vocab 2, vocab 3 and vocab 4, each mapped via the central vmf matrix.)
VMF to date
Created the matrix and mapped the initial vocabularies. SPARQL queries generate scheme-to-scheme mappings (successful proof of concept, Dec 2009). Next task: establish VMF as an ongoing resource (working with the International DOI Foundation and others). VMF has an Advisory Board including representatives from RDA, DC, ONIX, DDEX, FRBR, MARC and DOI.
Initial schemes, partially mapped
• RDA (libraries)
• ONIX (book/serials publishing)
• DDEX (recorded music)
• Dublin Core (web metadata)
• FRBR (libraries)
• LOM/SCORM (education)
• MARC 21 (libraries)
• DOI (any content)
• CIDOC CRM (museums and archives)
• MPEG-21 RDD (digital rights)
• RDA/ONIX Framework
Matrix stats
Approximately:
• 10 schemes
• 53 vocabularies mapped in whole or part
• 1,000+ concept families
• 20,000+ unique terms
• 100,000+ RDF triples
This is quite a large ontology – because VMF automatically generates most of its own terms.
How does VMF work?
Terms are mapped into an ontology (the VMF matrix) built up from “families” of concepts based on verbs. An ontology is a structured data dictionary, where carefully defined concepts are linked by logical relationships that allow meaning to pass from one to another within a computer system. An ontology is a way of making terms behave themselves. The matrix can be queried to get the “best fit” match from one term or vocabulary to another. For example…
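A hedged sketch of the “best fit” idea: when a source term has no exact equivalent in the target vocabulary, broaden up the concept hierarchy until a term is found. The hierarchy and vocabularies below are toy inventions, not VMF’s actual matrix:

```python
# Toy concept hierarchy: child -> parent links (invented for illustration).
PARENT = {
    "SubtitlesTranslator": "Translator",
    "Translator": "Contributor",
    "Contributor": "Agent",
}

# Pivot concepts for which an (invented) target vocabulary has terms.
TARGET_TERMS = {"Translator": "Translated by", "Contributor": "Contributor"}

def best_fit(concept):
    """Return the most specific target term at or above `concept`,
    walking up the hierarchy until a match is found."""
    while concept is not None:
        if concept in TARGET_TERMS:
            return TARGET_TERMS[concept]
        concept = PARENT.get(concept)
    return None

print(best_fit("SubtitlesTranslator"))  # Translated by
```

The broadening step is why, as a later slide puts it, no mapping can create meaning that is not already in the target scheme: it can only find the nearest thing that is there.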
A Concept Family
A Concept Family starts with a verb, which provides its Context: Create (or “Creating Event”). The definition of the verb provides the core meaning of the concept: Create (to Make something, as a human being). The verb has a Parent – Make (to bring something into existence) – and Children: Conceive, Originate, Derive, Create Work, Create Perceivable Resource, Create with Tool, Create with Material, Direct, Contribute etc.
From the Context the family also generates an Agent (Creator), a Resource (Creation), and the Relators that connect them: Creator_Creation, Creation_Creator, Create_Creator, Creator_Create, Create_Creation, Creation_Create.
A Concept Family provides a complete set of terms that describe a type of Event or State (“Context”), always based on a verb. Every term in the VMF matrix is a member of a Concept Family. All relationships (and so most meanings) are based on Events, so this is a good place to start.
How do these terms relate to the vocabularies we are mapping? Each external term attaches to the node in the family that matches it:
• onix:CodeList17 “Created by”
• marc21:Relationship “Creator”
• lom:role_lifecycle “author”
• dc:dc15 “Creator”
• crm:property “was created by”, crm:property “has created”, crm:class “Man-made object”
• frbr:Endeavour
• rdd:verbs “Make”
• ddex, rda and frad: (nothing)
Every term in a vocabulary maps onto a term in some Concept Family.
A Concept Family
“Create” is a simple Concept Family. Families can be as complex as needed to reflect any specific concept.
Some lower-level verbs
• Make From Material
• Create Part Of Film
• Create Parody Of FRBR Expression
• Smell
• Moderate Panel
• Create Content As Librettist
• Conduct Experiment
• Design Cover Or Cover Artwork
• Take Moving Photograph
• Be Human
• Perform Music As Primary Performer
• See With Microform Reader
• Be Changeable Necessarily
• Add Ink Outline Or Add Color To Visual
• Be HTML
• Realize On Mirror Site
However granular or obscure, everything belongs to a Concept Family. The matrix should be able to include almost any concept at any level of detail – but is extended on a “need to know” basis (the principle of Functional Granularity).
Value of the Concept Family
The Concept Families provide all possible points (“nodes”) in the matrix for other terms to be mapped to. Large numbers of terms can be generated in the matrix very efficiently: a single concept will typically produce 15-25 terms in a largely automated way. Less than a quarter of those terms will directly map onto vocabularies, but most will be used as pathways in computing relationships between different terms.
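The largely automated generation of family terms can be illustrated with a toy generator. The real VMF rules are richer; this only shows the principle that most terms are derived mechanically from the verb and its agent/resource nouns:

```python
def family_terms(verb, agent, resource):
    """Generate the core terms of a toy concept family from a verb.

    Illustrative only: yields the three role terms plus every pairwise
    relator (e.g. Creator_Creation, Creation_Creator, ...), mirroring
    how a single concept fans out into many matrix terms."""
    roles = [verb, agent, resource]   # e.g. Create, Creator, Creation
    terms = set(roles)
    for a in roles:
        for b in roles:
            if a != b:
                terms.add(f"{a}_{b}")
    return sorted(terms)

print(len(family_terms("Create", "Creator", "Creation")))  # 9
```

Even this toy version turns one concept into nine terms; adding contexts, states and role variants is how the real matrix reaches 15-25 terms per concept.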
Mapping to the matrix
vmf:WordsCreator, vmf:Adaptor, vmf:WordsAdaptor, vmf:Commentator, vmf:Translator, vmf:SubtitlesTranslator, vmf:TranslatorAndCommentator
Mapping scheme to scheme
Every term in a vocabulary is given an equivalent term in a VMF concept family. For example:
• onix “Translated by” and ddex:Translator → vmf:Translator
• ddex:SubtitlesTranslator → vmf:SubtitlesTranslator
• onix “Translated with commentary by” → vmf:TranslatorAndCommentator
(Other nodes in the family: vmf:WordsCreator, vmf:Adaptor, vmf:WordsAdaptor, vmf:Commentator.)
Mapping scheme to scheme
Queries can then be used to find the “best fit” mappings between two terms or between complete vocabularies.
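The scheme-to-scheme query can be sketched as a two-hop lookup through the vmf node, falling back to a parent node when the target vocabulary has no term at the exact node. The term tables below are invented fragments echoing the slide, not actual VMF data:

```python
# Each external term is attached to a node in the vmf hierarchy;
# scheme-to-scheme mapping goes via that node (all tables illustrative).
ONIX_TO_VMF = {"Translated by": "vmf:Translator",
               "Translated with commentary by": "vmf:TranslatorAndCommentator"}
VMF_TO_DDEX = {"vmf:Translator": "ddex:Translator",
               "vmf:SubtitlesTranslator": "ddex:SubtitlesTranslator"}
VMF_PARENT = {"vmf:TranslatorAndCommentator": "vmf:Translator",
              "vmf:SubtitlesTranslator": "vmf:Translator"}

def onix_to_ddex(term):
    """Map an ONIX term to ddex via its vmf node, broadening to the
    parent node when ddex has no term at the exact node."""
    node = ONIX_TO_VMF.get(term)
    while node is not None:
        if node in VMF_TO_DDEX:
            return VMF_TO_DDEX[node]
        node = VMF_PARENT.get(node)
    return None
```

Note the lossy case: “Translated with commentary by” can only come out as the broader ddex:Translator, which is exactly the “best fit, not right answer” point of the next slide.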
How to get the “best fit”?
There is rarely a “right” answer to a complete vocabulary mapping – we’re looking for the “best fit”. No mapping can create meaning that is not already in the target scheme – it can only try to find it. Do we want “Recommended” mappings for particular vocabulary pairs? If so, with what rules, and who agrees the results? Do we need standard mappings for metadata standards? There are many possibilities for refining the ontology structure and query methods, including the addition of conditional rules that go beyond the basic OWL axioms. Authority becomes a critical issue. Governance for VMF is under consideration by the Advisory Board.
Vocabularies unlinked…
Until recently, vocabulary terms have normally been used in the schemes or systems for which they were designed: an ONIX term would be used in an ONIX message, a MARC term in a MARC record, etc. There has been value in crosswalks between complete schemes and vocabularies, but not between individual terms across random schemes.
What changes in Linked Data: individual terms are used outside the schemes and messages for which they were designed. We need ways of discovering the meaning of terms from all kinds of schemes, vocabularies and languages when they occur on their own in isolated triples, by “translating” them into terms from schemes which we (whoever we are) do recognize. Joined-up semantics in Linked Data will need services like VMF.
Use case 2: coati – Joined-up semantics in an enterprise Although this use case applies within a single enterprise, this approach could operate in a distributed environment, using a collective ontology such as VMF, working with private or public data, or a mixture. And this uses one particular data model: the fine details are not critical – it’s the general approach.
Use case 2: coati – joined-up semantics in an enterprise
A metadata management system using an RDF database. First client: an international rights intermediary operating on four continents, licensing rights and products (principally music and text), with a bibliographic approach to metadata, integrating data from internal systems and thousands of owners and users.
Problem: the business is expanding – new territories, uses, content, media and agreements; more granular data; more languages – and the existing system can’t cope. The client needs to search, query and process metadata in many different ways, and to link up with others; requirements cannot be predicted for the medium or long term.
Solution: an RDF database, configured and constrained by SPARQL rules in an ontology, built on the coa data model. (coa = “Contextual Ontology Architecture”, Rightscom’s data architecture developed from the <indecs> framework, with much in common with the FRBR / CIDOC models.)
Joined-up semantics: an approach
A common data structure cannot be a “lowest common denominator” into which data must be “dumbed down”. It must be at least as rich in structure and meaning as any model or schema to which it is mapped. The model we have used has two main parts:
• a small, fixed set of five core elements, with specific relators and constraint rules, as building blocks. This is a complete set, may apply to any kind of entity, and provides structure.
• a hierarchical ontology of terms, based on a generic context model (illustrated briefly in vmf). This can be extended to any level of specialization and provides meaning.
In coati the ontology is expressed in RDF (TTL) triples alongside the data. In vmf the ontology is the data.
Joined-up semantics: core elements
The coa data model has five core elements: a Link and four kinds of Attribute.
• Link – connects two independent Entities, eg “Moby Dick” HasAuthor “Herman Melville”
• Category – a categorization of an Entity with a fully controlled data value, eg Genre=Novel
• Descriptor – a Name, Identifier or Annotation of an Entity in the form of an uncontrolled or partially controlled data value, eg TitleWithSubtitle=“Moby Dick, Or, The White Whale”
• Time – a Time associated with an Entity, eg DateOfCreation=1851
• Quantity – a measure of some aspect of an Entity, eg Height=140 cm
Attributes each belong to a single Entity. All metadata can be transformed into this model. How…?
Links and Attributes as Entities
The key is to treat Links and Attributes as Entities (with their own Identifiers) in their own right. Links and Attributes can then have their own Links and Attributes, and so on as far as needed. For example, a resource may have a Link, which has a valid period (Time), which has a description (Descriptor), which has a Category, which has a name (Descriptor), which applies in a particular territory (Link). This is horrible to model in traditional relational databases – but in linked data it’s natural, so long as you have the computing power, and that is no longer a major issue. The benefits are obvious. Entities are built up by combining any number of core elements (a “molecular structure”) in repeating patterns, using a small number of defined relators. This approach also makes the data very expressive…
Example: Link as an Entity
Links can look simple. For example:
id:123 part of id:456
Problem: there may be a lot more to say about this Link, such as:
- exactly what role something plays in the Link (eg “chapter”, “track”)
- its sequence number in a set of Links (“track 4”)
- how much of one thing is included in another (eg “duration 1:00”)
- where in one thing another is located (eg “start point 13:24”)
- a Category of the Link (“Hidden track”)
- who created the data?
- who says that this Link is true? etc
Some links are simple, but many have to be much richer.
Transformation into core elements: Link
So in coati this Link is transformed. This:
id:123 part of id:456
…becomes a new Entity, connecting the two others:
id:987 HasEntity1 id:123
id:987 HasEntity2 id:456
The Link has a type (derived from the original “part of” relator):
id:987 HasLinkType PartLink
And each linked entity may play a distinct role:
id:987 HasLinkRole2 Chapter
The relators here (HasEntity1, HasEntity2, HasLinkType, HasLinkRole2) are part of the standard core element set, and are used for every link. The type and role (“PartLink”, “Chapter”) are provided by the ontology. The transformation is done with SPARQL “mapping” rules for the source data schema (unless the Link is created directly in coati). And you can now add other statements about this Link, by rules or data entry, adding Links or Attributes such as sequence number (Descriptor), duration (Quantity), start point (Time) and so on, using the core elements. All your available semantics can be joined up.
Rich Links
We call this a “rich” Link – a Link which is an entity in its own right. For some types of Link it is unnecessary ever to make extra statements, and so it may be mapped as a simple triple…
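The rich-Link transformation described above can be sketched as follows. The relator names (HasEntity1, HasLinkType, …) follow the slides; the id generator and the relator-to-type table are illustrative:

```python
import itertools

RELATOR_TO_TYPE = {"part of": "PartLink"}  # illustrative relator->type table
_ids = itertools.count(987)                # illustrative id generator

def enrich(flat_triple, role2=None):
    """Turn a flat triple into a 'rich' Link: a new entity carrying the
    two endpoints, a link type derived from the relator, and an optional
    role for the second entity, all as further triples."""
    subject, relator, obj = flat_triple
    link_id = f"id:{next(_ids)}"
    triples = [
        (link_id, "HasEntity1", subject),
        (link_id, "HasEntity2", obj),
        (link_id, "HasLinkType", RELATOR_TO_TYPE[relator]),
    ]
    if role2:
        triples.append((link_id, "HasLinkRole2", role2))
    return link_id, triples

lid, t = enrich(("id:123", "part of", "id:456"), role2="Chapter")
```

Because the Link now has its own identifier, further statements (sequence number, duration, provenance) can simply be added as more triples about `lid`.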
Flat Links
We call this a “flat” Link:
id:123 CreatingEvent_Creator id:456
Notice that the relators are in the same form as VMF – because these relators are derived from the same ontology. Flat Links are fine under certain logical conditions. (There isn’t time to unpack this here, but the important point is that this is not arbitrary: there are logic-based rules which determine whether a link should be rich or flat, so anyone doing the mapping can be expected to get the same result.)
Other core elements The four Attribute core elements behave in a similar way to Links. Like Links, each Attribute element also has a “flat” structure which can be used when there is no need or possibility of it having Links or Attributes of its own.
Transforming data into coa triples
Each field/element in the source data schema is mapped by creating SPARQL rules to convert it into coa triples. Metadata of any complexity can be represented in this way. A type of Entity (like a book, a person, a postal address, an event, or an invoice) is defined as a group of core elements, with rules constraining the types of each element it may possess.
Note: a source “schema” can be as simple as an Excel spreadsheet or a few dc: relators, or as complex as an ONIX or MARC XML schema or a 100-table relational database (the client mostly uses SQL). This approach can be used within a closed or open world; within a single enterprise, a community (commercial, bibliographic…), or with open linked data – or any combination.
(Diagram: Source data → Transform (SPARQL mapping rules) → coa triples.)
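As a stand-in for the SPARQL mapping rules (which are client-specific and not shown in the deck), here is the shape of such a transform over a spreadsheet-like row. All column, relator and element names here are invented:

```python
# Illustrative stand-in for the SPARQL mapping rules: each source column
# is declared once (which core element it becomes, and via which relator),
# and every row is then converted to coa-style statements mechanically.
COLUMN_RULES = {
    "title":  ("Descriptor", "HasTitle"),
    "author": ("Link", "HasAuthor"),
    "year":   ("Time", "HasDateOfCreation"),
}

def row_to_triples(entity_id, row):
    """Convert one source row into coa-style statements, each tagged
    with the core element it maps to."""
    out = []
    for column, value in row.items():
        element, relator = COLUMN_RULES[column]
        out.append((entity_id, relator, value, element))
    return out

rows = row_to_triples("id:123", {"title": "Moby Dick",
                                 "author": "id:melville",
                                 "year": "1851"})
```

The point mirrored here is that the mapping is declared once per schema; after that, data of any shape lands in the same small set of structures.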
Creating coa triples with a UI
Data can also be created directly as coa triples. Coati is a metadata management system, so much data will be created and maintained by people at screens. This creates the coa triples directly, so no transformation is necessary.
(Diagram: User interface → coa triples; Source data → Transform (SPARQL mapping rules) → coa triples.)
What’s the point? So that data of many schemas and types can be linked in a way that can be searched/queried/processed as if it had been created in the same way to begin with. If Linked Data is assembled using a small number of generic structures and relators like this, then the patterns for rules and queries can be reused very effectively, and new kinds of data can be added without needing to extend the model structures. Finally let’s look at a few coati screens to demonstrate this….
Everything should be made as simple as possible, but not simpler. Albert Einstein