Hi Martin

We discussed this issue a bit in the developer hangout; sadly, too few
people are usually present.

On Tue, Jan 27, 2015 at 12:33 PM, Martin Brümmer <
bruem...@informatik.uni-leipzig.de> wrote:

> Hi DBpedians!
>
> As you surely have noticed, Google has abandoned Freebase and it will
> merge with Wikidata [1]. I searched the list, but did not find a
> discussion about it. So here goes my point of view:
>
> When Wikidata was started, I hoped it would quickly become a major
> contributor of quality data to the LOD cloud. But although the project
> has a potentially massive crowd and is backed by Wikimedia, it does not
> really care about the Linked Data paradigm as established in the
> Semantic Web. RDF is more of an afterthought than a central concept. It
> was a bit disappointing to see that Wikidata's impact on the LOD
> community is lacking because of this.
>

 I think it's more of a resource/implementation problem for them.
Publishing Linked Data requires a major commitment, and the tools for it
still lack refinement.


> Now Freebase will be integrated into Wikidata as a curated, Google
> engineering hardened knowledge base not foreign to RDF and Linked Data.
> How the integration will be realized is, it seems, not yet clear. One
> consequence is hopefully, that the LOD cloud grows by a significant
> amount of quality data. But I wonder what the consequences for the
> DBpedia project will be? If Wikimedia gets their own knowledge graph,
> possibly curated by their crowd, where is the place for DBpedia? Can
> DBpedia stay relevant with all the problems of an open source project,
> all the difficulties with mapping heterogeneous data in many different
> languages, the resulting struggle with data quality and consistency and
> so on?
>

Wikidata and DBpedia are two different beasts. Wikidata is a wiki for
structured data, while DBpedia is an information extraction framework with a
crowdsourced component, the mappings wiki. While Wikidata might gain a lot
of data from Freebase, it won't help them much if Google does not also
release the information extraction framework behind Freebase. Without it,
the data would get old very fast and the community would not be able to
update and maintain it. What exactly Google will do remains to be seen.


> So I propose being proactive about it:
>

I agree with being proactive, we have a lot of problems in DBpedia that
need to be addressed.


> I see a large problem for DBpedia in the restrictions of the RDF data
> model. Triples limit our ability to make statements about statements. I
> cannot easily address a fact in DBpedia and annotate it. This means:
>

DBpedia is not only available as triples but also as N-Quads [1].
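For readers unfamiliar with the format: an N-Quads statement extends an RDF
triple with a fourth element, an IRI naming the graph or context the
statement belongs to, which is exactly the slot that can carry provenance.
A minimal sketch (the context IRI here is invented for illustration):

```
<http://dbpedia.org/resource/Barack_Obama> <http://dbpedia.org/ontology/birthPlace> <http://dbpedia.org/resource/Honolulu> <http://example.org/provenance/enwiki-2015-01> .
```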


>     -I cannot denote the provenance of a statement. I especially cannot
> denote the source data it comes from. Resource-level provenance is not
> sufficient if further datasets are to be integrated into DBpedia in the
> future.
>     -I cannot denote a timespan that limits the validity of a statement.
> Consider the fact that Barack Obama is the president of the USA. This
> fact was not valid at a point in the past and won't be valid at some
> point in the future. Now I might link the DBpedia page of Barack Obama
> for this fact. If a DBpedia version is published after the next
> president of the USA is elected, this fact might be missing from
> DBpedia and my link becomes moot.
>     -This is a problem with persistency. Being able to download old dumps
> of DBpedia is not a sufficient model of persistency. The community
> struggles to increase data quality, but as soon as a new version is
> published, it drops some of the progress made in favour of whatever
> facts are found in the Wikipedia dumps at the time of extraction. The
> old facts should persist, not only in some dump files, but as linkable
> data.


> Being able to address these problems would also mean being able to fully
> import Wikidata, including provenance statements and validity timespans,
> and combine it with the DBpedia ontology (which already is an important
> focus of development and rightfully so). It also means a persistent
> DBpedia that does not start over in the next version.
>
> So how can it be realized? With reification of course! But most of us
> resent the problems reification brings with it, the complications in
> querying etc. The reification model itself is also unclear. There are
> different proposals, blank nodes, reification vocabulary, graph names,
> creating unique subproperties for each triple etc. Now I won't propose
> using one of these models, this will surely be subject to discussion.
> But the DBpedia can propose a model and the LOD community will adapt,
> due to DBpedia's state and impact. I think it is time to up the standard
> of handling provenance and persistence in the LOD cloud and DBpedia
> should make the start. Especially in the face of Freebase and Wikidata
> merging, I believe it is imperative for the DBpedia to move forward.
>

The problem of tracking changes in Wikipedia over time has been addressed
in DBpedia Live, and a demo was presented at the last meeting in Leipzig
under the title "Versioning DBpedia Live using Memento" [3].

As you mentioned, RDF reification [2] has drawbacks regarding performance
and verbosity. We had a similar need in one of the applications we
developed; reified statements were simply impractical due to their
verbosity and performance impact. The solution we came up with was using
N-Quads, with the fourth element serving as an ID into an index. By looking
up the ID you can find information regarding provenance, time, etc. I think
this is more of a graph database problem: we should look at ways it can be
implemented efficiently in RDF stores and then propose modifications to the
RDF/SPARQL standards if needed. Maybe the people from OpenLink or other
RDF store researchers have some ideas on this issue.
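To make the quad-plus-index idea concrete, here is a minimal sketch in
plain Python (all names, IDs, and values are invented for illustration; a
real system would of course sit on top of an RDF store):

```python
# Sketch of the approach described above: each statement is stored as a
# quad whose fourth element is a statement ID, and a separate index maps
# that ID to provenance and validity metadata.

from collections import namedtuple

Quad = namedtuple("Quad", ["subject", "predicate", "obj", "stmt_id"])

# The data itself: plain quads, no reification blow-up.
quads = [
    Quad("dbr:Barack_Obama", "dbo:office", "President_of_the_USA", "stmt:42"),
]

# Side index keyed by the statement ID: source, validity timespan, etc.
metadata = {
    "stmt:42": {
        "source": "enwiki dump 2015-01",
        "valid_from": "2009-01-20",
        "valid_until": None,  # still valid at extraction time
    },
}

def provenance(quad):
    """Look up the metadata record for a statement via its fourth element."""
    return metadata.get(quad.stmt_id)

info = provenance(quads[0])
print(info["source"])
```

The point of the design is that the statement itself stays a single quad,
so querying stays cheap, while annotations live in a lookup that is only
touched when provenance is actually needed.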

Cheers,
Alexandru

[1] http://sw.deri.org/2008/07/n-quads/
[2] http://patterns.dataincubator.org/book/reified-statement.html
[3] http://wiki.dbpedia.org/meetings/Leipzig2014
_______________________________________________
Dbpedia-discussion mailing list
Dbpedia-discussion@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/dbpedia-discussion