Hi Alexandru,

On 27.01.2015 at 13:46, Alexandru Todor wrote:
> Hi Martin
>
> We discussed this issue a bit in the developer hangout; sadly, too few
> people are usually present.
>
> On Tue, Jan 27, 2015 at 12:33 PM, Martin Brümmer
> <bruem...@informatik.uni-leipzig.de> wrote:
>
>     Hi DBpedians!
>
>     As you surely have noticed, Google has abandoned Freebase and it
>     will be merged into Wikidata [1]. I searched the list, but did not
>     find a discussion about it. So here is my point of view:
>
>     When Wikidata was started, I hoped it would quickly become a major
>     contributor of quality data to the LOD cloud. But although the
>     project has a potentially massive crowd and is backed by Wikimedia,
>     it does not really care about the Linked Data paradigm as
>     established in the Semantic Web. RDF is more of an afterthought
>     than a central concept. It was a bit disappointing to see that
>     Wikidata's impact on the LOD community is lacking because of this.
>
>
> I think it's more of a resource/implementation problem for them.
> Publishing linked data requires a major commitment, and the tools for
> it are still lacking in refinement.
>
>
>     Now Freebase will be integrated into Wikidata as a curated
>     knowledge base, hardened by Google engineering and not foreign to
>     RDF and Linked Data. How the integration will be realized is not
>     yet clear, it seems. One consequence is hopefully that the LOD
>     cloud grows by a significant amount of quality data. But I wonder
>     what the consequences for the DBpedia project will be. If Wikimedia
>     gets their own knowledge graph, possibly curated by their crowd,
>     where is the place for DBpedia? Can DBpedia stay relevant with all
>     the problems of an open source project, all the difficulties with
>     mapping heterogeneous data in many different languages, the
>     resulting struggle with data quality and consistency, and so on?
>
>
> Wikidata and DBpedia are two different beasts. Wikidata is a wiki for
> structured data, while DBpedia is an Information Extraction Framework
> with a crowdsourced component, the mappings wiki. While Wikidata might
> gain a lot of data from Freebase, it won't help them that much if
> Google does not also hand over the information extraction framework
> behind Freebase; the data would get old very fast and the community
> won't be able to update and maintain it. What exactly Google will do
> remains to be seen, though.

I kind of disagree with you here. I regard and use DBpedia first and
foremost as a source of machine-readable linked data. Because of its
nature as a derivative project extracting Wikipedia data, it is
endangered by a potential future in which the Wikipedia crowd maintains
its own machine-readable linked data to feed (among other things) the
infoboxes DBpedia seeks to extract. I fear that, with Freebase becoming
a part of Wikidata, this future becomes a little more likely, even if,
as you rightly say, we don't know what Google will do.

>
>
>     So I propose being proactive about it:
>
>
> I agree with being proactive; we have a lot of problems in DBpedia
> that need to be addressed.
>
>
>     I see a large problem for DBpedia in the restrictions of the RDF
>     data model. Triples limit our ability to make statements about
>     statements. I cannot easily address a fact in DBpedia and annotate
>     it. This means:
>
>
> DBpedia is not only available in triples but also in N-Quads.
>
>
>         -I cannot denote the provenance of a statement. I especially
>     cannot denote the source data it comes from. Resource-level
>     provenance is not sufficient if further datasets are to be
>     integrated into DBpedia in the future.
>
>         -I cannot denote a timespan that limits the validity of a
>     statement. Consider the fact that Barack Obama is the president of
>     the USA. This fact was not valid at a point in the past and won't
>     be valid at some point in the future. I might link the DBpedia page
>     of Barack Obama for this fact, but if a DBpedia version is
>     published after the next president of the USA is elected, this fact
>     might be missing from DBpedia and my link becomes moot.
>
>         -This is a problem with persistence. Being able to download old
>     dumps of DBpedia is not a sufficient model of persistence. The
>     community struggles to increase data quality, but as soon as a new
>     version is published, it drops some of the progress made in favour
>     of whatever facts are found in the Wikipedia dumps at the time of
>     extraction. The old facts should persist, not only in some dump
>     files, but as linkable data.
>
>
>     Being able to address these problems would also mean being able to
>     fully import Wikidata, including provenance statements and validity
>     timespans, and combine it with the DBpedia ontology (which is
>     already an important focus of development, and rightfully so). It
>     also means a persistent DBpedia that does not start over with each
>     new version.
>
>     So how can it be realized? With reification, of course! But most of
>     us resent the problems reification brings with it, the
>     complications in querying and so on. The reification model itself
>     is also unclear: there are different proposals, such as blank
>     nodes, the reification vocabulary, graph names, or creating unique
>     subproperties for each triple. I won't propose one of these models
>     here; this will surely be subject to discussion. But DBpedia can
>     propose a model and the LOD community will adapt, given DBpedia's
>     standing and impact. I think it is time to raise the standard of
>     handling provenance and persistence in the LOD cloud, and DBpedia
>     should make the start. Especially in the face of Freebase and
>     Wikidata merging, I believe it is imperative for DBpedia to move
>     forward.
>
>
> The problem of changes over time in Wikipedia has been addressed in
> DBpedia Live, and a demo was presented at the last meeting in Leipzig
> under the title "Versioning DBpedia Live using Memento" [3].
>
> As you mentioned, RDF reification has drawbacks regarding performance
> and verbosity. We had a similar need in one of the applications we
> developed: reified statements were simply impractical due to their
> verbosity and performance impact. The solution we came up with was
> using N-Quads and treating the fourth element as an ID into an index.
> By looking up the ID you can find information regarding provenance,
> time, etc. I think this is more of a graph database problem. We should
> look at ways it can be implemented effectively in RDF stores and then
> propose modifications to the RDF/SPARQL standards if needed. Maybe the
> people from OpenLink or other RDF store researchers have some ideas on
> this issue.

I did not know about Memento; it's an interesting project, thanks. I
would really like to see the problems addressed natively in DBpedia,
though. If DBpedia, with the ever-changing factual base it inherits from
its data source, could present a way of handling persistence and
versioning of facts, it could address the concerns of some EU projects
that focus on exactly these points, such as PRELIDA [1] and DIACHRON [2].
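
Just to sketch what I mean by old facts staying linkable (this is only
how I understand Memento's datetime negotiation; the TimeGate URL below
is a made-up placeholder, I don't know what the demo actually exposes):

# Rough sketch of Memento-style datetime negotiation (RFC 7089).
# The TimeGate URL is hypothetical, not an actual DBpedia endpoint.
import requests

timegate = "http://example.org/timegate/http://dbpedia.org/resource/Barack_Obama"

resp = requests.get(
    timegate,
    headers={"Accept-Datetime": "Tue, 01 Jan 2013 00:00:00 GMT"},
)

# The TimeGate should redirect to the memento, i.e. the version of the
# resource that was valid at the requested time.
print(resp.url)
print(resp.headers.get("Memento-Datetime"))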

I totally agree with you regarding the drawbacks of reification. I
thought about using the graph name of a triple (that's what I understood
the fourth element in N-Quads was commonly used for) as a statement
identifier, and I would regard this as a better solution than using the
RDF reification vocabulary, but I'm not sure how it impacts SPARQL
querying and graph database performance. Dimitris made me aware of a
proposal [3] by Olaf Hartig that requires extensions to Turtle and
SPARQL. Does anyone know if there is a comprehensive overview of the
many different ways to do reification, addressing the advantages and
drawbacks of these models?
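
To make the comparison a bit more concrete, here is a minimal sketch
(Python with rdflib; the example.org statement ID, graph name and the
small "meta" annotation vocabulary are all made up for illustration, not
an actual DBpedia scheme) of the same fact annotated once with the
reification vocabulary and once via its graph name:

# Minimal sketch: RDF reification vocabulary vs. a named-graph (N-Quads)
# identifier for annotating a single fact with provenance and validity.
from rdflib import Dataset, Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

DBR = Namespace("http://dbpedia.org/resource/")
DBO = Namespace("http://dbpedia.org/ontology/")
EX = Namespace("http://example.org/meta/")  # hypothetical annotation vocabulary

fact = (DBR.United_States, DBO.leader, DBR.Barack_Obama)

# 1) Reification vocabulary: four extra triples per annotated statement.
g = Graph()
g.add(fact)
stmt = URIRef("http://example.org/statement/1")  # hypothetical statement ID
g.add((stmt, RDF.type, RDF.Statement))
g.add((stmt, RDF.subject, fact[0]))
g.add((stmt, RDF.predicate, fact[1]))
g.add((stmt, RDF.object, fact[2]))
g.add((stmt, EX.validFrom, Literal("2009-01-20", datatype=XSD.date)))

# 2) Graph name as statement identifier: the fact lives in its own named
#    graph, and provenance/validity hang off the graph name.
ds = Dataset()
graph_id = URIRef("http://example.org/graph/usa-leader-obama")
ds.graph(graph_id).add(fact)
meta = ds.graph(URIRef("http://example.org/meta-graph"))
meta.add((graph_id, EX.extractedFrom,
          URIRef("http://en.wikipedia.org/wiki/Barack_Obama")))
meta.add((graph_id, EX.validFrom, Literal("2009-01-20", datatype=XSD.date)))

print(g.serialize(format="turtle"))
print(ds.serialize(format="nquads"))

The second variant is what I have in mind with the graph name as an
identifier; the open question for me is how RDF stores cope with a very
large number of small named graphs.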

I also agree that this is a graph database problem, but how can DBpedia
tackle this issue, other than hoping that someone from OpenLink is
reading these emails?

regards,
Martin

[1] http://www.prelida.eu/
[2] http://www.diachron-fp7.eu/
[3] http://arxiv.org/pdf/1406.3399.pdf

>
> Cheers,
> Alexandru
>
> [1] http://sw.deri.org/2008/07/n-quads/
> [2] http://patterns.dataincubator.org/book/reified-statement.html
> [3] http://wiki.dbpedia.org/meetings/Leipzig2014
