On 06/08/2016 11:46 AM, james anderson wrote:
if the goal is to leave room for the judgement call, then, assuming that
dimension is free, placing each in its own graph gives one the latitude to
make the judgement call, develop some systematic approach which depends on
provenance, or reflect the question to a manu ...
You've got it!
What matters is what your system believes is owl:sameAs based on its
viewpoint, which could be based on who you trust to say owl:sameAs. If
you are worried about "inference crashes", pruning this data is the place to
start.
You might want to apply algorithm X to a graph, but data ...
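As a sketch of that kind of pruning in SPARQL 1.1 Update (the graph names and
the list of trusted graphs below are made up for illustration), one could copy
owl:sameAs statements into a working graph only when they come from sources
you trust, and run inference over the working graph rather than over
everything fetched:

PREFIX owl: <http://www.w3.org/2002/07/owl#>
# Keep only the owl:sameAs links asserted in graphs we have decided to trust.
INSERT { GRAPH <urn:example:trusted-sameas> { ?a owl:sameAs ?b } }
WHERE {
  VALUES ?g { <http://example.org/graphs/curated> <http://example.org/graphs/partner> }
  GRAPH ?g { ?a owl:sameAs ?b }
}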
good afternoon;
> On 2016-06-08, at 17:17, David Booth wrote:
>
> On 06/08/2016 08:55 AM, Martynas Jusevičius wrote:
>> So I think it would be
>> wrong to ignore the "older" description -- or any "other" description
>> in general.
>
> This gets into the whole area of what data you choose to believe ...
Hi Martynas,
Hybrid solutions do exist that can do 3. (see, e.g., [1]). However, 1.
is definitely the most scalable approach (see, e.g., [2,3]). I'd suggest
running 1. and 3. in parallel to ensure maximal user satisfaction.
Best,
Axel
[1] http://aderis.linkedopendata.net
[2] http://aksw.org/
On 06/08/2016 08:55 AM, Martynas Jusevičius wrote:
So I think it would be
wrong to ignore the "older" description -- or any "other" description
in general.
This gets into the whole area of what data you choose to believe. Some
data is just plain wrong, and lots of data is "correct" (i.e. usable ...)
Hi Martynas,
We worked on that problem in [0] and used a merging strategy to consolidate
entities.
In [1] you will find a more detailed description, with a screenshot [2], of how
this was presented to the user.
In essence, the user saw that the resulting entity was merged from several
separate co-refe ...
The vanilla RDF answer is that the data gathering module ought to pack all
of the graphs it got into named graphs that are part of a dataset, and then
pass that on to the consumer.
You can union the named graphs for a primitive but effective kind of
"merge", or put in some module downstream that ...
Mikel, a lot of them do, but they are not required to. Both
data sources work as expected; it is only when trying to combine both
of them that one runs into this situation.
I agree that each of the descriptions could go into separate named
graphs, where the graph name could be the source URI. That ...
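As a sketch in SPARQL 1.1 Update (the source URIs below are placeholders
only), loading each description into a graph named after the document it came
from could look like:

# One named graph per source document, keyed by the source URI.
LOAD <http://example.org/source-a/resource.rdf> INTO GRAPH <http://example.org/source-a/resource.rdf> ;
LOAD <http://example.org/source-b/resource.rdf> INTO GRAPH <http://example.org/source-b/resource.rdf>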
Hi Martynas;
I thought that the majority of Linked Data servers work like Pubby, i.e.,
they serve Linked Data resources by doing a DESCRIBE on a Triple Store,
therefore serving the same triples. But it seems like you have encountered
the opposite (different triples served) in many systems. Do you ...
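For reference, the query a Pubby-style frontend roughly issues when a resource
URI is dereferenced is simply (the resource URI below is only an example):

DESCRIBE <http://example.org/resource/Copenhagen>

and since the SPARQL spec leaves the shape of a DESCRIBE result up to the
implementation, the triples served can differ between stores.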
Hi
Option 3 seems sensible, particularly if you keep them in separate graphs.
However, shouldn’t you consider the provenance of the sources and prioritise
them by how recently they were updated?
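As a sketch of that prioritisation, assuming the descriptions sit in named
graphs and each graph has a dcterms:modified timestamp recorded in a
(hypothetical) metadata graph:

PREFIX dcterms: <http://purl.org/dc/terms/>
# Source graphs, most recently updated first; a consumer could prefer
# statements from the graphs at the top of this list.
SELECT ?sourceGraph ?modified
WHERE {
  GRAPH <urn:example:graph-metadata> { ?sourceGraph dcterms:modified ?modified }
}
ORDER BY DESC(?modified)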
Alasdair
On 8 Jun 2016, at 13:06, Martynas Jusevičius <marty...@graphity.org> wrote:
Hey all,
we are developing software that consumes data from both Linked Data
and SPARQL endpoints.
Most of the time, these technologies complement each other. We've come
across an issue though, which occurs in situations where RDF
descriptions of the same resources are available through both of them.
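To make the overlap concrete: the same resource's description can be obtained
from the SPARQL endpoint with a query like (the URI is only illustrative)

DESCRIBE <http://example.org/resource/X>

or by dereferencing http://example.org/resource/X as Linked Data and parsing
the RDF document that comes back, and the two responses need not contain the
same triples.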