Thank you, Dave, for the detailed response!

I made a mistake in my question which I'd like to clarify, so that people
reading this thread in the future won't be confused.

> Therefore I tried to use a separate named graph for each partition and
> I was able to configure RDFS inference for them.

> But the problem is that I can't find a way to make all the added
> triples persistent across restarts of Fuseki.

I managed to store dynamically added named graphs in TDB, but I didn't
manage to enable inference on them. Here's a minimal configuration:

@prefix fuseki: <http://jena.apache.org/fuseki#> .
@prefix rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix tdb:    <http://jena.hpl.hp.com/2008/tdb#> .

[] a fuseki:Server ;
    fuseki:services ( <#service1> ) .

<#service1> a fuseki:Service ;
    fuseki:name                         "ds" ;
    fuseki:serviceQuery                 "sparql" ;
    fuseki:serviceQuery                 "query" ;
    fuseki:dataset                      <#dataset> .

# Plain TDB dataset: named graphs are persisted on disk, but no reasoner is attached.
<#dataset> rdf:type tdb:DatasetTDB ;
    tdb:location "./fuseki-db" .
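
As far as I can tell, the assembler only lets me attach a reasoner to one
graph at a time via ja:InfModel (not to the dataset as a whole), roughly
like the sketch below (same prefixes as above plus ja:; the node names are
just illustrative), so it doesn't cover named graphs that are created
dynamically:

@prefix ja: <http://jena.hpl.hp.com/2005/11/Assembler#> .

<#dsInf> rdf:type ja:RDFDataset ;
    ja:defaultGraph <#infModel> .

# RDFS inference layered over a single TDB-backed graph.
<#infModel> rdf:type ja:InfModel ;
    ja:baseModel <#tdbGraph> ;
    ja:reasoner [ ja:reasonerURL <http://jena.hpl.hp.com/2003/RDFSExptRuleReasoner> ] .

# The default graph of the TDB dataset above; a named graph would need an
# extra tdb:graphName and its own InfModel wrapper declared up front.
<#tdbGraph> rdf:type tdb:GraphTDB ;
    tdb:dataset <#dataset> .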

Maxim

On Thu, May 19, 2016 at 7:19 PM, Dave Reynolds
<dave.e.reyno...@gmail.com> wrote:
> Hi,
>
> On 19/05/16 16:42, Maxim Kolchin wrote:
>>
>> Hi all,
>>
>> (I found several threads where people asked similar questions, but the
>> answers don't provide a solution for my use case. So I've decided to
>> ask my question in a new thread.)
>>
>> I need to be able to add partitions of triples (with INSERT
>> queries) and sometimes delete some of the partitions, so that the other
>> partitions stay untouched. I also need RDFS inference for
>> each partition.
>>
>> Initially I stored everything in the default graph, but then I
>> understood that it's almost impossible to write a DELETE query to
>> delete a specific partition of triples and leave the rest untouched.
>> Therefore I tried to use a separate named graph for each partition and
>> I was able to configure RDFS inference for them.
>>
>> But the problem is that I can't find a way to make all the added
>> triples persistent across restarts of Fuseki. Is it supported
>> by Fuseki at all, to have inference and persistence for named graphs?
>
>
> Short answer: no, there is no automatic persistence of inferred graphs.
>
> Longer answer:
>
> Fuseki obviously supports persistence and named graphs through TDB, so
> without inference you would be fine.
>
> However, the inference engines are not aware of named graphs (they work at
> the level of individual graphs, not datasets) and they are not persistent.
>
> So your options are:
>
> (1) Generate the inference closures yourself after each update and persist
> those in the store.
>
> (2) Use purely backward rules which will compute the inferences you want
> on-demand. That doesn't give you persistence of the inferences (the work is
> redone each time) but at least you don't have stale materialized inferences
> lying around in your store giving you wrong answers.
>
> Option (1) can be made to work in simple cases but will take code.
> Typically you would arrange for your default graph to be a union of all the
> named graphs so it can see the base data graphs plus any graphs storing the
> materialized inference closures.
>
> However, a lot will depend on the nature of your partitions and what
> inferences you need. If you can treat each partition separately then you
> could have an inference closure for each partition: whenever you do an
> update, build a new inference closure (in memory) and update the
> persisted inference graph for just that partition. If you delete a partition,
> you delete the corresponding inference graph.
>
> That would work, but you would have to build the code to manage that
> yourself; it's not just a matter of configuring Fuseki.
>
> If your inferences cross partitions so that any change to any partition
> might mean recomputing all of the inferences then it becomes less practical.
>
> With option (2) you have none of these data management problems and you could
> provide a backward chaining rule set without writing code (a "small matter"
> of figuring out the right assembler files). However, performance of the
> backward chainer over persistent stores is poor and it is not
> named-graph-aware.
>
> Dave
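
P.S. If I understand option (1) correctly, the "default graph as a union of
all the named graphs" part could be done with TDB's tdb:unionDefaultGraph
flag, roughly like this (a sketch on top of my config above):

<#dataset> rdf:type tdb:DatasetTDB ;
    tdb:location          "./fuseki-db" ;
    # Expose the union of all named graphs as the default graph, so queries
    # see the base partitions plus any persisted inference-closure graphs.
    tdb:unionDefaultGraph true .

The per-partition closures would still have to be computed and written back
into their own named graphs by my own code after each update, as you describe.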
