On 10/03/2017 12:35, Dave Reynolds wrote:
> 
> On 10/03/17 10:03, George News wrote:
>> Hi again,
>> 
>> I forgot to mention that the model is retrieved from a TDB. Should
>> I always create the OntModel from the TDB?
>> 
>> I don't really know how to make that work.
> 
> Your OntModel needs to see both the data and the ontology.
> 
> From your code snippets I can't see how either of those are getting 
> into your OntModel.

Sorry for not pasting the full code. The OntModel is generated using
the spec and adding the plain model with all the data (including the
ontology definitions).
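
Essentially it is this (simplified; the TDB path is a placeholder):

import org.apache.jena.ontology.OntModel;
import org.apache.jena.ontology.OntModelSpec;
import org.apache.jena.query.Dataset;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.tdb.TDBFactory;

Dataset dataset = TDBFactory.createDataset("/path/to/tdb");

// Base model holding both the data and the ontology definitions.
Model base = dataset.getDefaultModel();

// Wrap the base model in an inferencing OntModel.
OntModel ontModel = ModelFactory.createOntologyModel(
        OntModelSpec.OWL_MEM_MICRO_RULE_INF, base);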

> If they are all in the same TDB store and accessible as the default 
> model then you could construct the OntModel over a TDB-backed model. 
> Though be warned that inference over persistent storage is slow.

I've noticed that ;) After playing around a bit I was able to make it
work on a very small TDB. I was going to post my answer but you were
quicker.
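
In case it is useful, this is roughly what worked for me, continuing
from the snippet above (sketch; queryString is one of the SELECT
queries further down):

import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ReadWrite;
import org.apache.jena.query.ResultSet;

dataset.begin(ReadWrite.READ);
try {
    OntModel ont = ModelFactory.createOntologyModel(
            OntModelSpec.OWL_MEM_MICRO_RULE_INF, dataset.getDefaultModel());
    try (QueryExecution qe = QueryExecutionFactory.create(queryString, ont)) {
        ResultSet results = qe.execSelect();
        // consume the results here, inside the read transaction
    }
} finally {
    dataset.end();
}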

> That performance issue can sometimes be addressed by caching your
> data to an in-memory model and performing inference over that,
> and/or by materializing the relevant inferences ahead of time and
> persisting them to the TDB store (normally in a separate graph)
> for later query.

Could you explain that further? How do I cache the data in memory? And
how do I run inference over it and then persist the results?

This sounds pretty interesting. From what I understand, the idea is that
as I'm adding data to the database I should be running inference and
storing not only the original data but also the inferred triples. Am I
right? Any help on how to do that would be great!
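
To check my understanding, is it something along these lines?
(Untested sketch; the inferred-graph URI is an invented placeholder.)

import org.apache.jena.query.Dataset;
import org.apache.jena.query.ReadWrite;
import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.reasoner.ReasonerRegistry;
import org.apache.jena.tdb.TDBFactory;

Dataset ds = TDBFactory.createDataset("/path/to/tdb");

// 1. Cache: copy the persisted triples into a plain in-memory model.
ds.begin(ReadWrite.READ);
Model mem = ModelFactory.createDefaultModel();
mem.add(ds.getDefaultModel());
ds.end();

// 2. Reason over the fast in-memory copy.
InfModel inf = ModelFactory.createInfModel(
        ReasonerRegistry.getOWLMicroReasoner(), mem);
inf.prepare(); // force the forward rules to fire

// 3. Materialize: persist only the derived triples to a separate graph
//    (getDeductionsModel() holds the forward-rule deductions only).
ds.begin(ReadWrite.WRITE);
ds.getNamedModel("urn:example:inferred").add(inf.getDeductionsModel());
ds.commit();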

Currently my TDB holds two named graphs, which I fill with data
depending on the nature of the incoming data. Then I also have another
graph which is the merge of both named graphs. Taking that into
consideration, how can I implement what you suggest? I'm pretty new to
inference procedures.
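
To be concrete, by "merging" I mean something equivalent to this
(sketch, imports as above; the graph URIs are placeholders for my
real ones):

ds.begin(ReadWrite.READ);
try {
    Model merged = ModelFactory.createUnion(
            ds.getNamedModel("urn:example:graphA"),
            ds.getNamedModel("urn:example:graphB"));
    OntModel ont = ModelFactory.createOntologyModel(
            OntModelSpec.OWL_MEM_MICRO_RULE_INF, merged);
    // query ont here
} finally {
    ds.end();
}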

> I'm afraid that Jena doesn't currently have very smooth provision
> for inference over large and/or database backed stores.

That's a pity. Should I try Virtuoso or some other store instead?

> Dave
> 
>> 
>> Regards,
>> Jorge
>> 
>> On 10/03/2017 10:51, George News wrote:
>>> Hi,
>>> 
>>> I have the following properties defined in my ontology:
>>> 
>>> <owl:ObjectProperty rdf:about="http://mydomain.com#by">
>>>   <rdfs:domain rdf:resource="http://mydomain.com#data"/>
>>>   <owl:inverseOf rdf:resource="http://mydomain.com#made"/>
>>>   <rdfs:range rdf:resource="http://mydomain.com#node"/>
>>> </owl:ObjectProperty>
>>> 
>>> <owl:ObjectProperty rdf:about="http://mydomain.com#made">
>>>   <owl:inverseOf rdf:resource="http://mydomain.com#by"/>
>>>   <rdfs:domain rdf:resource="http://mydomain.com#node"/>
>>>   <rdfs:range rdf:resource="http://mydomain.com#data"/>
>>> </owl:ObjectProperty>
>>> 
>>> And I have registered multiple data entities and nodes using a
>>> JSON-LD document similar to the one below:
>>>
>>> {
>>>   "@context": { "my": "http://mydomain.com#" },
>>>   "@graph": [
>>>     { "@id": "_:b81", "@type": "my:data", "my:by": { "@id": "Node1" } },
>>>     { "@id": "Node1", "@type": "my:node" }
>>>   ]
>>> }
>>> 
>>> I'm using:
>>>
>>> OntModel ontModel =
>>>     ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM_MICRO_RULE_INF);
>>>
>>> and this is the model I'm injecting into the QueryExecution, etc.
>>> 
>>> Then I try to run SPARQL queries on the TDB, but I'm facing
>>> issues with the inference. If I run
>>> 
>>> PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
>>> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
>>> PREFIX my: <http://mydomain.com#>
>>>
>>> SELECT ?d ?n
>>> WHERE {
>>>   ?n rdf:type/rdfs:subClassOf my:node .
>>>   ?d my:by ?n .
>>> }
>>> 
>>> I'm getting results, but if I run
>>> 
>>> PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
>>> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
>>> PREFIX my: <http://mydomain.com#>
>>>
>>> SELECT ?d ?n
>>> WHERE {
>>>   ?n rdf:type/rdfs:subClassOf my:node .
>>>   ?n my:made ?d .
>>> }
>>> 
>>> I don't get any results. The inference model should be inferring
>>> that my:made is the inverse of my:by, so I should be getting the
>>> same results, shouldn't I? Or should I create an InfModel based on
>>> the desired reasoner and then inject it into the QueryExecution?
>>> From my understanding the first option should work.
>>> 
>>> Thanks for your help.
>>> Jorge
>>> 
> 
