We've been down this road before; let me review the old e-mails and see if I 
have any other questions...


Jeff,



whenever you have a MultiUnion graph (or OntModel) that consists of more than 
one sub-graph, the performance of the SPARQL engine can degrade significantly, 
because the triple matches may need to dynamically merge partial results from 
multiple sub-graphs. If, on the other hand, you have a single graph (including 
a single SDB graph), then the system can exploit native optimizations and 
execute complex graph patterns as a single, optimized operation. In practice, 
my guess is that you will get the best performance if you put all sub-graphs 
into the same SDB store (possibly split into named graphs) and then operate on 
the union graph (via the named graph <urn:x-arq:UnionGraph>).
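For instance (just a sketch, assuming your sub-graphs are loaded as named 
graphs in the same SDB-backed dataset), a query against the union of all named 
graphs could look like this:

```sparql
# <urn:x-arq:UnionGraph> is ARQ's special graph name denoting the
# union of all named graphs in the dataset being queried.
SELECT ?s ?p ?o
WHERE {
    GRAPH <urn:x-arq:UnionGraph> { ?s ?p ?o }
}
```

This way the whole pattern is evaluated inside one store rather than merged 
across separate sub-graphs.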



Holger




From: [email protected] [mailto:[email protected]] 
On Behalf Of Schmitz, Jeffrey A
Sent: Friday, March 26, 2010 1:00 PM
To: [email protected]
Subject: [topbraid-users] SPINInferences.run with OntModel

Hello,
   I had a question about the SPINInferences.run operation.  For some reason, 
when I pass an OntModel in as the first parameter (i.e. the model to be 
queried) the operation takes a LONG time to complete.  But when I pass in a 
Model, that (I think) is the equivalent of the full OntModel, it is very fast.  
For example, I have an OntModel (ontModel) that I would like to run the SPIN 
Rules on to generate the inferred triples into it.  Currently, I have to copy 
the complete OntModel into a Model using the following code:

            // Copy the full OntModel into a plain in-memory Model
            Model model = ModelFactory.createDefaultModel();
            // Suppress per-triple change notifications during the bulk copy
            model.notifyEvent(GraphEvents.startRead);
            try {
                  model.add(ontModel);
            } finally {
                  model.notifyEvent(GraphEvents.finishRead);
            }

Then, I can call SPINInferences.run on the Model...

SPINInferences.run(model, spinInfModel, _spinRulesClass2QueryMap,
                        initialTemplateBindings, exp, _spinRuleStats, true, 
inferenceType.inferenceProp(),
                        comparator, null);

and it runs very fast.  However, if I try to cut the copy out of the equation 
and just pass in the OntModel directly to SPINInferences.run...

SPINInferences.run(ontModel, spinInfModel, _spinRulesClass2QueryMap,
                        initialTemplateBindings, exp, _spinRuleStats, true, 
inferenceType.inferenceProp(),
                        comparator, null);

it runs very slowly. Any ideas on what's going on here?

--
You received this message because you are subscribed to the Google
Group "TopBraid Suite Users", the topics of which include TopBraid Composer,
TopBraid Live, TopBraid Ensemble, SPARQLMotion and SPIN.
To post to this group, send email to
[email protected]
To unsubscribe from this group, send email to
[email protected]
For more options, visit this group at
http://groups.google.com/group/topbraid-composer-users?hl=en

To unsubscribe from this group, send email to 
topbraid-users+unsubscribe@googlegroups.com or reply to this email with the 
words "REMOVE ME" as the subject.
