Holger,

Thanks, I'm looking forward to the new release!
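In case it helps anyone following the thread, the caching pattern Holger describes (collect the rules once into a map, then reuse that map on every inference call instead of re-scanning the OWL RL ontologies per query) can be sketched in plain Java. The String keys and rule strings below are illustrative placeholders; the real code would use `Resource` and `CommandWrapper` from the SPIN API, as in Holger's snippet further down:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RuleCache {
    // Computed once at start-up; in the real SPIN API this would be a
    // Map<Resource, List<CommandWrapper>> filled by SPINQueryFinder.
    private final Map<String, List<String>> class2Rules = new HashMap<>();

    // Stands in for the one-time, expensive rule-collection step.
    public RuleCache() {
        class2Rules.computeIfAbsent("ex:BondingDevice", k -> new ArrayList<>())
                   .add("CONSTRUCT { ... } WHERE { ... }");
    }

    // Each inference call reuses the pre-computed map instead of
    // re-deriving the rules from the union model.
    public List<String> rulesFor(String cls) {
        return class2Rules.getOrDefault(cls, new ArrayList<>());
    }

    public static void main(String[] args) {
        RuleCache cache = new RuleCache();
        System.out.println(cache.rulesFor("ex:BondingDevice").size());
    }
}
```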
Regards,
Gerrit

On Apr 8, 8:42 am, Holger Knublauch <[email protected]> wrote:
> Gerrit,
>
> just a holding response for now. I am currently preparing a new version of
> the SPIN API that will (hopefully) have better examples to clarify those
> issues. With regards to OWL RL, you will currently not be able to use the
> current SPIN API for that anyhow because an smf: function is missing. I will
> clean this up as well. Meanwhile your patience is appreciated.
>
> Thanks,
> Holger
>
> On Apr 7, 2011, at 7:32 PM, Gerrit wrote:
>
> > Holger,
> >
> > At the moment I'm adding my domain ontology and the OWL RL models into
> > an OntModel:
> >
> > private OntModel model;
> > model = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);
> >
> > InputStream input = FileManager.get().open(
> >     "/home/gerrit/code/TBCFreeWorkspace/TopBraid/BondingDevice3.owl");
> > model.read(input, "http://sofia.gotdns.com/ontologies/BondingDevice.owl");
> > model.read("http://topbraid.org/spin/owlrl-all");
> > model.read("http://topbraid.org/spin/owlrl");
> >
> > and then invoke the inference engine:
> >
> > SPINModuleRegistry.get().init();
> > model.addSubModel(newTriples);
> > SPINModuleRegistry.get().registerAll(model);
> > SPINInferences.run(model, newTriples, null, null, false, null);
> > System.out.println("Inferred SPIN triples: " + newTriples.size());
> >
> > On each query to the triple store I first invoke the inference
> > engine and then add the new triples to the model:
> >
> > SPINInferences.run(model, newTriples, null, null, false, null);
> > System.out.println("Inferred SPIN triples: " + newTriples.size());
> > model.addSubModel(newTriples);
> >
> > Obviously this is horribly inefficient, but I can't quite follow your
> > explanation below on how to use a hash map to store or pre-compute the
> > rules.
> > Regards,
> > Gerrit
>
> > On Apr 6, 9:54 am, Holger Knublauch <[email protected]> wrote:
> >> On Apr 6, 2011, at 5:38 PM, Gerrit wrote:
>
> >>> It looks like the SPIN API performs a lock while reasoning, and has to
> >>> do this multiple times in order to perform all the inferences that are
> >>> produced by adding topbraid/spin/owlrl-all and topbraid/spin/owlrl.
>
> >> I am not aware of the SPIN API performing a lock. Do you have any insights
> >> as to which Java methods would do this?
>
> >>> Adding just those two files to the model creates 10491 inferred
> >>> triples. Is this the kind of behavior I should expect when using SPIN
> >>> rules to infer OWL 2 RL?
>
> >> This depends on your set-up, i.e. how you invoke the inference engine. If
> >> you just put all triples, including the OWL RL models, into the same Jena
> >> Model, then the system will run inferences over all triples, including the
> >> OWL RL ontology, SPIN system triples etc. This will include a lot of
> >> uninteresting inferences and slow down the whole process.
>
> >> What I usually do is to create a Jena OntModel / MultiUnion graph that has
> >> the domain ontology as base graph, and the OWL RL models and other rule
> >> bases as "imports". Then it's possible to build up a complete list of
> >> available rules and put it into a HashMap that is used as input to the
> >> actual inferencing step.
> >> The following code (from SPINInferences.run()) shows how to get those
> >> rules:
>
> >> Map<CommandWrapper, Map<String, RDFNode>> initialTemplateBindings =
> >>     new HashMap<CommandWrapper, Map<String, RDFNode>>();
> >> Map<Resource, List<CommandWrapper>> cls2Query =
> >>     SPINQueryFinder.getClass2QueryMap(queryModel, queryModel,
> >>         rulePredicate, true, initialTemplateBindings, false);
> >> Map<Resource, List<CommandWrapper>> cls2Constructor =
> >>     SPINQueryFinder.getClass2QueryMap(queryModel, queryModel,
> >>         SPIN.constructor, true, initialTemplateBindings, false);
> >> SPINRuleComparator comparator =
> >>     new DefaultSPINRuleComparator(queryModel);
> >> return run(queryModel, newTriples, cls2Query, cls2Constructor,
> >>     initialTemplateBindings, explanations, statistics, singlePass,
> >>     rulePredicate, comparator, monitor);
>
> >> Then, the queryModel itself can be a sub-set of the whole union model. It
> >> could simply be the base model that you want to query. TopSPIN usually
> >> does this as well, and uses all spin:LibraryOntologies to collect the
> >> rules (and constraints) only, but ignores them in the query model. This
> >> way, you can even pre-compute all rules once and keep them in a HashMap
> >> for the lifetime of your application.
>
> >> I will try to clarify this with a fully worked-out example for the next
> >> SPIN API release, but the ideas above may point you in the right direction
> >> in the meantime.
>
> >> Regards,
> >> Holger
>
> > --
> > You received this message because you are subscribed to the Google
> > Group "TopBraid Suite Users", the topics of which include TopBraid
> > Composer, TopBraid Live, TopBraid Ensemble, SPARQLMotion and SPIN.
> > To post to this group, send email to [email protected]
> > To unsubscribe from this group, send email to
> > [email protected]
> > For more options, visit this group at
> > http://groups.google.com/group/topbraid-users?hl=en
