One thing that matches my requirements well is https://jena.apache.org/documentation/inference/#RULEconfiguration
but the odd thing to me is that there is not much enthusiasm from others for this kind of production rule system. Allegedly people want to use RDFS and OWL inference, but I don't believe it. In practice people bitch about the performance while still finding it does not do what they really need. If your data is "big" in any way, you can't afford to spend any time or memory entailing facts you don't need, be it through backward or forward chaining.

For instance, the "conventional" rule sets for RDFS, OWL and such focus on merging data from various sources, but none of them face up to situations such as:

* various sources publish temperatures that could reasonably be in Fahrenheit, Centigrade, Kelvin, Rankine, or some raw value from a sensor; the same goes for distances, times and other units

This is an absolute requirement in the field, but we've crossed the line where the system can do math. Thus, if you are some maniac like Gödel who wants to blow up the system, of course you can do it, but that is not responsible behavior for us application programmers, character designers and re-writers of the best classics we can find, who need to satisfy requirements, fill seats, meet deadlines, make money, have fun and all that. In the end I have seen so many systems that timed out, filled a disk, had nodes go down, and otherwise failed due to human and mechanical frailty.

"Expert systems" were possible in the 1970s because expert performance is not just a matter of reasoning from first principles; it is also the use of scripts and strategies. If you go to a lawyer to write a will or a corporate formation document, they are not going to start from first principles; they are going to start with a canned document template which they parameterize and modify.
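To make the unit problem concrete, here is a minimal sketch in plain Python (deliberately not Jena or Drools rule syntax) of the kind of normalization logic I mean: guess which unit a bare temperature reading is in by checking which interpretation is physically plausible, then convert to a canonical unit. All function names and the plausibility range are my own hypothetical choices, not from any real vocabulary or library.

```python
def to_kelvin(value, unit):
    """Convert a temperature reading in a declared unit to Kelvin."""
    conversions = {
        "K": lambda v: v,
        "C": lambda v: v + 273.15,
        "F": lambda v: (v - 32.0) * 5.0 / 9.0 + 273.15,
        "R": lambda v: v * 5.0 / 9.0,  # Rankine is Fahrenheit-sized degrees from absolute zero
    }
    return conversions[unit](value)

def guess_unit(value):
    """Heuristic rule: pick the unit that puts an outdoor-air reading in a
    plausible range (230 K to 330 K).  Returns None when the reading is
    ambiguous -- which is exactly the 'unless...' case a production rule
    system needs to hand off to another rule or a human."""
    plausible = [u for u in ("K", "C", "F", "R")
                 if 230.0 <= to_kelvin(value, u) <= 330.0]
    return plausible[0] if len(plausible) == 1 else None
```

Note that a reading like 20 is ambiguous (plausible as both Celsius and Fahrenheit), so the rule refuses to guess; a reading like 300 is only plausible as Kelvin. That refusal-to-guess path is precisely what the stock RDFS/OWL entailment regimes give you no vocabulary for.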
This way you get a quality, affordable service that maximizes your utility function and avoids extreme outcomes. (I have talked about contracts I signed with a judge, and we both agreed that the contract was ill-formed, invalid, unsatisfiable, uncomputable, unparsable and impossible to interpret, and thus irrelevant. At least I get something for my taxes and thirst for justice.)

People on this list have voiced their disgust for Drools and its competitors, and I do not blame them, particularly in the area of error handling. What I *do* know is that commonsense knowledge has a fractal structure, in the sense that errors (quality breaks) are distributed in a power-law distribution of severity; hence the kind of "if X, then do this unless..." thinking behind http://inform7.com/. So it is like earthquakes, shipwrecks, nuclear meltdowns and the other catastrophes that weren't modeled by catastrophe theory. On an abstract level it is like that, but it interacts with the handling of time, and that is what it is. There are the axioms of a science and there are the procedures, and they work side by side.

Drools, ILOG and all those do reification of rules in the sense of priorities, groups and agendas, but in RDFWorld we have data sets, and isn't that good? This is the T-Box and that is the A-Box; these facts are accepted and those are rejected, etc. There is one graph of what Mary thinks about what John thinks, and another of what was said by NBC News, etc. So far production rules have maybe a 50% success rate in the enterprise, but perhaps we can make them as reliable as Java or Python if we use data sets to implement modal, contingent and other relationships.
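The data-set idea above can be sketched in a few lines: keep contingent assertions in named graphs and never pour them into one undifferentiated soup, so that "accepted", "rejected", "reported by NBC News" and "what Mary thinks John thinks" stay queryable as separate contexts. This is a toy illustration in plain Python rather than SPARQL; every graph name and triple here is made up.

```python
# Each key names a graph (a context); each value is a set of triples.
# Hypothetical names, for illustration only.
dataset = {
    ":accepted":        {("ex:sensor1", "ex:tempK", "293.15")},
    ":rejected":        {("ex:sensor1", "ex:tempK", "9999.0")},
    ":nbc-news":        {("ex:market", "ex:direction", "ex:up")},
    ":mary-about-john": {("ex:john", "ex:believes", "ex:rain-tomorrow")},
}

def facts(graph, predicate=None):
    """Return the triples asserted in one named graph, optionally
    filtered by predicate.  A rule engine built on this can scope each
    rule to the graphs it trusts, instead of a single global fact base."""
    return {t for t in dataset.get(graph, set())
            if predicate is None or t[1] == predicate}
```

The point of the design is that modality and provenance live in the graph name, not in reified rule machinery: a rule that fires only on `:accepted` facts never sees the rejected sensor reading, and Mary's beliefs about John never leak into what NBC News said.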
--
Paul Houle
*Applying Schemas for Natural Language Processing, Distributed Systems, Classification and Text Mining and Data Lakes*
(607) 539 6254
paul.houle on Skype
ontolo...@gmail.com

:BaseKB -- Query Freebase Data With SPARQL
http://basekb.com/gold/

Legal Entity Identifier Lookup
https://legalentityidentifier.info/lei/lookup/

Join our Data Lakes group on LinkedIn
https://www.linkedin.com/grp/home?gid=8267275