Unification in a parallel cluster is a difficult problem.  Writing very
large-scale unification programs is an even harder problem.

What problem are you trying to solve?

One option would be that you need to evaluate a conventionally-sized
rulebase against many inputs.  Map-reduce should be trivially capable of
this.
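For the first case, a map-only job is enough: each map task loads the whole
rulebase once and streams its share of the inputs through it.  Here is a
minimal sketch; RuleBase, Rule, and the rules.path property are hypothetical
stand-ins for whatever rule engine and configuration you actually use:

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class RuleEvalMapper extends Mapper<LongWritable, Text, Text, Text> {

    private RuleBase rules; // hypothetical rule-engine API

    @Override
    protected void setup(Context context)
            throws IOException, InterruptedException {
        // Load the (conventionally-sized) rulebase once per map task;
        // every mapper holds a full copy, so no shuffle is required.
        rules = RuleBase.load(context.getConfiguration().get("rules.path"));
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Evaluate the rules against one input record and emit the ids
        // of the rules that fired, keyed so matches can be collated.
        for (Rule r : rules.matching(value.toString())) {
            context.write(new Text(r.id()), value);
        }
    }
}

Set the number of reduce tasks to zero and throughput scales out linearly
with the input size.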

Another option would be that you want to evaluate a huge rulebase against a
few inputs.  It isn't clear that this would be useful, given the practical
problems of maintaining huge rulebases and the typically super-linear cost of
resolution algorithms.

Another option is that you want to evaluate many conventionally-sized
rulebases against one or many inputs in order to implement a boosted rule
engine.  Map-reduce should be relatively trivial for this as well.
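Here, too, the shape is simple: each map task scores an input with one of
the rulebases and emits an (inputId, weightedVote) pair, and the reducer
just sums the votes per input.  A sketch of that reduce side, assuming
DoubleWritable votes:

import java.io.IOException;

import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class VoteSumReducer
        extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {

    @Override
    protected void reduce(Text inputId, Iterable<DoubleWritable> votes,
                          Context context)
            throws IOException, InterruptedException {
        // Each value is the weighted vote of one rulebase on this
        // input; the boosted decision is just their sum.
        double sum = 0;
        for (DoubleWritable v : votes) {
            sum += v.get();
        }
        context.write(inputId, new DoubleWritable(sum));
    }
}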

What is it that you are trying to do?

On Fri, Oct 19, 2012 at 12:25 PM, Luangsay Sourygna <luang...@gmail.com> wrote:

> Hi,
>
> Does anyone know of any (open-source) project that builds a rules engine
> (based on RETE) on top of Hadoop?
> Searching a bit on the net, I have only seen a small reference to
> Concord/IBM but there is barely any information available (and surely
> it is not open source).
>
> Alpha and beta memories would be stored in HBase. Should be possible, no?
>
> Regards,
>
> Sourygna
>
