You may want to read this post regarding Spark with Drools:
http://blog.cloudera.com/blog/2015/11/how-to-build-a-complex-event-processing-app-on-apache-spark-and-drools/
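
The gist of that post is to run a Drools session inside a Spark
transformation. As a rough, untested sketch (the "ksession-rules" session
name and the Event fact type are placeholders for your own kmodule.xml and
rule definitions):

    import org.apache.spark.rdd.RDD
    import org.kie.api.KieServices

    case class Event(id: String, value: Double) // hypothetical fact type

    def applyRules(events: RDD[Event]): RDD[Event] =
      events.mapPartitions { partition =>
        // The KieContainer is not serializable, so build one session per
        // partition on the executor side.
        val kContainer = KieServices.Factory.get().getKieClasspathContainer
        val session = kContainer.newKieSession("ksession-rules")
        val out = partition.map { event =>
          session.insert(event)
          session.fireAllRules() // rules may modify the fact in place
          event
        }.toList // force evaluation before disposing the session
        session.dispose()
        out.iterator
      }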



On Wed, Nov 4, 2015 at 8:05 PM, Daniel Mahler <dmah...@gmail.com> wrote:

> I am not familiar with any rule engines on Spark Streaming, or even on
> plain Spark.
> The conceptually closest things I am aware of are Datomic and Bloom-lang.
> Neither is Spark-based, but both implement Datalog-like languages over
> distributed stores.
>
>    - http://www.datomic.com/
>    - http://bloom-lang.net/
>
> There is somewhat of a mismatch between streaming data and rule-based
> systems, since the preconditions of a rule can be satisfied by data that
> is far apart in the stream.
> This is further compounded by the fact that rules can chain arbitrarily,
> potentially recursively.
> Traditionally, practical rule-based systems rely heavily on indexing and
> agenda mechanisms like RETE, TREAT, and LEAPS:
>
>    - http://www.cs.utexas.edu/ftp/predator/tr-94-28.pdf
>    - https://en.wikipedia.org/wiki/Rete_algorithm
>    - http://www.cs.utexas.edu/~miranker/treator.htm
>
> This entails keeping track of the data you have seen in the past.
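>
> As a toy illustration of that bookkeeping (just the "remember past facts"
> part, not RETE itself), Spark Streaming's updateStateByKey can carry
> per-key state across batches; the Event type and the retention bound are
> made up for this sketch, and state tracking requires checkpointing:
>
>     import org.apache.spark.streaming.dstream.DStream
>
>     case class Event(key: String, value: Double)
>
>     // Accumulate a bounded history of facts per key so a rule can match
>     // events that arrived in different batches. A real engine would index
>     // this working memory (RETE/TREAT/LEAPS) rather than scan it.
>     def trackFacts(events: DStream[(String, Event)]): DStream[(String, Seq[Event])] =
>       events.updateStateByKey[Seq[Event]] { (batch: Seq[Event], state: Option[Seq[Event]]) =>
>         Some((state.getOrElse(Seq.empty) ++ batch).takeRight(1000))
>       }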
>
> I have not worked in this area for some time, though, and do not know
> whether there has been recent progress.
>
> cheers
> Daniel
>
> On Wed, Nov 4, 2015 at 6:44 PM, Cheng, Hao <hao.ch...@intel.com> wrote:
>
>> Or try Streaming SQL? It is a simple layer on top of Spark Streaming. :-)
>>
>>
>>
>> https://github.com/Intel-bigdata/spark-streamingsql
>>
>>
>>
>>
>>
>> *From:* Cassa L [mailto:lcas...@gmail.com]
>> *Sent:* Thursday, November 5, 2015 8:09 AM
>> *To:* Adrian Tanase
>> *Cc:* Stefano Baghino; user
>> *Subject:* Re: Rule Engine for Spark
>>
>>
>>
>> Thanks for the reply. How about Drools? Does it work with Spark?
>>
>> LCassa
>>
>>
>>
>> On Wed, Nov 4, 2015 at 3:02 AM, Adrian Tanase <atan...@adobe.com> wrote:
>>
>> Another way to do it is to extract your filters as SQL code and load them
>> in a transform, which allows you to change the filters at runtime.
>>
>>
>>
>> Inside the transform you could apply the filters by going RDD -> DF ->
>> SQL -> RDD.
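>>
>> A rough sketch of that pattern (untested; the Event type, the table name
>> and the filter string are placeholders), where the SQL is re-read on
>> every batch so the filter can change at runtime:
>>
>>     import org.apache.spark.sql.SQLContext
>>     import org.apache.spark.streaming.dstream.DStream
>>
>>     case class Event(id: String, value: Double)
>>
>>     def filterWithSql(events: DStream[Event], currentSql: () => String): DStream[Event] =
>>       events.transform { rdd =>
>>         val sqlContext = SQLContext.getOrCreate(rdd.sparkContext)
>>         import sqlContext.implicits._
>>         rdd.toDF().registerTempTable("events")
>>         // e.g. currentSql() returns "SELECT * FROM events WHERE value > 10"
>>         sqlContext.sql(currentSql()).rdd
>>           .map(r => Event(r.getAs[String]("id"), r.getAs[Double]("value")))
>>       }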
>>
>>
>>
>> Lastly, depending on how complex your filters are, you could skip SQL and
>> create your own mini-DSL that runs inside the transform. I’d definitely
>> start here if the filter predicates are simple enough…
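>>
>> Purely as an illustration of the mini-DSL idea (the Event type, the
>> fields and the tiny "field op value" grammar below are all invented), it
>> can be as small as compiling rule strings into predicates and combining
>> them with AND:
>>
>>     case class Event(fields: Map[String, Double])
>>
>>     // Compile one rule string like "temperature > 30" into a predicate.
>>     def compile(rule: String): Event => Boolean =
>>       rule.trim.split("\\s+") match {
>>         case Array(f, ">", v)  => e => e.fields.get(f).exists(_ > v.toDouble)
>>         case Array(f, "<", v)  => e => e.fields.get(f).exists(_ < v.toDouble)
>>         case Array(f, "==", v) => e => e.fields.get(f).exists(_ == v.toDouble)
>>         case _                 => sys.error(s"cannot parse rule: $rule")
>>       }
>>
>>     // Conjunction of all configured rules, usable inside a transform:
>>     //   dstream.transform(_.filter(compileAll(loadRules())))
>>     def compileAll(rules: Seq[String]): Event => Boolean = {
>>       val preds = rules.map(compile)
>>       e => preds.forall(_(e))
>>     }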
>>
>>
>>
>> -adrian
>>
>>
>>
>> *From: *Stefano Baghino
>> *Date: *Wednesday, November 4, 2015 at 10:15 AM
>> *To: *Cassa L
>> *Cc: *user
>> *Subject: *Re: Rule Engine for Spark
>>
>>
>>
>> Hi LCassa,
>>
>> unfortunately I don't have hands-on experience with this, but for a
>> similar use case I briefly evaluated Decision
>> <https://github.com/Stratio/Decision> (then literally called "Streaming
>> CEP Engine") and it looked interesting. I hope it helps.
>>
>>
>>
>> On Wed, Nov 4, 2015 at 1:42 AM, Cassa L <lcas...@gmail.com> wrote:
>>
>> Hi,
>>
>>  Has anyone used a rule engine with Spark Streaming? I have a case where
>> data is streaming from Kafka and I need to apply some rules to it
>> (instead of hard-coding them).
>>
>> Thanks,
>>
>> LCassa
>>
>>
>>
>>
>>
>> --
>>
>> BR,
>>
>> Stefano Baghino
>>
>> Software Engineer @ Radicalbit
>>
>>
>>
>
>
