Re: [rules-users] Shadow facts (was JRules\Drools benchmarking)

2008-05-15 Thread Mark Proctor

Hehl, Thomas wrote:


OK, I did. So that means I need to disable shadow facts?

Sequential mode already disables them: since there is no inference, modify()
does nothing, and so shadow facts aren't needed.


Mark
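
For reference, a minimal sketch of enabling sequential mode, assuming the
Drools 4.0.x API (the rule and the fact below are placeholders):

import java.io.StringReader;
import org.drools.RuleBase;
import org.drools.RuleBaseConfiguration;
import org.drools.RuleBaseFactory;
import org.drools.StatelessSession;
import org.drools.compiler.PackageBuilder;

public class SequentialExample {
    public static void main(String[] args) throws Exception {
        // Placeholder rule: matches any String fact.
        String drl =
            "package demo\n"
            + "rule \"hello\"\n"
            + "when\n"
            + "    $s : String( )\n"
            + "then\n"
            + "    System.out.println( \"hello \" + $s );\n"
            + "end\n";

        PackageBuilder builder = new PackageBuilder();
        builder.addPackageFromDrl( new StringReader( drl ) );

        // Sequential mode: facts are asserted, rules fire once, and no
        // re-evaluation happens, which is why shadow facts are unnecessary.
        RuleBaseConfiguration conf = new RuleBaseConfiguration();
        conf.setSequential( true );

        RuleBase ruleBase = RuleBaseFactory.newRuleBase( conf );
        ruleBase.addPackage( builder.getPackage() );

        // Sequential mode works with stateless, one-shot execution.
        StatelessSession session = ruleBase.newStatelessSession();
        session.execute( "world" );
    }
}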


 

 





RE: [rules-users] Shadow facts (was JRules\Drools benchmarking)

2008-05-15 Thread Hehl, Thomas
OK, I did. So that means I need to disable shadow facts?

 

 


Re: [rules-users] Shadow facts (was JRules\Drools benchmarking)

2008-05-15 Thread Mark Proctor

Hehl, Thomas wrote:


I did some reading about shadow facts and I was thinking about turning
them off, but it seems that if I use stateless sessions
(session.execute()), shadow facts are irrelevant. Is this true?



No, only if you turn on sequential mode.

Mark


 





[rules-users] Shadow facts (was JRules\Drools benchmarking)

2008-05-15 Thread Hehl, Thomas
I did some reading about shadow facts and I was thinking about turning them
off, but it seems that if I use stateless sessions (session.execute()),
shadow facts are irrelevant. Is this true?

 


RE: [rules-users] Multithreading Rulebase Parsing Threadsafe

2008-05-15 Thread Knapp, Barry
Perfect answer.  Thanks!

 

 



Barry Knapp



Re: [rules-users] Multithreading Rulebase Parsing Threadsafe

2008-05-15 Thread Edson Tirelli
   Barry,

   We fixed all the threading issues we were aware of in 4.0.5-4.0.7, so it is
safe to use now. Just make sure the only thing you share among threads is the
rulebase; do not share sessions or packages among threads.

   []s
   Edson

2008/5/14 Barry K <[EMAIL PROTECTED]>:

>
> We have an application with 2 rules engines and we load the rules on
> startup.
> Is creation of a rulebase threadsafe?
> Can I load both rulebases using the code below concurrently in 4.0.7?
>
> I encountered issues multithreading this in 4.0.1, but in prototyping for our
> 4.0.7 upgrade I have not encountered the same issues.
>
> PackageBuilder builder = new PackageBuilder(pkgBuilderCfg);
> builder.addPackageFromDrl( new StringReader(ruleset) );
> RuleBase ruleBase = RuleBaseFactory.newRuleBase();
>
> Thanks
> Barry
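
For reference, a hedged sketch of that loading pattern, assuming the Drools
4.0.x API. Each thread keeps its own PackageBuilder and package (only a
finished rulebase should be shared), and it includes the
ruleBase.addPackage(...) step that the snippet above stops just short of:

import java.io.StringReader;
import org.drools.RuleBase;
import org.drools.RuleBaseFactory;
import org.drools.compiler.PackageBuilder;

// One loader per rulebase: the builder and package stay local to the
// thread; only the finished RuleBase is handed out afterwards.
public class RuleBaseLoader implements Runnable {
    private final String ruleset;          // DRL source for this engine
    private volatile RuleBase ruleBase;

    public RuleBaseLoader(String ruleset) {
        this.ruleset = ruleset;
    }

    public void run() {
        try {
            PackageBuilder builder = new PackageBuilder();
            builder.addPackageFromDrl( new StringReader( ruleset ) );
            RuleBase rb = RuleBaseFactory.newRuleBase();
            rb.addPackage( builder.getPackage() );   // attach the compiled package
            this.ruleBase = rb;
        } catch (Exception e) {
            throw new RuntimeException( e );
        }
    }

    public RuleBase getRuleBase() {
        return ruleBase;
    }
}

// Startup would then look something like:
//   Thread t1 = new Thread( new RuleBaseLoader( drl1 ) );
//   Thread t2 = new Thread( new RuleBaseLoader( drl2 ) );
//   t1.start(); t2.start();
//   t1.join();  t2.join();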



-- 
Edson Tirelli
JBoss Drools Core Development
Office: +55 11 3529-6000
Mobile: +55 11 9287-5646
JBoss, a division of Red Hat @ www.jboss.com


Re: [rules-users] JRules\Drools benchmarking...

2008-05-15 Thread Edson Tirelli
   It seems you are using a good strategy for your tests. Still, it is
difficult to explain why one is slower than the other without seeing the
actual test code. This is because all the engines have stronger and weaker
spots. Just to mention one example, some engines (not talking specifically
about drools and jrules, but about all engines) implement faster alpha
evaluation, others implement faster beta (join) evaluation, others implement
good optimizations for not(), while others may focus on eval(), etc. It goes
to the point that, when comparing two engines, one performs better on hardware
with a bigger L2 cache while the other performs better on hardware with a
smaller L2 cache.

   So, the best I can do without looking at the actual tests is to give you
some tips:

1. First of all, are you using Drools 4.0.7? It is very important that you
use this version over the previous ones.

2. Are you using stateful or stateless sessions? If you are using stateful
sessions, are you calling dispose() after using each one? If not, you are
inflating memory use and certainly causing the engine to run slower over
time. (A sketch follows this list.)

3. Are you sharing the rulebase among multiple requests? The drools rulebase
is designed to be shared, and the compilation process is eager and pretty
heavy compared to session creation. So, it pays off to create the rulebase
once and share it among requests.

4. Did you disable shadow facts? Test cases usually use a really small fact
base, so they would not be much affected by shadow facts; still, disabling
them improves performance, but it requires some best practices to be
followed.

5. Do your rules follow best practices (similar to SQL writing best
practices)? I.e., do you write the most constraining patterns first, the
most constraining restrictions first, etc.? Do you write patterns in the
same order among rules to maximize node sharing? I guess you do, but it is
worth mentioning anyway (the sketch below illustrates this too).
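
To make tips 2-5 concrete, here is a hedged sketch against the Drools 4.0.x
API. The SharedRuleBase class, the Customer/Order fact types, and the rule
text are hypothetical illustrations, not code from either benchmark:

import java.io.StringReader;
import org.drools.RuleBase;
import org.drools.RuleBaseConfiguration;
import org.drools.RuleBaseFactory;
import org.drools.StatefulSession;
import org.drools.compiler.PackageBuilder;

public class SharedRuleBase {

    // Tip 5: both rules list their patterns in the same order, with the
    // most constraining pattern first, so the engine can share nodes.
    private static final String DRL =
        "package demo\n"
        + "rule \"gold customer, big order\"\n"
        + "when\n"
        + "    $c : Customer( status == \"GOLD\" )\n"
        + "    $o : Order( customer == $c, total > 1000 )\n"
        + "then\n"
        + "    $o.setDiscount( 10 );\n"
        + "end\n"
        + "rule \"gold customer, huge order\"\n"
        + "when\n"
        + "    $c : Customer( status == \"GOLD\" )\n"
        + "    $o : Order( customer == $c, total > 10000 )\n"
        + "then\n"
        + "    $o.setDiscount( 20 );\n"
        + "end\n";

    // Tip 3: compile once, share the rulebase among all requests.
    private static RuleBase ruleBase;

    public static synchronized RuleBase getRuleBase() throws Exception {
        if ( ruleBase == null ) {
            PackageBuilder builder = new PackageBuilder();
            builder.addPackageFromDrl( new StringReader( DRL ) );

            // Tip 4: turning shadow proxies off helps performance, but
            // only if facts are not mutated behind the engine's back.
            RuleBaseConfiguration conf = new RuleBaseConfiguration();
            conf.setShadowProxy( false );

            ruleBase = RuleBaseFactory.newRuleBase( conf );
            ruleBase.addPackage( builder.getPackage() );
        }
        return ruleBase;
    }

    // Tip 2: always dispose() a stateful session when done with it.
    public static void evaluate( Object[] facts ) throws Exception {
        StatefulSession session = getRuleBase().newStatefulSession();
        try {
            for ( Object fact : facts ) {
                session.insert( fact );
            }
            session.fireAllRules();
        } finally {
            session.dispose();   // releases working-memory references
        }
    }
}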

   Anyway, just some tips.

   Regarding the jrules blog, I know it, but let me make a bet with you.
Download the Manners benchmark to your machine, make sure the rules are the
correct ones (not cheated ones), run the test on both engines, and share the
results. I'll pay you a beer if you get results similar to those published in
the blog. :)
   My point is not that we are faster (which I know we are) or that they are
faster. My point is that performance benchmarks for rules engines are a really
tricky matter, with lots of variables involved, which makes every test-case
configuration unique. Try to reproduce one in a different environment and you
will get different performance ratios between the engines.

   That is why our recommendation is to always do what you are doing: try
your own use case. Now, whatever you are trying, I'm sure it is possible to
optimize it if we see the test case, but is it worth it? Or does the perf as
it is already meet your requirements?

   Cheers,
   Edson

PS: I'm serious about the beer... ;) run and share the results with us...



[rules-users] Re: Java vs mvel dialects

2008-05-15 Thread Krishna Satya
Edson, thanks so much for the reply. It makes perfect sense to me. I
really appreciate it.

  - Krishna



>   Krishna,
>
>   The dialect configuration affects only semantic code blocks. I.e.,
> consequences, eval() blocks, etc.
>   They are designed to be interchangeable. That is why the examples have
> rules using each of the dialects.
>
>   It is mostly a matter of taste, but MVEL is a scripting language and as
> such has syntactic sugar for nested object access, collections, maps,
> arrays, etc... nothing more than that. Also, MVEL supports java syntax
> anyway. For instance, assuming you have a class:
>
>   public class Person {
>       private Map addresses;
>       // getters/setters
>   }
>
>   The following consequence should run just fine, both in java and MVEL:
>
> then
>     $person.getAddresses().get("home").setStreetName("my street");
> end
>
>   However, MVEL allows you to use a cleaner syntax:
>
> then
>     $person.addresses["home"].streetName = "my street";
> end
>
>   It is mostly a matter of taste.
>
>   []s
>   Edson
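
For reference, the dialect is chosen per rule (or per package) through an
attribute in the DRL itself. A hedged sketch, assuming Drools 4 syntax and
reusing the hypothetical Person class from above:

// The consequence shown uses the MVEL form; under dialect "java" you
// would write the getter/setter chain from the first example instead.
public class DialectExample {
    public static final String DRL =
        "package demo\n"
        + "rule \"set home street\"\n"
        + "    dialect \"mvel\"\n"               // or "java"
        + "when\n"
        + "    $person : Person( )\n"
        + "then\n"
        + "    $person.addresses[\"home\"].streetName = \"my street\";\n"
        + "end\n";
}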
>
> 2008/5/14 Krishna Satya <[EMAIL PROTECTED]>:
>
> > Hi, I am trying to understand the difference in how drl rules are expressed
> > via the java or mvel dialects.  Looking at the drools-examples it is not
> > exactly clear.  I was looking at the PetStore.drl which seems to specify the
> > dialects for various rules using both java and mvel.  Are there any
> > references to examples which showcase a rule that is expressed both through
> > java and mvel dialects, so it is clear what the differences are?  The rules
> > in the PetStore.drl which specify java or mvel syntactically seem to look
> > the same.
> >
> > Also, are there any general suggestions as to when a rule author should use
> > the java or the mvel dialect?
> >
> > Thanks.
> > - K
> >


[rules-users] Writing a business-Rule (Promod George)

2008-05-15 Thread pramod george
Hi,
I'm new to Drools and also to this group.
Can anyone give me some pointers on how
to write a business rule in the DRL language?
I.e., any PDF or link that talks about
(in depth) analysing a business scenario
and then converting it into DRL format?

Thank you.
-Promod



Re: [rules-users] JRules\Drools benchmarking...

2008-05-15 Thread mmquelo massi
You are right...

I have to tell you what I have done...

I did not define a "stand-alone" benchmark like the "Manners" one.

I benchmarked a real J2EE application.

I have got jrules deployed with a resource adapter, and drools deployed
as simple jar libraries plus jbrms.

Jrules uses a "bres" module which does the same trick jbrms does.

Both of them are deployed on the same AS, at the same time, on the same
machine (my laptop: Core 2 Duo 1.66GHz, 2GB).

Using the inversion of control pattern I found a way to "switch the
rule engine" at run-time, so I can easily choose the rule engine to
use between drools and jrules.

Of course they have got two separate rule repositories, but both of them
persist the rules on the same DB, which is Derby.

The J2EE application I benchmarked sends a request object to the
current rule engine and gets back a reply from it. I just measured the
elapsed time between the request and the reply generation, using drools
first and then jrules.

I did the measurements tens of times.

Both rule engines implement the same rules, and the Drools rules (which
I personally implemented) are at least as optimized as the jrules
ones. The jrules version of the rules has a lot of "eval(...)" blocks;
in the Drools version I did not use eval() at all, just pattern matching.

If you want I can send you more specific documentation, but I hope
this explanation is enough to show you that the measurements I have
done are not that bad.

In any case I noticed that, after a warm-up phase, the drools engine
gives a reply back 3 times slower than the jrules engine.

The link I sent shows something related to it: it reports the Manners
execution time using drools and jrules. As you can see, the difference
there is a 1.5x factor... so I was wrong... drools is not that slow.
In any case it seems to be slower than jrules.

Look at this:

http://blogs.ilog.com/brms/wp-content/uploads/2007/10/jrules-perf-manners.png

Massimiliano
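
For what it's worth, a warm-up-then-measure harness along the lines
described above might look like this hedged sketch, where evaluateRequest()
is a hypothetical stand-in for sending one request through whichever engine
is currently active:

public class BenchmarkHarness {

    static void run( int warmupIterations, int measuredIterations ) {
        // Warm-up phase: let JIT compilation and caches settle first.
        for ( int i = 0; i < warmupIterations; i++ ) {
            evaluateRequest();
        }

        // Measured phase: wall-clock time over many iterations.
        long start = System.nanoTime();
        for ( int i = 0; i < measuredIterations; i++ ) {
            evaluateRequest();
        }
        long elapsed = System.nanoTime() - start;
        System.out.printf( "avg %.3f ms per request%n",
                elapsed / 1e6 / measuredIterations );
    }

    static void evaluateRequest() {
        // Hypothetical: build a request object, hand it to the active
        // engine (drools or jrules), and receive the reply.
    }
}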



On 5/15/08, Edson Tirelli <[EMAIL PROTECTED]> wrote:
>The old recurring performance evaluation question... :)
>
>You know that an explanation can only be made after having looked at the
> tests used in the benchmark, the actual rules used by both products,
> hardware specs, etc... so I'm not quite sure what answer you want.
>
>For instance, there are a lot of people who think exactly the contrary.
> Just one example:
> http://blog.athico.com/2007/08/drools-vs-jrules-performance-and-future.html
>
>My preferred answer is still:
>
> "In 99% of the applications, the bottleneck is IO: databases, network, etc.
> So, test your use case with both products, make sure it performs well
> enough, add to your analysis the products' feature sets, expressive power,
> product flexibility, cost, availability of professionals, support quality, etc.,
> and choose the one that best fits you."
>
>That is because I'm sure, whatever your rules are, in whatever product
> you try them, they can be further optimized by having a product expert
> looking into them. But what is the point?
>
>Cheers,
>   Edson
>
>
>
> 2008/5/14 mmquelo massi <[EMAIL PROTECTED]>:
>
>>
>> Hi everybody,
>>
>> I did a benchmark on Drools\Jrules.
>>
>> I found out that drools is about 2.5-3 times slower than Jrules.
>>
>> How come?
>>
>> The results I got are quite similar to the ones in:
>>
>>
>> http://images.google.com/imgres?imgurl=http://blogs.ilog.com/brms/wp-content/uploads/2007/10/jrules-perf-manners.png&imgrefurl=http://blogs.ilog.com/brms/category/jrules/&h=516&w=722&sz=19&hl=it&start=1&um=1&tbnid=YBqwC0nwaSLxwM:&tbnh=100&tbnw=140&prev=/images%3Fq%3Dbrms%2Bbencmark%26um%3D1%26hl%3Dit
>>
>> Any explanations?
>>
>> Thank you.
>>
>> Bye
>>
>> Massi