In the interest of exploration and education, here are a few random thoughts.
The approach Wolfgang mentions is an old knowledge base technique. I like to use it for data that is "constant-like", such as code reference tables. One example of this is auto insurance rating. Many companies rate based on zip code and assign a numeric value to each zip code. If a developer hard codes the constant in the rule, every time the value changes the rule has to be updated. The problem is actually worse than that. Say a company sells insurance in 5 states and there are a total of 500 zip codes. Using the hard coded approach, the developer would need to write 500 rules. If, on the other hand, the developer uses facts to match the insurance policy to a zip code rating fact, we can easily change the rating value for a zip code without touching the rules. (I've put a short Jess sketch of this at the end of this message.)

The downside is we need good management tools for editing the reference data. All of the reference data should be versioned, and basic validation should be performed on the data before it is deployed to production. Unfortunately, none of the products on the market today provide robust tools for managing knowledge base data. In the past, I've written custom applications for managing knowledge data in financial applications. For applications that have lots of code reference data that changes regularly, I strongly recommend using fact data instead of hard coding constants.

The other benefit is that it tends to reduce the amount of procedural code in the RHS of the rule. Instead of using lots of if/then/else, the rule tends to match the facts in the LHS and use the reference data in the RHS. To put it another way, some of the calculations become pre-calculated codes. This can make the rules easier to read and maintain. Also, by pre-calculating some of the data, we reduce the amount of work the rule engine has to perform each time. Using these types of knowledge base techniques also makes it easier to write proof and validation routines to make sure the pre-calculated tables are accurate.

I would recommend reading books on knowledge bases and expert systems to get a broader understanding of rule programming. Most of the business rule books out there are really just user manuals and don't go into detail on the design patterns used in expert systems. Gary Riley's book is a great place to start.

As Wolfgang stated:

On Sat, Jan 8, 2011 at 10:39 AM, Wolfgang Laun <wolfgang.l...@gmail.com> wrote:
> All the pundits advocate putting the decision logic into the LHS. Opinions
> vary a little wrt. the use of static facts to reduce the number of rules.
> Some say, "The more rules the merrier." I feel that using static facts for
> lookup of data is justified, considering this:
>
> Fact data is easier to change than rule code.
> Putting the data into facts is one way of avoiding hard coded constants.
> Less code means less room for bugs.
>
> Now to the one rule that does it all, posted by Derek, with the cascading if
> on the RHS. Looking at the code, I've seen a few things which might be worth
> noting.
>
> calcAgeAsOf is called many times. Why not call it once and save the result?
> Age limits and costs should be kept in a list to avoid code repetition by
> using a loop. Globals are one way of making such data available.
> Setting a variable depending on ?rel and using a single piece of logic
> avoids code duplication.
> The cost result is stored in the object using a call of some Java method.
> But bypassing Jess when modifying facts is not a good idea, most of the
> time.
>
> Below is the much reduced code.
> Of course, it still has the major deficiency of too much logic on the RHS:
>
> (defglobal ?*limAge* = (list 30 40 50 60 70)
>            ?*cstEmp* = (list 11.55 18.95 38.35 65.95 104.35)
>            ?*cstRel* = (list 4.20 6.05 10.90 17.80 27.40))
>
> (defrule setCalculatedCostGCI20k5k
>   ?hbj <- (HrBenefitJoin (hrBenefitConfigId "00001-0000000076")
>                          (benefitJoinId ?bjid)
>                          ;; (calculatedCost ?cost)
>                          (calculatedCost 0)
>                          ;; (OBJECT ?obj)
>                          (coveredPersonId ?cPer)
>                          (payingPersonId ?pPer)
>                          (relationshipId ?rel))
>   (Person (personId ?cPer)
>           (dob ?dob)
>           (OBJECT ?objP))
>   =>
>   ; call once, bind result
>   (bind ?years (call ?objP calcAgeAsOf (call com.arahant.utils.DateUtils now)))
>
>   ; avoids cut-and-paste repetition of code
>   (bind ?costs (if (eq ?rel nil) then ?*cstEmp* else ?*cstRel*))
>
>   (bind ?cost nil)
>   (for (bind ?i 1) (<= ?i (length$ ?*limAge*)) (++ ?i)
>     (if (< ?years (nth$ ?i ?*limAge*)) then
>       (bind ?cost (nth$ ?i ?costs))
>       (break)))
>
>   (if (eq ?cost nil) then
>     (printout t "age greater than 69... " ?years " for " ?bjid " " crlf)
>   else
>     ; Bypassing Jess when modifying is not a good idea, most of the time.
>     ; (call ?obj overrideAgeCost ?cost)
>     (modify ?hbj (calculatedCost ?cost))))
>
> -W
>
>
> On 7 January 2011 19:44, Jason Morris <jason.c.mor...@gmail.com> wrote:
>>
>> Hi Derek,
>>
>> OK OK... I wasn't going to say anything either, but with Peter and James
>> added to Wolfgang, I have to pile on too :-)
>>
>> IMHO there are a number of maxims of rule-based programming that you're
>> breaking here. Ernest has put the first one most succinctly in the past:
>> use many smaller rules that do one thing well rather than one Über Rule
>> that boils the ocean. Declarative programming should say what action(s)
>> are to be performed when certain facts are present, but not attempt to
>> implement those actions directly on the RHS.
>>
>> The second, as James and Peter pointed out, is not to do Java programming
>> on the RHS. If you make the RHS a series of method calls that take the
>> variable bindings from the LHS as arguments, your rules will be much
>> cleaner and more maintainable.
>>
>> Finally -- and this is just a plain programming nit -- I noticed the
>> apparent use of magic numbers in your conditions. What about all these
>> calls to overrideAgeCost()? Where are those args coming from? Are they
>> part of some policy? What if that policy changes? Then your rules would
>> be in a "dirty" form. Wouldn't it be better to get those values from some
>> cache or database with the most current values?
>>
>> Something that might help you:
>> I've been collecting rule-based metaphors lately -- different ways of
>> thinking about using rules. One metaphor that has been particularly
>> productive has been thinking about "digestion" and the passing of data
>> through a series of modules like a "digestive tract". Obviously, if you
>> carry the metaphor too far ... well, you see where the "garbage in /
>> garbage out" saying comes into play. :-) But the idea is that you are
>> moving the data through different states, partially processing it each
>> time a new rule module has a crack at it. This has precedent in
>> UNIX/LINUX with pipes -- same idea. Old wine in a new bottle? Perhaps.
>> But this way, you can clearly separate out concerns, add pre- and
>> post-processing functions, and test partial results by simply
>> disabling/enabling certain modules in the sequence.
>>
>> Cheers,
>> Jason
>>
>> ------------------------------------------------------
>> Jason Morris
>> Chairman, Rules Fest 2010/2011
>> http://www.rulesfest.org
>> Morris Technical Solutions LLC
>> consult...@morris-technical-solutions.com
>> (517) 304-5883
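P.S. Here is the zip code rating idea from the top of this message as a rough Jess sketch. The template names and slots (ZipRating, AutoPolicy, rate, territoryRate) are made up for illustration; in a real system the templates would come from your own object model, and the deffacts would be generated from a versioned, validated reference table rather than hand-written.

    ;; Reference data as facts: one ZipRating fact per zip code.
    (deftemplate ZipRating
      (slot zip)
      (slot rate))

    (deftemplate AutoPolicy
      (slot policyId)
      (slot zip)
      (slot territoryRate (default 0)))

    ;; Illustrative reference data; real data would be loaded from a table.
    (deffacts zip-ratings
      (ZipRating (zip "30301") (rate 1.25))
      (ZipRating (zip "30302") (rate 1.10)))

    ;; One rule covers every zip code: the LHS joins the policy to its
    ;; rating fact, so changing a rate only means changing a fact.
    (defrule assign-territory-rate
      ?p <- (AutoPolicy (zip ?z) (territoryRate 0))
      (ZipRating (zip ?z) (rate ?r))
      =>
      (modify ?p (territoryRate ?r)))

With this in place, moving a zip code to a different rate is purely a data change (modify or reassert the ZipRating fact); the single rule handles all 500 zip codes instead of 500 hard coded rules.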
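And on Jason's point about passing data through a series of rule modules like a pipeline, a minimal sketch of that style in Jess might look like the following. The module names and rules are invented for the example and reuse the hypothetical AutoPolicy and ZipRating templates from the sketch above; rules in a module other than MAIN refer to those templates with the MAIN:: prefix.

    ;; Each stage lives in its own module; the focus stack acts as the pipeline.
    (defmodule VALIDATE)
    (defmodule RATE)
    (defmodule REPORT)

    ;; Stage 1: throw out policies we can't rate.
    (defrule VALIDATE::reject-missing-zip
      ?p <- (MAIN::AutoPolicy (zip nil) (policyId ?id))
      =>
      (printout t "policy " ?id " has no zip code" crlf)
      (retract ?p))

    ;; Stage 2: join each remaining policy to its zip code rating fact.
    (defrule RATE::assign-territory-rate
      ?p <- (MAIN::AutoPolicy (zip ?z) (territoryRate 0))
      (MAIN::ZipRating (zip ?z) (rate ?r))
      =>
      (modify ?p (territoryRate ?r)))

    ;; Stage 3: report the rated policies.
    (defrule REPORT::print-rated-policy
      (MAIN::AutoPolicy (policyId ?id) (territoryRate ?r&:(> ?r 0)))
      =>
      (printout t "policy " ?id " rated at " ?r crlf))

    ;; Drive the pipeline one stage at a time; a stage can be disabled
    ;; for testing simply by skipping its focus/run pair.
    (focus VALIDATE)
    (run)
    (focus RATE)
    (run)
    (focus REPORT)
    (run)

Each stage only cares about the state the previous stage left the facts in, which is the "digestion" idea: partial processing at every step, with clear places to add pre- and post-processing or to test intermediate results.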