In message <[email protected]>, [email protected] writes:
> On Tue, 12 Oct 2010, Morris, Christopher wrote:
>> As you can guess -- besides a maintenance problem, there is also a
>> performance problem as each message may be examined by lots of
>> rules. I've mitigated this problem to some degree by creating
>> sections in our configuration file... There is a section at the
>> top called 'high-volume' that intercepts and handles events that
>> are known to be voluminous in nature. And then there's a
>> 'temporary' section to deal with short term or unusual conditions.
>> Finally, the remaining rules are ordered by the syslog 'tag'. I
>> sometimes use SIGUSR1 to dump the rule-usage frequency and may
>> reorder the rules to reduce the load on the system and to improve
>> message processing. But of course this adds to the maintenance
>> burden... (BTW, our loghost is a Sun R240 system.)

I usually suggest doing exactly what you are doing, along with breaking
the rules up into multiple sets (files) once you get more than 30 or 40
rules in a file. At that point the overhead associated with using a
framework starts to pay off.

>> I've experimented with creating rules in separate config files -
>> but this has two problems from my perspective: One is that it
>> doesn't help the problem of unnecessary examination of events by
>> rules that don't really apply to that ruleset --

It can with the proper framework. See below.

>> and (2) is that my 'philosophy/strategy' of using a set of rules
>> to filter out known events and reporting exceptions breaks down
>> when I use multiple config files... Or does it?

No, it doesn't have to. You just have to use contexts to control the
application of the additional rules files. Your catchall rules live at
the end of the rule processing chain. E.g.:

  http://www.cs.umb.edu/~rouilj/sec/rulesets/99rules_reset.sr

>> I guess I'm asking if there are some ideas/suggestions for
>> rules/patterns/strategies that I might be able to adopt to make our
>> SEC configuration better and to make management & maintenance
>> easier given the style and goals we have.
>
> if you put the rules in multiple files, group them by something and
> then have the first rule in the file be to match that something,
> otherwise skip checking all other rules in that file

An older way to do this is to use a framework that I wrote about in
2004. See section 4, "Strategies to Improve Performance", in:

  http://www.cs.umb.edu/~rouilj/sec/sec_paper_full.pdf

(linked from http://www.cs.umb.edu/~rouilj/sec/), and also:

  http://www.cs.umb.edu/~rouilj/sec/rulesets

for rulesets implementing the framework.

This was written before the jump rule was implemented in SEC, but a
single file of jump rules could be written that processes all events.
(Of course you would want to put the jump rules in order of the most
frequent matches.) The jump rules are then used to categorize the event
(by source host, application or other criteria) and submit it to a
single rules file, or a multi-file set of rules, specific to that
category (look for cfset in the man page of a recent SEC release).
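As a rough sketch of what that can look like (the file names, the
'sshd' set name and the patterns below are made up for illustration;
check the Jump and Options rule descriptions in your SEC man page for
the exact fields):

# 00-categorize.sec (hypothetical name): one file of jump rules that
# sees every event; put the most frequently matching jumps first.
type=Jump
ptype=RegExp
pattern=sshd\[\d+\]:
desc=route sshd events to the sshd rule file set
cfset=sshd

# 10-sshd.sec (hypothetical name): joins the 'sshd' set; procallin=no
# means it only sees events submitted to it by jump rules, not the
# whole input stream.
type=Options
joincfset=sshd
procallin=no

type=Single
ptype=RegExp
pattern=sshd\[\d+\]: Failed password for (\S+) from (\S+)
desc=sshd failed login for $1 from $2
action=write - failed ssh login for $1 from $2

With that arrangement the rules in 10-sshd.sec are never applied to
events that are not sshd events.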
You can implement catchall rules in one of two ways using the jump
mechanism:

1. When using a single ruleset, add a jump rule at the end of the
   ruleset that jumps to the catchall ruleset. It will fire only if
   none of the rules above it matched the event.

2. Append the catchall ruleset to the end of the multi-file set of
   rules used by the categorizing ruleset. (Note that you would still
   need the framework I presented if there are multiple rules files in
   the set accessed by the jump rule. Each file in the multi-file
   ruleset is processed virtually in parallel, just like SEC's normal
   processing of multiple rules files.)

Splitting a large single ruleset of 200 rules into 20 files using just
my framework results in 20 or so rules being applied before the
catchall rule is triggered (assuming the event doesn't match any
ruleset at all). Compare that to 200 rule applications when everything
was in a single file. If the event does match a ruleset, the number of
applications is 20, plus the average number of rules matched in that
ruleset, plus 3 or 4 for the framework.

Using a jump rule to select a particular ruleset would apply about the
same number of rules, but I think jump rules would make the decision
tree easier to visualize and manage. Also, using jump rules you could
more easily refine the path the event takes through the rulesets,
creating a binary-tree or hash-like mechanism to reduce the number of
rules the event needs to be tested against.

> has anyone thought about making a variation of SEC that could
> compile the rule matches down to a parse tree instead of doing a
> series of regex matches? at low traffic volumes it doesn't matter,
> but for high traffic volumes and large numbers of rules (like
> Christopher has) it could make a huge difference.

Cutting down regexp matches is the idea behind Risto's recent proposal
of a new rule type. Basically, there would be a template rule that, if
it matched the event, would parse the event into named fields (e.g.
SSHD_EVENT_TEMPLATE). Other rules can then check whether the template
was defined (because it matched the event) and, if so, use the named
fields in their actions. So multiple rules could use the same template:
the event is parsed once, but the parsed result is used to control
actions multiple times.
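To make the idea concrete, here is a sketch only, not Risto's actual
proposed syntax: the shape below borrows the varmap/Cached style of
pattern match caching found in later SEC releases, and the pattern,
field names and the SSHD_EVENT_TEMPLATE name are made up.

# Template rule: parse the event once and cache the result under a
# name, then let rule matching continue with TakeNext.
type=Single
ptype=RegExp
pattern=sshd\[(\d+)\]: Failed password for (\S+) from (\S+)
varmap=SSHD_EVENT_TEMPLATE; pid=1; user=2; ip=3
continue=TakeNext
desc=parse sshd failed-password events into named fields
action=none

# Consumer rule: matches only if the template matched this event, and
# reuses the cached named fields without running another regexp.
type=Single
ptype=Cached
pattern=SSHD_EVENT_TEMPLATE
desc=failed ssh login for $+{user}
action=write - failed ssh login for user $+{user} from $+{ip}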
-- 
                                -- rouilj
John Rouillard
===========================================================================
My employers don't acknowledge my existence much less my opinions.