Hi Ryan, hi all,
My experience with the Core Rules is limited: I installed them, tweaked them to
match my needs, and generally use my own rules for research/toy purposes. Based
on that limited experience, I'd like to start the discussion a little ahead of
time, as I believe the fundamentals need to be *right* and *transparent* before
we mangle the existing rules into more complicated ones.
So, I'd like to start with what I am currently missing in the rule sets, trying
to compile a list of objectives we should all agree upon before getting our
hands dirty:
(1) Transparency
The concept behind the rules needs to be transparent to the user to a high
degree. If it takes more than two days to read and understand the structure
of the rules, it may scare people off from using them in the first place.
Likewise, the outcome (the alerts) of the rules needs to be understandable to
anyone. The switch from the old (pre-2.x) version to anomaly scoring was a
good idea, but the communication was suboptimal.
Is there any central place where the scale of the anomaly score is documented?
What is a "good" threshold meant to be? Can we fix this beforehand? Can you
share experiences with tests, e.g. the average score when hitting some app
with sqlmap?
How are the severity levels being used? I know they exist, but I have no clue
how consistently, and with which intention, they are spread across the rules.
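For illustration, this is my rough understanding of the 2.x mechanics, and
exactly this kind of thing should be spelled out in the docs (variable names
and defaults from memory, so treat it as a sketch, not as authoritative):

    # In the setup file: per-severity score increments and the
    # blocking threshold (defaults as I remember them)
    SecAction "phase:1,t:none,nolog,pass, \
      setvar:tx.critical_anomaly_score=5, \
      setvar:tx.error_anomaly_score=4, \
      setvar:tx.warning_anomaly_score=3, \
      setvar:tx.notice_anomaly_score=2, \
      setvar:tx.inbound_anomaly_score_level=5"

    # Each matching detection rule raises the transaction score by the
    # increment of its severity, e.g. for a CRITICAL rule:
    #   setvar:tx.anomaly_score=+%{tx.critical_anomaly_score}

    # A central blocking rule then compares the accumulated score
    # against the threshold:
    SecRule TX:ANOMALY_SCORE "@ge %{tx.inbound_anomaly_score_level}" \
      "phase:2,t:none,log,deny,msg:'Inbound Anomaly Score Exceeded'"

If that picture is correct, a single CRITICAL hit already crosses the default
threshold, which is precisely the kind of consequence users need to have
written down somewhere.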
(2) Modularity & Documentation
The current rules have a modular structure, but to most people it is not clear
how exactly to adjust them. Though I do like the blog posts we have about
tweaking the rules, we need some central, clean, and well-structured
documentation that is maintainable and easy to understand. A sketch of the
adjustment directives I find myself explaining most often follows below.
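Even these two directives, which cover most adjustment cases, seem to be
unknown to many users (the rule IDs here are only examples):

    # Disable a rule entirely, e.g. a known false positive
    # on a particular application
    SecRuleRemoveById 960012

    # Exclude a single parameter from one rule instead of
    # disabling the whole rule (requires ModSecurity >= 2.6)
    SecRuleUpdateTargetById 981172 "!ARGS:password"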
I'd propose to use a simple style (e.g. Markdown) to document the rules inline
and additionally provide tools to automatically generate the docs from that.
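A purely hypothetical sketch of what such inline documentation could look
like (the rule body is abbreviated and only meant as an example):

    # ## Rule 960012: POST request missing Content-Length header
    #
    # **Tags:** protocol_violation
    # **Severity:** WARNING
    # **False positives:** clients using chunked transfer-encoding
    SecRule REQUEST_METHOD "^POST$" "chain,phase:2,t:none,block, \
      id:'960012',severity:'4',msg:'POST request missing Content-Length'"
      SecRule &REQUEST_HEADERS:Content-Length "@eq 0"

A small script could then extract the comment blocks and render the central
rule pages from them.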
The idea of the rule-doc template is nice, but we need a nice and *clean*
looking central page for it (the current homepage at OWASP looks terribly
confusing).
For example, if I'm hit by rule ID 960012, it would be perfect to go to
http://www.modsecurity.org/crs/rule/960012
and immediately get an explanation of that rule.
I'd be happy to directly create a link to such a URL in the AuditConsole. A
similar thing could be done for rule tags. For example, the AuditConsole
provides a "goto" link based on hashtags: if an event is tagged as
"#sql-injection", one can define a target such as
http://my.internal.wiki.org/security/sql-injection
(3) Usage Statistics
Ryan once brought up the idea of gathering usage statistics on rules: some
central place to simply collect "the hits for rule X", "the average TX score
per request", or the like.
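Even without such a central place, everyone could already gather a rough
local approximation via a persistent collection; a minimal sketch, assuming
SecDataDir is configured (the collection and variable names are made up):

    # Phase 5: accumulate simple counters in a persistent collection
    SecAction "phase:5,t:none,nolog,pass, \
      initcol:global=crs_stats, \
      setvar:global.request_count=+1, \
      setvar:global.score_sum=+%{tx.anomaly_score}"

The average TX score per request is then score_sum/request_count, and a cron
job could periodically dump the collection and submit the aggregates.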
Would anyone be interested in sharing such data if a central place existed?
I briefly discussed the option of including a "report false positive" button
in the AuditConsole. It might, for example, send a report with an obfuscated
audit event to a false-positive-report mailing list.
Would anyone use such a thing?
What kind of information is one willing to provide?
If a central place/application to gather such information is wanted, I'd be
happy to help build it.
(4) RPM/Debian Packages
Josh and I discussed "the perfect setup" a while ago, thinking about the best
structure for setting up ModSecurity rules and configurations. A simple
structure shipped within RPM and Debian packages would ease the setup and
could be valuable as an upgrade tool as well.
From my perspective it would be perfect to follow Ivan's ModSecurity Handbook
in this respect (e.g. the /opt/modsecurity/ directory structure), as sketched
below. This could go hand in hand with signing the packages with official
GPG keys. In addition to providing a ZIP or an alternative package (e.g. for
Windows), of course.
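From memory, that layout is roughly the following (details may differ from
the Handbook):

    /opt/modsecurity/
      bin/        helper scripts and tools
      etc/        main configuration and the rule sets
      var/
        audit/    audit logs
        data/     persistent collections (SecDataDir)
        log/      debug log
        tmp/      temporary files (SecTmpDir)
        upload/   intercepted uploads (SecUploadDir)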
(5) Regression Testing / Evaluation
The current CRS rules ship with a regression-test environment. Has anyone
ever used it? Does an official test-data set exist? Can we assemble one?
Maybe we could run the tests automatically and have a way of documenting
the outcomes?
I've been working on a similar testing environment to replay HTTP traffic
with the jwall-tools. The idea was like this (a sketch of a possible test
descriptor follows the list):
- allow tagging events as "false positive" or "true positive" in the
  AuditConsole
- download these events into a local test data set
- specify expectations (e.g. the expected score/threshold) for each event
- replay the events/requests and check the resulting score against the
  expected one
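Purely as an illustration (this is not an existing jwall-tools format), a
test descriptor could be as simple as:

    # one descriptor per replayed audit event (hypothetical format)
    event  = events/2012-02-13/event-0001.audit
    label  = false-positive
    expect = tx.anomaly_score < 5   # must stay below the blocking threshold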
If there's interest in completing that, I'd be happy to further extend what
I already have.
@Ryan, please don't take any of this as a personal offence; none is intended.
I know how difficult it is to write *generic* rules and highly appreciate all
the work you've put into them.
Looking forward to contributing to a revised Core Rule Set.
Chris
On 13.02.2012, at 18:00, Ryan Barnett wrote:
> Greetings everyone,
> Please excuse the cross-posting but I wanted to make sure that everyone saw
> this post. We are going to start an initiative to re-architect the OWASP
> ModSecurity Core Rule Set. We, SpiderLabs, want this to be a
> "community-based" effort where we openly discuss various methods of
> architecting the CRS so that they provide the most value. Here are a few
> goals -
>
> 1. To make the CRS more accurate – which means to significantly reduce the #
> of false positives. Most users want to move to a blocking mode but can't until
> they are comfortable with the accuracy of the rules.
> 2. To make exceptions easier – there are a number of scenarios where
> exceptions need to be made to exclude certain parameters or URLs from
> inspection.
> 3. To increase the security coverage – which means to reduce the # of false
> negatives. We don't want to miss any legitimate attacks.
>
> We will be starting a string of discussion threads on the OWASP ModSecurity
> CRS mail-list -
> https://lists.owasp.org/mailman/listinfo/owasp-modsecurity-core-rule-set
>
> If you would like to participate in this project – I suggest that you sign up
> for the mail-list. We want feedback from all different types of ModSecurity
> users – home users, corporate users, government, education, hosting
> providers, etc… Let us know what your challenges are so that we can fix them!
>
> --
> Ryan Barnett
> Trustwave SpiderLabs
> ModSecurity Project Leader
> OWASP ModSecurity CRS Project Leader
>
_______________________________________________
Owasp-modsecurity-core-rule-set mailing list
[email protected]
https://lists.owasp.org/mailman/listinfo/owasp-modsecurity-core-rule-set