Re: [pfSense-discussion] IDS yet?

2006-11-03 Thread Travis H.

On 10/6/06, Chris Buechler <[EMAIL PROTECTED]> wrote:

Scott Ullrich wrote:
> It is a delayed IDS.  Generally an IPS hooks into the network stack
> directly and does not allow the traffic to pass through until it's
> scanned.


Yep, sometimes these are called intrusion reaction systems, reactive
firewalls, or other sundry terms.


> And generally you probably aren't going to want to hook snort into your
> network stack like that, because of the limitations of PC hardware.


You could, theoretically, disable routing, then let BPF read packets
on one side and inject them on the other.  However, the performance
penalty of moving into userspace, through an application (scheduler
latency), and then back out to kernel space is probably prohibitive.
But at least you know when you're hitting your limit, without risking
silently dropped packets.
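
For the curious, here's a minimal sketch of what that BPF bridge
might look like on FreeBSD.  The interface names em0/em1 are
placeholders, error handling is omitted, and a real bridge would need
a second loop for the reverse direction:

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <net/bpf.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static int bpf_open(const char *ifname)
{
    /* modern FreeBSD clones /dev/bpf; older systems use /dev/bpf0... */
    int fd = open("/dev/bpf", O_RDWR);
    struct ifreq ifr;
    u_int on = 1;

    memset(&ifr, 0, sizeof(ifr));
    strlcpy(ifr.ifr_name, ifname, sizeof(ifr.ifr_name));
    ioctl(fd, BIOCSETIF, &ifr);      /* attach to the interface */
    ioctl(fd, BIOCIMMEDIATE, &on);   /* hand packets up as they arrive */
    return fd;
}

int main(void)
{
    int in = bpf_open("em0"), out = bpf_open("em1");
    u_int buflen;
    char *buf;

    ioctl(in, BIOCGBLEN, &buflen);   /* reads must use the bpf buffer size */
    buf = malloc(buflen);

    for (;;) {
        ssize_t n = read(in, buf, buflen);
        /* one read() may return several packets, each with a bpf_hdr */
        for (char *p = buf; n > 0 && p < buf + n; ) {
            struct bpf_hdr *bh = (struct bpf_hdr *)p;
            /* ...run your matching here, then re-inject; a write()
               on a bpf fd transmits on the attached interface */
            write(out, p + bh->bh_hdrlen, bh->bh_caplen);
            p += BPF_WORDALIGN(bh->bh_hdrlen + bh->bh_caplen);
        }
    }
}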

What you really want is to push the matching up into the kernel using
some sort of sandboxing, so that the complicated decoders and such
can't cause a kernel panic.  Some recent research papers show that
this can be done with a ~17% performance penalty on x86 hardware
using instruction re-writing.  Then you can do all your work without
incurring a copy/remap between kernelspace and userland.
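
That sandboxed-decoder work is research, not something you can run
today; but the flavor of it is familiar, because a classic BPF filter
already does its (much simpler) matching in the kernel, so
non-matching packets are never copied to userland at all.  A sketch,
assuming a bpf descriptor opened as above:

#include <sys/types.h>
#include <sys/ioctl.h>
#include <net/bpf.h>

/* accept only IPv4 TCP; everything else is rejected in the kernel
   and never copied out to userland */
static int set_tcp_filter(int bpf_fd)
{
    static struct bpf_insn insns[] = {
        BPF_STMT(BPF_LD + BPF_H + BPF_ABS, 12),            /* ethertype   */
        BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, 0x0800, 0, 3), /* IPv4?       */
        BPF_STMT(BPF_LD + BPF_B + BPF_ABS, 23),            /* IP protocol */
        BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, 6, 0, 1),      /* TCP?        */
        BPF_STMT(BPF_RET + BPF_K, (u_int)-1),              /* accept pkt  */
        BPF_STMT(BPF_RET + BPF_K, 0),                      /* reject      */
    };
    struct bpf_program prog = { sizeof(insns) / sizeof(insns[0]), insns };

    return ioctl(bpf_fd, BIOCSETF, &prog);
}
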
--
"It's not like I'm encrypting... it's just that my communications
developed a massive entropy deficiency." -><-
http://www.subspacefield.org/~travis/
GPG fingerprint: 9D3F 395A DAC5 5CCC 9066  151D 0A6B 4098 0C55 1484


Re: [pfSense-discussion] IDS yet?

2006-11-03 Thread Travis H.

Going through some old email, sorry for the anachronism.

On 10/4/06, Bill Marquette <[EMAIL PROTECTED]> wrote:

> Sorry, but I don't totally agree with you: the thing I love about
> pfSense is that it is possible to install it everywhere, so it could
> be a _real_ competitor to enterprise products (like the Cisco ASA).
> So I think that CPU power should not be a limit.


Yep, and when you deploy end-to-end encryption, the "One Big NIDS"
isn't going to help you very much.  Plus, there are plenty of evasion
techniques that rely on the NIDS not knowing the network topology or
how the endpoint will interpret a series of packets.  So the end
result is either false alarms, or having to enter details about your
endpoints and topology, or features to "learn" that information.  But
it's all rearranging deck chairs on the Titanic.

Let me put it this way: if I send you the last fragment of a packet,
wait n seconds, then send you the other fragments, how does the NIDS
know whether the endpoint timed out the fragment reassembly, in which
case the packet is not passed to the application, or whether it kept
the fragment, in which case the packet is passed to the application?
Now think about every single parameter in the stack and you'll start
to get an idea of how impossible this is.  If your NIDS tries to
handle all possibilities, not only do you get lots of false
positives, but there's a state explosion that I can use to DoS your
NIDS.
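
If you want to try this yourself, here's a rough sketch of the trick
with a raw socket.  I'm assuming Linux-style raw socket semantics
here (ip_off/ip_len in network byte order; the BSDs historically want
host byte order), the addresses are placeholders, and it needs root.
Send the last fragment, wait, then send the first, and compare what
your NIDS reports with what the endpoint actually reassembles:

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/ip.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>

static void send_frag(int s, struct sockaddr_in *dst, int off_units,
                      int more, const char *data, size_t len)
{
    char pkt[1500];
    struct ip *ip = (struct ip *)pkt;

    memset(pkt, 0, sizeof(pkt));
    ip->ip_v   = 4;
    ip->ip_hl  = 5;
    ip->ip_len = htons(sizeof(*ip) + len);
    ip->ip_id  = htons(0x1234);        /* same ID ties the fragments together */
    ip->ip_off = htons((more ? IP_MF : 0) | off_units);  /* 8-byte units */
    ip->ip_ttl = 64;
    ip->ip_p   = IPPROTO_UDP;          /* fake payload, doesn't matter */
    ip->ip_src.s_addr = inet_addr("10.0.0.1");           /* placeholder */
    ip->ip_dst = dst->sin_addr;
    /* ip_sum left 0: the kernel fills it in under IP_HDRINCL */
    memcpy(pkt + sizeof(*ip), data, len);
    sendto(s, pkt, sizeof(*ip) + len, 0,
           (struct sockaddr *)dst, sizeof(*dst));
}

int main(void)
{
    int s = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);
    int one = 1;
    struct sockaddr_in dst;
    char part1[8] = "AAAAAAA", part2[4] = "BBB";

    setsockopt(s, IPPROTO_IP, IP_HDRINCL, &one, sizeof(one));
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_addr.s_addr = inet_addr("10.0.0.2");         /* placeholder */

    /* last fragment first: data starts at offset 8 (1 unit), MF clear */
    send_frag(s, &dst, 1, 0, part2, sizeof(part2));
    sleep(30);   /* the "n seconds" - longer than the victim's timeout? */
    /* now the first fragment: offset 0, MF set */
    send_frag(s, &dst, 0, 1, part1, sizeof(part1));
    return 0;
}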

Let me put it this way: if all your clients start up massive
transfers at once, then a single NIDS at the gateway must be prepared
to handle it - basically, you have to provision capacity for the
whole uplink.  But with NIDS done at the endpoints, which I call a
distributed host-based IDS, each machine is responsible for its own
traffic.  You don't have to provision for your full uplink speed;
each endpoint transfers at the rate it can handle, automatically
throttling itself if the IDS portion takes a CPU hit.  The gateway
NIDS can then be used to monitor only those machines which can't run
it, and for attacks against the network stack itself.


> We have a serious disadvantage against hardware firewalls.  Where they
> can crank out ASICs tuned to specific needs (which comes with a
> disadvantage we don't have...flexibility), we're stuck with general
> purpose CPUs which aren't necessarily fast.


What an ASIC can do now, a general-purpose CPU can do in ~3 years.
So all that investment in ASIC design gets expensive; if you don't
keep pumping money into it, the general-purpose CPUs will catch up,
offering economies of scale that you can't match no matter how
popular your NIDS.

What really matters most is parallelization, in that you can have a
board with 8 ASICs.  But SMP can play that game too, with more
cost-effective parts that have utility beyond that one task.


> Let us also not forget that CPUs aren't getting faster, they're
> scaling wider


Mostly that's due to the limited bandwidth of RAM, not a problem with
CPU technology.  A modern CPU can have a thread execute instructions
as fast as RAM can deliver them.  So you get things like
dual-channel, where you've got two parallel paths to RAM, so that
hopefully it can deliver two instruction streams at full speed.  If
RAM isn't speeding up, just getting wider, of course the CPU is going
to go that way too, because RAM is the bottleneck.

The question is whether packet inspection can benefit from this,
i.e. whether the problem is parallelizable.  I think it is, at least
across different sessions.  Right now people doing heavy-duty
centralized IDS have to buy multiple overpriced boards with
proprietary designs, populated with many ASICs, and then get a
session-aware load balancer to distribute traffic across them.  So
yes, it is parallelizable.  And networks are getting faster; perhaps
not at the rate CPU power is, but WAY faster than RAM and disk.  This
means there's going to be more pressure on the NIDS.
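
And the dispatch step itself is cheap.  Here's a sketch of the
session-aware part: hash the 5-tuple so both directions of a flow
always land on the same worker.  The struct and field names are just
illustrative, not from any particular stack:

#include <stdint.h>
#include <string.h>

struct flow {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
};

/* FNV-1a: a simple, well-known hash; any decent mixer works here */
static uint32_t fnv1a(const unsigned char *p, size_t n)
{
    uint32_t h = 2166136261u;
    while (n--) { h ^= *p++; h *= 16777619u; }
    return h;
}

static unsigned pick_worker(const struct flow *f, unsigned nworkers)
{
    struct flow k;

    /* canonicalize so both directions of a session hash alike */
    memset(&k, 0, sizeof(k));    /* zero padding bytes before hashing */
    if (f->src_ip < f->dst_ip ||
        (f->src_ip == f->dst_ip && f->src_port <= f->dst_port)) {
        k.src_ip = f->src_ip;     k.dst_ip = f->dst_ip;
        k.src_port = f->src_port; k.dst_port = f->dst_port;
    } else {
        k.src_ip = f->dst_ip;     k.dst_ip = f->src_ip;
        k.src_port = f->dst_port; k.dst_port = f->src_port;
    }
    k.proto = f->proto;
    return fnv1a((const unsigned char *)&k, sizeof(k)) % nworkers;
}

With that, per-session reassembly state never has to cross CPUs,
which is exactly the property the expensive load balancers are
selling.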

For more information, and some rules of thumb, check out:
http://storagemojo.com/?page_id=207
--
"It's not like I'm encrypting... it's just that my communications
developed a massive entropy deficiency." -><-
http://www.subspacefield.org/~travis/
GPG fingerprint: 9D3F 395A DAC5 5CCC 9066  151D 0A6B 4098 0C55 1484