Yes, Alfredo came up with a more elegant solution than mine :) I've been
using it for the last month or so instead of my version.

The only problem multiple apps caused, I think, was when a (now-fixed) bug
in ARGUS caused it to modify the packet data of IPv6 packets; with
zero-copy, corruption like that affects all the other apps. If you're
running inline, it's even more important that this sort of thing doesn't
happen, I guess!

Best Wishes,
Chris

On 10/09/13 23:07, Alfredo Cardigliano wrote:
> Yes
> 
> Alfredo
> 
> On Sep 11, 2013, at 12:05 AM, Craig Merchant <[email protected]> wrote:
> 
>> Thanks, Alfredo…
>>  
>> Does that mean that pfdnacluster_master with -n 28,1 will split traffic up 
>> between queues 0-27 and then have a full copy of all traffic on queue 28?
>>  
>> Thanks.
>>
>> Craig
>>  
>> From: [email protected] 
>> [mailto:[email protected]] On Behalf Of Alfredo 
>> Cardigliano
>> Sent: Tuesday, September 10, 2013 3:04 PM
>> To: [email protected]
>> Subject: Re: [Ntop-misc] inline snort with dna + libzero
>>  
>> Hi Craig and Keith
>> please note that since PF_RING 5.5.3 pfdnacluster_master natively supports 
>> multiple applications with multiple instances (passing a comma-separated 
>> list of number of instances per application to -n).
>> However, in order to run Snort inline on top of the Libzero DNA Cluster, a 
>> "libzero-aware" DAQ is needed, because it has to use a specific API for 
>> zero-copy packet forwarding (see, for instance, pfdnabounce with -m 1).
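>>  
>> To illustrate the comma-separated -n syntax, a hypothetical invocation for 
>> 28 instances of one application plus a single instance of a second one 
>> might look like:
>>  
>>     pfdnacluster_master -i dna0 -c 10 -n 28,1
>>  
>> (dna0 and cluster ID 10 are just example values; check the 
>> pfdnacluster_master help output on your version for the exact option names.)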
>>  
>> Best Regards
>> Alfredo
>>  
>> On Sep 10, 2013, at 10:52 PM, Craig Merchant <[email protected]> wrote:
>>
>>
>> Our company uses redBorder’s product for managing Snort.  It’s pretty 
>> awesome (and free).  We are also using Argus for netflow monitoring.
>>  
>> Thanks to a modified version of pfdnacluster_master that Chris Wakelin 
>> provided to either this group or the Argus mailing list (can’t remember 
>> which), we are able to take one copy of our traffic and split it up among 28 
>> snort instances and then take another complete copy of the traffic and send 
>> it to the Argus daemon.  Since Argus doesn’t do any packet analysis, it 
>> really only needs one thread.  But Snort definitely needs all 28 threads to 
>> keep up with 4-8 Gbps of traffic.
>>  
>> The configuration is pretty simple.  You use a couple of command-line 
>> switches to tell pfdnacluster_master how many queues you want created and 
>> to give it a cluster ID.  Snort instances then use something like 
>> “-i dnacluster:10@0” or “-i dnacluster:10@27”; that tells a given Snort 
>> instance to listen on queue 0 or queue 27, respectively.
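>>  
>> As a rough sketch of the setup as a whole (the device dna0, cluster ID 10, 
>> and the exact DAQ name here are illustrative, not our actual config):
>>  
>>     # one copy split across 28 queues, plus a full copy on queue 28
>>     pfdnacluster_master -i dna0 -c 10 -n 28,1
>>     # 28 Snort instances, one per queue 0-27
>>     snort --daq-dir /usr/local/lib/daq --daq pfring -i dnacluster:10@0 ...
>>     snort --daq-dir /usr/local/lib/daq --daq pfring -i dnacluster:10@27 ...
>>     # Argus reading the full copy on queue 28
>>     argus -i dnacluster:10@28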
>>  
>> Chris was totally willing to share his code with me.  If you want to do 
>> something like what we’re doing, contact me offline and I’ll check in with 
>> him to see if it’s OK to share it.
>>  
>> Thanks.
>>
>> C
>>  
>> From: [email protected] 
>> [mailto:[email protected]] On Behalf Of Keith Forbus
>> Sent: Tuesday, September 10, 2013 1:36 PM
>> To: [email protected]
>> Subject: [Ntop-misc] inline snort with dna + libzero
>>  
>> Hi all,
>>
>> I'm currently running snort 2.9.5.3 inline on my network using pf_ring 5.6.1 
>> along with the igb DNA drivers and the pfring_dna DAQ.  I'm starting each 
>> instance of snort with something along the lines of "/usr/local/bin/snort 
>> --daq-dir /usr/local/lib/daq --daq pfring_dna -i dna0:dna1..."
>>
>> This has been working great, so no complaints there.  I was hoping to be 
>> able to introduce other applications that would need to see the traffic, 
>> such as OpenFPC for full packet captures.  I've read that libzero can be 
>> used for allowing multiple apps to access the traffic.  Most of the research 
>> I've done on the Internet shows examples of it being used with a passive 
>> snort installation.  
>>
>> My question is: can libzero be used with snort instances that are running in 
>> inline mode?  If not, any takes on how I should handle this?  Just wanted to 
>> get a feel for how others are handling this type of situation and any 
>> pointers you might have.
>>
>> Thanks.
>> _______________________________________________
>> Ntop-misc mailing list
>> [email protected]
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>  
> 
> 


-- 
--+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+-
Christopher Wakelin,                           [email protected]
IT Services Centre, The University of Reading,  Tel: +44 (0)118 378 8439
Whiteknights, Reading, RG6 2AF, UK              Fax: +44 (0)118 975 3094
