> On 02 Dec 2015, at 16:08, James <[email protected]> wrote:
> 
> Ah... so if there are only 16 queues, I should go back to only 16 copies of snort?

Yes, or you can consider using zbalance_ipc to load-balance in software to 
more queues, but I do not know what performance you can reach with 48 queues; 
you should run some tests.
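For reference, a rough sketch of the zbalance_ipc approach, untested here: flag meanings are taken from zbalance_ipc's usage text (-i ingress devices, -c ZC cluster id, -n number of egress queues, -m 1 for IP-hash distribution, -g core affinity), and the cluster id 99 is an arbitrary choice. Check `zbalance_ipc -h` on your build before relying on these.

```shell
# Fan both ZC interfaces out to 48 software queues on cluster 99
# (run as root; requires the ixgbe ZC driver loaded and hugepages configured).
zbalance_ipc -i zc:eth4,zc:eth5 -c 99 -n 48 -m 1 -g 0

# Each snort instance then attaches to a cluster queue instead of an RSS queue:
# snort ... --daq pfring_zc --daq-mode passive -i zc:99@0 --daq-var bindcpu=1
```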

Alfredo

> 
> On 2 December 2015 at 15:06, Alfredo Cardigliano <[email protected]> wrote:
> Sorry, forgot to tell you RSS on ixgbe supports up to 16 queues.
> 
> Alfredo
> 
>> On 02 Dec 2015, at 16:06, Alfredo Cardigliano <[email protected]> wrote:
>> 
>> It looks fine, you can omit MQ.
>> 
>> Alfredo
>> 
>>> On 02 Dec 2015, at 16:04, James <[email protected]> wrote:
>>> 
>>> Many thanks for the help Alfredo. So I'll crank things up to use all CPUs, 
>>> which gives me (I've converted it to a for loop):
>>> 
>>> for i in `seq 0 47`; do
>>> snort -q --pid-path /var/run --create-pidfile -D -c /etc/snort/snort.conf \
>>> -l /logs/snort/eth4_eth5/instance-$i --daq-dir=/usr/local/lib/daq --daq \
>>> pfring_zc --daq-mode passive -i zc:eth4@$i,zc:eth5@$i --daq-var \
>>> clusterid=$i --daq-var idsbridge=1 --daq-var bindcpu=$i
>>> done
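Before launching all 48 instances, it may be worth dry-running the launch loop with echo standing in for the snort binary, just to confirm the per-instance queue/cluster/core ids line up (a sketch showing only the varying flags; note that inside a for loop the variable is $i, not the positional parameter $1):

```shell
# Dry run: print what each instance would execute before launching for real.
for i in $(seq 0 47); do
  echo "snort -i zc:eth4@$i,zc:eth5@$i --daq-var clusterid=$i --daq-var bindcpu=$i"
done
```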
>>> 
>>> Is this the correct load_driver.sh setting to match? I'm not sure about the 
>>> MQ values:
>>> insmod ./ixgbe.ko MQ=1,1,1,1 RSS=48,48,48,48
>>> 
>>> On 2 December 2015 at 14:15, Alfredo Cardigliano <[email protected]> wrote:
>>> Please use README.1st as reference.
>>> What you need to know:
>>> 1. Use --daq-var clusterid=K where K is a unique number per snort instance, 
>>> used for resource allocation.
>>> 2. Use --daq-var bindcpu=K where K is the core id for affinity; interrupt 
>>> affinity can be ignored with ZC.
>>> 3. Use “,” in -i in place of “+” for interfaces aggregation; “+” is used 
>>> for IPS/IDS-bridge mode.
>>> 4. We usually recommend using only the CPU where the NIC is connected, 
>>> however since snort is (likely) the bottleneck, feel free to use all the 
>>> cores available, setting RSS=N,N where N is the number of cores and the 
>>> number of snort instances.
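To make points 3 and 4 above concrete (the interface names and the 16-core figure are just this thread's examples, not requirements):

```shell
# Point 3: "," aggregates two interfaces into one passive capture stream,
# while "+" pairs them as a transparent bridge:
#   -i zc:eth4@0,zc:eth5@0    aggregation (passive IDS)
#   -i zc:eth4+zc:eth5        IPS/IDS-bridge mode

# Point 4: one RSS queue per core and per snort instance, e.g. N=16
# on a NIC with four ports:
insmod ./ixgbe.ko RSS=16,16,16,16
```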
>>> 
>>> Alfredo
>>> 
>>>> On 02 Dec 2015, at 15:08, James <[email protected]> wrote:
>>>> 
>>>> Follow-up question - should I use the cluster-id parameter?
>>>> 
>>>> This uses it:
>>>> https://svn.ntop.org/svn/ntop/trunk/attic/PF_RING/userland/snort/pfring-daq-module-zc/README.1st
>>>> 
>>>> But this does not:
>>>> http://www.ntop.org/pf_ring/accelerating-snort-with-pf_ring-dna/
>>>> 
>>>> On 2 December 2015 at 14:01, James <[email protected]> wrote:
>>>> Hi all,
>>>> 
>>>> I posted a few weeks ago and have since got PF_RING with ZC working. I'm 
>>>> now trying to decide how best to configure snort (in IDS mode). My server 
>>>> has 4 x 12-core CPUs and two NICs, each being fed one half of a 
>>>> 10Gb connection.
>>>> 
>>>> I have a few key questions:
>>>> - Within the ixgbe ZC load_driver.sh script, would the default 16-queue 
>>>> option do, or would you choose something different: insmod ./ixgbe.ko 
>>>> MQ=1,1,1,1 RSS=16,16,16,16
>>>> 
>>>> - Assuming the choice of 16 above, should I start 16 copies of Snort like 
>>>> this (variation on the example from ntop website)?
>>>> snort -q --pid-path /var/run --create-pidfile -D -c /etc/snort/snort.conf 
>>>> -l /var/log/snort/eth4_eth5/instance-1 --daq-dir=/usr/local/lib/daq --daq 
>>>> pfring_zc --daq-mode passive -i zc:eth4@0+zc:eth5@0 --daq-var idsbridge=1 
>>>> --daq-var bindcpu=0
>>>> 
>>>> The information on http://www.metaflows.com/features/pf_ring about CPU 
>>>> affinity and interrupts has confused me somewhat.
>>>> 
>>>> Thanks
>>>> J.
>>>> 
>>>> _______________________________________________
>>>> Ntop-misc mailing list
>>>> [email protected] <mailto:[email protected]>
>>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
>>>> <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>
>>> 
> 
> 
