Node means NUMA node (i.e. CPU socket), not a process or a queue.
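
For instance (just a quick check, assuming the usual sysfs layout), you can list the NUMA nodes on your box with:

  $ ls -d /sys/devices/system/node/node*

or, if numactl is installed, with:

  $ numactl --hardware

With 4 CPU sockets this should report nodes 0-3, so the per-node hugepage setting below stops at node3, not node15.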

Alfredo

> On 04 Dec 2015, at 10:41, James <[email protected]> wrote:
> 
> Hopefully my last stupid question - is a node the same thing as a process/queue 
> in this context? So should I do that all the way up to 15?
> 
> On 4 December 2015 at 09:32, Alfredo Cardigliano <[email protected]> wrote:
> 
>> On 04 Dec 2015, at 10:27, James <[email protected]> wrote:
>> 
>> Thanks Alfredo, even more so for replying so quickly! I respect the "teach a 
>> man to fish..." method of helping, but that's a lot of parameters and options, 
>> and I'd be making complete guesses at which ones to change and to what 
>> values. Would it be possible to recommend what you'd change based on the 
>> spec of my system?
> 
> In essence, if you have 4 nodes, you should set the number of huge pages per 
> node with:
> 
>   $ echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
>   $ echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
>   $ echo 1024 > /sys/devices/system/node/node2/hugepages/hugepages-2048kB/nr_hugepages
>   $ echo 1024 > /sys/devices/system/node/node3/hugepages/hugepages-2048kB/nr_hugepages
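> 
> Equivalently, as a loop (a minimal sketch, assuming 4 nodes and 2 MB hugepages, run as root):
> 
>   for n in 0 1 2 3; do
>     echo 1024 > /sys/devices/system/node/node$n/hugepages/hugepages-2048kB/nr_hugepages
>   done
> 
> You can check the per-node allocation afterwards with:
> 
>   $ cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages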
> 
>> I've also just noticed that the load_driver script should be changed to 
>> "insmod ./ixgbe.ko RSS=16,16" because I'm only monitoring two NICs - is that 
>> correct?
> 
> Correct
> 
> Alfredo
> 
>> 
>> Thank you again
>> J.
>> 
>> On 4 December 2015 at 09:04, Alfredo Cardigliano <[email protected]> wrote:
>> Please note that the total number of pages is divided among the nodes; please take a 
>> look at https://github.com/ntop/PF_RING/blob/dev/doc/README.hugepages
>> 
>> Alfredo
>> 
>>> On 04 Dec 2015, at 10:00, James <[email protected]> wrote:
>>> 
>>> In case it helps anyone else reading this, my startup script needed some 
>>> corrections, ending up with:
>>> 
>>> for i in `seq 0 1 15`; do
>>>   snort -q -u snort -g snort --pid-path /var/run --create-pidfile -D \
>>>     -c /etc/snort/snort.conf -l /logs/snort/eth4_eth5/instance-$i \
>>>     --daq-dir=/usr/local/lib/daq --daq pfring_zc --daq-mode passive \
>>>     -i zc:eth4@$i,zc:eth5@$i --daq-var clusterid=$i --daq-var bindcpu=$i
>>> done
>>> 
>>> i.e. I needed to remove idsbridge=1 (might it be a mistake that the 
>>> IDS-with-multiqueue example in the README.1st I linked earlier does have this 
>>> set?) and I needed to change my variable to $i instead of $1.
>>> 
>>> However, when I run this script it only starts 4 "daemon child" processes, then 
>>> gives a "bus error" on the next 4, and then returns to the command line with no 
>>> mention of the other 8. /var/log/messages tells me:
>>> snort[4888]: FATAL ERROR: Can't initialize DAQ pfring_zc (-1) - pfring_zc_daq_initialize: Cluster failed: No buffer space available (error 105)
>>> ZC[4897]: error mmap'ing hugepage /mnt/huge/pfring_zc_14: Cannot allocate memory
>>> ZC[4897]: error mmap'ing 128 hugepages of 2048 KB
>>> 
>>> If it's relevant, when I run the ZC load_driver script (I took the 
>>> MQ=1,1,1,1 out as advised, so it just has "insmod ./ixgbe.ko 
>>> RSS=16,16,16,16"), it says:
>>> Warning: 512 hugepages available, 1024 requested
>>> 
>>> Things I have checked are:
>>> sudo more /sys/kernel/mm/transparent_hugepage/enabled
>>> always madvise [never]
>>> 
>>> sudo more /proc/sys/vm/nr_hugepages
>>> 1024
>>> 
>>> sudo more /proc/meminfo
>>> MemTotal:       32748700 kB
>>> MemFree:        27563604 kB
>>> <snip>
>>> AnonHugePages:         0 kB
>>> HugePages_Total:    1024
>>> HugePages_Free:     1024
>>> HugePages_Rsvd:        0
>>> HugePages_Surp:        0
>>> Hugepagesize:       2048 kB
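>>> 
>>> (A per-node view, if it helps - assuming the usual sysfs layout - is available with:
>>> 
>>> sudo grep HugePages /sys/devices/system/node/node*/meminfo
>>> 
>>> which shows how the 1024 pages are actually spread across the four NUMA nodes.)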
>>> 
>>> Any ideas on how I can fix this please?
>>> 
>>> Thanks
>>> J.
>>> 
>>> On 2 December 2015 at 15:10, Alfredo Cardigliano <[email protected]> wrote:
>>> 
>>>> On 02 Dec 2015, at 16:08, James <[email protected]> wrote:
>>>> 
>>>> Ah... so with only 16 queues, I should go back to only 16 copies of snort?
>>> 
>>> Yes, or you can consider using zbalance_ipc to load-balance in software to 
>>> more queues, but I do not know what performance you can reach with 48 queues - 
>>> you should run some tests.
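>>> 
>>> For example (just a sketch - check zbalance_ipc -h for the exact options in your PF_RING version), balancing both interfaces to 48 software queues could look like:
>>> 
>>> zbalance_ipc -i zc:eth4,zc:eth5 -c 99 -n 48 -m 1 -g 0
>>> 
>>> with each snort instance then reading from one egress queue (-i zc:99@0 ... zc:99@47) instead of an RSS queue. The cluster id 99 and core 0 here are just placeholders.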
>>> 
>>> Alfredo
>>> 
>>>> 
>>>> On 2 December 2015 at 15:06, Alfredo Cardigliano <[email protected]> wrote:
>>>> Sorry, I forgot to tell you that RSS on ixgbe supports up to 16 queues.
>>>> 
>>>> Alfredo
>>>> 
>>>>> On 02 Dec 2015, at 16:06, Alfredo Cardigliano <[email protected]> wrote:
>>>>> 
>>>>> It looks fine, you can omit MQ.
>>>>> 
>>>>> Alfredo
>>>>> 
>>>>>> On 02 Dec 2015, at 16:04, James <[email protected]> wrote:
>>>>>> 
>>>>>> Many thanks for the help Alfredo. So I'll crank things up to use all 
>>>>>> CPUs, and that gives me (I've converted it to a for loop):
>>>>>> 
>>>>>> for i in `seq 0 1 48`; do
>>>>>>   snort -q --pid-path /var/run --create-pidfile -D \
>>>>>>     -c /etc/snort/snort.conf -l /logs/snort/eth4_eth5/instance-$1 \
>>>>>>     --daq-dir=/usr/local/lib/daq --daq pfring_zc --daq-mode passive \
>>>>>>     -i zc:eth4@$1,zc:eth5@$1 --daq-var clusterid=$1 --daq-var idsbridge=1 \
>>>>>>     --daq-var bindcpu=$1
>>>>>> done
>>>>>> 
>>>>>> Is this the correct load_driver.sh setting to match? I'm not sure about 
>>>>>> the MQ values?
>>>>>> insmod ./ixgbe.ko MQ=1,1,1,1 RSS=48,48,48,48
>>>>>> 
>>>>>> On 2 December 2015 at 14:15, Alfredo Cardigliano <[email protected]> wrote:
>>>>>> Please use README.1st as reference.
>>>>>> What you need to know:
>>>>>> 1. Use --daq-var clusterid=K where K is a unique number per snort 
>>>>>> instance, used for resource allocation
>>>>>> 2. Use --daq-var bindcpu=K where K is the core id for affinity; with ZC you 
>>>>>> can ignore interrupt affinity
>>>>>> 3. Use "," in -i in place of "+" for interface aggregation; "+" is used for 
>>>>>> IDS/IPS-bridge mode (see the short example after this list)
>>>>>> 4. We usually recommend using only the CPU where the NIC is connected; 
>>>>>> however, since snort is (likely) the bottleneck, feel free to use all the 
>>>>>> cores available, setting RSS=N,N where N is the number of cores and the 
>>>>>> number of snort instances.
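>>>>>> 
>>>>>> To illustrate point 3, the two forms would be:
>>>>>> 
>>>>>>   -i zc:eth4@0,zc:eth5@0   # "," aggregates the two queues into one passive snort instance
>>>>>>   -i zc:eth4@0+zc:eth5@0   # "+" pairs them for IDS/IPS-bridge mode (used with idsbridge=1)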
>>>>>> 
>>>>>> Alfredo
>>>>>> 
>>>>>>> On 02 Dec 2015, at 15:08, James <[email protected]> wrote:
>>>>>>> 
>>>>>>> Follow-up question - should I use the cluster-id parameter?
>>>>>>> 
>>>>>>> This uses it:
>>>>>>> https://svn.ntop.org/svn/ntop/trunk/attic/PF_RING/userland/snort/pfring-daq-module-zc/README.1st
>>>>>>> 
>>>>>>> But this does not:
>>>>>>> http://www.ntop.org/pf_ring/accelerating-snort-with-pf_ring-dna/
>>>>>>> 
>>>>>>> On 2 December 2015 at 14:01, James <[email protected]> wrote:
>>>>>>> Hi all,
>>>>>>> 
>>>>>>> I posted a few weeks ago and have since got pf_ring with ZC working. 
>>>>>>> I'm now trying to decide how best to configure snort (in IDS mode). My 
>>>>>>> server has 4 x 12-core CPUs and two NICs, each being fed one half 
>>>>>>> of a 10Gb connection.
>>>>>>> 
>>>>>>> I have a few key questions:
>>>>>>> - Within the ixgbe zc load_driver.sh script, would the default 16-queue 
>>>>>>> option do, or would you choose something different? insmod ./ixgbe.ko 
>>>>>>> MQ=1,1,1,1 RSS=16,16,16,16
>>>>>>> 
>>>>>>> - Assuming the choice of 16 above, should I start 16 copies of Snort 
>>>>>>> like this (a variation on the example from the ntop website)?
>>>>>>> snort -q --pid-path /var/run --create-pidfile -D \
>>>>>>>   -c /etc/snort/snort.conf -l /var/log/snort/eth4_eth5/instance-1 \
>>>>>>>   --daq-dir=/usr/local/lib/daq --daq pfring_zc --daq-mode passive \
>>>>>>>   -i zc:eth4@0+zc:eth5@0 --daq-var idsbridge=1 --daq-var bindcpu=0
>>>>>>> 
>>>>>>> The information on http://www.metaflows.com/features/pf_ring about CPU 
>>>>>>> affinity and interrupts has confused me somewhat.
>>>>>>> 
>>>>>>> Thanks
>>>>>>> J.
>>>>>>> 

_______________________________________________
Ntop-misc mailing list
[email protected]
http://listgateway.unipi.it/mailman/listinfo/ntop-misc
