With RSS I can only have 16 queues (a hardware limitation), so I need to use zbalance_ipc. I'm testing it tomorrow and will let you know the results.
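For the record, the plan is roughly the following (not tested yet; the cluster id, queue count, and core binding below are assumptions for my 22-instance setup, with flags as described in the zbalance_ipc usage and the `zc:<clusterid>@<queueid>` naming from the PF_RING ZC docs):

```
# Load ixgbe with a single queue; zbalance_ipc does the fan-out in software.
modprobe ixgbe RSS=1

# One ingress device, one ZC cluster (id 99), 22 egress SPSC queues,
# IP-hash distribution (-m 1), balancer thread pinned to core 0.
zbalance_ipc -i zc:eth0 -c 99 -n 22 -m 1 -g 0

# Each Snort instance then attaches to one egress queue of the cluster,
# e.g. queue 0 (queues 0..21 for the 22 instances):
/usr/local/snort/bin/snort -c /usr/local/snort/etc/snort.conf \
    -i zc:99@0 --daq pfring_zc --daq-mode passive --daq-dir /usr/local/lib/daq/
```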
Thanks again.

On Tue, Jun 30, 2015 at 2:42 PM, Jose Vila <[email protected]> wrote:

> I've been able to raise the hugepages number to 2048, so I can start all
> the Snort instances I need, but the libnuma "node 9 not allowed" warning
> still persists.
>
> Any ideas or advice on this?
>
> Regards,
>
> Jose Vila.
>
> On Tue, Jun 30, 2015 at 2:02 PM, Jose Vila <[email protected]> wrote:
>
>> Hi Alfredo, and thank you very much for your answer.
>>
>> I've tested the RSS queues option (loaded the ixgbe kernel module with
>> the parameter "RSS=22", as I need 22 Snort instances running), but I
>> still have several problems.
>>
>> * On one hand, I'm still getting the libnuma "Warning: node 9 not
>> allowed" error. I have no idea how to solve this.
>> * On the other, I can only start 8 Snort instances. Snort takes all the
>> available hugepages and doesn't let further instances run. I've tried
>> configuring a higher number of hugepages (up to 8192, via sysctl and
>> via /etc/pf_ring/hugepages) but I seem to have only 512 hugepages
>> available (it says so in /proc/meminfo). I'm on CentOS 6.6 with
>> kernel 2.6.32-504.23.4.el6.x86_64.
>>
>> Did I load the ixgbe kernel module the right way? Am I missing something
>> else to get more hugepages?
>>
>> Each of the Snort instances is launched with this command (changing the
>> interface's RSS queue, bindcpu, clusterid and logdir):
>>
>> /usr/local/snort/bin/snort -c /usr/local/snort/etc/snort.conf -i zc:eth0@1
>> --daq pfring_zc --daq-mode passive --daq-dir /usr/local/lib/daq/ --daq-var
>> bindcpu=1 --daq-var clusterid=1 -R .RED1 -l /var/log/snort/red1 -G 1 -u
>> root -g root -D
>>
>> The output of my startup script:
>>
>> # /etc/init.d/granjero_zc start
>> Lanzando Snorts.
>> Lanzando Snort 1
>> libnuma: Warning: node 9 not allowed
>> numa_sched_setaffinity_v2_int() failed; abort
>> : Invalid argument
>> set_mempolicy: Invalid argument
>> Spawning daemon child...
>> My daemon child 9415 lives...
>> Daemon parent exiting (0)
>> Lanzando Snort 2
>> libnuma: Warning: node 9 not allowed
>> numa_sched_setaffinity_v2_int() failed; abort
>> : Invalid argument
>> set_mempolicy: Invalid argument
>> Spawning daemon child...
>> My daemon child 9458 lives...
>> Daemon parent exiting (0)
>> Lanzando Snort 3
>> libnuma: Warning: node 9 not allowed
>> numa_sched_setaffinity_v2_int() failed; abort
>> : Invalid argument
>> set_mempolicy: Invalid argument
>> Spawning daemon child...
>> My daemon child 9469 lives...
>> Daemon parent exiting (0)
>> [ ... ]
>> Lanzando Snort 9
>> libnuma: Warning: node 9 not allowed
>> numa_sched_setaffinity_v2_int() failed; abort
>> : Invalid argument
>> set_mempolicy: Invalid argument
>> *** error retrieving hugepages info ***
>> Lanzando Snort 10
>> libnuma: Warning: node 9 not allowed
>> numa_sched_setaffinity_v2_int() failed; abort
>> : Invalid argument
>> set_mempolicy: Invalid argument
>> *** error retrieving hugepages info ***
>> Lanzando Snort 11
>> libnuma: Warning: node 9 not allowed
>> numa_sched_setaffinity_v2_int() failed; abort
>> : Invalid argument
>> set_mempolicy: Invalid argument
>> *** error retrieving hugepages info ***
>> [ ... ]
>>
>> On Mon, Jun 29, 2015 at 6:48 PM, Alfredo Cardigliano <
>> [email protected]> wrote:
>>
>>> Hi Jose,
>>> since ZC is a kernel-bypass technology that accesses the network card
>>> directly, only one application at a time can access a device/queue.
>>> You have two options to distribute the load across multiple Snort
>>> instances:
>>> 1. load the driver with multiple RSS queues, then start one Snort
>>> instance per queue: zc:eth0@0, zc:eth0@1, zc:eth0@2, and so on
>>> 2. load the driver with a single queue, then use zbalance_ipc to
>>> distribute the traffic across multiple software SPSC queues
>>>
>>> Alfredo
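For anyone hitting the same hugepages ceiling: the number that counts is HugePages_Total in /proc/meminfo, and on a NUMA machine the pool is split across nodes, so a per-node reservation or a boot-time parameter may be needed. A sketch of what I'm trying next (the page counts are assumptions for my box; 2 MB pages assumed):

```
# Runtime reservation (may fall short of the target if memory is fragmented):
sysctl -w vm.nr_hugepages=4096

# Per-NUMA-node reservation via sysfs (node0 shown; repeat for each node):
echo 2048 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

# Most reliable: reserve at boot on the kernel command line, then reboot:
#   hugepages=4096

# Verify what was actually reserved:
grep HugePages /proc/meminfo

# The "node 9 not allowed" warning suggests something is binding to a NUMA
# node this host does not have; "numactl --hardware" lists the real nodes.
numactl --hardware
```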
_______________________________________________
Ntop-misc mailing list
[email protected]
http://listgateway.unipi.it/mailman/listinfo/ntop-misc
