Hello Alfredo,

I've had this configuration running for a couple of days, but today I found
a problem.

I want to update the ruleset daily, and to do so I have to restart all the
snort instances.

I've always had an init script to restart all the snort instances, with a
code similar to:

  restart)
        for COUNTER in $(seq 1 $INSTANCES)
        do
                do_stop_snort $COUNTER
                do_stop_by2 $COUNTER
                # wait until both snort and barnyard2 have really exited
                # (the pidfile paths here are illustrative)
                while kill -0 "$(cat /var/run/snort_$COUNTER.pid 2>/dev/null)" 2>/dev/null ||
                      kill -0 "$(cat /var/run/by2_$COUNTER.pid 2>/dev/null)" 2>/dev/null
                do
                        sleep 1
                done
                do_start_snort $COUNTER
                do_start_by2 $COUNTER
        done
        ;;


The problem is that "do_start_snort" always fails with the following
error:

Jul  7 10:55:07 myids snort[9283]: FATAL ERROR: Can't initialize DAQ
pfring_zc (-1) - pfring_zc_daq_initialize: pfring_zc_ipc_attach_buffer_pool
error Resource temporarily unavailable(11), please check that cluster 99 is
running#012


I've double-checked the parameters, and they are the same in both the old
and the new snort runs. I've also double-checked that the old snort has
finished before starting the new one (with ps -p <pidfile>).
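That liveness check can be scripted as a small helper; the function name
and the pidfile path in the usage example are just my own illustrations,
not the real init script:

```shell
#!/bin/sh
# Poll until the process recorded in a pidfile has exited.
# Pidfile naming is illustrative; adjust to the real init script layout.
wait_for_exit() {
    pidfile="$1"
    # no pidfile or empty pidfile: nothing to wait for
    [ -f "$pidfile" ] || return 0
    pid=$(cat "$pidfile" 2>/dev/null)
    [ -n "$pid" ] || return 0
    # kill -0 only checks existence; it sends no signal
    while kill -0 "$pid" 2>/dev/null; do
        sleep 1
    done
}
```

e.g. "wait_for_exit /var/run/snort_1.pid" before calling do_start_snort
again for that instance.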

I cannot get the software queues attached to new processes, even after the
old process they were bound to has finished and they are unused.

Even manually starting a snort instance on a queue that hasn't been used
for about an hour gives the same error.
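To check whether some process is still holding a queue, I look at the
per-socket entries under /proc/net/pf_ring. This is only a sketch: it
assumes those entries are named like "<pid>-<device>.<id>", which may
differ across PF_RING versions:

```shell
#!/bin/sh
# List the PIDs that still hold pf_ring sockets by parsing the per-socket
# entries (assumed to be named "<pid>-<device>.<id>") in a proc directory.
# Defaults to /proc/net/pf_ring; a different directory can be passed for
# testing.
list_ring_holders() {
    dir="${1:-/proc/net/pf_ring}"
    for entry in "$dir"/*-*; do
        [ -e "$entry" ] || continue
        # "1234-eth0.5" -> "1234"
        basename "$entry" | cut -d- -f1
    done | sort -u
}
```

If the PID of an already-dead snort still shows up here, the queue was
never detached.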

I want to do it this way because shutting the full setup down and
restarting it from scratch takes about 20 minutes, and I want to minimize
the downtime.

What can I do?

Thank you very much.

Regards,

Jose Vila.




On Wed, Jul 1, 2015 at 6:49 PM, Alfredo Cardigliano <[email protected]>
wrote:

> Hi Jose
> please read below
>
> On 01 Jul 2015, at 13:36, Jose Vila <[email protected]> wrote:
>
> Hi Alfredo,
>
> I've tested my configuration with zbalance_ipc, and it seems to work.
>
> On one hand, I've loaded zbalance_ipc with the following parameters:
>
> /usr/local/bin/zbalance_ipc -i zc:eth0 -c 99 -n 22 -m 1 -S 0 -g 1 -d -P
> /var/run/zbalance_ipc.pid
>
>
> On the other, I run my 22 instances of Snort with the following parameters
> (changing the zc queue, bindcpu and log directory where necessary):
>
> /usr/local/snort/bin/snort -c /usr/local/snort/etc/snort.conf -i zc:99@0
> --daq pfring_zc --daq-mode passive --daq-dir /usr/local/lib/daq/ --daq-var
> bindcpu=2 -R .RED1 -l /var/log/snort/red1 -G 1 -u root -g root -D
>
>
> Regarding this setup, do you see any evident optimisation problems?
>
>
> It looks fine.
>
> Some additional questions:
> * We've executed "cat /proc/interrupts | egrep \"CPU|eth0\"" and have seen
> that only 1 or 2 interrupts per second are generated. Is this normal? Is it
> because the kernel is being bypassed, so the interrupt count is not being
> updated at all?
>
>
> 1 or 2 per second is OK.
>
> * The zbalance_ipc process gets 100% CPU usage on core 0 (parameter "-S
> 0"),
>
>
> This is the timestamping thread; it is normal. Actually, we could add an
> option to reduce the load, since snort does not need very precise
> timestamps. I am adding this to the TODO queue.
>
> and about 20-30% CPU usage on core 1 (parameter "-g 1").
>
>
> This is packet processing/distribution.
>
> Is this normal?
>
>
> Yes.
>
> Do we need the timestamping thread?
>
>
> Yes, snort needs packet time.
>
> Is it related to [1]? What are its benefits, considering we only want to
> use Snort in IDS mode?
>
>
> Without timestamps you will not see the time in the alerts.
>
> Alfredo
>
> Thank you very much.
>
> [1]
> http://www.ntop.org/pf_ring/who-really-needs-sub-microsecond-packet-timestamps/
>
> On Tue, Jun 30, 2015 at 3:09 PM, Jose Vila <[email protected]> wrote:
>
>> With RSS I can only have 16 queues (a hardware limitation), so I need to
>> use zbalance_ipc. I'm testing it tomorrow and will let you know the results.
>>
>> Thanks again.
>>
>>>
>>>> On Mon, Jun 29, 2015 at 6:48 PM, Alfredo Cardigliano <
>>>> [email protected]> wrote:
>>>>
>>>>> Hi Jose
>>>>> since ZC is a kernel-bypass technology that directly accesses the
>>>>> network card, only 1 application at a time can access a device/queue.
>>>>> You have 2 options in order to distribute the load across multiple
>>>>> snort instances:
>>>>> 1. load the driver with multiple RSS queues, then start one snort
>>>>> instance per queue: zc:eth0@0, zc:eth0@1, zc:eth0@2, and so on
>>>>> 2. load the driver with a single queue, then use zbalance_ipc to
>>>>> distribute the traffic across multiple software SPSC queues
>>>>>
>>>>> Alfredo
>>>>>
>>>>>
_______________________________________________
Ntop-misc mailing list
[email protected]
http://listgateway.unipi.it/mailman/listinfo/ntop-misc
