Hi,
the hashing provided by the kernel module does not support MPLS at the moment.
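For background, the reason an MPLS label stack defeats flow hashing is that it sits between the Ethernet header and the IP header the hash needs to read. The following is a purely illustrative Python sketch of skipping the stack (the S bit marks the bottom entry, per RFC 3032); it is not PF_RING code:

```python
import struct

ETH_P_MPLS_UC = 0x8847  # EtherType for MPLS unicast

def inner_ip_offset(frame: bytes) -> int:
    """Return the byte offset of the IP header inside an Ethernet
    frame, skipping an MPLS label stack if one is present."""
    if len(frame) < 14:
        raise ValueError("truncated Ethernet frame")
    ethertype = struct.unpack_from("!H", frame, 12)[0]
    offset = 14
    if ethertype == ETH_P_MPLS_UC:
        while True:
            if len(frame) < offset + 4:
                raise ValueError("truncated MPLS label stack")
            entry = struct.unpack_from("!I", frame, offset)[0]
            offset += 4
            if entry & 0x100:  # S (bottom-of-stack) bit: last label
                break
    return offset
```

A hasher that looked past the labels this way could read the real source/destination addresses again and restore the balance.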

Best Regards
Alfredo

On 17 Feb 2014, at 05:12, Packet Hack <[email protected]> wrote:

> Turns out our network has recently implemented MPLS. We were able to turn it
> off on one of our sensors and it appears that traffic is being properly
> load-balanced again.
> 
> Is PF_RING not able to properly hash MPLS packets?
> 
> -- pckthck
> 
> 
> On Thu, Jan 30, 2014 at 9:57 AM, Packet Hack <[email protected]> wrote:
> The traffic may be uneven - what's the best way to tell?
> 
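One quick way to quantify unevenness (PF_RING also exposes per-socket counters under /proc/net/pf_ring) is to take the Analyzed counts from the snort stats and compare each instance's share of the total. A small, hypothetical helper; the skew threshold is arbitrary:

```python
def balance_shares(analyzed):
    """Given per-instance analyzed-packet counts, return each
    instance's fraction of the total; with a good hash all values
    should sit near 1/n."""
    total = sum(analyzed)
    return [round(count / total, 3) for count in analyzed]

def is_skewed(analyzed, factor=3.0):
    """Flag the distribution as skewed if any instance sees more
    than `factor` times its fair share."""
    fair = 1.0 / len(analyzed)
    return any(share > factor * fair for share in balance_shares(analyzed))
```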
> I have two 12-core CPUs and was running all the snorts on one processor. I
> split them up between processors and the packet loss dropped to around 50%
> for the busy snort.
> 
> Is there a good way to get the busy snort on a processor by itself and have
> the rest on the other? My init script uses a bash for loop to assign the CPU,
> but the busy snort seems to be bound to different processors on each
> invocation of the init script.
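One option is to derive the bindcpu values from an explicit map instead of a loop counter, so the busy instance always lands alone on the second socket. The sketch below assumes the common Linux numbering where each socket's cores form one contiguous range; the core lists are assumptions, so verify the real topology against "physical id" in /proc/cpuinfo:

```python
def bindcpu_map(n_instances, busy_index, socket0_cores, socket1_cores):
    """Map snort instance index -> core id. The busy instance gets
    the first core of the second socket to itself; the rest share
    the first socket round-robin."""
    cores = {busy_index: socket1_cores[0]}
    others = [i for i in range(n_instances) if i != busy_index]
    for slot, inst in enumerate(others):
        cores[inst] = socket0_cores[slot % len(socket0_cores)]
    return cores
```

The init script's for loop would then pass cores[i] as the bindcpu daq-var for instance i, giving a stable placement across invocations.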
> 
> Thanks,
> 
> -- pckthck
> 
> 
> On Thu, Jan 30, 2014 at 1:44 AM, Luca Deri <[email protected]> wrote:
> Hi,
> Is your traffic really balanceable evenly? I think this is the problem.
> 
> That said, if you use HT and put two snort instances onto the same physical
> core, they fight for CPU time, which also decreases performance.
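On Linux, hyper-threading sibling pairs can be read from sysfs, so a pinning script can avoid placing two instances on sibling logical CPUs. A sketch; the sysfs path is the standard kernel layout, and the parser handles both the "0,12" and "0-1" list formats the kernel emits:

```python
def parse_cpu_list(text):
    """Parse a kernel CPU list such as '0,12' or '0-1,24-25' into ints."""
    cpus = []
    for part in text.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return cpus

def thread_siblings(cpu):
    """Read the HT sibling list for a logical CPU (Linux only)."""
    path = f"/sys/devices/system/cpu/cpu{cpu}/topology/thread_siblings_list"
    with open(path) as fh:
        return parse_cpu_list(fh.read())
```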
> 
> Luca
> 
> On 29 Jan 2014, at 23:13, Packet Hack <[email protected]> wrote:
> 
>> We seem to be having a problem with the hashing functionality of PF_RING.
>> One snort process appears to be getting the lion's share of the packets,
>> giving it a high drop rate (the percentages below are questionable).
>> 
>>     Jan 29 11:22:03 snorthost snort[12300]:    Analyzed:    271306688 (100.000%)
>>     Jan 29 11:22:03 snorthost snort[12300]:     Dropped:          712 (  0.000%)
>>     Jan 29 11:22:03 snorthost snort[12302]:    Analyzed:    316147617 (100.000%)
>>     Jan 29 11:22:03 snorthost snort[12302]:     Dropped:      1127688 (  0.355%)
>>     Jan 29 11:22:03 snorthost snort[12304]:    Analyzed:   2154918764 (100.000%)
>>     Jan 29 11:22:03 snorthost snort[12304]:     Dropped:        82205 (  0.004%)
>> 
>> **  Jan 29 11:22:03 snorthost snort[12306]:    Analyzed:   1559887127 (100.000%)
>> **  Jan 29 11:22:03 snorthost snort[12306]:     Dropped:   2889701486 ( 64.943%)
>> 
>>     Jan 29 11:22:03 snorthost snort[12308]:    Analyzed:    278222877 (100.000%)
>>     Jan 29 11:22:03 snorthost snort[12308]:     Dropped:         5283 (  0.002%)
>>     Jan 29 11:22:03 snorthost snort[12310]:    Analyzed:    500304473 (100.000%)
>>     Jan 29 11:22:03 snorthost snort[12310]:     Dropped:            0 (  0.000%)
>>     Jan 29 11:22:03 snorthost snort[12312]:    Analyzed:    476476420 (100.000%)
>>     Jan 29 11:22:03 snorthost snort[12312]:     Dropped:         2872 (  0.001%)
>>     Jan 29 11:22:03 snorthost snort[12314]:    Analyzed:    310040648 (100.000%)
>>     Jan 29 11:22:03 snorthost snort[12314]:     Dropped:         8970 (  0.003%)
>>     Jan 29 11:22:03 snorthost snort[12316]:    Analyzed:    275970056 (100.000%)
>>     Jan 29 11:22:03 snorthost snort[12316]:     Dropped:            0 (  0.000%)
>>     Jan 29 11:22:03 snorthost snort[12318]:    Analyzed:    268692346 (100.000%)
>>     Jan 29 11:22:03 snorthost snort[12318]:     Dropped:            0 (  0.000%)
>>     Jan 29 11:22:03 snorthost snort[12320]:    Analyzed:    472844029 (100.000%)
>>     Jan 29 11:22:03 snorthost snort[12320]:     Dropped:        16234 (  0.003%)
>>     Jan 29 11:22:03 snorthost snort[12322]:    Analyzed:    414535582 (100.000%)
>>     Jan 29 11:22:03 snorthost snort[12322]:     Dropped:            0 (  0.000%)
>> 
>> We're running 12 snorts like so: 
>> 
>>     snort -D -i eth6 --daq pfring --daq-var clustermode=5 --daq-var clusterid=44 \
>>         --daq-var bindcpu=1 -c /etc/snort/snort.conf -l /var/log/snort1 -R 1
>> 
>>     snort -D -i eth6 --daq pfring --daq-var clustermode=5 --daq-var clusterid=44 \
>>         --daq-var bindcpu=2 -c /etc/snort/snort.conf -l /var/log/snort2 -R 2
>> 
>>     snort -D -i eth6 --daq pfring --daq-var clustermode=5 --daq-var clusterid=44 \
>>         --daq-var bindcpu=3 -c /etc/snort/snort.conf -l /var/log/snort3 -R 3
>> 
>>     snort -D -i eth6 --daq pfring --daq-var clustermode=5 --daq-var clusterid=44 \
>>         --daq-var bindcpu=4 -c /etc/snort/snort.conf -l /var/log/snort4 -R 4
>> 
>> etc...
>> 
>> I've tried various settings for the clustermode and the result seems to be
>> the same. Varying the number of snort processes also doesn't seem to make a
>> difference, and neither did changing enable_frag_coherence when insmodding
>> the pf_ring kernel module.
>> 
>> Anyone have any ideas?
>> 
>> PF_RING : 5.6.1
>> snort   : 2.9.5.6
>> 
>> % ethtool -k eth6            
>> Offload parameters for eth6:
>> rx-checksumming: off
>> tx-checksumming: off
>> scatter-gather: off
>> tcp-segmentation-offload: off
>> udp-fragmentation-offload: off
>> generic-segmentation-offload: off
>> generic-receive-offload: off
>> large-receive-offload: off
>> 
>> Thanks,
>> 
>> -- pckthck
>> _______________________________________________
>> Ntop-misc mailing list
>> [email protected]
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
> 
