Re: [Ntop-misc] cluster_2_tuple not working as expected

2016-11-10 Thread Chandrika Gautam
Hi Alfredo, I tested with the latest PF_RING from GitHub, but packets are still segregated to different applications. After your latest change, do we only need to use cluster_per_flow_2_tuple to segregate traffic on outer IP addresses? Should we load the pf_ring module with enable_frag_coherence=1? I ha…
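The question above refers to the pf_ring kernel module's enable_frag_coherence parameter. A minimal sketch of loading the module with it set (assuming the pf_ring module from the PF_RING tree is installed; check the exact parameter list with `modinfo pf_ring` first):

```shell
# Unload any running instance, then load with fragment coherence enabled
# so fragments of the same packet stay on the same cluster consumer.
sudo rmmod pf_ring 2>/dev/null
sudo modprobe pf_ring enable_frag_coherence=1

# Verify the parameter took effect:
cat /sys/module/pf_ring/parameters/enable_frag_coherence
```

This is a configuration sketch, not a confirmed answer to the question in the thread; whether the parameter is needed alongside cluster_per_flow_2_tuple is exactly what the poster is asking.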

Re: [Ntop-misc] cluster_2_tuple not working as expected

2016-11-10 Thread Chandrika Gautam
Thanks Alfredo for the update. I will update you once I merge with the latest PF_RING. Regards, Gautam Sent from my iPhone > On Nov 10, 2016, at 10:38 PM, Alfredo Cardigliano wrote: > Hi Gautam, your traffic is GTP traffic and the hash was computed on the inner headers when present; I did c…

Re: [Ntop-misc] n2disk dumping with RHEL6 Kernel 2.6.32-642.4.2.el6.x86_64

2016-11-10 Thread Luca Deri
Derek, this problem is odd because we run continuous tests with Docker (see https://github.com/ntop/packager) and we have never seen this problem. I have now analysed the issue on CentOS 6.8 and this is what I have: deri@centos6 205> ldd n2disk linux-vdso.so.1 => (0x7fff75ddd000) …

Re: [Ntop-misc] n2disk dumping with RHEL6 Kernel 2.6.32-642.4.2.el6.x86_64

2016-11-10 Thread Spransy, Derek
Hi Luca, Yes, although I fully removed n2disk and pfring before reinstalling everything from RPMs (via yum). Thanks, Derek From: ntop-misc-boun...@listgateway.unipi.it on behalf of Luca Deri Sent: Thursday, November 10, 2016 3:33 PM To: ntop-misc@listgate…

Re: [Ntop-misc] n2disk dumping with RHEL6 Kernel 2.6.32-642.4.2.el6.x86_64

2016-11-10 Thread Luca Deri
Derek, how did you update? Via yum? Luca > On 10 Nov 2016, at 20:11, Spransy, Derek wrote: > Hi Alfredo, > I updated, but now I've run into a different problem. It looks like the new version of n2disk10g requires glibc 2.14? Was that just changed in this version? I'm on RHEL 6 and hav…

Re: [Ntop-misc] n2disk dumping with RHEL6 Kernel 2.6.32-642.4.2.el6.x86_64

2016-11-10 Thread Spransy, Derek
Hi Alfredo, I updated, but now I've run into a different problem. It looks like the new version of n2disk10g requires glibc 2.14? Was that just changed in this version? I'm on RHEL 6 and have 2.12: $ sudo /usr/local/bin/n2disk10g /etc/n2disk/n2disk-eth5.conf /usr/local/bin/n2disk10g: /lib64/l…

Re: [Ntop-misc] n2disk dumping with RHEL6 Kernel 2.6.32-642.4.2.el6.x86_64

2016-11-10 Thread Alfredo Cardigliano
Hi Derek, I thought I had updated you on this, but it seems that is not the case; could you try using -R with the latest release? Best Regards Alfredo > On 3 Nov 2016, at 18:53, Alfredo Cardigliano wrote: > Hi Derek, I need to check if it's a bug in the n2disk version with -R. Please use it without -…

Re: [Ntop-misc] cluster_2_tuple not working as expected

2016-11-10 Thread Alfredo Cardigliano
Hi Gautam, your traffic is GTP traffic and the hash was computed on the inner headers when present. I changed the behaviour to compute the hash on the outer header when using cluster_per_flow_2_tuple, and introduced new cluster_per_inner_* hash types for computing the hash on the inner headers, when pre…

Re: [Ntop-misc] cluster_2_tuple not working as expected

2016-11-10 Thread Chandrika Gautam
Hi Alfredo, PFA the traces with and without VLAN. To add more detail, there are 2 observations: 1. We ran a bigger file of 1 lakh (100,000) packets, out of which fragments of the same packet got distributed across applications. 2. We ran with the attached file and observed that the 2 packets were go…

Re: [Ntop-misc] cluster_2_tuple not working as expected

2016-11-10 Thread Alfredo Cardigliano
Hi Gautam, could you provide a pcap we can use to reproduce this? Alfredo > On 10 Nov 2016, at 11:22, Chandrika Gautam wrote: > Hi, > We are using the PF_RING cluster feature with cluster_2_tuple, and 2 applications are reading from the same cluster id. > We have observed that the pa…

[Ntop-misc] cluster_2_tuple not working as expected

2016-11-10 Thread Chandrika Gautam
Hi, We are using the PF_RING cluster feature with cluster_2_tuple, and 2 applications are reading from the same cluster id. We have observed that packets having the same source and destination IP addresses are getting distributed across the 2 applications, which has completely tossed our logic, as we are tr…