If you check, the outer src and dst IP addresses of all these 6 packets are the same, so shouldn't all 6 packets go to one pfcount instance if we have chosen cluster_per_flow_2_tuple as the cluster type?
Regards,
Gautam

On Fri, Nov 11, 2016 at 3:16 PM, Alfredo Cardigliano <[email protected]> wrote:

> This is what I am receiving, it looks correct as they are distributed by 2-tuple:
>
> # ./pfcount -i eth2 -c 99 -H 2 -v 1 -m
> Using PF_RING v.6.5.0
> Capturing from eth2 [mac: 00:25:90:E0:7F:45][if_index: 86][speed: 10000Mb/s]
> # Device RX channels: 1
> # Polling threads: 1
> pfring_set_cluster returned 0
> Dumping statistics on /proc/net/pf_ring/stats/25212-eth2.14
> 10:45:00.296354612 [RX][if_index=86][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:2152 -> 49.103.1.132:2152] [l3_proto=UDP][TEID=0x40611E78][tunneled_proto=TCP][IPv4][216.58.194.110:443 -> 100.83.201.244:43485] [hash=4182140810][tos=0][tcp_seq_num=0] [caplen=128][len=226][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=42]
> 10:45:00.296358154 [RX][if_index=86][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:0 -> 49.103.1.132:0] [l3_proto=UDP][hash=4182140827][tos=0][tcp_seq_num=0] [caplen=128][len=1328][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=0]
> 10:45:00.296359417 [RX][if_index=86][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:2152 -> 49.103.1.132:2152] [l3_proto=UDP][TEID=0x40611E78][tunneled_proto=TCP][IPv4][216.58.194.97:443 -> 100.83.201.244:55379] [hash=4182140810][tos=0][tcp_seq_num=0] [caplen=128][len=226][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=42]
> 10:45:00.296361197 [RX][if_index=86][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:0 -> 49.103.1.132:0] [l3_proto=UDP][hash=4182140827][tos=0][tcp_seq_num=0] [caplen=128][len=1328][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=0]
> ^CLeaving...
>
> # ./pfcount -i eth2 -c 99 -H 2 -v 1 -m
> Using PF_RING v.6.5.0
> Capturing from eth2 [mac: 00:25:90:E0:7F:45][if_index: 86][speed: 10000Mb/s]
> # Device RX channels: 1
> # Polling threads: 1
> pfring_set_cluster returned 0
> Dumping statistics on /proc/net/pf_ring/stats/25213-eth2.15
> 10:45:00.296362749 [RX][if_index=86][00:10:DB:FF:10:01 -> 00:00:0C:07:AC:01] [IPv4][220.159.237.103:2152 -> 203.118.242.166:2152] [l3_proto=UDP][TEID=0x3035D0A8][tunneled_proto=TCP][IPv4][49.96.0.26:80 -> 10.160.153.151:60856] [hash=2820071437][tos=104][tcp_seq_num=0] [caplen=128][len=1514][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=42]
> 10:45:00.296365824 [RX][if_index=86][00:10:DB:FF:10:01 -> 00:00:0C:07:AC:01] [vlan 210] [IPv4][220.159.237.103:0 -> 203.118.242.166:0] [l3_proto=UDP][hash=2820071454][tos=104][tcp_seq_num=0] [caplen=74][len=74][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]
> ^CLeaving...
>
> Alfredo
>
> On 11 Nov 2016, at 10:41, Chandrika Gautam <[email protected]> wrote:
>
> I tried with the above. I found the same result: one instance of pfcount receiving 2 packets and 6 in the other instance for the shared file multiple_fragments_id35515_wo_vlan.pcap.
>
> Are you receiving all 6 packets in one pfcount instance?
>
> Regards,
> Chandrika
>
> On Fri, Nov 11, 2016 at 3:02 PM, Alfredo Cardigliano <[email protected]> wrote:
>
>> On 11 Nov 2016, at 10:31, Chandrika Gautam <[email protected]> wrote:
>>
>> Hi Alfredo,
>>
>> I have not used any packages.
>> I downloaded the latest PF_RING from https://github.com/ntop/PF_RING, selected the dev branch, and saved the zip file using the "Clone or download" option.
>> I compiled the PF_RING source code and used all the necessary files. I can see the changes you have made in pf_ring.c as well.
>>
>> I think the version is not displayed due to some issue with git. I received this error while executing ./configure in the kernel directory:
>> fatal: Not a git repository (or any of the parent directories): .git
>>
>>
>> Ok got it
>>
>> Have you used any pfring examples to verify these changes?
>>
>>
>> Yes, I ran 2 instances of pfcount using this command line:
>>
>> ./pfcount -i eth2 -c 99 -H 2 -v 1 -m
>>
>> Alfredo
>>
>> Regards,
>> Chandrika
>>
>> On Fri, Nov 11, 2016 at 2:32 PM, Alfredo Cardigliano <[email protected]> wrote:
>>
>>> Hi Gautam
>>> for some reason I do not see the pf_ring revision here, please make sure the pf_ring.ko module you are using is built from the latest code.
>>> If you are using packages, please remove the pfring package, manually delete all pf_ring.ko files on your system, and reinstall to make sure DKMS installs the new module.
>>>
>>> Alfredo
>>>
>>> On 11 Nov 2016, at 09:53, Chandrika Gautam <[email protected]> wrote:
>>>
>>> # cat /proc/net/pf_ring/info
>>> PF_RING Version          : 6.5.0 (unknown)
>>> Total rings              : 0
>>>
>>> Standard (non ZC) Options
>>> Ring slots               : 409600
>>> Slot version             : 16
>>> Capture TX               : No [RX only]
>>> IP Defragment            : No
>>> Socket Mode              : Standard
>>> Cluster Fragment Queue   : 0
>>> Cluster Fragment Discard : 0
>>>
>>> Regards,
>>> Gautam
>>>
>>> On Fri, Nov 11, 2016 at 2:18 PM, Alfredo Cardigliano <[email protected]> wrote:
>>>
>>>> On 11 Nov 2016, at 07:29, Chandrika Gautam <[email protected]> wrote:
>>>>
>>>> Hi Alfredo,
>>>>
>>>> I tested with the latest pfring from github but packets are still being segregated to different applications.
>>>>
>>>> Please provide me the output of "cat /proc/net/pf_ring/info"
>>>>
>>>> After your latest change, we need to use cluster_per_flow_2_tuple to segregate traffic on the outer IP addresses, right?
>>>>
>>>> Correct
>>>>
>>>> Should we load the pfring module with enable_frag_coherence=1? I have tested both with and without it using the latest package from github.
>>>>
>>>> enable_frag_coherence is set to 1 by default
>>>>
>>>> Alfredo
>>>>
>>>> Regards,
>>>> Gautam
>>>>
>>>> On Fri, Nov 11, 2016 at 9:12 AM, Chandrika Gautam <[email protected]> wrote:
>>>>
>>>>> Thanks Alfredo for the update.
>>>>> I will update you once I merge with the latest PF_RING.
>>>>> Regards,
>>>>> Gautam
>>>>>
>>>>> Sent from my iPhone
>>>>>
>>>>> On Nov 10, 2016, at 10:38 PM, Alfredo Cardigliano <[email protected]> wrote:
>>>>>
>>>>> Hi Gautam
>>>>> your traffic is GTP traffic and the hash was computed on the inner headers when present.
>>>>> I changed the behaviour to compute the hash on the outer header when using cluster_per_flow_2_tuple, and introduced new hash types cluster_per_inner_* for computing the hash on the inner header, when present.
>>>>> Please update from github or wait for new packages.
>>>>>
>>>>> Regards
>>>>> Alfredo
>>>>>
>>>>> On 10 Nov 2016, at 11:41, Chandrika Gautam <[email protected]> wrote:
>>>>>
>>>>> Hi Alfredo
>>>>>
>>>>> PFA the traces with and without VLAN.
>>>>>
>>>>> To add more details, there are 2 observations:
>>>>> 1. We ran a bigger file of 1 lakh (100,000) packets, out of which fragments of the same packet got distributed across applications.
>>>>>
>>>>> 2. We ran with the attached file and observed that 2 packets were going to one application and the rest of the packets to the other one.
>>>>>
>>>>> Thanks & Regards
>>>>>
>>>>> On Thu, Nov 10, 2016 at 4:04 PM, Alfredo Cardigliano <[email protected]> wrote:
>>>>>
>>>>>> Hi Gautam
>>>>>> could you provide a pcap we can use to reproduce this?
>>>>>>
>>>>>> Alfredo
>>>>>>
>>>>>> > On 10 Nov 2016, at 11:22, Chandrika Gautam <[email protected]> wrote:
>>>>>> >
>>>>>> > Hi,
>>>>>> >
>>>>>> > We are using the PF_RING cluster feature with cluster_per_flow_2_tuple, and 2 applications are reading from the same cluster id.
>>>>>> >
>>>>>> > We have observed that packets having the same source and destination IP addresses are getting distributed across the 2 applications, which has completely broken our logic, as we are trying to assemble the fragments in our applications.
>>>>>> >
>>>>>> > Is there any bug in the PF_RING clustering mechanism that is causing this?
>>>>>> >
>>>>>> > We are using PF_RING 6.2.0, and pfring is loaded with the command below:
>>>>>> > insmod pf_ring.ko min_num_slots=409600 enable_tx_capture=0
>>>>>> >
>>>>>> > I tried with this also:
>>>>>> > insmod pf_ring.ko min_num_slots=409600 enable_tx_capture=0 enable_frag_coherence=1
>>>>>> >
>>>>>> > Regards,
>>>>>> > Gautam
>>>>>> >
>>>>>> > _______________________________________________
>>>>>> > Ntop-misc mailing list
>>>>>> > [email protected]
>>>>>> > http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>>>>
>>>>> <multiple_fragments_id35515.pcap><multiple_fragments_id35515_wo_vlan.pcap>
_______________________________________________
Ntop-misc mailing list
[email protected]
http://listgateway.unipi.it/mailman/listinfo/ntop-misc
