Hi Gautam
for some reason I do not see the pf_ring revision here; please make sure the
pf_ring.ko module you are using is built from the latest code.
If you are using packages, please remove the pfring package, manually delete
every pf_ring.ko on your system, and reinstall the package so that DKMS
installs the new module.
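A minimal sketch of those steps, assuming a Debian-style system (the package name is taken from the message above and paths are the usual DKMS locations; adjust for your distribution). The destructive commands are left commented out so nothing is removed by accident:

```shell
#!/bin/sh
# Sketch of the cleanup described above; "pfring" is the package name from
# the message and may differ on your system.

# 1. Remove the packaged module:
#    sudo apt-get remove pfring

# 2. List every pf_ring.ko still on disk, then delete each stale copy by hand:
find /lib/modules -name 'pf_ring.ko' 2>/dev/null

# 3. Reinstall so DKMS rebuilds the module, then reload it:
#    sudo apt-get install pfring
#    sudo modprobe -r pf_ring && sudo modprobe pf_ring

# 4. Verify: the first line should now report the version with a git
#    revision rather than "(unknown)".
cat /proc/net/pf_ring/info 2>/dev/null | head -1
```

If the reinstall worked, the `PF_RING Version` line should no longer read `(unknown)`.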

Alfredo

> On 11 Nov 2016, at 09:53, Chandrika Gautam <[email protected]> 
> wrote:
> 
> # cat /proc/net/pf_ring/info 
> PF_RING Version          : 6.5.0 (unknown)
> Total rings              : 0
> 
> Standard (non ZC) Options
> Ring slots               : 409600
> Slot version             : 16
> Capture TX               : No [RX only]
> IP Defragment            : No
> Socket Mode              : Standard
> Cluster Fragment Queue   : 0
> Cluster Fragment Discard : 0
> 
> Regards,
> Gautam
> 
> On Fri, Nov 11, 2016 at 2:18 PM, Alfredo Cardigliano <[email protected]> wrote:
> 
>> On 11 Nov 2016, at 07:29, Chandrika Gautam <[email protected]> wrote:
>> 
>> Hi Alfredo, 
>> 
>> I tested with the latest PF_RING from GitHub, but packets are still being
>> segregated across different applications.
> 
> Please provide me the output of "cat /proc/net/pf_ring/info"
> 
>> After your latest change, we only need to use cluster_per_flow_2_tuple to
>> segregate traffic on outer IP addresses, right?
> 
> Correct
> 
>> Should we load the pf_ring module with enable_frag_coherence=1? I have
>> tested both with and without it using the latest package from GitHub.
> 
> enable_frag_coherence is set to 1 by default
> 
> Alfredo
> 
>> 
>> 
>> Regards,
>> Gautam
>> 
>> On Fri, Nov 11, 2016 at 9:12 AM, Chandrika Gautam <[email protected]> wrote:
>> Thanks, Alfredo, for the update.
>> I will update you once I merge with the latest PF_RING.
>> Regards,
>> Gautam
>> 
>> Sent from my iPhone
>> 
>> On Nov 10, 2016, at 10:38 PM, Alfredo Cardigliano <[email protected]> wrote:
>> 
>>> Hi Gautam
>>> your traffic is GTP traffic, and the hash was computed on the inner headers
>>> when present.
>>> I changed the behaviour to compute the hash on the outer header when using
>>> cluster_per_flow_2_tuple, and introduced new cluster_per_inner_* hash types
>>> for computing the hash on the inner header, when present.
>>> Please update from github or wait for new packages.
>>> 
>>> Regards
>>> Alfredo
>>> 
>>>> On 10 Nov 2016, at 11:41, Chandrika Gautam <[email protected]> wrote:
>>>> 
>>>> Hi Alfredo 
>>>> 
>>>> Please find attached the traces, with and without VLAN.
>>>> 
>>>> To add more details, there are 2 observations:
>>>> 1. We ran a larger capture of 100,000 (1 lakh) packets, in which fragments
>>>> of the same packet got distributed across applications.
>>>> 
>>>> 2. We ran the attached file and observed that 2 packets went to one
>>>> application and the rest of the packets went to the other.
>>>> 
>>>> Thanks & Regards
>>>> 
>>>> On Thu, Nov 10, 2016 at 4:04 PM, Alfredo Cardigliano <[email protected]> wrote:
>>>> Hi Gautam
>>>> could you provide a pcap we can use to reproduce this?
>>>> 
>>>> Alfredo
>>>> 
>>>> > On 10 Nov 2016, at 11:22, Chandrika Gautam <[email protected]> wrote:
>>>> >
>>>> > Hi,
>>>> >
>>>> > We are using the PF_RING cluster feature with cluster_2_tuple, and 2
>>>> > applications are reading from the same cluster id.
>>>> >
>>>> > We have observed that packets having the same source and destination IP
>>>> > addresses are getting distributed across the 2 applications, which
>>>> > completely breaks our logic, as we reassemble the fragments in our
>>>> > applications.
>>>> >
>>>> > Is there any bug in the PF_RING clustering mechanism that is causing this?
>>>> >
>>>> > We are using PF_RING 6.2.0, and pf_ring is loaded with the command below:
>>>> > insmod pf_ring.ko min_num_slots=409600 enable_tx_capture=0
>>>> >
>>>> > I also tried this:
>>>> > insmod pf_ring.ko min_num_slots=409600 enable_tx_capture=0 
>>>> > enable_frag_coherence=1
>>>> >
>>>> >
>>>> > Regards,
>>>> > Gautam
>>>> >
>>>> > _______________________________________________
>>>> > Ntop-misc mailing list
>>>> > [email protected]
>>>> > http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>>> 
>>>> 
>>>> <multiple_fragments_id35515.pcap><multiple_fragments_id35515_wo_vlan.pcap>
> 
