My bad!

I am checking this over a longer run and will update.

Thanks & Regards,
Gautam

On Fri, Nov 11, 2016 at 3:33 PM, Alfredo Cardigliano <[email protected]>
wrote:

> Gautam
> they are not all the same: you have 4 packets with outer flow 199.223.102.6 -> 49.103.1.132
> and 2 packets with outer flow 220.159.237.103 -> 203.118.242.166
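>
> To see why these two outer IP pairs can go to different instances, here is a
> toy sketch of 2-tuple balancing (illustrative only; the hash the PF_RING
> kernel module actually computes is different):
>
>     /* Toy 2-tuple balancer: every packet sharing the same outer
>      * src/dst IP pair maps to the same cluster slave, so the two
>      * outer pairs above can land on different pfcount instances. */
>     #include <stdint.h>
>
>     static int slave_for_packet(uint32_t src_ip, uint32_t dst_ip,
>                                 int num_slaves) {
>       uint32_t h = src_ip ^ dst_ip; /* any function of the outer 2-tuple */
>       return h % num_slaves;
>     }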
>
> Alfredo
>
> On 11 Nov 2016, at 10:51, Chandrika Gautam <[email protected]>
> wrote:
>
>
> If you check, the outer src and dst IP addresses of all these 6 packets are
> the same, so shouldn't all 6 packets go to one pfcount instance if we have
> chosen the cluster type cluster_per_flow_2_tuple?
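>
> For reference, this is roughly how an application asks for that behaviour
> through the PF_RING API (a sketch; the device name, cluster id 99 and caplen
> are taken from the pfcount runs below, error handling omitted):
>
>     #include "pfring.h"
>
>     pfring *ring = pfring_open("eth2", 128 /* caplen */, PF_RING_PROMISC);
>     /* Join cluster 99 and balance on the outer src/dst IP 2-tuple. */
>     pfring_set_cluster(ring, 99, cluster_per_flow_2_tuple);
>     pfring_enable_ring(ring);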
>
> Regards,
> Gautam
>
> On Fri, Nov 11, 2016 at 3:16 PM, Alfredo Cardigliano <[email protected]
> > wrote:
>
>> This is what I am receiving. It looks correct, as the packets are
>> distributed by 2-tuple:
>>
>> # ./pfcount -i eth2 -c 99 -H 2 -v 1 -m
>> Using PF_RING v.6.5.0
>> Capturing from eth2 [mac: 00:25:90:E0:7F:45][if_index: 86][speed: 10000Mb/s]
>> # Device RX channels: 1
>> # Polling threads:    1
>> pfring_set_cluster returned 0
>> Dumping statistics on /proc/net/pf_ring/stats/25212-eth2.14
>> 10:45:00.296354612 [RX][if_index=86][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:2152 -> 49.103.1.132:2152] [l3_proto=UDP][TEID=0x40611E78][tunneled_proto=TCP][IPv4][216.58.194.110:443 -> 100.83.201.244:43485] [hash=4182140810][tos=0][tcp_seq_num=0] [caplen=128][len=226][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=42]
>> 10:45:00.296358154 [RX][if_index=86][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:0 -> 49.103.1.132:0] [l3_proto=UDP][hash=4182140827][tos=0][tcp_seq_num=0] [caplen=128][len=1328][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=0]
>> 10:45:00.296359417 [RX][if_index=86][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:2152 -> 49.103.1.132:2152] [l3_proto=UDP][TEID=0x40611E78][tunneled_proto=TCP][IPv4][216.58.194.97:443 -> 100.83.201.244:55379] [hash=4182140810][tos=0][tcp_seq_num=0] [caplen=128][len=226][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=42]
>> 10:45:00.296361197 [RX][if_index=86][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:0 -> 49.103.1.132:0] [l3_proto=UDP][hash=4182140827][tos=0][tcp_seq_num=0] [caplen=128][len=1328][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=0]
>> ^CLeaving...
>>
>> # ./pfcount -i eth2 -c 99 -H 2 -v 1 -m
>> Using PF_RING v.6.5.0
>> Capturing from eth2 [mac: 00:25:90:E0:7F:45][if_index: 86][speed: 10000Mb/s]
>> # Device RX channels: 1
>> # Polling threads:    1
>> pfring_set_cluster returned 0
>> Dumping statistics on /proc/net/pf_ring/stats/25213-eth2.15
>> 10:45:00.296362749 [RX][if_index=86][00:10:DB:FF:10:01 -> 00:00:0C:07:AC:01] [IPv4][220.159.237.103:2152 -> 203.118.242.166:2152] [l3_proto=UDP][TEID=0x3035D0A8][tunneled_proto=TCP][IPv4][49.96.0.26:80 -> 10.160.153.151:60856] [hash=2820071437][tos=104][tcp_seq_num=0] [caplen=128][len=1514][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=42]
>> 10:45:00.296365824 [RX][if_index=86][00:10:DB:FF:10:01 -> 00:00:0C:07:AC:01] [vlan 210] [IPv4][220.159.237.103:0 -> 203.118.242.166:0] [l3_proto=UDP][hash=2820071454][tos=104][tcp_seq_num=0] [caplen=74][len=74][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]
>> ^CLeaving...
>>
>> Alfredo
>>
>> On 11 Nov 2016, at 10:41, Chandrika Gautam <[email protected]>
>> wrote:
>>
>> I tried with the above and found the same result: one pfcount instance
>> receives 2 packets and the other receives 6, for the shared file
>> multiple_fragments_id35515_wo_vlan.pcap.
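>>
>> For completeness, the per-instance counting boils down to a receive loop
>> like this (a sketch using the standard PF_RING calls, assuming a pfring
>> handle already opened and attached to the cluster; buffer_len = 0 selects
>> the library's zero-copy receive convention):
>>
>>     u_char *buffer;
>>     struct pfring_pkthdr hdr;
>>     unsigned long num_pkts = 0;
>>
>>     /* Each packet the cluster delivers to this slave bumps the counter. */
>>     while (pfring_recv(ring, &buffer, 0, &hdr, 1 /* wait */) > 0)
>>       num_pkts++;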
>>
>> Are you receiving all 6 packets in one pfcount instance?
>>
>> Regards,
>> Chandrika
>>
>>
>
_______________________________________________
Ntop-misc mailing list
[email protected]
http://listgateway.unipi.it/mailman/listinfo/ntop-misc
