Hi Alfredo,

One further observation on this.
Please find attached a new trace containing 8 packets with the same outer source and destination. On the first run they get segregated across different pfcount instances; when I send the same file again, they all go to one pfcount instance.

*Output of first run*
----------------------------------------
userland/examples/pfcount -i ens2f0 -c 99 -H 2 -v 1 -m
Using PF_RING v.6.5.0
Capturing from ens2f0 [mac: 00:1B:21:C7:2D:6C][if_index: 6][speed: 10000Mb/s]
# Device RX channels: 16
# Polling threads:    1
pfring_set_cluster returned 0
Dumping statistics on /proc/net/pf_ring/stats/6442-ens2f0.2
15:25:05.593222239 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:0 -> *49.103.84.212*:0] [l3_proto=UDP][*hash=2780252203*][tos=0][tcp_seq_num=0] [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]
15:25:05.593439521 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:0 -> *49.103.84.212*:0] [l3_proto=UDP][*hash=2780252203*][tos=0][tcp_seq_num=0] [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]
15:25:05.593618032 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:0 -> *49.103.84.212*:0] [l3_proto=UDP][*hash=2780252203*][tos=0][tcp_seq_num=0] [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]

userland/examples/pfcount -i ens2f0 -c 99 -H 2 -v 1 -m
Using PF_RING v.6.5.0
Capturing from ens2f0 [mac: 00:1B:21:C7:2D:6C][if_index: 6][speed: 10000Mb/s]
# Device RX channels: 16
# Polling threads:    1
pfring_set_cluster returned 0
Dumping statistics on /proc/net/pf_ring/stats/6441-ens2f0.1
15:25:05.593070816 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:0 -> *49.103.84.212*:0] [l3_proto=UDP][*hash=2780252203*][tos=0][tcp_seq_num=0] [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]
15:25:05.593123086
[RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:2152 -> *49.103.84.212*:2152] [l3_proto=UDP][TEID=0xC0E29BE6][tunneled_proto=TCP][IPv4][1.73.229.95:26366 -> 172.217.25.241:443] [*hash=2780252186*][tos=0][tcp_seq_num=0] [caplen=128][len=1518][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=46]
15:25:05.593326381 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:2152 -> *49.103.84.212*:2152] [l3_proto=UDP][TEID=0xC0E29BE6][tunneled_proto=TCP][IPv4][1.73.229.95:26366 -> 172.217.25.241:443] [*hash=2780252186*][tos=0][tcp_seq_num=0] [caplen=128][len=1518][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=46]
15:25:05.593529674 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:2152 -> *49.103.84.212*:2152] [l3_proto=UDP][TEID=0xC0E29BE6][tunneled_proto=TCP][IPv4][1.73.229.95:26367 -> 172.217.25.241:443] [*hash=2780252186*][tos=0][tcp_seq_num=0] [caplen=128][len=1518][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=46]
15:25:05.593776442 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [vlan 250] [IPv4][*116.79.243.70*:2152 -> *49.103.84.212*:2152] [l3_proto=UDP][TEID=0xC0E29BE6][tunneled_proto=TCP][IPv4][1.73.229.95:26367 -> 172.217.25.241:443] [*hash=2780252186*][tos=0][tcp_seq_num=0] [caplen=128][len=1518][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=46]

*Output of second run*
----------------------------------------
15:28:03.255165805 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:0 -> 49.103.84.212:0] [l3_proto=UDP][hash=2780252203][tos=0][tcp_seq_num=0] [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]
15:28:03.255217727 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:2152 -> 49.103.84.212:2152] [l3_proto=UDP][TEID=0xC0E29BE6][tunneled_proto=TCP][IPv4][1.73.229.95:26366 ->
172.217.25.241:443] [hash=2780252186][tos=0][tcp_seq_num=0] [caplen=128][len=1518][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=46]
15:28:03.255367715 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:0 -> 49.103.84.212:0] [l3_proto=UDP][hash=2780252203][tos=0][tcp_seq_num=0] [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]
15:28:03.255416304 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:2152 -> 49.103.84.212:2152] [l3_proto=UDP][TEID=0xC0E29BE6][tunneled_proto=TCP][IPv4][1.73.229.95:26366 -> 172.217.25.241:443] [hash=2780252186][tos=0][tcp_seq_num=0] [caplen=128][len=1518][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=46]
15:28:03.255551827 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:0 -> 49.103.84.212:0] [l3_proto=UDP][hash=2780252203][tos=0][tcp_seq_num=0] [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]
15:28:03.255616828 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:2152 -> 49.103.84.212:2152] [l3_proto=UDP][TEID=0xC0E29BE6][tunneled_proto=TCP][IPv4][1.73.229.95:26367 -> 172.217.25.241:443] [hash=2780252186][tos=0][tcp_seq_num=0] [caplen=128][len=1518][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=46]
15:28:03.255765232 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:0 -> 49.103.84.212:0] [l3_proto=UDP][hash=2780252203][tos=0][tcp_seq_num=0] [caplen=64][len=64][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]
15:28:03.255917611 [RX][if_index=6][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [vlan 250] [IPv4][116.79.243.70:2152 -> 49.103.84.212:2152] [l3_proto=UDP][TEID=0xC0E29BE6][tunneled_proto=TCP][IPv4][1.73.229.95:26367 -> 172.217.25.241:443] [hash=2780252186][tos=0][tcp_seq_num=0]
[caplen=128][len=1518][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=46]

Regards,
Gautam

On Fri, Nov 11, 2016 at 3:42 PM, Chandrika Gautam <[email protected]> wrote:

> My bad !!!
>
> I am checking this over a longer run and will update.
>
> Thanks & Regards,
> Gautam
>
> On Fri, Nov 11, 2016 at 3:33 PM, Alfredo Cardigliano <[email protected]> wrote:
>
>> Gautam
>> they are not all the same, you have 4 flows 199.223.102.6 -> 49.103.1.132
>> and 2 flows 220.159.237.103 -> 203.118.242.166
>>
>> Alfredo
>>
>> On 11 Nov 2016, at 10:51, Chandrika Gautam <[email protected]> wrote:
>>
>> If you check, the outer src and dst IP addresses of all these 6 packets
>> are the same; shouldn't all 6 packets then go to one pfcount instance if we
>> have chosen cluster_per_2_flow as the cluster type?
>>
>> Regards,
>> Gautam
>>
>> On Fri, Nov 11, 2016 at 3:16 PM, Alfredo Cardigliano <[email protected]> wrote:
>>
>>> This is what I am receiving, it looks correct as they are distributed by
>>> 2-tuple:
>>>
>>> # ./pfcount -i eth2 -c 99 -H 2 -v 1 -m
>>> Using PF_RING v.6.5.0
>>> Capturing from eth2 [mac: 00:25:90:E0:7F:45][if_index: 86][speed: 10000Mb/s]
>>> # Device RX channels: 1
>>> # Polling threads: 1
>>> pfring_set_cluster returned 0
>>> Dumping statistics on /proc/net/pf_ring/stats/25212-eth2.14
>>> 10:45:00.296354612 [RX][if_index=86][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:2152 -> 49.103.1.132:2152] [l3_proto=UDP][TEID=0x40611E78][tunneled_proto=TCP][IPv4][216.58.194.110:443 -> 100.83.201.244:43485] [hash=4182140810][tos=0][tcp_seq_num=0] [caplen=128][len=226][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=42]
>>> 10:45:00.296358154 [RX][if_index=86][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:0 -> 49.103.1.132:0] [l3_proto=UDP][hash=4182140827][tos=0][tcp_seq_num=0] [caplen=128][len=1328][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=0]
>>> 10:45:00.296359417 [RX][if_index=86][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:2152 -> 49.103.1.132:2152] [l3_proto=UDP][TEID=0x40611E78][tunneled_proto=TCP][IPv4][216.58.194.97:443 -> 100.83.201.244:55379] [hash=4182140810][tos=0][tcp_seq_num=0] [caplen=128][len=226][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=42]
>>> 10:45:00.296361197 [RX][if_index=86][10:F3:11:B3:06:01 -> 00:10:DB:FF:10:01] [IPv4][199.223.102.6:0 -> 49.103.1.132:0] [l3_proto=UDP][hash=4182140827][tos=0][tcp_seq_num=0] [caplen=128][len=1328][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=0]
>>> ^CLeaving...
>>>
>>> # ./pfcount -i eth2 -c 99 -H 2 -v 1 -m
>>> Using PF_RING v.6.5.0
>>> Capturing from eth2 [mac: 00:25:90:E0:7F:45][if_index: 86][speed: 10000Mb/s]
>>> # Device RX channels: 1
>>> # Polling threads: 1
>>> pfring_set_cluster returned 0
>>> Dumping statistics on /proc/net/pf_ring/stats/25213-eth2.15
>>> 10:45:00.296362749 [RX][if_index=86][00:10:DB:FF:10:01 -> 00:00:0C:07:AC:01] [IPv4][220.159.237.103:2152 -> 203.118.242.166:2152] [l3_proto=UDP][TEID=0x3035D0A8][tunneled_proto=TCP][IPv4][49.96.0.26:80 -> 10.160.153.151:60856] [hash=2820071437][tos=104][tcp_seq_num=0] [caplen=128][len=1514][eth_offset=0][l3_offset=14][l4_offset=34][payload_offset=42]
>>> 10:45:00.296365824 [RX][if_index=86][00:10:DB:FF:10:01 -> 00:00:0C:07:AC:01] [vlan 210] [IPv4][220.159.237.103:0 -> 203.118.242.166:0] [l3_proto=UDP][hash=2820071454][tos=104][tcp_seq_num=0] [caplen=74][len=74][eth_offset=0][l3_offset=18][l4_offset=38][payload_offset=0]
>>> ^CLeaving...
>>>
>>> Alfredo
>>>
>>> On 11 Nov 2016, at 10:41, Chandrika Gautam <[email protected]> wrote:
>>>
>>> I tried with the above and found the same result: one pfcount instance
>>> received 2 packets and the other 6, for the shared file
>>> multiple_fragments_id35515_wo_vlan.pcap.
>>>
>>> Are you receiving all 6 packets in one pfcount instance?
>>>
>>> Regards,
>>> Chandrika
>>
>> _______________________________________________
>> Ntop-misc mailing list
>> [email protected]
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
case4.pcap
Description: application/cap
