Here's an additional test on 5.5.3 with http_gzip.cap (10 packets):

- running a single instance of pfcount everything works properly and
/proc/net/pf_ring/info shows:
PF_RING Version          : 5.5.3 ($Revision: exported$)
Total rings              : 1

Standard (non DNA) Options
Ring slots               : 4096
Slot version             : 15
Capture TX               : Yes [RX+TX]
IP Defragment            : No
Socket Mode              : Standard
Transparent mode         : Yes [mode 0]
Total plugins            : 0
Cluster Fragment Queue   : 0
Cluster Fragment Discard : 0

- running two instances of pfcount with the same clusterId shows no
packets in either pfcount instance and /proc/net/pf_ring/info shows:
PF_RING Version          : 5.5.3 ($Revision: exported$)
Total rings              : 2

Standard (non DNA) Options
Ring slots               : 4096
Slot version             : 15
Capture TX               : Yes [RX+TX]
IP Defragment            : No
Socket Mode              : Standard
Transparent mode         : Yes [mode 0]
Total plugins            : 0
Cluster Fragment Queue   : 0
Cluster Fragment Discard : 10

Any ideas why it's discarding the 10 packets?

Thanks!

Doug

On Wed, Jun 5, 2013 at 8:44 AM, Doug Burks <[email protected]> wrote:
> I repeated these tests on the same Ubuntu VM using PF_RING 5.5.2
> kernel/userland and clustering works as expected without the packet
> loss seen in 5.5.3.
>
> Perhaps I'm missing something, but it appears that there is a bug in
> the 5.5.3 cluster code.
>
> Can somebody confirm, please?
>
> Thanks!
>
> Doug
>
>
> On Wed, Jun 5, 2013 at 8:06 AM, Doug Burks <[email protected]> wrote:
>> I just did a new test as follows:
>>
>> - started with a fresh installation of Ubuntu 12.04
>> - downloaded the PF_RING 5.5.3 tarball
>> - compiled and inserted the kernel module
>> - changed pfcount.c to use cluster_per_flow_2_tuple:
>>     rc = pfring_set_cluster(pd, clusterId, cluster_per_flow_2_tuple);
>> - compiled pfcount
>>
>> TEST #1
>> - downloaded http.cap from wireshark.org:
>> http://wiki.wireshark.org/SampleCaptures?action=AttachFile&do=get&target=http.cap
>> - capinfos reports 43 packets in the file:
>> capinfos -c http.cap
>> File name:           http.cap
>> Number of packets:   43
>> - replayed pcap using:
>> sudo tcpreplay -ieth0 -t http.cap
>> - running a single instance of pfcount results in all 43 packets received
>> - adding a second instance of pfcount with the same clusterId results
>> in all 43 packets received by the first instance
>> - adding a third instance of pfcount results in only 2 packets being
>> seen by the first instance, 7 packets being seen by the second
>> instance, and 0 packets being seen by the third instance
>>
>> TEST #2
>> - downloaded http_gzip.cap from wireshark.org:
>> http://wiki.wireshark.org/SampleCaptures?action=AttachFile&do=get&target=http_gzip.cap
>> - capinfos reports 10 packets in the file:
>> capinfos -c http_gzip.cap
>> File name:           http_gzip.cap
>> Number of packets:   10
>> - replayed pcap using:
>> sudo tcpreplay -ieth0 -t http_gzip.cap
>> - running a single instance of pfcount results in all 10 packets received
>> - adding a second instance of pfcount with the same clusterId results
>> in 0 packets received by both instances
>> - adding a third instance of pfcount with the same clusterId results
>> in 0 packets received by all three instances
>>
>> What am I missing?
>>
>> Can somebody please try the tests above and let me know what results you get?
>>
>> Thanks!
>>
>> Doug
>>
>> On Tue, Jun 4, 2013 at 7:27 PM, Doug Burks <[email protected]> wrote:
>>> I pulled new code from svn, compiled and inserted the new kernel
>>> module, and verified that I get the same results.
>>>
>>> I see this in the 5.5.3 Changelog:
>>>
>>> - Added ability to balance tunneled/fragmented packets with the cluster
>>>
>>> Is it possible that this change is affecting the hashing mechanism?
>>>
>>> Anything else I can try?
>>>
>>> Thanks,
>>> Doug
>>>
>>>
>>> On Tue, Jun 4, 2013 at 6:40 AM, Alfredo Cardigliano
>>> <[email protected]> wrote:
>>>> Good morning Doug
>>>> I received the pcaps, but I was traveling; I will check them asap
>>>>
>>>> Thanks
>>>> Alfredo
>>>>
>>>> On Jun 4, 2013, at 12:30 PM, Doug Burks <[email protected]> wrote:
>>>>
>>>>> Good morning Alfredo,
>>>>>
>>>>> Just wanted to follow up and confirm that you received the 5 pcaps I
>>>>> sent off-list yesterday.
>>>>>
>>>>> Is there anything else I can provide to help troubleshoot this issue?
>>>>>
>>>>> Thanks!
>>>>>
>>>>> Doug
>>>>>
>>>>>
>>>>> On Mon, Jun 3, 2013 at 7:24 AM, Doug Burks <[email protected]> wrote:
>>>>>> On Mon, Jun 3, 2013 at 3:39 AM, Alfredo Cardigliano
>>>>>> <[email protected]> wrote:
>>>>>>> Doug
>>>>>>> I don't think the support for packet injection is going to
>>>>>>> interfere with your test.
>>>>>>> Could you try sending packets from another interface?
>>>>>>
>>>>>> I've confirmed this behavior using tcpreplay in a VM and also on a
>>>>>> physical sensor connected to a tap.
>>>>>>
>>>>>>> Could you provide me with the original pcap you are using and the
>>>>>>> produced pcaps?
>>>>>>
>>>>>> Sent off-list.
>>>>>>
>>>>>> Please let me know if there is anything else I can provide to help
>>>>>> troubleshoot this issue.
>>>>>>
>>>>>> Thanks!
>>>>>>
>>>>>> Doug
>>>>>>>
>>>>>>> Thanks
>>>>>>> Alfredo
>>>>>>>
>>>>>>> On Jun 2, 2013, at 11:40 PM, Doug Burks <[email protected]> wrote:
>>>>>>>
>>>>>>>> I see this in the Changelog:
>>>>>>>>
>>>>>>>> - Support for injecting packets to the stack
>>>>>>>>
>>>>>>>> Is it possible that this change could have an impact on my test since
>>>>>>>> I'm using tcpreplay?
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Doug
>>>>>>>>
>>>>>>>> On Sun, Jun 2, 2013 at 2:59 PM, Doug Burks <[email protected]> 
>>>>>>>> wrote:
>>>>>>>>> cat /proc/net/pf_ring/info
>>>>>>>>>
>>>>>>>>> PF_RING Version          : 5.5.3 ($Revision: $)
>>>>>>>>> Total rings              : 2
>>>>>>>>>
>>>>>>>>> Standard (non DNA) Options
>>>>>>>>> Ring slots               : 4096
>>>>>>>>> Slot version             : 15
>>>>>>>>> Capture TX               : Yes [RX+TX]
>>>>>>>>> IP Defragment            : No
>>>>>>>>> Socket Mode              : Standard
>>>>>>>>> Transparent mode         : Yes [mode 0]
>>>>>>>>> Total plugins            : 0
>>>>>>>>> Cluster Fragment Queue   : 0
>>>>>>>>> Cluster Fragment Discard : 16830
>>>>>>>>>
>>>>>>>>> I've tried a few different pcaps, some of them are like my testmyids
>>>>>>>>> sample in that no packets make it to pfdump, others work perfectly,
>>>>>>>>> while for others it looks like only some of the packets are making it
>>>>>>>>> into pfdump:
>>>>>>>>>
>>>>>>>>> sudo tcpreplay -i eth1 -M10 
>>>>>>>>> /opt/samples/markofu/honeynet_suspicious-time.pcap
>>>>>>>>> sending out eth1
>>>>>>>>> processing file: /opt/samples/markofu/honeynet_suspicious-time.pcap
>>>>>>>>> Actual: 745 packets (293958 bytes) sent in 0.32 seconds
>>>>>>>>> Rated: 918618.8 bps, 7.01 Mbps, 2328.12 pps
>>>>>>>>> Statistics for network device: eth1
>>>>>>>>> Attempted packets:         745
>>>>>>>>> Successful packets:        745
>>>>>>>>> Failed packets:            0
>>>>>>>>> Retried packets (ENOBUFS): 0
>>>>>>>>> Retried packets (EAGAIN):  0
>>>>>>>>>
>>>>>>>>> sudo ./pfdump -l77 -i eth1 -w instance1.pcap
>>>>>>>>> Using PF_RING v.5.5.3
>>>>>>>>> Capturing from eth1 [00:0C:29:5F:58:D8][ifIndex: 3]
>>>>>>>>> # Device RX channels: 1
>>>>>>>>> pfring_set_cluster returned 0
>>>>>>>>> 1 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>> 2 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>> 3 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>> 4 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>> 5 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>> 6 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>> 7 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>> 8 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>> 9 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>> 10 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>> 11 sec pkts 257 drop 0 bytes 81262 | pkts 257 bytes 81262 drop 0
>>>>>>>>> 12 sec pkts 136 drop 0 bytes 72265 | pkts 393 bytes 153527 drop 0
>>>>>>>>> 13 sec pkts 0 drop 0 bytes 0 | pkts 393 bytes 153527 drop 0
>>>>>>>>> 14 sec pkts 0 drop 0 bytes 0 | pkts 393 bytes 153527 drop 0
>>>>>>>>> 15 sec pkts 0 drop 0 bytes 0 | pkts 393 bytes 153527 drop 0
>>>>>>>>> 16 sec pkts 0 drop 0 bytes 0 | pkts 393 bytes 153527 drop 0
>>>>>>>>> 17 sec pkts 0 drop 0 bytes 0 | pkts 393 bytes 153527 drop 0
>>>>>>>>> ^CLeaving...
>>>>>>>>> 18 sec pkts 0 drop 0 bytes 0 | pkts 393 bytes 153527 drop 0
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> sudo ./pfdump -l77 -i eth1 -w instance2.pcap
>>>>>>>>> Using PF_RING v.5.5.3
>>>>>>>>> Capturing from eth1 [00:0C:29:5F:58:D8][ifIndex: 3]
>>>>>>>>> # Device RX channels: 1
>>>>>>>>> pfring_set_cluster returned 0
>>>>>>>>> 1 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>> 2 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>> 3 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>> 4 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>> 5 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>> 6 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>> 7 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>> 8 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>> 9 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>> 10 sec pkts 21 drop 0 bytes 6352 | pkts 21 bytes 6352 drop 0
>>>>>>>>> 11 sec pkts 15 drop 0 bytes 3640 | pkts 36 bytes 9992 drop 0
>>>>>>>>> 12 sec pkts 0 drop 0 bytes 0 | pkts 36 bytes 9992 drop 0
>>>>>>>>> 13 sec pkts 0 drop 0 bytes 0 | pkts 36 bytes 9992 drop 0
>>>>>>>>> 14 sec pkts 0 drop 0 bytes 0 | pkts 36 bytes 9992 drop 0
>>>>>>>>> 15 sec pkts 0 drop 0 bytes 0 | pkts 36 bytes 9992 drop 0
>>>>>>>>> 16 sec pkts 0 drop 0 bytes 0 | pkts 36 bytes 9992 drop 0
>>>>>>>>> ^CLeaving...
>>>>>>>>> 17 sec pkts 0 drop 0 bytes 0 | pkts 36 bytes 9992 drop 0
>>>>>>>>>
>>>>>>>>> What else can I test?
>>>>>>>>>
>>>>>>>>> Thanks!
>>>>>>>>>
>>>>>>>>> Doug
>>>>>>>>>
>>>>>>>>> On Sun, Jun 2, 2013 at 2:07 PM, Alfredo Cardigliano
>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>> Doug
>>>>>>>>>> I ran a test using curl + pfcount and it is working for me.
>>>>>>>>>>
>>>>>>>>>> $ curl testmyids.com
>>>>>>>>>>
>>>>>>>>>> (first instance)
>>>>>>>>>> $ ./pfcount -i eth0 -c 99 -v 1 -m
>>>>>>>>>> ...
>>>>>>>>>> Absolute Stats: [0 pkts rcvd][0 pkts filtered][0 pkts dropped]
>>>>>>>>>>
>>>>>>>>>> (second instance)
>>>>>>>>>> $ ./pfcount -i eth0 -c 99 -v 1 -m
>>>>>>>>>> ...
>>>>>>>>>> Absolute Stats: [11 pkts rcvd][11 pkts filtered][0 pkts dropped]
>>>>>>>>>>
>>>>>>>>>> Please make sure tx capture is enabled in your test (cat 
>>>>>>>>>> /proc/net/pf_ring/info)
>>>>>>>>>>
>>>>>>>>>> Alfredo
>>>>>>>>>>
>>>>>>>>>> On Jun 2, 2013, at 7:43 PM, Doug Burks <[email protected]> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi Alfredo,
>>>>>>>>>>>
>>>>>>>>>>> Thanks for your suggestion!
>>>>>>>>>>>
>>>>>>>>>>> I've changed pfdump.c to use cluster_per_flow_2_tuple:
>>>>>>>>>>>
>>>>>>>>>>> if(clusterId > 0) {
>>>>>>>>>>>  rc = pfring_set_cluster(pd, clusterId, cluster_per_flow_2_tuple);
>>>>>>>>>>>  printf("pfring_set_cluster returned %d\n", rc);
>>>>>>>>>>> }
>>>>>>>>>>>
>>>>>>>>>>> I then re-ran the test as follows:
>>>>>>>>>>>
>>>>>>>>>>> Replayed a TCP stream with 11 packets onto eth1:
>>>>>>>>>>>
>>>>>>>>>>> sudo tcpreplay -i eth1 -M10 testmyids.pcap
>>>>>>>>>>> sending out eth1
>>>>>>>>>>> processing file: testmyids.pcap
>>>>>>>>>>> Actual: 11 packets (1062 bytes) sent in 0.00 seconds
>>>>>>>>>>> Rated: inf bps, inf Mbps, inf pps
>>>>>>>>>>> Statistics for network device: eth1
>>>>>>>>>>> Attempted packets:         11
>>>>>>>>>>> Successful packets:        11
>>>>>>>>>>> Failed packets:            0
>>>>>>>>>>> Retried packets (ENOBUFS): 0
>>>>>>>>>>> Retried packets (EAGAIN):  0
>>>>>>>>>>>
>>>>>>>>>>> Ran two instances of pfdump on eth1 with the same clusterId,
>>>>>>>>>>> but neither of them saw traffic this time:
>>>>>>>>>>>
>>>>>>>>>>> sudo ./pfdump -l77 -i eth1 -w instance1.pcap
>>>>>>>>>>> Using PF_RING v.5.5.3
>>>>>>>>>>> Capturing from eth1 [00:0C:29:5F:58:D8][ifIndex: 3]
>>>>>>>>>>> # Device RX channels: 1
>>>>>>>>>>> pfring_set_cluster returned 0
>>>>>>>>>>> 1 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>> 2 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>> 3 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>> 4 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>> 5 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>> 6 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>> 7 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>> 8 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>> 9 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>> 10 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>> 11 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>> 12 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>> 13 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>> ^CLeaving...
>>>>>>>>>>> 14 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>>
>>>>>>>>>>> sudo ./pfdump -l77 -i eth1 -w instance2.pcap
>>>>>>>>>>> Using PF_RING v.5.5.3
>>>>>>>>>>> Capturing from eth1 [00:0C:29:5F:58:D8][ifIndex: 3]
>>>>>>>>>>> # Device RX channels: 1
>>>>>>>>>>> pfring_set_cluster returned 0
>>>>>>>>>>> 1 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>> 2 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>> 3 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>> 4 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>> 5 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>> 6 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>> 7 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>> 8 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>> 9 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>> 10 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>> 11 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>> 12 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>> ^CLeaving...
>>>>>>>>>>> 13 sec pkts 0 drop 0 bytes 0 | pkts 0 bytes 0 drop 0
>>>>>>>>>>>
>>>>>>>>>>> tcpdump -nnvvr instance1.pcap
>>>>>>>>>>> reading from file instance1.pcap, link-type EN10MB (Ethernet)
>>>>>>>>>>>
>>>>>>>>>>> tcpdump -nnvvr instance2.pcap
>>>>>>>>>>> reading from file instance2.pcap, link-type EN10MB (Ethernet)
>>>>>>>>>>>
>>>>>>>>>>> I've repeated this a few times and get the same result each time.
>>>>>>>>>>>
>>>>>>>>>>> Any ideas why cluster_per_flow_2_tuple wouldn't be passing the 
>>>>>>>>>>> traffic?
>>>>>>>>>>>
>>>>>>>>>>> Thanks!
>>>>>>>>>>>
>>>>>>>>>>> Doug
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Sun, Jun 2, 2013 at 12:41 PM, Alfredo Cardigliano
>>>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>>>> Hi Doug
>>>>>>>>>>>> the code in pfcount sets the cluster mode to round-robin;
>>>>>>>>>>>> for flow coherency you should change it to (for instance)
>>>>>>>>>>>> cluster_per_flow_2_tuple.
>>>>>>>>>>>> The daq-pfring code sets the cluster mode to
>>>>>>>>>>>> cluster_per_flow_2_tuple by default.
>>>>>>>>>>>>
>>>>>>>>>>>> Best Regards
>>>>>>>>>>>> Alfredo
>>>>>>>>>>>>
>>>>>>>>>>>> Index: pfcount.c
>>>>>>>>>>>> ===================================================================
>>>>>>>>>>>> --- pfcount.c (revision 6336)
>>>>>>>>>>>> +++ pfcount.c (working copy)
>>>>>>>>>>>> @@ -924,7 +924,7 @@
>>>>>>>>>>>>  #endif
>>>>>>>>>>>>
>>>>>>>>>>>>    if(clusterId > 0) {
>>>>>>>>>>>> -    rc = pfring_set_cluster(pd, clusterId, cluster_round_robin);
>>>>>>>>>>>> +    rc = pfring_set_cluster(pd, clusterId, cluster_per_flow_2_tuple);
>>>>>>>>>>>>      printf("pfring_set_cluster returned %d\n", rc);
>>>>>>>>>>>>    }
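For context on why this one-line change matters: a 2-tuple cluster mode hashes the unordered (src IP, dst IP) pair, so both directions of a TCP flow land on the same consumer, whereas round-robin ignores flows entirely. A minimal sketch in plain Python (illustration only; `two_tuple_bucket` is a hypothetical helper, not PF_RING's actual in-kernel hash):

```python
import zlib

# Illustration only: symmetric 2-tuple (src IP, dst IP) flow hashing.
# Sorting the address pair makes A->B and B->A pick the same bucket,
# so an entire flow stays with one consumer in the cluster.

def two_tuple_bucket(src_ip: str, dst_ip: str, num_consumers: int) -> int:
    """Map the unordered IP pair to a consumer index."""
    key = "|".join(sorted((src_ip, dst_ip)))
    # crc32 is used only to get a deterministic hash across runs.
    return zlib.crc32(key.encode()) % num_consumers

# Both directions of the testmyids flow map to the same consumer:
client, server = "172.16.116.128", "217.160.51.31"
assert two_tuple_bucket(client, server, 2) == two_tuple_bucket(server, client, 2)
```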
>>>>>>>>>>>>
>>>>>>>>>>>> On Jun 2, 2013, at 2:54 PM, Doug Burks <[email protected]> 
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> I copied the clusterId code from pfcount and pasted into pfdump and
>>>>>>>>>>>> compiled it.  Then tested with a fresh pcap of "curl 
>>>>>>>>>>>> testmyids.com":
>>>>>>>>>>>>
>>>>>>>>>>>> tcpdump -nnr testmyids.pcap
>>>>>>>>>>>> reading from file testmyids.pcap, link-type EN10MB (Ethernet)
>>>>>>>>>>>> 12:37:21.846561 IP 172.16.116.128.44229 > 217.160.51.31.80: Flags 
>>>>>>>>>>>> [S],
>>>>>>>>>>>> seq 2183306783, win 42340, options [mss 1460,sackOK,TS val 13599714
>>>>>>>>>>>> ecr 0,nop,wscale 11], length 0
>>>>>>>>>>>> 12:37:21.963023 IP 217.160.51.31.80 > 172.16.116.128.44229: Flags
>>>>>>>>>>>> [S.], seq 3354284181, ack 2183306784, win 64240, options [mss 
>>>>>>>>>>>> 1460],
>>>>>>>>>>>> length 0
>>>>>>>>>>>> 12:37:21.963070 IP 172.16.116.128.44229 > 217.160.51.31.80: Flags 
>>>>>>>>>>>> [.],
>>>>>>>>>>>> ack 1, win 42340, length 0
>>>>>>>>>>>> 12:37:21.963268 IP 172.16.116.128.44229 > 217.160.51.31.80: Flags
>>>>>>>>>>>> [P.], seq 1:166, ack 1, win 42340, length 165
>>>>>>>>>>>> 12:37:21.963423 IP 217.160.51.31.80 > 172.16.116.128.44229: Flags 
>>>>>>>>>>>> [.],
>>>>>>>>>>>> ack 166, win 64240, length 0
>>>>>>>>>>>> 12:37:22.083864 IP 217.160.51.31.80 > 172.16.116.128.44229: Flags
>>>>>>>>>>>> [P.], seq 1:260, ack 166, win 64240, length 259
>>>>>>>>>>>> 12:37:22.083906 IP 172.16.116.128.44229 > 217.160.51.31.80: Flags 
>>>>>>>>>>>> [.],
>>>>>>>>>>>> ack 260, win 42081, length 0
>>>>>>>>>>>> 12:37:22.084118 IP 172.16.116.128.44229 > 217.160.51.31.80: Flags
>>>>>>>>>>>> [F.], seq 166, ack 260, win 42081, length 0
>>>>>>>>>>>> 12:37:22.085362 IP 217.160.51.31.80 > 172.16.116.128.44229: Flags 
>>>>>>>>>>>> [.],
>>>>>>>>>>>> ack 167, win 64239, length 0
>>>>>>>>>>>> 12:37:22.202741 IP 217.160.51.31.80 > 172.16.116.128.44229: Flags
>>>>>>>>>>>> [FP.], seq 260, ack 167, win 64239, length 0
>>>>>>>>>>>> 12:37:22.202786 IP 172.16.116.128.44229 > 217.160.51.31.80: Flags 
>>>>>>>>>>>> [.],
>>>>>>>>>>>> ack 261, win 42081, length 0
>>>>>>>>>>>>
>>>>>>>>>>>> I then started the two instances of pfdump using the same clusterId
>>>>>>>>>>>> and then replayed the 11 packets with tcpreplay:
>>>>>>>>>>>> sudo tcpreplay -i eth1 -M10 testmyids.pcap
>>>>>>>>>>>> sending out eth1
>>>>>>>>>>>> processing file: testmyids.pcap
>>>>>>>>>>>> Actual: 11 packets (1062 bytes) sent in 0.01 seconds
>>>>>>>>>>>> Rated: 106200.0 bps, 0.81 Mbps, 1100.00 pps
>>>>>>>>>>>> Statistics for network device: eth1
>>>>>>>>>>>> Attempted packets:         11
>>>>>>>>>>>> Successful packets:        11
>>>>>>>>>>>> Failed packets:            0
>>>>>>>>>>>> Retried packets (ENOBUFS): 0
>>>>>>>>>>>> Retried packets (EAGAIN):  0
>>>>>>>>>>>>
>>>>>>>>>>>> FIRST INSTANCE OF PFDUMP
>>>>>>>>>>>>
>>>>>>>>>>>> sudo ./pfdump -l77 -i eth1 -w instance1.pcap
>>>>>>>>>>>> Using PF_RING v.5.5.3
>>>>>>>>>>>> Capturing from eth1 [00:0C:29:5F:58:D8][ifIndex: 3]
>>>>>>>>>>>> # Device RX channels: 1
>>>>>>>>>>>> pfring_set_cluster returned 0
>>>>>>>>>>>> <snip>
>>>>>>>>>>>> 241 sec pkts 6 drop 0 bytes 500 | pkts 6 bytes 500 drop 0
>>>>>>>>>>>> <snip>
>>>>>>>>>>>>
>>>>>>>>>>>> tcpdump -nnr instance1.pcap
>>>>>>>>>>>> reading from file instance1.pcap, link-type EN10MB (Ethernet)
>>>>>>>>>>>> 12:38:55.886037 IP 172.16.116.128.44229 > 217.160.51.31.80: Flags 
>>>>>>>>>>>> [S],
>>>>>>>>>>>> seq 2183306783, win 42340, options [mss 1460,sackOK,TS val 13599714
>>>>>>>>>>>> ecr 0,nop,wscale 11], length 0
>>>>>>>>>>>> 12:38:55.886889 IP 172.16.116.128.44229 > 217.160.51.31.80: Flags 
>>>>>>>>>>>> [.],
>>>>>>>>>>>> ack 3354284182, win 42340, length 0
>>>>>>>>>>>> 12:38:55.887325 IP 217.160.51.31.80 > 172.16.116.128.44229: Flags 
>>>>>>>>>>>> [.],
>>>>>>>>>>>> ack 165, win 64240, length 0
>>>>>>>>>>>> 12:38:55.887986 IP 172.16.116.128.44229 > 217.160.51.31.80: Flags 
>>>>>>>>>>>> [.],
>>>>>>>>>>>> ack 260, win 42081, length 0
>>>>>>>>>>>> 12:38:55.888306 IP 217.160.51.31.80 > 172.16.116.128.44229: Flags 
>>>>>>>>>>>> [.],
>>>>>>>>>>>> ack 166, win 64239, length 0
>>>>>>>>>>>> 12:38:55.888741 IP 172.16.116.128.44229 > 217.160.51.31.80: Flags 
>>>>>>>>>>>> [.],
>>>>>>>>>>>> ack 261, win 42081, length 0
>>>>>>>>>>>>
>>>>>>>>>>>> SECOND INSTANCE OF PFDUMP
>>>>>>>>>>>>
>>>>>>>>>>>> sudo ./pfdump -l77 -i eth1 -w instance2.pcap
>>>>>>>>>>>> Using PF_RING v.5.5.3
>>>>>>>>>>>> Capturing from eth1 [00:0C:29:5F:58:D8][ifIndex: 3]
>>>>>>>>>>>> # Device RX channels: 1
>>>>>>>>>>>> pfring_set_cluster returned 0
>>>>>>>>>>>> <snip>
>>>>>>>>>>>> 16 sec pkts 5 drop 0 bytes 826 | pkts 5 bytes 826 drop 0
>>>>>>>>>>>> 17 sec pkts 0 drop 0 bytes 0 | pkts 5 bytes 826 drop 0
>>>>>>>>>>>> 18 sec pkts 0 drop 0 bytes 0 | pkts 5 bytes 826 drop 0
>>>>>>>>>>>> 19 sec pkts 0 drop 0 bytes 0 | pkts 5 bytes 826 drop 0
>>>>>>>>>>>> ^CLeaving...
>>>>>>>>>>>> 20 sec pkts 0 drop 0 bytes 0 | pkts 5 bytes 826 drop 0
>>>>>>>>>>>>
>>>>>>>>>>>> tcpdump -nnr instance2.pcap
>>>>>>>>>>>> reading from file instance2.pcap, link-type EN10MB (Ethernet)
>>>>>>>>>>>> 12:38:55.886499 IP 217.160.51.31.80 > 172.16.116.128.44229: Flags
>>>>>>>>>>>> [S.], seq 3354284181, ack 2183306784, win 64240, options [mss 
>>>>>>>>>>>> 1460],
>>>>>>>>>>>> length 0
>>>>>>>>>>>> 12:38:55.887129 IP 172.16.116.128.44229 > 217.160.51.31.80: Flags
>>>>>>>>>>>> [P.], seq 1:166, ack 1, win 42340, length 165
>>>>>>>>>>>> 12:38:55.887666 IP 217.160.51.31.80 > 172.16.116.128.44229: Flags
>>>>>>>>>>>> [P.], seq 1:260, ack 166, win 64240, length 259
>>>>>>>>>>>> 12:38:55.888117 IP 172.16.116.128.44229 > 217.160.51.31.80: Flags
>>>>>>>>>>>> [F.], seq 166, ack 260, win 42081, length 0
>>>>>>>>>>>> 12:38:55.888530 IP 217.160.51.31.80 > 172.16.116.128.44229: Flags
>>>>>>>>>>>> [FP.], seq 260, ack 167, win 64239, length 0
>>>>>>>>>>>>
>>>>>>>>>>>> As you can see, the first instance sees 6 packets and the second
>>>>>>>>>>>> instance sees 5 packets.  Shouldn't all 11 packets in that
>>>>>>>>>>>> TCP stream be sent to the same instance?
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks!
>>>>>>>>>>>>
>>>>>>>>>>>> Doug
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Sun, Jun 2, 2013 at 7:11 AM, Doug Burks <[email protected]> 
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Hi Luca,
>>>>>>>>>>>>
>>>>>>>>>>>> I can repeat the test with pfdump when I'm back at my computer,
>>>>>>>>>>>> but is there something in particular you're looking for that
>>>>>>>>>>>> wasn't in the pfcount output I provided?  Shouldn't all the
>>>>>>>>>>>> traffic from that one TCP stream be sent to one instance of
>>>>>>>>>>>> pfcount?
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks,
>>>>>>>>>>>> Doug
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Sunday, June 2, 2013, Luca Deri wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Hi
>>>>>>>>>>>> You're right. We need to add it: you can copy & paste the code
>>>>>>>>>>>> from pfcount in the meantime.
>>>>>>>>>>>>
>>>>>>>>>>>> Luca
>>>>>>>>>>>>
>>>>>>>>>>>> On Jun 2, 2013, at 1:54 AM, Doug Burks <[email protected]> 
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> I have pfdump now but I don't see a cluster-id option.  Did you
>>>>>>>>>>>> mean pfcount?  If I run 2 instances of pfcount with the same
>>>>>>>>>>>> cluster-id and then replay a pcap with 10 packets all belonging
>>>>>>>>>>>> to the same TCP stream, I get 5 packets being sent to each
>>>>>>>>>>>> pfcount instance.  Shouldn't all 10 packets be sent to 1 instance?
>>>>>>>>>>>>
>>>>>>>>>>>> First instance:
>>>>>>>>>>>>
>>>>>>>>>>>> sudo ./pfcount -c77 -i eth1
>>>>>>>>>>>> <snip>
>>>>>>>>>>>> =========================
>>>>>>>>>>>> Absolute Stats: [5 pkts rcvd][5 pkts filtered][0 pkts dropped]
>>>>>>>>>>>> Total Pkts=5/Dropped=0.0 %
>>>>>>>>>>>> 5 pkts - 434 bytes [0.38 pkt/sec - 0.00 Mbit/sec]
>>>>>>>>>>>> =========================
>>>>>>>>>>>> Actual Stats: 5 pkts [1'000.75 ms][5.00 pps/0.00 Gbps]
>>>>>>>>>>>> =========================
>>>>>>>>>>>>
>>>>>>>>>>>> Second instance:
>>>>>>>>>>>>
>>>>>>>>>>>> sudo ./pfcount -c77 -i eth1
>>>>>>>>>>>> <snip>
>>>>>>>>>>>> =========================
>>>>>>>>>>>> Absolute Stats: [5 pkts rcvd][5 pkts filtered][0 pkts dropped]
>>>>>>>>>>>> Total Pkts=5/Dropped=0.0 %
>>>>>>>>>>>> 5 pkts - 834 bytes [0.62 pkt/sec - 0.00 Mbit/sec]
>>>>>>>>>>>> =========================
>>>>>>>>>>>> Actual Stats: 5 pkts [1'001.39 ms][4.99 pps/0.00 Gbps]
>>>>>>>>>>>> =========================
>>>>>>>>>>>>
>>>>>>>>>>>> The replayed pcap is just ten packets that result from "curl
>>>>>>>>>>>> testmyids.com":
>>>>>>>>>>>>
>>>>>>>>>>>> tcpdump -nnr testmyids.pcap
>>>>>>>>>>>> reading from file testmyids.pcap, link-type EN10MB (Ethernet)
>>>>>>>>>>>> 11:46:11.691648 IP 192.168.111.111.50154 > 217.160.51.31.80: Flags
>>>>>>>>>>>> [S], seq 3840903154, win 42340, options [mss 1460,sackOK,TS val
>>>>>>>>>>>> 20137183 ecr 0,nop,wscale 11], length 0
>>>>>>>>>>>> 11:46:11.808833 IP 217.160.51.31.80 > 192.168.111.111.50154: Flags
>>>>>>>>>>>> [S.], seq 2859277445, ack 3840903155, win 5840, options [mss
>>>>>>>>>>>> 1460,nop,wscale 7], length 0
>>>>>>>>>>>> 11:46:11.808854 IP 192.168.111.111.50154 > 217.160.51.31.80: Flags
>>>>>>>>>>>> [.], ack 1, win 21, length 0
>>>>>>>>>>>> 11:46:11.809083 IP 192.168.111.111.50154 > 217.160.51.31.80: Flags
>>>>>>>>>>>> [P.], seq 1:166, ack 1, win 21, length 165
>>>>>>>>>>>> 11:46:11.927518 IP 217.160.51.31.80 > 192.168.111.111.50154: Flags
>>>>>>>>>>>> [.], ack 166, win 54, length 0
>>>>>>>>>>>> 11:46:12.036708 IP 217.160.51.31.80 > 192.168.111.111.50154: Flags
>>>>>>>>>>>> [P.], seq 1:260, ack 166, win 54, length 259
>>>>>>>>>>>> 11:46:12.036956 IP 192.168.111.111.50154 > 217.160.51.31.80: Flags
>>>>>>>>>>>> [.], ack 260, win 21, length 0
>>>>>>>>>>>> 11:46:12.037206 IP 192.168.111.111.50154 > 217.160.51.31.80: Flags
>>>>>>>>>>>> [F.], seq 166, ack 260, win 21, length 0
>>>>>>>>>>>> 11:46:12.154641 IP 217.160.51.31.80 > 192.168.111.111.50154: Flags
>>>>>>>>>>>> [F.], seq 260, ack 167, win 54, length 0
>>>>>>>>>>>> 11:46:12.154888 IP 192.168.111.111.50154 > 217.160.51.31.80: Flags
>>>>>>>>>>>> [.], ack 261, win 21, length 0
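In hindsight (pfcount at the time defaulted to cluster_round_robin, as Alfredo's patch elsewhere in the thread shows), per-packet round-robin distribution explains this result exactly: packets are dealt out in turn regardless of flow. A toy sketch in Python (`round_robin_split` is a hypothetical helper, not PF_RING code):

```python
# Toy model of per-packet round-robin cluster balancing: each arriving
# packet goes to the next consumer in turn, ignoring flow membership.
from itertools import cycle

def round_robin_split(num_packets: int, num_consumers: int) -> list:
    """Return how many packets each consumer receives."""
    counts = [0] * num_consumers
    consumers = cycle(range(num_consumers))
    for _ in range(num_packets):
        counts[next(consumers)] += 1
    return counts

# The 10-packet testmyids flow across 2 pfcount instances: 5 packets
# each, matching the Absolute Stats above.
print(round_robin_split(10, 2))  # [5, 5]
```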
>>>>>>>>>>>>
>>>>>>>>>>>> Any ideas?
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks,
>>>>>>>>>>>> Doug
>>>>>>>>>>>>
>>>>>>>>>>>> On Sat, Jun 1, 2013 at 5:48 PM, Doug Burks <[email protected]> 
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> On Sat, Jun 1, 2013 at 10:24 AM, Luca Deri <[email protected]> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Hi Doug
>>>>>>>>>>>>
>>>>>>>>>>>> On Jun 1, 2013, at 6:59 AM, Doug Burks <[email protected]> 
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> Hello all,
>>>>>>>>>>>>
>>>>>>>>>>>> I recently packaged PF_RING 5.5.3 for my Security Onion distro:
>>>>>>>>>>>>
>>>>>>>>>>>> http://securityonion.blogspot.com/2013/05/pfring-553-packages-now-available.html
>>>>>>>>>>>>
>>>>>>>>>>>> Perhaps I'm missing something, but I'm seeing some behavior I don't
>>>>>>>>>>>> remember seeing in 5.5.2 or previous versions of PF_RING.
>>>>>>>>>>>>
>>>>>>>>>>>> Here are my testing parameters:
>>>>>>>>>>>> - starting off with a good test, if I run just one instance
>>>>>>>>>>>> of snort, I get an alert from rule 2100498 for EACH time I run
>>>>>>>>>>>> "curl testmyids.com"
>>>>>>>>>>>> - if I increase to two instances of snort with the same
>>>>>>>>>>>> cluster-id, I get NO alerts when running "curl testmyids.com"
>>>>>>>>>>>> - if I set the daq clustermode to 2, I get NO alerts when
>>>>>>>>>>>> running "curl testmyids.com"
>>>>>>>>>>>>
>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>> Ntop-misc mailing list
>>>>>>>>>>>> [email protected]
>>>>>>>>>>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> Doug Burks
>>>>>>>>>>>> http://securityonion.blogspot.com
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>



-- 
Doug Burks
http://securityonion.blogspot.com
