I was reminded that my earlier filter would catch all but the last
fragmented packet, so I ran a new capture with:

time tcpdump -i 'p6p1;p6p2' -nn -c 1000 'ip[6] & 0x20 != 0 or ip[6:2] &
0x1fff != 0' > /dev/null

(Thanks Judy for the filter!)
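In case it helps anyone following along, here's a small sketch (not part of the original filter; the helper name is mine) of what that BPF expression tests: ip[6] & 0x20 is the More Fragments (MF) flag, and ip[6:2] & 0x1fff is the 13-bit fragment offset, so the "or" also catches the final fragment (MF clear, offset nonzero) that an MF-only filter misses:

```python
import struct

def is_fragment(ip_header: bytes) -> bool:
    """Mirror of the BPF test: MF flag set OR fragment offset nonzero."""
    # Bytes 6-7 of the IPv4 header hold flags (3 bits) + fragment offset (13 bits).
    frag_field = struct.unpack("!H", ip_header[6:8])[0]
    mf_flag = frag_field & 0x2000   # 0x20 in the high byte, i.e. ip[6] & 0x20
    offset = frag_field & 0x1FFF    # fragment offset, in 8-byte units
    return mf_flag != 0 or offset != 0
```

A packet with MF=0 but a nonzero offset is the last fragment of a datagram, which is exactly the case the first filter dropped.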

It took just over 61 minutes to collect 1000 fragmented packets on a link
that averaged 1.3 Gb/s during that hour. I wouldn't call that heavily
fragmented traffic (although that's obviously a short sampling window).
Could there be an issue with how fragments are removed from the cluster
queue?
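For scale, a rough back-of-envelope calculation (assuming an average packet size of 800 bytes, which is purely a guess on my part, not a measurement) puts that at around one fragment per million packets:

```python
# Back-of-envelope estimate of the fragmented-packet fraction.
# The 800-byte average packet size is an assumption, not a measurement.
link_bps = 1.3e9                                # average link rate during the capture
avg_pkt_bytes = 800                             # assumed average packet size
pkts_per_sec = link_bps / (avg_pkt_bytes * 8)   # ~203k packets/s
frag_rate = 1000 / (61 * 60)                    # 1000 fragments in ~61 minutes
frag_fraction = frag_rate / pkts_per_sec        # roughly 1.3e-6
print(f"~{frag_fraction:.1e} of packets are fragments")
```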

Cheers,

Jesse


On Fri, Apr 4, 2014 at 1:34 PM, Jesse Bowling <[email protected]> wrote:

> Hi Luca,
>
> Thank you for your response. I'll look into getting a traffic capture, but
> I'm not hopeful. I'd argue against the fragmentation issue for two reasons:
>
> *) The change in packet loss was consistent with the change in PF_RING
> versions, and previous versions did not exhibit this behavior
> *) Filtering current traffic with tcpdump -i 'p6p1;p6p2' -nn 'ip[6] & 0x20
> != 0' does not seem to reveal an unusual number of fragmented packets (a
> dozen or so over 3 minutes).
>
> If I'm unable to provide pcaps, what's the next best option? Is there any
> debug logging I can turn on, a debugger I can attach, etc.?
>
> Cheers,
>
> Jesse
>
>
> On Fri, Apr 4, 2014 at 11:03 AM, Luca Deri <[email protected]> wrote:
>
>> Jesse
>> I am traveling with my team this week; sorry for not being responsive.
>> Would you be able to provide us with a traffic sample? It looks like you
>> have heavily fragmented traffic.
>>
>> Regards Luca
>>
>> On 02 Apr 2014, at 11:42, Jesse Bowling <[email protected]> wrote:
>>
>> I've not seen any comments or updates on this issue. I'm still
>> experiencing it and would appreciate any advice on how to troubleshoot,
>> or at least confirmation from other sites that they see the same behavior
>> with PF_RING 5.6.2.
>>
>> Thanks,
>>
>> Jesse
>>
>>
>> On Tue, Mar 25, 2014 at 2:14 PM, Jesse Bowling <[email protected]> wrote:
>>
>>> Hello,
>>>
>>> Recently I updated my sensors to use PF_RING 5.6.2, after a good long
>>> while on an SVN version of 5.6.1. Initial results on a lightly loaded
>>> test box were good; however, when the same setup was deployed to more
>>> heavily loaded boxes, we started seeing a substantial number of dropped
>>> packets.
>>>
>>> We use the ixgbe PF_RING aware drivers on RHEL 6. The
>>> /proc/net/pf_ring/info reports:
>>>
>>> PF_RING Version          : 5.6.2 ($Revision: exported$)
>>> Total rings              : 28
>>>
>>> Standard (non DNA) Options
>>> Ring slots               : 16384
>>> Slot version             : 15
>>> Capture TX               : No [RX only]
>>> IP Defragment            : No
>>> Socket Mode              : Standard
>>> Transparent mode         : No [mode 2]
>>> Total plugins            : 0
>>> Cluster Fragment Queue   : 32720
>>> Cluster Fragment Discard : 69173446
>>>
>>> The Cluster Fragment Discard numbers are much larger than usual.
>>> Additionally, the NIC started reporting drops (less than 1%, whereas the
>>> previous setup dropped zero packets at the NIC). Finally, the snort and
>>> argus processes running on top of this setup now report drops at a much
>>> higher rate than before. Previously snort ran with 1% or less drops, and
>>> argus typically dropped only a few thousandths of a percent. Currently,
>>> snort is reporting drops of 15-20%, and argus is seeing roughly 15%
>>> drops as well. This is on a 10 Gb link with peaks of 1.8 Gb/sec and
>>> daytime averages of 1.4 Gb/sec. Similar, although smaller, increases in
>>> dropped packets were seen on another link that is much less loaded
>>> (300-500 Mb/sec), where loss is typically in the 5-10% range.
>>>
>>> Overall this is a substantial loss of performance. While I also upgraded
>>> snort and argus to newer versions at the same time as PF_RING, it seems
>>> unlikely that both programs introduced changes that adversely affected
>>> packet capture rates at exactly the same time.
>>>
>>> Are there any other reports of this happening? Anything I can do to help
>>> troubleshoot where this packet loss might be happening?
>>>
>>> I will file this on the bugzilla as well for tracking.
>>>
>>> Cheers,
>>>
>>> Jesse
>>>
>>> --
>>> Jesse Bowling
>>>
>>>
>>
>>
>> --
>> Jesse Bowling
>>
>>  _______________________________________________
>> Ntop-misc mailing list
>> [email protected]
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>
>>
>>
>>
>>
>
>
> --
> Jesse Bowling
>
>


-- 
Jesse Bowling
