I'd like to reiterate Ryan's advice to have a look at the `systat mbufs` output.

As he said, MCLGETI tries to protect the host by restricting the number of
packets placed on the rx rings. It turns out you don't need (or can't use) a lot
of packets on the ring, so bumping the ring size is a useless tweak: MCLGETI
simply won't let you fill all those descriptors.

If you were allowed to fill all 2048 entries on your modified rings, that
would just mean you spend more time in the interrupt handler pulling packets
off those rings and freeing them immediately, because you have no time to
process them. I.e., increasing the ring size would actually slow down your
forwarding rate if MCLGETI were disabled.
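To make the fill-limiting idea concrete, here is a rough, hypothetical sketch. The struct and function names (`rxring`, `rx_fill_budget`, `cwm`) are made up for illustration and are not the kernel's actual code; only the behaviour they model (the watermark, not the hardware ring size, bounds how many clusters get posted) comes from the discussion above:

```c
/* Hypothetical sketch of MCLGETI-style rx ring fill limiting.
 * These names are illustrative, not the real kernel structures. */
#include <assert.h>

struct rxring {
	int size;	/* hardware descriptor count, e.g. 256 or 2048 */
	int cwm;	/* current watermark: max clusters MCLGETI allows */
	int live;	/* descriptors currently holding a cluster */
};

/* How many new clusters may be posted to the ring right now.
 * The watermark, not the hardware ring size, is the limit. */
int
rx_fill_budget(struct rxring *r)
{
	int limit = (r->cwm < r->size) ? r->cwm : r->size;

	return (limit > r->live) ? limit - r->live : 0;
}
```

With a watermark of 4, a 2048-entry ring and a 256-entry ring both get exactly 4 clusters posted, which is why growing the ring buys you nothing.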

cheers,
dlg

On 25/02/2011, at 9:41 AM, Ryan McBride wrote:

> On Wed, Feb 23, 2011 at 06:07:16PM +0100, Patrick Lamaiziere wrote:
>> I log the congestion counter (each 10s) and there are at max 3 or 4
>> congestions per day. I don't think the bottleneck is pf.
>
> The congestion counter doesn't directly mean you have a bottleneck in
> PF; it's triggered by the IP input queue being full, and could indicate
> a bottleneck in other places as well, which PF tries to help out with by
> dropping packets earlier.
>
>
>>> Interface errors?
>>
>> Quite a lot.
>
> The output of `systat mbufs` is worth looking at, in particular the
> figure for LIVELOCKS, and the LWM/CWM figures for the interface(s) in
> question.
>
> If the livelocks value is very high, and the LWM/CWM numbers are very
> small, it is likely that the MCLGETI interface is protecting your system
> from being completely flattened by forcing the em card to drop packets
> (supported by your statement that the error rate is high). If it's bad
> enough MCLGETI will be so effective that the pf congestion counter will
> not even get incremented.
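The watermark behaviour Ryan describes could be sketched roughly as below. The `lwm`/`cwm` names follow the `systat mbufs` column headings; everything else (`struct wm`, the exact halve/increment policy) is a guess at the shape of the algorithm, not the kernel's actual code:

```c
/* Hypothetical sketch of how a MCLGETI-style current watermark (cwm)
 * might move between a low (lwm) and high (hwm) bound.  The real
 * algorithm lives in the kernel; this only models the tendency:
 * livelocks drive cwm down hard, keeping up nudges it back up. */
#include <assert.h>

struct wm {
	int lwm;	/* low watermark: floor for cwm */
	int hwm;	/* high watermark: ceiling for cwm */
	int cwm;	/* current watermark */
};

/* Livelock detected: the stack could not keep up, so halve the
 * number of packets the card is allowed to hand us. */
void
wm_livelock(struct wm *w)
{
	w->cwm /= 2;
	if (w->cwm < w->lwm)
		w->cwm = w->lwm;
}

/* The ring ran dry while we kept up: allow one more packet. */
void
wm_grow(struct wm *w)
{
	if (w->cwm < w->hwm)
		w->cwm++;
}
```

Under sustained overload the halving wins, which is why a hammered box shows a high LIVELOCKS count alongside tiny LWM/CWM figures.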
>
>
> You mentioned the following in your initial email:
>
>> #define MAX_INTS_PER_SEC        8000
>>
>> Do you think I can increase this value? The interrupt rate of the
>> machine is at max ~60% (top).
>
> Increasing this value will likely hurt you. A 60% interrupt rate sounds
> about right to me for a firewall system that is running at full tilt;
> 100% interrupt load is very bad, because if your system spends all its
> cycles servicing interrupts it will not get much of anything else done.
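For context on what that define controls: em(4) programs the adapter's interrupt throttle (ITR) register, which counts in 256 ns units, from MAX_INTS_PER_SEC. The expression below mirrors the one in OpenBSD's sys/dev/pci/if_em.c as I remember it; the `itr_for_rate()` helper itself is just illustrative:

```c
/* Sketch: deriving the 82571's interrupt throttle register value
 * from a target maximum interrupt rate.  The ITR register holds the
 * minimum interval between interrupts in units of 256 nanoseconds. */
#include <assert.h>

#define MAX_INTS_PER_SEC	8000

/* Illustrative helper: minimum inter-interrupt interval in 256 ns
 * units for a given maximum interrupts-per-second rate. */
unsigned int
itr_for_rate(unsigned int ints_per_sec)
{
	return 1000000000U / (ints_per_sec * 256U);
}
```

So 8000 ints/sec gives an interval of 488 units, and doubling MAX_INTS_PER_SEC to 16000 halves it to 244, i.e. the card may interrupt the CPU twice as often — exactly what you don't want on a box already spending 60% of its time in interrupt context.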
>
>
>> dmesg:
>> em0 at pci5 dev 0 function 0 "Intel PRO/1000 QP (82571EB)" rev
>> 0x06: apic 1 int 13 (irq 14), address 00:15:17:ed:98:9d
>>
>> em4 at pci9 dev 0 function 0 "Intel PRO/1000 QP (82575GB)" rev 0x02:
>> apic 1 int 23 (irq 11), address 00:1b:21:38:e0:80
>
> How about a _full_ dmesg, so someone can take a wild guess at what
> your machine is capable of?
>
> -Ryan
