On Tue, Apr 24, 2018 at 10:18:09AM -0400, David Miller wrote:
> From: Tal Gilboa <ta...@mellanox.com>
> Date: Tue, 24 Apr 2018 13:36:00 +0300
> 
> > Net DIM is a library designed for dynamic interrupt moderation. It was
> > implemented and optimized with receive side interrupts in mind, since
> > these are usually the CPU-expensive ones. This patch set introduces
> > adaptive transmit interrupt moderation to net DIM, complete with a usage
> > in the mlx5e driver. Using adaptive TX behavior reduces the interrupt
> > rate in multiple scenarios. Furthermore, it is essential for increasing
> > bandwidth in cases where payload aggregation is required.
> > 
> > v3: Remove "inline" from functions in .c files (requested by DaveM). Revert
> > adding the "enabled" field to struct net_dim and apply the mlx5e structural
> > suggestions (suggested by SaeedM).
> > 
> > v2: Rebase over proper tree.
> > 
> > v1: Fix compilation issues due to missed function renaming.
> 
> I have no problem with this, series applied, thanks.
> 
> Although I have to say that I've always been suspicious of adaptive moderation
> schemes, especially if implemented in software.
> 
> My thinking was that at these kinds of link speeds, the conditions of the link
> change so fast that whatever state you've measured changes by the time you
> commit new settings to the chip.
> 
> It obviously helps, so I must be missing some piece of the puzzle in my mental
> analysis :-)

You are definitely correct that there are many cases where sessions are
so short that, by the time a measurement is made and new settings are
applied, the conditions have already changed.

What I found when adding this to the bnxt_en driver was that for
longer-running sessions/transfers (flows lasting seconds, not
milliseconds) the adjustment happens pretty quickly and you get a nice
reduction in CPU utilization for the duration of that transfer.

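To make the measure/adjust loop concrete, here is a rough sketch (not code
lifted from this series) of the hot-path side: the driver snapshots its
per-queue counters at the end of each NAPI poll and hands them to net DIM.
net_dim_sample() and net_dim() are the library entry points from
include/linux/net_dim.h; struct my_ring and its counter fields are
illustrative stand-ins for whatever state a driver keeps per queue.

#include <linux/net_dim.h>

struct my_ring {
	struct net_dim dim;	/* per-queue DIM state machine */
	u16 event_ctr;		/* completion/interrupt event counter */
	u64 packets;		/* packets handled on this queue */
	u64 bytes;		/* bytes handled on this queue */
};

/* Called at the end of each NAPI poll for this queue. */
static void my_ring_update_dim(struct my_ring *ring)
{
	struct net_dim_sample sample;

	/* Snapshot this queue's counters into a sample... */
	net_dim_sample(ring->event_ctr, ring->packets, ring->bytes, &sample);

	/*
	 * ...and let the DIM algorithm compare it with the previous sample.
	 * If it decides on a new moderation profile it schedules a work
	 * item; the hot path itself never touches the hardware.
	 */
	net_dim(&ring->dim, sample);
}
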
There is also an advantage in that, since this is done on a per-queue
basis, one queue that is handling a bulk transfer can have its coalescing
parameters adjusted while the others stay at a setting that keeps traffic
flowing at low latency.  This is helpful when a system is receiving a
large amount of traffic on one queue while also sending data on another
queue: quick processing of ACKs keeps data flowing at a high rate with
low CPU utilization in both directions.

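For the per-queue part, the deferred side looks roughly like the sketch
below: the work item that net_dim() schedules reprograms coalescing for
only the one queue whose state changed, leaving every other queue alone.
The profile lookup helper and NET_DIM_START_MEASURE are my recollection
of the net_dim API after this series' renames and may differ slightly;
my_hw_set_coal() is a made-up stand-in for the driver/firmware call
(e.g. mlx5_core_modify_cq_moderation() in mlx5e).

/* Work handler the driver registers, e.g. INIT_WORK(&ring->dim.work, ...). */
static void my_ring_dim_work(struct work_struct *work)
{
	struct net_dim *dim = container_of(work, struct net_dim, work);
	struct my_ring *ring = container_of(dim, struct my_ring, dim);
	struct net_dim_cq_moder moder;

	/* Look up the usec/packet pair for the profile DIM settled on. */
	moder = net_dim_get_rx_moderation(dim->mode, dim->profile_ix);

	/* Reprogram interrupt coalescing for this queue only. */
	my_hw_set_coal(ring, moder.usec, moder.pkts);

	/* Let DIM start a fresh measurement window. */
	dim->state = NET_DIM_START_MEASURE;
}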