On Tue, Jul 27, 2010 at 10:47:59PM -0700, Joe Eykholt wrote:
> Hi Tristan,
>
> One thing you could do is figure out how much is leaking
> (if any) per I/O.   Look at /proc/slabinfo (assuming you're
> using the slab allocator ... if not you could reconfigure to
> do that).  Copy /proc/slabinfo somewhere, run 1000 or so
> I/Os and then diff to see which slabs are changing by how much.
> You could also look at vmstats before and after the I/O.
> I used something like that to find a leak in another area
> recently.

/proc/slabinfo is a really easy first check to rule out skb leaks,
because skb structures are allocated out of dedicated slabs (two,
actually, depending on whether they are allocated with room for fast
cloning or not).
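Joe's snapshot-and-diff suggestion is easy to script. Here's a rough
sketch; the sample data below stands in for real before/after captures
of /proc/slabinfo (which needs root to read on most systems), and it
assumes the field layout shown further down, where column 2 is the
active object count:

```shell
#!/bin/sh
# Fake "before" and "after" slabinfo captures; in real use you'd do
#   cat /proc/slabinfo > /tmp/slab.before
# then run ~1000 I/Os, then capture /tmp/slab.after the same way.
cat > /tmp/slab.before <<'EOF'
skbuff_fclone_cache      7      7    512    7    1 : tunables   54   27    8 : slabdata      1      1      0
skbuff_head_cache    10255  10650    256   15    1 : tunables  120   60    8 : slabdata    710    710      0
EOF
cat > /tmp/slab.after <<'EOF'
skbuff_fclone_cache      7      7    512    7    1 : tunables   54   27    8 : slabdata      1      1      0
skbuff_head_cache    11255  11400    256   15    1 : tunables  120   60    8 : slabdata    760    760      0
EOF

# Column 1 is the cache name, column 2 is active_objs (outstanding
# allocations).  Report any cache whose active count grew.
awk 'NR==FNR { before[$1] = $2; next }
     ($1 in before) && $2 > before[$1] {
         printf "%s grew by %d objects\n", $1, $2 - before[$1]
     }' /tmp/slab.before /tmp/slab.after
```

With the sample data that prints "skbuff_head_cache grew by 1000
objects".  A steadily growing count across repeated I/O runs is what a
leak would look like; a count that plateaus is just a cache warming up.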

On my FCoE test system if I grep for skb in /proc/slabinfo I see

skbuff_fclone_cache      7      7    512    7    1 : tunables   54   27    8 : slabdata      1      1      0
skbuff_head_cache    10255  10650    256   15    1 : tunables  120   60    8 : slabdata    710    710      0

The first # is the number of active objects, or outstanding allocations.

The 7 skbuff_fclone_cache allocations are probably all TCP packets being
held until they've been acked (I'm connected in via SSH).  The FCoE
stack allocates fast clone frames for transmit also, so it doesn't look
like any of those are leaking.

The 10k skbuff_head_cache allocations are not unreasonable; they're
probably receive buffers allocated by network drivers, waiting for data.
I've got a 10Gb ixgbe interface with 16 active queues, each with 512
receive buffers, so that's 8k right there.  This system also has two
other gigabit interfaces active, so 10k sounds about right.
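That back-of-the-envelope estimate in shell form, just to make the
arithmetic explicit (the queue and ring counts are the ones from this
particular box, not universal defaults):

```shell
# Expected skbs parked on the ixgbe receive rings on this system.
queues=16        # active rx queues on the 10Gb interface
ring_size=512    # receive buffers posted per queue
echo $((queues * ring_size))   # prints 8192
```

Anything left over is accounted for by the other NICs' rx rings plus
whatever packets are in flight through the stack.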

Let us know what your slabinfo looks like, so we can hopefully rule out
skb leaks.  Then we just need to figure out what's actually going on :)

 - Chris

_______________________________________________
devel mailing list
[email protected]
http://www.open-fcoe.org/mailman/listinfo/devel
