Hmm, I ran some extensive tests and then took a crash dump; no leak?
-bash-3.00# mdb -k 32
Loading modules: [ unix genunix specfs dtrace ufs sd mpt px ldc ip hook neti
sctp arp usba fcp fctl qlc nca lofs zfs nfs cpc random crypto ptm sppp ]
> ::findleaks -dv
findleaks: maximum buffers => 783106
findleaks: actual buffers => 722155
findleaks:
findleaks: potential pointers => 77430152
findleaks: dismissals => 68481241 (88.4%)
findleaks: misses => 500836 ( 0.6%)
findleaks: dups => 7725920 ( 9.9%)
findleaks: follows => 722155 ( 0.9%)
findleaks:
findleaks: elapsed wall time => 15 seconds
findleaks:
CACHE LEAKED BUFCTL CALLER
----------------------------------------------------------------------
Total 0 buffers, 0 bytes
Tom
> Date: Fri, 22 Oct 2010 07:17:53 -0400
> From: [email protected]
> To: [email protected]
> CC: [email protected]
> Subject: Re: [networking-discuss] why there are so many memory allocation
>
> On 10/21/10 21:11, Tom Chen wrote:
> > Hello,
> >
> > I am testing a GLDv3 driver on Solaris 10, and the driver's tx
> > performance needs further improvement.
> > A few seconds after starting a netperf transmit test, I ran lockstat
> > to see what the system was busy with. Surprisingly, page_hashin /
> > page_hashout and kmem_slab_alloc / kmem_slab_free show up a lot and
> > waste a great deal of time. I then ran a simple dtrace script to see
> > who calls these functions (a sketch is quoted below); the callers
> > appear to be in the OS, not in the driver. I also tested an Intel 10G
> > card and did not see any of these functions near the top of the
> > lockstat output.
> > I am wondering why the OS spends so much time on memory allocation
> > and deallocation for this driver?
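> >
> > For reference, the dtrace script was essentially a stack aggregation
> > along these lines (a minimal sketch from memory; exact probe names
> > may differ):
> >
> >   #!/usr/sbin/dtrace -s
> >   /* count the kernel stacks leading into the hot functions */
> >   fbt::page_hashin:entry,
> >   fbt::kmem_slab_alloc:entry
> >   {
> >           @[probefunc, stack()] = count();
> >   }
> >
> > and the lockstat run was a kernel profiling sample, roughly:
> >
> >   # lockstat -kIW -D 20 sleep 10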
>
> Going from kmem to vmem means the OS needs more memory than it
> currently has readily available. Have you tried running with the kmem
> debugging flags (kmem_flags) set and checking ::findleaks in mdb?
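>
> A minimal sketch of that setup, assuming the stock Solaris 10 tunable
> (0xf enables the audit, deadbeef, redzone, and contents flags):
>
>   # echo 'set kmem_flags=0xf' >> /etc/system   # takes effect at boot
>   # reboot
>   ... reproduce the workload, then inspect the kernel ...
>   # mdb -k
>   > ::findleaks -dv
>
> Without the audit flag, ::findleaks has no bufctl records to scan, so
> an empty report is not conclusive.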
>
> --
> James Carlson 42.703N 71.076W <[email protected]>