[mdb-discuss] ::kmastat output different in meaning between x86 32-bit and 64-bit?

2007-08-03 Thread Raymond LI
Jonathan, thanks! I understood the difference now.

Raymond

Jonathan Adams wrote:
> (oops;  I forgot to CC mdb-discuss on this earlier)
>
> On 8/2/07, Jonathan Adams wrote:
>
>
>
> On 8/2/07, Raymond LI wrote:
>
> Guys,
>
> I ran into a puzzle while working with my amd64 box. When I observe a
> streams dblk slab of size 2048, the "buf in use", "buf total" and
> "memory in use" columns seem to mean different things between the
> 32-bit and 64-bit kernels.
>
>
> To answer your questions, I need to know more about your caches; 
> what is the output of "::kmem_cache ! grep streams_dblk_19.." in
> MDB on your 32-bit and 64-bit systems?  That will tell me what the
> cache flags for your system are.  (replace .. with 36 or 84 for
> 64-bit and 32-bit, respectively)
>
> While you're at it, take the cache pointer (the first field output
> by the above command), and do:
>
> pointer::print kmem_cache_t cache_chunksize cache_bufsize
> cache_slabsize
>
>
> He responded with the following data:
>  
> --- cut here ---
> On 32bit kernel, with driver unloaded at startup
>  >::kmem_cache ! grep streams_dblk_1984
> cac2e2b0 streams_dblk_1984 020f 00 2048 9
>  >cac2e2b0::print kmem_cache_t cache_chunksize cache_bufsize 
> cache_slabsize
> cache_chunksize = 0x840
> cache_bufsize = 0x800
> cache_slabsize = 0x5000
>
> After add_drv,
>  >::kmem_cache ! grep streams_dblk_1984
> cac2e2b0 streams_dblk_1984 020f 00 2048 612
>  >cac2e2b0::print kmem_cache_t cache_chunksize cache_bufsize 
> cache_slabsize
>
> And then rem_drv,
>  >::kmem_cache ! grep streams_dblk_1984
> cac2e2b0 streams_dblk_1984 020f 00 2048 612
>  >cac2e2b0::print kmem_cache_t cache_chunksize cache_bufsize 
> cache_slabsize
> cache_chunksize = 0x840
> cache_bufsize = 0x800
> cache_slabsize = 0x5000
> --
> 
>
> On 64bit kernel, with driver unloaded
>  > ::kmem_cache ! grep streams_dblk_1936
> ec0033c08 streams_dblk_1936 0269 00 2048 2
>
> with driver loaded,
>  > ::kmem_cache ! grep streams_dblk_1936
> ec0033c08 streams_dblk_1936 0269 00 2048 602
>
>  >ec0033c08::print kmem_cache_t cache_chunksize cache_bufsize
> cache_slabsize
> cache_chunksize = 0x800
> cache_bufsize = 0x800
> cache_slabsize = 0x1000
> --- cut here ---
>
>
> Looking at the ::kmem_cache output:
>
> 32-bit:
>  >::kmem_cache ! grep streams_dblk_1984
> cac2e2b0 streams_dblk_1984 020f 00 2048 9
>
> 64-bit
>  > ::kmem_cache ! grep streams_dblk_1936
> ec0033c08 streams_dblk_1936 0269 00 2048 602
>
> The third field is the "flags" field;  this contains the full reason 
> for all the differences you noticed.
>
> 32-bit:  KMF_HASH | KMF_CONTENTS | KMF_AUDIT | KMF_DEADBEEF | KMF_REDZONE
> 64-bit:  KMF_HASH | KMF_CONTENTS | KMF_AUDIT | KMF_FIREWALL | 
> KMF_NOMAGAZINE
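>
> (For reference, here is a rough sketch of how that FLAG word maps onto
> the KMF_* bits; this isn't the kernel's code, and the bit values are
> the ones from sys/kmem_impl.h as I recall them, so double-check them
> against your build's headers.)
>
> #include <stdio.h>
>
> #define KMF_AUDIT      0x0001   /* transaction auditing */
> #define KMF_DEADBEEF   0x0002   /* freed-buffer poisoning checks */
> #define KMF_REDZONE    0x0004   /* redzone (overrun) checking */
> #define KMF_CONTENTS   0x0008   /* freed-buffer content logging */
> #define KMF_NOMAGAZINE 0x0020   /* disable per-cpu magazines */
> #define KMF_FIREWALL   0x0040   /* buffers end at an unmapped page */
> #define KMF_HASH       0x0200   /* cache uses an external hash table */
>
> static void
> decode(const char *label, unsigned int flags)
> {
>         static const struct { unsigned int bit; const char *name; } kmf[] = {
>                 { KMF_HASH, "KMF_HASH" },
>                 { KMF_CONTENTS, "KMF_CONTENTS" },
>                 { KMF_AUDIT, "KMF_AUDIT" },
>                 { KMF_DEADBEEF, "KMF_DEADBEEF" },
>                 { KMF_REDZONE, "KMF_REDZONE" },
>                 { KMF_FIREWALL, "KMF_FIREWALL" },
>                 { KMF_NOMAGAZINE, "KMF_NOMAGAZINE" }
>         };
>         size_t i;
>
>         printf("%s:", label);
>         for (i = 0; i < sizeof (kmf) / sizeof (kmf[0]); i++)
>                 if (flags & kmf[i].bit)
>                         printf(" %s", kmf[i].name);
>         printf("\n");
> }
>
> int
> main(void)
> {
>         decode("32-bit 0x020f", 0x020f);  /* the five 32-bit flags listed above */
>         decode("64-bit 0x0269", 0x0269);  /* the five 64-bit flags listed above */
>         return (0);
> }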
>
> The flags they both have are KMF_HASH (buffers require a hash table to
> track control structures), KMF_CONTENTS (record contents of buffer
> upon free), and KMF_AUDIT (record information about each allocation
> and free).
>
> The 64-bit cache is a firewall cache; this means the buffer size is
> rounded up to a multiple of PAGESIZE, and all buffers are allocated so
> that the end of the buffer is at the end of a page. The allocation is
> then done in such a way that there is an unmapped VA hole *after* that
> page, and so that freed addresses are not reused again right away. The
> magazine (that is, caching) layer of the cache is disabled, which means
> that objects are freed and unmapped immediately upon kmem_cache_free().
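>
> (A crude illustration of that layout, using a made-up slab address; the
> only point is that the 0x800-byte buffer sits right at the end of its
> page, with the unmapped page immediately behind it.)
>
> #include <stdio.h>
>
> #define PAGESIZE 0x1000
> #define BUFSIZE  0x800     /* cache_bufsize of streams_dblk_1936 */
>
> int
> main(void)
> {
>         unsigned long long page = 0xffffff0012340000ULL; /* hypothetical slab VA */
>         unsigned long long buf = page + PAGESIZE - BUFSIZE;
>
>         printf("buffer:        %#llx .. %#llx\n", buf, buf + BUFSIZE - 1);
>         printf("unmapped hole: starts at %#llx\n", page + PAGESIZE);
>         /* a store even one byte past the buffer lands in the hole and faults */
>         return (0);
> }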
>
> The 32-bit cache is a standard debugging cache; the slabs are five
> pages long, which works out to 9 buffers per slab (0x5000 / 0x840); the
> extra 0x40 bytes per buffer are a "REDZONE", used to detect buffer
> overruns, etc. The magazine layer is *enabled*, which means that freed
> buffers are cached until the system notices that we're running low on
> space.
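>
> (And a back-of-the-envelope check of the ::kmastat "memory in use"
> numbers quoted in this thread; this is just arithmetic, not the
> allocator's actual accounting code.)
>
> #include <stdio.h>
>
> int
> main(void)
> {
>         unsigned int chunksize = 0x840;  /* buffer plus 0x40-byte redzone/buftag */
>         unsigned int slabsize = 0x5000;  /* five 4K pages */
>         unsigned int buf_total = 603;    /* from the 32-bit ::kmastat output */
>         unsigned int per_slab = slabsize / chunksize;                  /* 9 */
>         unsigned int nslabs = (buf_total + per_slab - 1) / per_slab;   /* 67 */
>
>         printf("32-bit: %u bufs/slab, %u slabs, memory in use = %u\n",
>             per_slab, nslabs, nslabs * slabsize);               /* 1372160 */
>
>         /* 64-bit firewall cache: each 2048-byte buffer gets its own page */
>         printf("64-bit: memory in use = %u\n", 602u * 0x1000u); /* 2465792 */
>         return (0);
> }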
>
> The difference only exists on DEBUG kernels, and is because 
> firewalling is not done on 32-bit platforms, since the allocation 
> patterns used waste and fragment VA space.
>
> On a non-debug system, the setup will be pretty much the same between 
> 32-bit and 64-bit systems.
>
> I allocated mblks of 1600 bytes.
> In 64-bit mode, the 2048-byte cache has the name "streams_dblk_1936",
> with output like below:
>
> cache                      buf    buf    buf    memory    alloc alloc
> name                      size in use  total    in use  succeed  fail
> ------------------------ ----- ------ ------ --------- -------- -----
> streams_dblk_1936         2048    602    602   2465792     1382     0
>
> While in 32-bit mode, the cache is named "streams_dblk_1984", with
> output like below:
>
> cache                      buf    buf    buf    memory    alloc alloc
> name                      size in use  total    in use  su

[mdb-discuss] ::kmastat output different in meaning between x86 32-bit and 64-bit?

2007-08-03 Thread Jonathan Adams
= ("buf
> > in use" + "buf total") * "buf size"?
> > And we see in 32-bit kernel, it is "buf total" * "buf size".
>
>
> It's not either of those;  it is some number >= "buf total" * "buf size",
> where how much larger it is depends on what slab options are in effect.
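>
> (A quick sense of how much larger, using the streams_dblk lines quoted
> elsewhere in this thread; plain arithmetic, not the allocator's
> bookkeeping.)
>
> #include <stdio.h>
>
> int
> main(void)
> {
>         /* 32-bit streams_dblk_1984: nine buffers per 0x5000-byte slab */
>         unsigned int mem32 = 1372160, naive32 = 603 * 2048;
>         /* 64-bit streams_dblk_1936: firewall, one page per 2048-byte buffer */
>         unsigned int mem64 = 2465792, naive64 = 602 * 2048;
>
>         printf("32-bit: %u vs %u (%.2fx)\n", mem32, naive32,
>             (double)mem32 / naive32);            /* about 1.11x */
>         printf("64-bit: %u vs %u (%.2fx)\n", mem64, naive64,
>             (double)mem64 / naive64);            /* exactly 2.00x */
>         return (0);
> }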
>
> > Another thing I don't understand: my driver allocated 600 mblks of
> > 1600 bytes during attach. After rem_drv, I found that on the 64-bit
> > kernel, kmastat reports the following:
> >
> > cache                      buf    buf    buf    memory    alloc alloc
> > name                      size in use  total    in use  succeed  fail
> > ------------------------ ----- ------ ------ --------- -------- -----
> > streams_dblk_1936         2048      2      2      8192     1493     0
> >
> > "buf total" also reduced by 600 after freemsg. And it seems next time
> > allocate, stream module will allocate new area of streams_dblk_1936.
> >
> > While on the 32-bit kernel, the streams_dblk_1984 buffers still seem
> > to be there:
> >
> > cache                      buf    buf    buf    memory    alloc alloc
> > name                      size in use  total    in use  succeed  fail
> > ------------------------ ----- ------ ------ --------- -------- -----
> > streams_dblk_1984         2048      0    603   1372160      608     0
> >
> > Why "buf total" still 603?
>
>
> Because the kernel decided to keep them around, in case someone else
> wanted them.  If memory gets tight, the buffers will be freed.  As to why
> the 64-bit kernel isn't keeping them around, I'll need the information I
> requested above to tell you.
>

(see above)

Does that answer your questions?

Cheers,
- jonathan