Comments inline...

Neil Perrin wrote:

1. The DNLC as used through ZFS doesn't seem to honor ncsize.

The filesystem currently has ~550k inodes, and large portions of it are
frequently walked with rsync (over NFS). mdb said ncsize was about
68k and vmstat -s said we had a hit rate of ~30%, so I set ncsize to
600k and rebooted. That didn't seem to change much: hit rates are still
about the same, and a manual find(1) doesn't seem to get cached much
(according to vmstat and dnlcsnoop.d).
When booting, the following messages came up; not sure if they matter or not:
NOTICE: setting nrnode to max value of 351642
NOTICE: setting nrnode to max value of 235577
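
(For reference, ncsize is a boot-time tunable; a minimal sketch of the
/etc/system entry, using the 600k value mentioned above:)

set ncsize=600000

(The value only takes effect on the next reboot.)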

Is there a separate DNLC knob for ZFS to adjust here? My wild guess is
that ZFS has its own implementation, integrated with the rest of the ZFS
cache, which throws out metadata cache in favour of data cache... or
something.

Current memory usage (for some values of usage ;):
# echo ::memstat|mdb -k
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                      95584               746   75%
Anon                        20868               163   16%
Exec and libs                1703                13    1%
Page cache                   1007                 7    1%
Free (cachelist)               97                 0    0%
Free (freelist)              7745                60    6%

Total                      127004               992
Physical                   125192               978


/Tomas
This memory usage shows nearly all of memory consumed by the kernel,
probably mostly by ZFS. ZFS can't add any more DNLC entries, due to lack
of memory, without purging others. This can be seen from dnlc_nentries
being far lower than ncsize.
I don't know if there's a DMU or ARC bug filed to reduce the memory
footprint of their internal structures for situations like this, but we
are aware of the issue.
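
A quick way to compare the two is to read both symbols with mdb; this is
just a sketch using the standard kernel variables ncsize and
dnlc_nentries:

# echo ncsize/D | mdb -k
# echo dnlc_nentries/D | mdb -k

If dnlc_nentries stays well below ncsize under load, the DNLC is being
capped by memory pressure rather than by the tunable.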

Can you please check the zio buffers and the ARC status?

Here is how you can do it:
- Start mdb, i.e.: mdb -k

> ::kmem_cache

- In the output generated above, check the amount consumed by the
  zio_buf_*, arc_buf_t, and arc_buf_hdr_t caches.
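
  For convenience, a non-interactive sketch that filters just those
  caches (::kmastat reports per-cache memory in use; the egrep pattern
  is only an assumption about the cache names of interest):

  # echo ::kmastat | mdb -k | egrep 'zio_buf|arc_buf'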

- Dump the arc structure:

> arc::print struct arc

- This should give you something like the output below.
-- snip--
> arc::print struct arc
{
   anon = ARC_anon
   mru = ARC_mru
   mru_ghost = ARC_mru_ghost
   mfu = ARC_mfu
   mfu_ghost = ARC_mfu_ghost
   size = 0x3e20000 <-- current memory consumed by the ARC (including the memory used for cached data, i.e. the zio_buf_* buffers)
   p = 0x1d06a06
   c = 0x4000000
   c_min = 0x4000000
   c_max = 0x2f9aa800
   hits = 0x2fd2
   misses = 0xd1c
   deleted = 0x296
   skipped = 0
   hash_elements = 0xa85
   hash_elements_max = 0xcc0
   hash_collisions = 0x173
   hash_chains = 0xbe
   hash_chain_max = 0x2
   no_grow = 0 <-- this would be set to 1 if we have a memory crunch
}
-- snip --

And as Neil pointed out, we would probably need some way of limiting the ARC's memory consumption.
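
Until such a knob exists, one workaround that has been used is to clamp
c_max by hand from a writable mdb session. This is only a sketch: the
address printed by ::print -a will differ on every system (the one below
is made up), and writing kernel memory with mdb -kw is at your own risk:

# mdb -kw
> arc::print -a c_max
ffffffffc00b3260 c_max = 0x2f9aa800
> ffffffffc00b3260/Z 0x10000000

This caps the ARC at 256MB (0x10000000); the ARC shrinks toward the new
c_max over time rather than releasing memory instantly.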

Regards,
Sanjeev.





--
Solaris Revenue Products Engineering,
India Engineering Center,
Sun Microsystems India Pvt Ltd.
Tel: x27521 +91 80 669 27521
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss