On 11/13/06, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Generally the requirements are similar to what you would see with a
different file system.

CL>  i noticed the "free" physical memory dropped
CL>  quickly while doing "dd" on zfs files:

It's because ZFS doesn't use the page cache; it uses kernel memory. A
side effect is that you see almost all free memory drop to 0. When
memory is needed, ZFS should give memory back, but there are
circumstances (read: bugs) where it doesn't.

Are there any plans to expose the size of the ZFS cache so that the
vmstat "free" column is useful again?

Consider an environment where you run a bunch of J2EE apps that are
memory bound.  In the pre-ZFS world, a reasonable way to track "how
full" a server is was to watch vmstat's free column.  Now, the first
time backups hit the server and pull in about physmem worth of data
from disk, vmstat's free hovers close to zero until the next reboot
(or zpool export).

This (currently) leaves me with the following options to get the info
that I have without ZFS:

1) Add up the RSS of all the running processes (running pmap on each
of them) and figure out how much RAM is really being used.  This is
troublesome at best and impossible if there are too many short-running
processes.  Oh, wait... the kernel uses some memory too.

2) Use mdb to look at zfs`arc->size.  This would be more attractive if
it were exposed through kstat.
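For the archives, here's a rough sketch of option 2.  The dcmd name
and the exact output format are assumptions (they vary by build), and
reading live kernel state needs root, so the pipeline below is shown
against a captured sample rather than a live mdb session:

```shell
#!/bin/sh
# Sketch: extract the ARC size from mdb output.
# On a live system you would run something like:
#   echo ::arc | mdb -k
# and filter for the "size" line. Here we parse a captured sample so
# the extraction itself is visible and repeatable.

arc_size_bytes() {
  # Expect lines of the form "size = <bytes>"; print just the number.
  awk '$1 == "size" { print $3 }'
}

# Captured sample output (values are made up for illustration):
sample='p                         = 512000000
c                         = 1024000000
size                      = 734003200'

printf '%s\n' "$sample" | arc_size_bytes
```

Still an unsupported interface, of course, which is exactly why a
kstat would be nicer.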

Neither of those options is very attractive.  Does anyone know of
relevant RFEs in the works to improve the situation, or should I file
one and stop complaining?  :)

Mike

--
Mike Gerdts
http://mgerdts.blogspot.com/
_______________________________________________
zfs-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss