Mike Pumford <mpumf...@mudcovered.org.uk> writes:

> Now I might be reading it wrong but that suggests to me that it would
> be an awful idea to run ZFS on a system that needs memory for things
> other than filesystem caching as there is no way for those memory
> needs to force ZFS to give up its pool usage.
As I infer the kernel behavior from reading tnn@'s patch, there is a
limit on the amount of ARC storage.  On my 8G system, ARC seems to end
up around 2-2.5G and doesn't grow.  One can debate what the limit
should be -- clearly that's too big for a 4G system -- but it does
seem to be bounded.

> If I've read it right there needs to be a mechanism for memory
> pressure to force ZFS to release memory. Doing it after all the
> processes have been swapped to disk is way too late as the chances
> are the system will become non-responsive by then. From memory this
> was a problem FreeBSD had to solve as well.

It would be interesting to read a description of what they did.  That
seems easier than figuring it out from scratch.

> Even with the conventional BSD FFS I have to set vm.filemin and
> vm.filemax to quite low values to stop the kernel prioritizing file
> system cache over process memory, and that's on a system with 16GB
> of RAM. Without that tuning I'd regularly have processes effectively
> rendered unresponsive as they were completely swapped out in favor
> of FS cache.

Yes, but the FFS file cache is allowed to grow to most of memory.  The
ARC size has a limit that, if you have as much memory as the people
who wrote the code contemplated, is not nearly "most of memory".

Another thing I don't understand is how the ARC relates to the vnode
cache and the buffer cache that store file contents, and in particular
whether there are two copies of things.

> What's the equivalent lever for ZFS?

Some variable not hooked up to a sysctl!
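For reference, the FFS-side levers mentioned above are ordinary UVM
sysctls on NetBSD; the sketch below uses illustrative values, not
recommendations, and the FreeBSD line is my understanding of their
analogous ARC cap, set as a loader tunable rather than patched into
the code:

```shell
# NetBSD: bias page reclamation away from the file cache
# (percentages of RAM; values here are purely illustrative).
sysctl -w vm.filemin=5     # file-cache pages are protected below 5% of RAM
sysctl -w vm.filemax=20    # above 20%, file-cache pages are reclaimed first

# FreeBSD's rough ZFS equivalent is a tunable ARC ceiling,
# e.g. in /boot/loader.conf:
#   vfs.zfs.arc_max="2G"
```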