On 16 September 2015 at 20:21, Austin S Hemmelgarn <ahferro...@gmail.com> wrote:
> ZFS has been around for much longer, it's been mature and feature complete 
> for more than a decade, and has had a long time to improve performance wise.  
> It is important to note though, that on low-end hardware, BTRFS can (and 
> often does in my experience) perform better than ZFS, because ZFS is a 
> serious resource hog (I have yet to see a stable ZFS deployment with less 
> than 16G of RAM, even with all the fancy features turned off).

If you have a real example of ZFS becoming unstable with, say, 4 or
8GB of memory that doesn't involve attempting deduplication (which I
guess is what you mean by 'all the fancy features') on a many-TB pool,
I'd be interested to hear about it. (Psychic debugger says: possibly
somebody trying to use a large L2ARC on a pool with many/large zvols.)
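
For anyone wondering why those two cases keep coming up: both the dedup
table and the headers for blocks cached on L2ARC effectively have to
stay resident in the ARC, so their RAM cost scales with the number of
blocks rather than with how much memory the box happens to have. A
back-of-the-envelope sketch (the per-entry sizes and block sizes are
assumed ballpark figures, not exact numbers for any particular ZFS
release):

  # Rough ARC cost of dedup and L2ARC; per-entry sizes are assumptions.
  DDT_ENTRY_BYTES = 320    # assumed in-core size of one dedup-table entry
  L2ARC_HDR_BYTES = 180    # assumed ARC header per block cached on L2ARC

  def dedup_ram_gib(pool_tib, avg_block_kib=64):
      """RAM needed to keep the whole dedup table resident."""
      blocks = pool_tib * 2**40 / (avg_block_kib * 2**10)
      return blocks * DDT_ENTRY_BYTES / 2**30

  def l2arc_ram_gib(l2arc_gib, avg_block_kib):
      """ARC headers consumed by a fully populated L2ARC device."""
      blocks = l2arc_gib * 2**30 / (avg_block_kib * 2**10)
      return blocks * L2ARC_HDR_BYTES / 2**30

  # Deduping a 9 TiB pool of 64 KiB blocks:
  print("DDT:   ~%.0f GiB of ARC" % dedup_ram_gib(9))            # ~45 GiB
  # A 256 GiB L2ARC full of 8 KiB zvol blocks:
  print("L2ARC: ~%.1f GiB of headers" % l2arc_ram_gib(256, 8))   # ~5.6 GiB

Small zvol block sizes are what make the L2ARC case bite: more blocks
per GB of cache device means proportionally more headers pinned in RAM.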

My home fileserver has been running ZFS for about 5 years, on a system
maxed out at 4GB of RAM; it's currently up to ~9TB of data. The only
stability problems I ever had were towards the beginning, when I was
using zfs-fuse because zfsonlinux wasn't ready yet *and* I was trying
out deduplication.

I have a couple of work machines with 2GB RAM and pools currently
around 2.5TB full; no problems with these either in the couple of
years they've been in use, though granted these are lightly loaded
machines since what they mostly do is receive backup streams.
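
By 'receive backup streams' I mean they mostly sit on the receiving end
of zfs send | zfs receive pipelines, which is sequential bulk writing
rather than anything cache-hungry. A minimal sketch of that sort of
job, with the host, dataset and snapshot names made up for
illustration:

  # Incremental backup stream pulled from a source host into a local
  # pool; names are hypothetical.
  import subprocess

  send = ["ssh", "backup-client", "zfs", "send", "-i",
          "tank/data@yesterday", "tank/data@today"]
  recv = ["zfs", "receive", "-F", "backup/data"]

  sender = subprocess.Popen(send, stdout=subprocess.PIPE)
  subprocess.run(recv, stdin=sender.stdout, check=True)
  sender.stdout.close()
  if sender.wait() != 0:
      raise RuntimeError("zfs send failed")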

Bear in mind that these are Linux machines, and zfsonlinux's memory
management is known to be inferior to ZFS's on Solaris and FreeBSD:
the ARC does not integrate with the Linux page cache, but instead
grabs a [configurable] chunk of memory of its own, and it doesn't
always do a great job of releasing it in response to memory pressure.
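
The usual mitigation on small-memory Linux boxes is to cap the ARC
explicitly through the zfs_arc_max module parameter instead of letting
it pick its own ceiling. A minimal sketch, using a half-of-RAM cap
purely for illustration:

  # Cap the zfsonlinux ARC by setting zfs_arc_max (value in bytes).
  # The half-of-physical-RAM policy is just an illustrative choice.
  import os

  ARC_MAX = "/sys/module/zfs/parameters/zfs_arc_max"
  cap = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") // 2

  # Runtime change (needs root; the ARC shrinks towards the new limit):
  with open(ARC_MAX, "w") as f:
      f.write(str(cap))

  # For a persistent setting, the equivalent /etc/modprobe.d/zfs.conf
  # line is: options zfs zfs_arc_max=<bytes>
  print("options zfs zfs_arc_max=%d" % cap)

Capping it doesn't make the memory management any smarter, it just
bounds how much the ARC is allowed to take.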