On 2015-09-25 09:12, Jim Salter wrote:
Pretty much bog-standard, as ZFS goes.  Nothing different than what's
recommended for any generic ZFS use.

* set blocksize to match hardware blocksize - 4K drives get 4K
blocksize, 8K drives get 8K blocksize (Samsung SSDs)
* LZ4 compression is a win.  But it's not like anything sucks without
it.  No real impact on performance for most use, + or -.  Just saves space.
* at least 4GB allocated to the ARC.  General rule of thumb: half the RAM
belongs to the host (which is mostly ARC), half belongs to the guests.
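
For what it's worth, those three bullets boil down to a handful of knobs.  A
rough sketch (pool name "tank" and device names are just examples, and the
ARC cap below assumes a 16GB host):

```shell
# Match ashift to the drive's physical sector size at pool creation time:
# ashift=12 for 4K-sector drives, ashift=13 for 8K (e.g. some Samsung SSDs).
# ashift cannot be changed after the vdev is created.
zpool create -o ashift=12 tank /dev/sda

# Turn on LZ4 compression for the whole pool; datasets inherit it.
zfs set compression=lz4 tank

# Cap the ARC at half of a 16GB host (value is in bytes).
# Put "options zfs zfs_arc_max=8589934592" in /etc/modprobe.d/zfs.conf
# to make it persistent across reboots.
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
```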

I strongly prefer pool-of-mirrors topology, but nothing crazy happens if
you use striped-with-parity instead.  I used to use RAIDZ1 (the rough
equivalent of RAID5) quite frequently, and there wasn't anything
amazingly sucky about it; it performed at least as well as you'd expect
ext4 on mdraid5 to perform.
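
The two topologies look like this at creation time (device names are
illustrative):

```shell
# Pool of mirrors: two 2-way mirror vdevs, striped together.
# Less usable capacity than RAIDZ, but faster resilvers and more IOPS.
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

# Striped-with-parity alternative: one RAIDZ1 vdev (rough RAID5 equivalent),
# one drive's worth of parity across four disks.
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
```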

ZFS might or might not do a better job of managing fragmentation; I
really don't know.  I /strongly/ suspect the design difference between
the kernel's simple FIFO page cache and ZFS' weighted cache makes a
really, really big difference.

I've been coming to that same conclusion myself over the years.  I would
really love to see a drop-in replacement for Linux's pagecache with better
performance (I don't remember for sure, but I seem to recall the native
pagecache isn't straight FIFO).  The likelihood of that actually getting
into mainline is slim to none, though -- can you imagine how fast XFS or
ext* would be with a good caching algorithm?
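
To make the intuition concrete, here's a toy simulation of why a purely
recency-ordered cache loses to a frequency-weighted one when one-off
sequential reads pollute the cache.  This is purely illustrative -- a
plain FIFO and a plain LFU, not the actual pagecache or ARC algorithms:

```python
from collections import OrderedDict

class FIFOCache:
    """Evicts the oldest-inserted page, no matter how often it's been hit."""
    def __init__(self, size):
        self.size = size
        self.pages = OrderedDict()
        self.hits = self.misses = 0

    def access(self, page):
        if page in self.pages:
            self.hits += 1
        else:
            self.misses += 1
            if len(self.pages) >= self.size:
                self.pages.popitem(last=False)  # drop the oldest entry
            self.pages[page] = True

class WeightedCache:
    """Evicts the least-frequently-hit page (a crude nod to ARC's MFU side)."""
    def __init__(self, size):
        self.size = size
        self.counts = {}
        self.hits = self.misses = 0

    def access(self, page):
        if page in self.counts:
            self.hits += 1
            self.counts[page] += 1
        else:
            self.misses += 1
            if len(self.counts) >= self.size:
                # drop whichever cached page has been hit the least
                del self.counts[min(self.counts, key=self.counts.get)]
            self.counts[page] = 1

# A hot working set of 8 pages, polluted by one-off "scan" pages.
workload = []
for i in range(1000):
    workload.append(i % 8)
    if i % 10 == 0:
        workload.append(100 + i)

fifo, weighted = FIFOCache(16), WeightedCache(16)
for p in workload:
    fifo.access(p)
    weighted.access(p)

print(f"FIFO hits:     {fifo.hits}/{len(workload)}")
print(f"Weighted hits: {weighted.hits}/{len(workload)}")
```

The scan pages keep pushing the hot working set out of the FIFO cache, while
the frequency-weighted cache sacrifices the single-use scan pages instead and
keeps its hit rate high.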



On 09/25/2015 09:04 AM, Austin S Hemmelgarn wrote:
you really need to give specifics on how you have ZFS set up in that
case.



