2015-09-25 16:52 GMT+03:00 Jim Salter <j...@jrs-s.net>:
> Pretty much bog-standard, as ZFS goes.  Nothing different than what's
> recommended for any generic ZFS use.
>
> * set blocksize to match hardware blocksize - 4K drives get 4K blocksize, 8K
> drives get 8K blocksize (Samsung SSDs)
> * LZ4 compression is a win.  But it's not like anything sucks without it.
> No real impact on performance for most use, + or -. Just saves space.
> * More than 4GB allocated to the ARC.  General rule of thumb: half the RAM belongs
> to the host (which is mostly ARC), half belongs to the guests.
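> A rough sketch of those settings as commands (pool, dataset, and device
> names are placeholders; note the on-disk sector size is fixed at pool
> creation via ashift, while recordsize/volblocksize can be set per dataset):
>
> ```shell
> # Match allocation size to the drive's physical sector size at pool
> # creation (ashift=12 -> 4K sectors, ashift=13 -> 8K, e.g. some SSDs).
> zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb
>
> # For a zvol backing a VM, match the guest's block size too.
> zfs create -V 100G -o volblocksize=8K tank/vm1
>
> # Lightweight compression: saves space, little performance impact.
> zfs set compression=lz4 tank
>
> # Cap the ARC at half of RAM, e.g. 8 GiB on a 16 GiB host
> # (Linux module parameter, value in bytes).
> echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
> ```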
>
> I strongly prefer pool-of-mirrors topology, but nothing crazy happens if you
> use striped-with-parity instead.  I used to use RAIDZ1 (the rough equivalent
> of RAID5) quite frequently, and there wasn't anything amazingly sucky about
> it; it performed at least as well as you'd expect ext4 on mdraid5 to
> perform.
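>
> The two topologies above, side by side (device names are placeholders):
>
> ```shell
> # Pool of mirrors (RAID10-like): better IOPS, and easy to grow
> # by attaching another mirror vdev later.
> zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
>
> # RAIDZ1 (rough RAID5 equivalent): more usable space for the same
> # disks, single-parity redundancy, lower random-write performance.
> zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc
> ```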
>
> ZFS might or might not do a better job of managing fragmentation; I really
> don't know.  I strongly suspect the design difference between the kernel's
> simple FIFO page cache and ZFS' weighted cache makes a really, really big
> difference.
>
>
>
> On 09/25/2015 09:04 AM, Austin S Hemmelgarn wrote:
>> you really need to give specifics on how you have ZFS set up in that case.
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

FYI:
the Linux page cache uses an LRU-based algorithm (a two-list active/inactive
approximation), and in the general case it works well enough.
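To illustrate the difference being discussed: a plain LRU cache evicts
purely by recency, so a single large sequential scan can flush out hot
data, which is one scenario a recency-plus-frequency weighted cache (the
idea behind ZFS's ARC) is built to survive. A minimal Python sketch of
the plain-LRU side (not actual kernel or ZFS code):

```python
from collections import OrderedDict

class LRUCache:
    """Plain LRU: the least recently touched entry is evicted first."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def access(self, key):
        if key in self.data:
            self.data.move_to_end(key)     # recently used -> back of queue
            return True                     # cache hit
        if len(self.data) >= self.capacity:
            self.data.popitem(last=False)   # evict least recently used
        self.data[key] = None
        return False                        # cache miss

cache = LRUCache(4)
for page in [1, 2, 1, 2, 1, 2]:
    cache.access(page)          # pages 1 and 2 are "hot"
for page in [10, 11, 12, 13]:
    cache.access(page)          # one-off sequential scan fills the cache
print(cache.access(1))          # False: the hot page got evicted by the scan
```

An ARC-style policy keeps frequently hit entries in a separate list, so
the one-off scan above would not push out pages 1 and 2.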

-- 
Have a nice day,
Timofey.
