I'm getting down to the wire on the home storage server I've been working on building for altogether too long. Right now I'm trying to finalize the zpool/zfs settings I'll use to create the rpool and the bulk storage pool.

The rpool is going to be a mirror of the first partitions of two 256G SSDs (I haven't decided how big to make that partition; it looks like a default install of OmniOS takes up less than 1G, and I don't anticipate installing much more at the OS level).
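Conceptually that's something like the following, though the OmniOS installer would normally create the rpool itself (device names here are hypothetical placeholders):

    zpool create rpool mirror c1t0d0s0 c2t0d0s0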

The storage pool is going to be made up of two 6-drive RAIDZ2 vdevs of WD Red 3TB drives, with a partition of an Intel DC S3700 as log, and the remaining two partitions of the rpool SSDs as L2ARC.
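Roughly, the creation command I'm envisioning would look like this (the pool name and all device names below are hypothetical placeholders):

    zpool create tank \
        raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 \
        raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 \
        log c5t0d0s0 \
        cache c1t0d0s1 c2t0d0s1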

There's going to be a wide variety of stuff stored on the storage pool, from MP3/FLAC/AAC music, to DVD/Blu-ray rips, to basic home directory stuff like documents and pictures. A big chunk will be dedicated to my MythTV system and contain OTA ATSC MPEG-2 transport streams.

For the rpool I plan to enable lz4 compression for the entire pool. For the storage pool I will probably disable it on filesystems containing mostly incompressible stuff and enable it on home-directory-like filesystems containing documents and other things more likely to compress. I think I'm going to set atime=off globally on both pools; I can't think of any use I have for access times that would be worth the extra write load.
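Concretely, I'm picturing something along these lines (the "tank" pool and child filesystem names are just placeholders):

    zfs set compression=lz4 rpool
    zfs set atime=off rpool
    zfs set compression=lz4 tank
    zfs set atime=off tank
    zfs set compression=off tank/media     (mostly incompressible rips/recordings)
    zfs set compression=lz4 tank/home      (inherited anyway; shown for clarity)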

While I'm going to leave it at the default for the rpool, I'm thinking I'm going to set failmode=continue on the storage pool.
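That is (placeholder pool name again):

    zpool set failmode=continue tank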

If I understand correctly, there's really no value in changing the default recordsize unless you're using an application like a database that you know always reads in chunks smaller than the record, so reading in the entire record would be a waste. I thought I remembered reading somewhere that one of the ZFS forks had increased the maximum recordsize to something bigger, like 2M or 10M, but I can't seem to find a reference to it now. For filesystems which store only large files (OTA HD recordings are generally multi-gigabyte) a larger recordsize seems like it would be more efficient. Is anybody looking at increasing the maximum recordsize in illumos ZFS?
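For the database case I mean the usual tuning of matching the recordsize to the application's I/O size, e.g. for a database with 8K pages (hypothetical dataset name):

    zfs set recordsize=8k tank/db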

While I understand what the primarycache and secondarycache properties control, I'm not really clear on when it might be desirable to change them. Are there any routine scenarios where you might want to set them to something other than the default, or is that limited to specific deployments such as databases, where you want to avoid double-buffering data in memory?
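The double-buffering case I have in mind would be something like caching only metadata in the ARC for a dataset whose application does its own caching (hypothetical dataset name):

    zfs set primarycache=metadata tank/db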

I was planning on making the storage available to my MythTV server over NFS. I was thinking about setting logbias=throughput for that filesystem, as intuitively that seems a better tuning for multiple streams being written at once. Or I could possibly set sync=disabled on that filesystem to completely avoid the overhead of NFS synchronous writes. If there's a failure during a recording and I lose the last five minutes of it, oh well…
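So one of (placeholder dataset name):

    zfs set logbias=throughput tank/mythtv
or, accepting possible loss of the tail of an in-flight recording on a crash:
    zfs set sync=disabled tank/mythtv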

Moving on to kernel tunables, I was thinking of enabling L2ARC prefetching (l2arc_noprefetch=0). I'm also thinking of increasing l2arc_write_max to better take advantage of more modern SSDs, but I'm not sure what to set it to. At some point will the default ZFS kernel tunables be updated to match currently available devices rather than the more limited flash devices of yesteryear?
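On illumos these would go in /etc/system; a sketch of what I mean, where the write_max value is just a guess on my part and not a recommendation:

    * /etc/system
    set zfs:l2arc_noprefetch = 0
    * default is 8MB (8388608); something larger for a modern SSD, e.g. 64MB:
    set zfs:l2arc_write_max = 67108864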

Any thoughts on these mentioned settings/tunables, or recommendations of other things I might want to do?

Thanks much…

