Erik and Richard: thanks for the information -- this is all very good stuff.

Erik Trimble wrote:
Something occurs to me: how full is your current 4 vdev pool? I'm assuming it's not over 70% or so.

Yes, by adding another 3 vdevs, any writes will be biased towards the "empty" vdevs, but that's for less-than-full-stripe-width writes (right, Richard?). That is, if I'm doing a write that would be full-stripe size, and I've got enough space on all vdevs (even if certain ones are much fuller than others), then it will write across all vdevs.

So, while you can't get a virgin pool out of this, I think you can get stuff reasonably well-balanced by recopying then deleting say 1TB (or less) of data at a time.


I'm giving this a shot now. The four top-level vdevs were just under 50% full, so this should give us good distribution, and in fact it looks like the data is being spread across the top-level vdevs almost equally. That makes sense, since each vdev still has plenty of free space, well under the ~70% mark.
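For anyone following along, a rough sketch of the copy-then-delete rebalance I'm doing; the pool name `tank` and dataset paths are placeholders for illustration, not our actual layout:

```shell
# Check how full each top-level vdev is (per-vdev capacity breakdown).
zpool list -v tank

# Rebalance a chunk of data (roughly 1 TB or less at a time):
# copying rewrites the blocks, and the allocator biases new writes
# toward the emptier vdevs; then the old copy is removed.
cp -rp /tank/data/chunk01 /tank/data/chunk01.rebal
rm -rf /tank/data/chunk01
mv /tank/data/chunk01.rebal /tank/data/chunk01

# Re-check the per-vdev distribution afterwards.
zpool list -v tank
```

Repeating this over the dataset a chunk at a time won't give a truly virgin pool, but it spreads existing data across all seven vdevs reasonably well.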


Richard Elling wrote:
For a lot of reasons, I would consider creating NEW zpools when you add new disk space in large lots, rather than adding vdevs to existing zpools. It should prove no harder to manage, and allows you to get a virgin zpool which will provide the best performance.


Yes, that's what I will strive for, although money flows in interesting ways sometimes, and occasionally we'll end up building a single pool across two or even three expansions. Either way, the write-allocation behavior of zpool/zfs is good for me to understand.

Best,
Jesse
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
