On Sun, January 8, 2012 00:28, Bob Friesenhahn wrote:
>
> I think that I would also be interested in a system which uses the
> so-called spare disks for more protective redundancy but then reduces
> that protective redundancy in order to use that disk to replace a
> failed disk or to automatically enlarge the pool.
>
> For example, a pool could start out with four-way mirroring when there
> is little data in the pool.  When the pool becomes more full, mirror
> devices are automatically removed (from existing vdevs), and used to
> add more vdevs.  Eventually a limit would be hit so that no more
> mirrors are allowed to be removed.
>
> Obviously this approach works with simple mirrors but not for raidz.
>
> Bob
> --
> Bob Friesenhahn
> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
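To make Bob's policy concrete, it presumably boils down to a threshold
table along these lines (a rough sketch with invented numbers and names,
nothing like real ZFS code):

/* Hypothetical sketch of the mirror-scaling policy quoted above:
 * keep 4-way mirrors while the pool is nearly empty, detach mirror
 * sides as usage grows, and never drop below 2-way. The thresholds
 * are made up for illustration. */
static int
target_mirror_ways(unsigned used_pct)
{
    if (used_pct < 25)
        return (4);     /* plenty of space: maximum redundancy */
    if (used_pct < 50)
        return (3);
    return (2);         /* floor: still survives one drive loss */
}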

I actually disagree about raidz. I have often thought that a "dynamic
raidz" would be a great feature.

For instance, say you have a four-disk raidz1. What you are really
specifying is that the vdev must survive the loss of a single drive. So,
starting from an empty vdev, it begins by writing two copies of each
block, effectively turning the vdev into a pair of striped mirrors.
Mirrored copies are quicker to write and quicker to resilver than parity,
and you would likely get a read speed increase too.
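
In rough code (hypothetical names and cut-over point; the real allocator
would look nothing like this), the write-path decision might be:

#include <stdint.h>

/* Two ways to lay out a block on the 4-disk vdev, both of which
 * survive the loss of a single drive. */
enum layout {
    LAYOUT_MIRROR2,   /* two full copies, striped as mirror pairs */
    LAYOUT_RAIDZ1     /* data plus one parity column */
};

/* Assumed cut-over: 2-copy mirroring can use at most ~50% of raw
 * capacity, so switch to parity somewhere below that. */
#define PARITY_CUTOVER_PCT 45

static enum layout
choose_write_layout(uint64_t used_bytes, uint64_t raw_bytes)
{
    uint64_t used_pct = used_bytes * 100 / raw_bytes;

    return (used_pct < PARITY_CUTOVER_PCT ? LAYOUT_MIRROR2
                                          : LAYOUT_RAIDZ1);
}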

As the vdev starts to fill up, new writes switch to parity-based
redundancy, and "older" mirrored data is gradually converted to it as
well. Performance drops a bit, but it happens slowly. In the meantime,
any older blocks not yet converted are still quicker to read and
resilver.
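
The conversion itself could be a low-priority background pass, a bit
like a scrub that rewrites instead of repairs. Again purely a sketch,
reusing the enum from above and assuming a rewrite_as_raidz1() helper
that does the actual read-rewrite-free cycle:

#include <stddef.h>
#include <stdint.h>

struct blk {
    uint64_t    birth_txg;   /* a real pass would walk oldest-first */
    enum layout layout;      /* current on-disk layout (see above) */
};

/* Walk the allocated blocks and rewrite any still stored as mirror
 * pairs into the parity layout, freeing the second copy each time. */
static void
convert_old_mirrors(struct blk *blks, size_t nblks,
    void (*rewrite_as_raidz1)(struct blk *))
{
    for (size_t i = 0; i < nblks; i++) {
        if (blks[i].layout == LAYOUT_MIRROR2)
            rewrite_as_raidz1(&blks[i]);
    }
}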

This is only a theory, but it is certainly something that could be
considered. It would probably require rewriting a lot of the raidz code,
though.
