Sunil wrote:
If you like, you can later add a fifth drive
relatively easily by replacing one of the slices with a whole drive.


How does this affect my available storage if I were to replace both of those 
sparse 500GB files with a real 1TB drive? Will it be the same? Or will I have 
expanded my storage? If I understand correctly, I would need to replace the 
other 3 drives with 1TB drives as well to expand beyond 3x500GB.

So, in essence, I can go from 3x500GB to 3x1000GB in-place with this scheme in 
the future if I have the money to upgrade all the drives to 1TB, WITHOUT 
needing any movement of data to temp? Please say yes!....:-)

It should work to replace devices the way you describe. The only time you need some temp storage space is if you want to change the arrangement of devices that make up the pool, e.g. to go from striped mirrors to RAIDZ2, or RAIDZ1 to RAIDZ2, or some other combination. If you just want to replace devices with identically sized or larger devices, you don't need to move the data anywhere.
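A rough sketch of an in-place replacement (pool and device names here are placeholders; c1t2d0 is the old 500GB device, c1t5d0 the new 1TB one):

  # With both drives attached, swap the old device for the new one:
  zpool replace tank c1t2d0 c1t5d0

  # Watch the resilver progress; the old device stays in the pool
  # until the new one has fully resilvered:
  zpool status tank

Repeat for each drive in turn, letting each resilver finish before starting the next, and the whole pool gets upgraded without any data leaving it.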

The capacity will expand to the size of the smallest member device, so you only see the extra space once every device in the vdev has been upgraded. In some OpenSolaris builds I believe this happened automatically when all member devices had been upgraded. At some point in later builds I think it was changed to require manual intervention, to prevent problems like the pool suddenly growing to fill all the new big drives when the admin really wanted the unused space to stay unused (say for partition/slice-based short stroking, or when smaller drives were being kept around as spares). If ZFS had the ability to shrink and use smaller devices this would not have been as big of a problem.
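For what it's worth, on builds that want manual intervention I believe the knobs are the autoexpand pool property and zpool online -e, roughly like this (pool and device names are again placeholders):

  # Let the pool grow automatically as devices are replaced
  # with larger ones:
  zpool set autoexpand=on tank

  # Or expand one already-replaced device by hand:
  zpool online -e tank c1t5d0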

As I understand it from the documentation, replacement can happen two ways. First, you can connect the replacement device to the system while the original device is still working, and then issue the replace command. I think this technique is safe, as the original device remains available during the replacement procedure and could be used to provide redundancy to the rest of the pool until the new device finishes resilvering. (Does anyone know if this is really the case, i.e. whether redundancy is preserved during the replacement operation when both the original and new devices are connected simultaneously and both are functioning correctly? One way to verify this might be to run zpool replace on a non-redundant pool while both devices are connected.)
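You could try that experiment without spare hardware, using throwaway file vdevs (file names and sizes here are made up):

  # Build a disposable, non-redundant pool from files:
  mkfile 128m /tmp/old.img /tmp/new.img
  zpool create testpool /tmp/old.img

  # Replace while both "devices" are present, and watch whether
  # the pool stays healthy for the whole resilver:
  zpool replace testpool /tmp/old.img /tmp/new.img
  zpool status testpool

  # Clean up afterwards:
  zpool destroy testpool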

The second way is to (physically) disconnect the original device and connect the new device in its place. The pool will be degraded because a member device is missing: with RAIDZ1 you have no redundancy remaining; with RAIDZ2 you still have one level of redundancy intact. The zpool replace command should be able to rebuild the missing data onto the new device.
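A sketch of that case, again with placeholder names. If the new disk sits in the same physical location as the old one, zpool replace takes a single device argument:

  # Old disk pulled, new disk inserted in the same slot:
  zpool replace tank c1t2d0

  # Check the resilver and the pool's (degraded) state:
  zpool status tank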
