We have a Thumper that we got at a good price through the Sun Educational Grant 
program (thank you, Sun!), but it came populated with 500 GB drives. The box will 
be used as a virtual tape library and a general-purpose NFS/iSCSI/Samba file 
server for users' stuff. In about two years we will probably want to reload it 
with whatever the big >1 TB drive of the day is. That gives me a planning 
problem, since currently you can't shrink a zpool.

I can think of a few approaches:

1) Start with two zpools. This lets us do the upgrade just before utilization 
hits 50%: migrate everyone off pool 1, destroy it, rebuild it on the new disks, 
and then either repeat the process for pool 2 or join the pools together.
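
For what it's worth, the migration step could look something like this (pool 
and device names are invented here, and I'd script the send/receive per 
filesystem in practice):

```shell
# Hypothetical sketch of the option-1 migration; pool/device names are made up.
# Snapshot everything on pool1 and replicate it to pool2.
zfs snapshot -r pool1@migrate
zfs send -R pool1@migrate | zfs receive -d pool2   # -R carries descendants and properties

# Once clients point at pool2, retire pool1 and rebuild it on the new disks.
zpool destroy pool1
zpool create pool1 raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0
```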

2) Replace the disks with new, bigger ones and slice each in half. Use one 
slice to rejoin the existing pool, and the other slice to start a new pool.
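
That would mean labeling each 1 TB disk with two ~500 GB slices in format(1M), 
then something like the following (device names invented again; note that ZFS 
only turns on the disk write cache when it owns the whole disk, so slicing 
costs us that):

```shell
# Hypothetical sketch of option 2: each new 1 TB disk carries two ~500 GB slices.
# s0 takes over the old disk's spot in the existing pool...
zpool replace tank c0t0d0 c0t0d0s0
# ...and once every s0 is swapped in, the s1 slices form a second pool.
zpool create tank2 raidz c0t0d0s1 c1t0d0s1 c2t0d0s1 c3t0d0s1
```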

3) Unlikely: mirror the existing zpool onto some kind of external vdev. I've 
actually tested this idea: I once mirrored a physical disk with a file-backed 
vdev over NFS, and to my amazement it worked. Unfortunately the Thumper is the 
biggest box we have right now; we don't have anything else with 18+ TB of space.

3 1/2): Tape, like failure, is always an option.

Either way, with 1 or 2 we're stuck with two pools on the same host, but since 
I have 40+ disks to spread the I/O over, I'm not too worried.

Option 4) If I just replace the 500 GB disks one by one with 1 TB disks in a 
single existing zpool, will the pool magically have twice as much space once I 
replace the very last disk? I don't have any way to test this. In the past I 
have been able to do this with *some* RAID5 array controllers.
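
If anyone can confirm option 4 works, the procedure I have in mind is just the 
obvious loop (hypothetical device names; each replacement has to finish 
resilvering before the next disk comes out):

```shell
# Hypothetical option-4 sketch: swap 500 GB disks for 1 TB ones in place.
for disk in c0t0d0 c0t1d0 c0t2d0; do    # ...and so on through all 40+ disks
    # Physically swap the drive, then resilver onto the new disk in the same slot.
    zpool replace tank $disk
    # Wait until "zpool status tank" shows the resilver complete before continuing.
done
```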

If you've been through this drill, let us know how you handled it. Thanks in 
advance,

-W Sanders
 St Marys College of CA
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
