I believe that, long term, folks are working on solving this problem,
and that bp_rewrite is needed for this work.

In the mid/short term, the solution to me at least seems to be to
migrate your data to a new zpool on the newly configured array, etc.
Most enterprises don't incrementally upgrade an array (except perhaps
to add more drives, etc.)  Disks are cheap enough that it's usually not
that hard to justify a full upgrade every few years.  (Frankly,
spinning rust MTBFs are still low enough that I think most sites wind
up assuming that they are going to have to replace their storage on a
3-5 year cycle anyway.  We've not yet seen what SSDs will do to that
trend, I think.)
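The migration path described above can be sketched roughly as follows.
The pool names, disk names, and raidz2 layout are purely illustrative,
and this assumes enough spare disks to build the new pool alongside the
old one:

```shell
# Hypothetical names: "oldpool", "newpool", and the c*t*d0 disks are
# placeholders for your actual devices.

# Create the new pool with the desired geometry (here raidz2).
zpool create newpool raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

# Take a recursive snapshot of the old pool and replicate the whole
# hierarchy, preserving properties (-R), without mounting on receive (-u).
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs recv -Fdu newpool

# After verifying the copy, retire the old pool.
zpool export oldpool
```

The send/recv step can also be done incrementally (snapshot, send -R -i,
repeat) to shrink the final cutover window.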

        - Garrett


On Wed, 2010-07-07 at 10:54 -0700, Marty Scholes wrote:
> > I think the request is to remove vdev's from a pool.
> >  Not currently possible.  Is this in the works?
> 
> Actually, I think this is two requests, hashed over hundreds of times in this 
> forum:
> 1. Remove a vdev from a pool
> 2. Nondisruptively change vdev geometry
> 
> #1 above has a stunningly obvious use case.  Suppose, despite your best 
> efforts, QA, planning and walkthroughs, you accidentally fat-finger a "zpool 
> attach" as a "zpool add" and unintentionally add a disk to a pool as a new 
> top-level vdev.  There is no way to reverse that operation without 
> *significant* downtime.
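To make the fat-finger scenario concrete, here is a sketch of the two
commands side by side (pool and device names are hypothetical):

```shell
# Intended: attach c2t0d0 as a mirror of the existing disk c1t0d0,
# turning a single disk into a redundant mirror vdev.
zpool attach tank c1t0d0 c2t0d0

# The fat-finger: one word different, and c2t0d0 is instead added as a
# brand-new top-level vdev.  ZFS immediately starts striping writes
# across it, and (at least as of this writing) there is no command to
# remove it again.
zpool add tank c2t0d0
```

The operations differ by a single word on the command line, which is
what makes the mistake so easy and the lack of an undo so painful.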
> 
> I have discussed #2 above multiple times; it has at least one obvious use 
> case.  Suppose, just for a minute, that over the years since you deployed a 
> zfs pool with nearly constant uptime, your business needs change and you 
> need to add a disk to a RAIDZ vdev, or move from RAIDZ1 to RAIDZ2, or disks 
> have grown so big that you wish to remove a disk from a vdev.
> 
> The responses from the community on the two requests seem to be:
> 1. Don't ever make this mistake, and if you do, then tough luck.
> 2. No business ever changes, technology never changes, zfs deployments 
> have short lives, or businesses are perfectly ok with large downtimes to 
> effect geometry changes.
> 
> Both responses seem antithetical to the zfs ethos of survivability in the 
> face of errors and nondisruptive flexibility.
> 
> Honestly, I still don't understand the resistance to adding those features.

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss