MP wrote:
> Hi,
> I hope someone can help cos ATM zfs' logic seems a little askew.
> I just swapped a failing 200gb drive that was one half of a 400gb gstripe
> device which I was using as one of the devices in a 3 device raidz1. When the
> OS came back up after the drive had been changed, the necessary metadata was
> of course not on the new drive so the stripe didn't exist. Zfs understandably
> complained it couldn't open the stripe, however it did not show the array as
> degraded. I didn't save the output, but it was just like described in this
> thread:
>
> http://www.nabble.com/Shooting-yourself-in-the-foot-with-ZFS:-is-quite-easy-t4512790.html
>
> I recreated the gstripe device under the same name stripe/str1 and assumed I
> could just:
>
> # zpool replace pool stripe/str1
> invalid vdev specification
> stripe/str1 is in use (r1w1e1)
>
> It also told me to try -f, which I did, but was greeted with the same error.
> Why can I not replace a device with itself?
> As the man page describes just this procedure I'm a little confused.
> Try as I might (online, offline, scrub) I could not get the array to
> rebuild, just like the guy described in that thread above. I eventually
> resorted to recreating the stripe with a different name stripe/str2. I could
> then perform a:
>
> # zpool replace pool stripe/str1 stripe/str2
>
> Is there a reason I have to jump through these seemingly pointless hoops to
> replace a device with itself?
> Many thanks.
Yes. From the fine manual on zpool:

     zpool replace [-f] pool old_device [new_device]

         Replaces old_device with new_device. This is equivalent to
         attaching new_device, waiting for it to resilver, and then
         detaching old_device.
         ...
         If new_device is not specified, it defaults to old_device.
         This form of replacement is useful after an existing disk
         has failed and has been physically replaced. In this case,
         the new disk may have the same /dev/dsk path as the old
         device, even though it is actually a different disk. ZFS
         recognizes this.

For a stripe, you don't have redundancy, so you cannot replace the disk
with itself. You would have to specify the [new_device].

I've submitted CR 6612596 for a better error message and CR 6612605 to
mention this in the man page.
 -- richard

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
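For the archives, the workflow that worked for MP can be sketched as a
transcript. Pool and vdev names are taken from the post; the disk device
names (/dev/ad1, /dev/ad2) are hypothetical, and the gstripe invocation
assumes FreeBSD's gstripe(8) with GEOM metadata labeling:

    # Recreate the stripe under a NEW name, since replacing a
    # non-redundant vdev with itself cannot work:
    # gstripe label -v str2 /dev/ad1 /dev/ad2

    # Replace the dead vdev with the new one and let it resilver:
    # zpool replace pool stripe/str1 stripe/str2

    # Watch resilver progress:
    # zpool status pool

The key point is the explicit [new_device] argument: the one-argument
form `zpool replace pool stripe/str1` only applies when ZFS can resilver
onto a fresh disk at the same path, which requires redundancy elsewhere
in the vdev to rebuild from.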