William Schumann wrote:
>>> A mirror slice vdev can be declared within the ai_target_device:
>>>  - unique device identifier (ctds, mpxio, /device node, reference
>>>    name of a selected disk)
>>>  - slice number
>>> This results in the command:
>>>    zpool create <poolname> <install slice> [<mirror vdev>]
>>> If the pool exists, doing the "zpool create" will overwrite the
>>> existing pool.
>>>
>>> The target_device_option_overwrite_root_zfs_pool can be done as follows:
>>>  - import the named rpool
>>>  - delete the datasets that comprise the rpool, using
>>>    "zfs destroy <dataset>"
>>
>> Doing this sounds like you're really reusing an existing pool.
>> Is that the intent of this parameter? If not, why wouldn't
>> we destroy the pool and recreate it? If I'm reinstalling, I
>> don't want to see crufty attributes on the pool from my previous
>> install.
> Well, we could reuse an existing pool as it was defined. The case
> covered here would allow the user to use the existing pool definition,
> saving the user from having to redefine the entire pool or from having
> to know any details about how the pool was defined in the first place.
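The mirror-vdev declaration above maps onto the zpool invocation in a straightforward way. A minimal Python sketch of how an installer might assemble that command line; the function name, parameters, and ctds device names are illustrative placeholders, not the actual AI manifest schema:

```python
def build_zpool_create(poolname, install_slice, mirror_slice=None):
    """Assemble the 'zpool create' argument list described above.

    If a mirror slice vdev was declared within the ai_target_device,
    the install slice and mirror slice are joined under 'mirror';
    otherwise the pool is created on the install slice alone.
    """
    if mirror_slice is not None:
        return ["zpool", "create", poolname, "mirror",
                install_slice, mirror_slice]
    return ["zpool", "create", poolname, install_slice]

# Placeholder ctds names for illustration:
print(" ".join(build_zpool_create("rpool", "c0t0d0s0", "c0t1d0s0")))
# -> zpool create rpool mirror c0t0d0s0 c0t1d0s0
```

As noted above, issuing this "zpool create" when a pool of the same name already exists overwrites it (and zpool typically demands -f before it will clobber in-use devices).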
So then there should be a use case defined for wanting to create a pool
from scratch, even if one by that name already exists.

>>
>>> <ai_target_device>
>>>   <reference_name>mirrordev</reference_name>
>>>   <!-- assume that disk is appropriate for raid2 mirror -->
>>>   <target_device_overwrite_disk/>
>>>   <!-- erase disk, use whole disk for slice 0 -->
>>> </ai_target_device>
>>
>> This third ai_target_device here seems to be what's defining
>> the 'mirrordev' reference name, and also its usage definition
>> (the fact that it should be erased and relaid out using s0),
>> but then the first ai_target_device seems to also be defining
>> (or maybe just assuming) the usage definition of 'mirrordev'
>> by saying 'mirrordev.s0'.
> The first device is using it, the third is defining it.

So then there's a dependency there, in that the first specification
depends on the third (the s0 part), but the specifications are peers.
This will be a nightmare for semantic validation, and if not done
there, then a nightmare for the program consuming this wad of data.

-ethan
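The cross-reference problem raised above (one entry naming 'mirrordev.s0' while a peer entry defines 'mirrordev') is at least mechanically checkable with a two-pass walk over the entries. A rough Python sketch, with an invented entry structure (the 'reference_name' and 'uses' keys) standing in for the real manifest:

```python
def validate_targets(targets):
    """Two-pass check: collect every reference_name definition, then
    verify each '<refname>.s<N>' use resolves to a defined device.

    'targets' is a list of dicts; 'reference_name' marks a definition
    and 'uses' marks a consumer (both keys invented for this sketch).
    Returns a list of error strings; empty means references resolve.
    """
    defined = {t["reference_name"] for t in targets if "reference_name" in t}
    errors = []
    for t in targets:
        use = t.get("uses")
        if use is None:
            continue
        name, _, _slice = use.partition(".")
        if name not in defined:
            errors.append("undefined device reference: " + name)
    return errors

targets = [
    {"uses": "mirrordev.s0"},          # the first entry: the consumer
    {"reference_name": "mirrordev"},   # the third entry: the definer
]
print(validate_targets(targets))  # -> []
```

What this sketch cannot check is the harder half of the dependency: that s0 will actually exist, since that follows only from the peer entry's target_device_overwrite_disk directive relaying out the disk. Validating that requires interpreting the peer's layout semantics, which is exactly why peer-level specifications with hidden ordering dependencies are painful to validate.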
