Hi Sarah, Caimans,
Sarah Jelinek wrote:
> I said:
>>
>> Target devices can then have symbolic names. The symbolic name can
>> be used in vdevs in zfs pool definitions. Examples follow below.
>>
>> The default disk reference names are of the form "deviceN", where
>> N=1..number of devices. A custom name for a disk can be created using
>> the element ai_target_device/reference_name, and must be a unique
>> alphanumeric string (underscores allowed).
>>
>
> What is the benefit of allowing the reference naming for a device? I
> am not getting why we would want to add this level of indirection to
> the device specification.
[I mistakenly referred to "reference names" as "symbolic names" above,
so to recap...]
The reference names can be used in zfs pool definitions, so that you can
automatically select disks by criteria *other than the disk name*, assign
them names, and then specify slices on them as vdevs in a pool or mirror.
From the AI manifest example below, the second disk selected has a
reference name:
<reference_name>newpooldisk</reference_name>
Then a slice from that disk is used as a vdev in a zfs pool, in section
ai_zfs_pool:
  <vdev>
    newpooldisk.s0 <!-- use selected disk named "newpooldisk", slice 0 -->
  </vdev>
The disk has to meet the manifest criteria of being over 30G.
Reference names are not used outside the manifest processing - they are
in no way permanent.
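To make the mapping concrete, here is a trimmed-down sketch based on the
full example further down (element names and the <reference>.s<slice>
vdev notation are as proposed there):

  <ai_target_device>
    <reference_name>newpooldisk</reference_name>          <!-- name the disk selected by criteria -->
    <target_select_min_size>30</target_select_min_size>   <!-- select by minimum size (GB), not by disk name -->
  </ai_target_device>
  <ai_zfs_pool>
    <name>newpool</name>
    <vdev>newpooldisk.s0</vdev>   <!-- refer back to the selected disk, slice 0 -->
  </ai_zfs_pool>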
>
>> Implementing root zfs pool (rpool)
>>
>> The manifest currently allows specification of rpool.
>> ai_target_device/install_slice_number indicates the root pool slice.
>> If not specified, slice 0 of the 1st disk in the list is assumed to
>> be the root.
>> A mirror slice vdev can be declared within the ai_target_device:
>> - unique device identifier (ctds, mpxio, /device node, reference
>> name of a selected disk)
>> - slice number
>> This results in a command of the form:
>> zpool create <poolname> <install slice>
>> or, when a mirror vdev is given:
>> zpool create <poolname> mirror <install slice> <mirror vdev>
>> If the pool exists, doing the "zpool create" will overwrite the
>> existing pool.
> So, we are allowing the users to specify the root pool names? Or is
> this for any pool?
Yes, for any pool. "rpool" is the hard-coded root pool name in the
installer, and this will have to be carefully generalized.
> Also, if we overwrite the pool, we don't give the users a chance to
> realize their mistake. Are we sure we want the default to be
> overwrite? Maybe we should require they explicitly set an overwrite
> flag or something to ensure they get what they thought they were
> asking for.
Good point. We will require an explicit flag in order to overwrite a pool
that already exists.
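For example, something along these lines - the <allow_overwrite/> element
here is purely illustrative and not part of the current draft:

  <ai_zfs_pool>
    <name>newpool</name>
    <allow_overwrite/>            <!-- illustrative flag: without it, an existing pool named "newpool" would make the create fail -->
    <vdev>newpooldisk.s0</vdev>
  </ai_zfs_pool>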
>
>>
>> The target_device_option_overwrite_root_zfs_pool option can be
>> implemented as follows:
>> - import the named rpool
>> - delete the datasets that comprise the rpool, using "zfs destroy <dataset>"
>> - proceed with the installation as usual
>>
> I assume these notes above are how we would implement overwriting the
> existing zpool?
Yes.
>> zfs pool creation:
>> A pool consists of a set of vdevs. At this time, the vdevs are
>> slices, so they consist of a unique disk identifier (can be ctds,
>> mpxio, /device, or reference name) plus a slice number.
>>
>> Mirrors consist of a list of vdevs and can be specified in the same way.
>>
>> General definition for a zfs pool (not the root pool):
>> [reworked below]
>
> It isn't clear to me if we are going to allow:
> -naming of the root pool by the user?
> -Creation of multiple pools during AI, with user naming
>
> Are we allowing for both?
Yes. Sorry this was unclear - I wrote it in a bit of a rush.
>
> Also, what zfs datasets will be created on the non-root pools? Are we
> going to allow for specification of these?
The pseudo-XML for section ai_zfs_pool (reworked below) was intended for
non-root ZFS pools.
Here is some pseudo-XML to create or reuse a *root* pool:
ai_zfs_root_pool
  action=create
  name - the pool name (defaults to rpool)
  id - the guid of an existing rpool, to resolve naming conflicts
  overwrite_existing_pool (if we want to reuse an existing pool without
    redefining it)
  mirror_type (regular mirror, raid, or raid2)
  mirror (0 or more mirror definitions, each specifying a slice to
    mirror the install slice)
/ai_zfs_root_pool
The root pool would be installed into a slice - a vdev list for root
pools is not possible at the moment, I believe.
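Filled in, a root pool create might look like the following sketch (the
attribute spelling and the mirror_type value are illustrative, not final):

  <ai_zfs_root_pool action="create">
    <name>rpool</name>
    <mirror_type>mirror</mirror_type>   <!-- a regular two-way mirror -->
    <mirror>mirrordev.s0</mirror>       <!-- mirror the install slice onto selected disk "mirrordev", slice 0 -->
  </ai_zfs_root_pool>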
To delete a root pool as a separate action:
ai_zfs_root_pool
  action=delete
  name - the pool name
  id - the guid, to resolve naming conflicts
/ai_zfs_root_pool
General definition for a zfs pool (not the root pool):
ai_zfs_pool (zero or more ZFS pools)
  action=create
  name - the pool name
  vdevs (1 or more vdev definitions or a set)
  mirror_type (regular mirror, raid, or raid2)
  mirror (0 or more mirror definitions, each containing a list of vdevs)
  mountpoint (a zpool property)
  overwrite_existing_pool (if we want to reuse an existing pool without
    redefining it)
/ai_zfs_pool

ai_zfs_pool
  action=delete
  name - the pool name
  id (the pool guid, if name conflict)
/ai_zfs_pool
NOTE: not all of the XML elements are shown here - the focus is on
multiple device details.
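Filled in, a delete might look like the following sketch (spellings are
not final; a concrete create appears in the example further down). The
comment notes the zpool command it would roughly map to:

  <ai_zfs_pool action="delete">
    <name>newpool</name>
    <!-- optionally an <id> with the pool guid, if more than one pool carries the name -->
    <!-- roughly equivalent to: zpool destroy newpool -->
  </ai_zfs_pool>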
>
>>
>> Format for vdev:
>>   disk_name - real (ctds, mpxio, /device node) or reference name of a
>>     selected disk
>>   slice - valid slice number 0, 1, 3-7 (slice 2 is the conventional
>>     whole-disk overlap slice)
>>
>> Example: install on boot disk, use some selected disk as raid2
>> mirror, and use another selected disk over 30GB for zfs pool newpool
>> mounted at /export1
>> <ai_target_device>
>>   <target_device_select_boot_disk/>
>>   <mirror>mirrordev.s0</mirror> <!-- mirror onto selected disk named "mirrordev", slice 0 -->
>>   <mirror_type>raid2</mirror_type>
>> </ai_target_device>
>> <ai_target_device>
>>   <reference_name>newpooldisk</reference_name>
>>   <target_select_min_size>30</target_select_min_size> <!-- size in GB -->
>>   <target_device_overwrite_disk/> <!-- erase disk, use whole disk for slice 0 -->
>> </ai_target_device>
>> <ai_target_device>
>>   <reference_name>mirrordev</reference_name>
>>   <!-- assume that disk is appropriate for raid2 mirror -->
>>   <target_device_overwrite_disk/> <!-- erase disk, use whole disk for slice 0 -->
>> </ai_target_device>
>> <ai_zfs_pool>
>>   <name>newpool</name>
>>   <mountpoint>/export1</mountpoint>
>>   <vdev>
>>     newpooldisk.s0 <!-- use selected disk named "newpooldisk", slice 0 -->
>>   </vdev>
>> </ai_zfs_pool>
>>
>> For further consideration:
>>   rpool deletion: is there a use case for this? Should it be defined?
>>   zfs pool deletion: is there a use case for this? Should it be defined?
>
> Seems to me that if we want to allow users full management of multiple
> disks, we should enable zpool deletion, rpool or otherwise.
> We need to work through the use cases for when we will automatically
> delete a pool and when we will require the user to specify it.
So we will require actions (create, delete) for pools as we do for
slices and partitions, then.
Thank you,
William
>
> thanks,
> sarah
> ******
>>
>> Not addressed:
>> - reusability issues - if a manifest specifying non-root zfs pools is
>> re-used, what happens to the existing pools? Are they verified in
>> any manner?
>> - use of /var as a separate zfs volume
>>
>> A proposed updated RNG schema, ai_manifest.rng, will be posted with
>> examples.
>>
>> Again, comments from the OpenSolaris community are desired.