The Automated Solaris Installer (AI) client should support selection and
formatting of more than one disk. Additional considerations include
mirroring and additional zfs pools.
This document attempts to lay out an approach to the task. OpenSolaris
Community feedback is desired.
Currently, only one disk is supported, specified in
<ai_target_device>...</ai_target_device>. Multiple device specifications
can be supported simply by moving the slice and partition definitions
inside ai_target_device. Each target device can then be given a symbolic
name, and that name can be referenced in vdevs in zfs pool definitions.
Examples follow below.
The default disk reference names are of the form "deviceN", where
N = 1..number of devices. A custom name for a disk can be assigned with
the element ai_target_device/reference_name; it must be a unique
alphanumeric string (underscores allowed).
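For illustration, a device specification carrying a custom reference
name might look like the following (the name and surrounding layout are
hypothetical, not final schema):

    <ai_target_device>
        <reference_name>data_disk_1</reference_name>
        <!-- additional selection criteria and slice/partition
             definitions would go here -->
    </ai_target_device>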
Implementing root zfs pool (rpool)
The manifest currently allows specification of rpool.
ai_target_device/install_slice_number indicates the root pool slice. If
it is not specified, slice 0 of the first disk in the list is assumed to
be the root slice.
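For example, a device specification selecting slice 1 as the root pool
slice might look as follows (a sketch only; the disk selection criteria
are omitted):

    <ai_target_device>
        <!-- disk selection criteria (e.g. boot disk or minimum size)
             would appear here -->
        <install_slice_number>1</install_slice_number>
    </ai_target_device>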
A mirror slice vdev can be declared within the ai_target_device by
specifying:
- a unique device identifier (ctds, mpxio, /device node, or the
  reference name of a selected disk)
- a slice number
This results in the command:
zpool create <poolname> <install slice> [<mirror vdev>]
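As a concrete illustration (device names are hypothetical), an install
slice of c0t0d0s0 mirrored by c1t0d0s0 would expand to something like:

    zpool create rpool mirror c0t0d0s0 c1t0d0s0

where the "mirror" keyword groups the two slices into a single mirrored
vdev.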
If the pool already exists, the "zpool create" will overwrite the
existing pool.
The target_device_option_overwrite_root_zfs_pool operation can be
implemented as follows (see the sketch after this list):
- import the named rpool
- delete the datasets that comprise the rpool, using
  "zfs destroy <dataset>"
- proceed with installation as usual
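A minimal sketch of that sequence, assuming the root pool is named
rpool and using placeholder dataset names (the actual datasets would be
discovered at install time, e.g. with "zfs list -r rpool"):

    zpool import rpool            # import the named root pool
    zfs destroy -r rpool/ROOT     # placeholder: boot environment datasets
    zfs destroy -r rpool/export   # placeholder: other datasets in the pool
    # ...then proceed with installation as usual, reusing the pool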
zfs pool creation:
A pool consists of a set of vdevs. At this time, the vdevs are slices,
so they consist of a unique disk identifier (can be ctds, mpxio,
/device, or reference name) plus a slice number.
Mirrors consist of a list of vdevs and can be specified in the same way.
General definition for a zfs pool (not the root pool):
    ai_zfs_pool
        name
        id (used to reference an existing pool)
        vdevs (1 or more vdev definitions or a set)
        mirror_type (regular mirror, raid, or raid2)
        mirror_vdevs (0 or more mirror definitions, each a list of vdevs)
        mountpoint (for consideration)
    /ai_zfs_pool
Format for a vdev:
    disk_name - real device identifier (ctds, mpxio, /device node) or
        reference name of a selected disk
    slice - valid slice number (0, 1, 3-7)
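As a rough sketch of how a mirrored (non-root) pool might be expressed
with the outline above (element nesting and the disk reference names are
assumptions, not final schema):

    <ai_zfs_pool>
        <name>datapool</name>
        <mirror_type>mirror</mirror_type>
        <mirror_vdevs>
            disk_a.s0 disk_b.s0 <!-- two slice vdevs forming one mirror -->
        </mirror_vdevs>
        <mountpoint>/data</mountpoint>
    </ai_zfs_pool>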
Example: install on the boot disk, use a selected disk as a raid2
mirror, and use another selected disk of at least 30GB for the zfs pool
newpool, mounted at /export1:
<ai_target_device>
    <target_device_select_boot_disk/>
    <mirror>mirrordev.s0</mirror> <!-- mirror on selected disk named
                                       "mirrordev", slice 0 -->
    <mirror_type>raid2</mirror_type>
</ai_target_device>
<ai_target_device>
    <reference_name>newpooldisk</reference_name>
    <target_select_min_size>30</target_select_min_size>
    <target_device_overwrite_disk/> <!-- erase disk, use whole disk for
                                         slice 0 -->
</ai_target_device>
<ai_target_device>
    <reference_name>mirrordev</reference_name>
    <!-- assume that the disk is appropriate for a raid2 mirror -->
    <target_device_overwrite_disk/> <!-- erase disk, use whole disk for
                                         slice 0 -->
</ai_target_device>
<ai_zfs_pool>
    <name>newpool</name>
    <mountpoint>/export1</mountpoint>
    <vdev>
        newpooldisk.s0 <!-- use selected disk named "newpooldisk",
                            slice 0 -->
    </vdev>
</ai_zfs_pool>
For further consideration:
- rpool deletion: is there a use case for this? Should this be defined?
- zfs pool deletion: is there a use case for this? Should this be
  defined?
Not addressed:
- reusability issues - if a manifest specifying non-root zfs pools is
re-used, what happens to the existing pools? Are they verified in any
manner?
- use of /var as a separate zfs dataset
A proposed updated RNG schema, ai_manifest.rng, will be posted with
examples.
Again, comments from the OpenSolaris community are desired.