Hi Sarah,
Sarah Jelinek wrote:
>>> -what is the zfs root pool configuration for slim? mirrored or not?
>>> I assume not mirrored but not sure. Since we are creating
>>> a separate swap slice, outside the root pool, we will at a minimum
>>> have to create a pool with the slice(s). Not sure if we are allowing
>>> more than 1 disk to be included in the pool?
>>>
>>>
>> From the TI perspective, the following configuration will be supported:
>> * one disk (not mirrored)
>> * two slices within the Solaris2 partition (one for the root pool, one for swap)
>> * only one ZFS pool (the root pool, created on a slice)
>> * five ZFS filesystems (root, usr, var, opt, export)
>>
>> Looking at the UI roadmap, a mirrored configuration might be supported,
>> but it would be handled as one of the post-installation tasks.
>>
>> Why would it be necessary to create a ZFS pool (other than the root
>> pool) containing the slices?
>>
>>
> For future stuff. Users may want to separate out root from user data
> pools. So, I was just wondering if TI will allow 'non root' pools to be
> created. Not for Oct, just in general.
>
I see, and I agree with you. It is planned that for the March release TI
should support creating more sophisticated ZFS structures (different
types of ZFS pools, ZFS snapshots, clones, ...). The set of features
will be based mostly on the Snap Upgrade project requirements.
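
Purely as a speculative sketch (nothing here is committed), a non-root
pool might then simply be one more entry of the zfs_instance_t type
quoted further below, with the is_root flag cleared:

#include <sys/types.h>                  /* boolean_t, B_FALSE */

typedef struct zfs_dset zfs_dset_t;     /* dataset list, defined elsewhere */

typedef struct {
        char *pool_name;        /* More info will be added */
        zfs_dset_t *set_list;   /* data sets in pool */
        boolean_t is_root;      /* root pool */
} zfs_instance_t;

/* hypothetical user data pool, kept separate from the root pool */
zfs_instance_t data_pool = {
        "datapool",             /* pool_name */
        NULL,                   /* set_list: user datasets added later */
        B_FALSE                 /* is_root: this is not the root pool */
};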
>
>>> ****Postinstall tasks. We will likely need a postinstall service to
>>> handle these.
>>> pfinstall/libspmi currently do this for us automatically.
>>> 1. transferlist
>>> 2. create boot archive
>>> 3. XXX whatever else is in finish script.
>>> 4. Moving, copying the logs
>>>
>>> ****Changes required in the orchestrator:
>>>
>>> typedef struct {
>>>         char *pool_name;        /* More info will be added */
>>> +       zfs_dset_t *set_list;   /* data sets in pool */
>>> +       boolean_t is_root;      /* root pool */
>>> } zfs_instance_t;
>>>
>>> -GUI will pass in the fdisk partition data to the orchestrator as
>>> usual; the difference will be that the user does not select anything
>>> other than the disk.
>>> -What to pass to TI:
>>>
>>>
>> Below I have tried to assign appropriate nvlist attributes describing
>> the particular parts of the target:
>>
>>
>>> -disk name
>>>
>>>
>> TI_ATTR_FDISK_DISK_NAME (string)
>>
>>
>>> -use whole disk or not
>>>
>>>
>> TI_ATTR_FDISK_WDISK_FL (boolean)
>>
>>
>>> -pool name
>>>
>>>
>> TI_ATTR_ZFS_RPOOL_NAME (string)
>> TI_ATTR_ZFS_RPOOL_DEVICE (string array) - will contain one slice name
>> for the October release
>>
>> I am thinking about the following information, which would be passed
>> to TI as well:
>>
>> - slice configuration
>>
>> TI_ATTR_SLICE_DISK_NAME (string) =
>> <name_of_disk_containing_target_Solaris2_partition>
>> - optional for October. It is not required if TI_ATTR_FDISK_DISK_NAME
>> is defined
>>
>> TI_ATTR_SLICE_NUM (uint16) = 3
>> TI_ATTR_SLICE_PART (uint16 array) = {0,1,2}
>> TI_ATTR_SLICE_TAG (uint16 array) = {2,3,5}
>> TI_ATTR_SLICE_FLAG (uint16 array) = {0,1,0}
>> TI_ATTR_SLICE_START (uint32 array) = {0,??,0}
>> TI_ATTR_SLICE_SIZE (uint32 array) = {??,??,??}
>>
>>
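To make this more concrete, here is a minimal sketch of how the
orchestrator might pack the attributes above into an nvlist for TI.
The attribute names are taken from the list above, but the disk and
slice values are made up, error handling is mostly omitted, and the
actual TI entry point is not shown:

#include <libnvpair.h>

static nvlist_t *
build_ti_target_attrs(void)
{
        nvlist_t *attrs;
        char *rpool_devs[] = { "c0t0d0s0" };    /* one slice for October */
        uint16_t parts[] = { 0, 1, 2 };
        uint16_t tags[]  = { 2, 3, 5 };         /* V_ROOT, V_SWAP, V_BACKUP */
        uint16_t flags[] = { 0, 1, 0 };         /* swap slice unmountable */

        if (nvlist_alloc(&attrs, NV_UNIQUE_NAME, 0) != 0)
                return (NULL);

        (void) nvlist_add_string(attrs, "TI_ATTR_FDISK_DISK_NAME", "c0t0d0");
        (void) nvlist_add_boolean_value(attrs, "TI_ATTR_FDISK_WDISK_FL",
            B_TRUE);
        (void) nvlist_add_string(attrs, "TI_ATTR_ZFS_RPOOL_NAME", "rpool");
        (void) nvlist_add_string_array(attrs, "TI_ATTR_ZFS_RPOOL_DEVICE",
            rpool_devs, 1);

        (void) nvlist_add_uint16(attrs, "TI_ATTR_SLICE_NUM", 3);
        (void) nvlist_add_uint16_array(attrs, "TI_ATTR_SLICE_PART", parts, 3);
        (void) nvlist_add_uint16_array(attrs, "TI_ATTR_SLICE_TAG", tags, 3);
        (void) nvlist_add_uint16_array(attrs, "TI_ATTR_SLICE_FLAG", flags, 3);
        /*
         * TI_ATTR_SLICE_START and TI_ATTR_SLICE_SIZE are left out here,
         * since the values marked "??" above are still open.
         */

        return (attrs);
}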
> So, does this imply that TI will do some sanity checking in terms
> of the slice data? I assume so. What I mean by this is that if for some
> reason the orchestrator gets the size wrong, or it gets mangled somehow,
> we need to be able to either a) adjust for the appropriate size or b)
> return an error back to the orchestrator.
>
I agree. I am not sure if there are VTOC-specific recommendations or
requirements which need to be taken into account when manipulating VTOC
structures, but I can at least think of the following set of sanity
checks:
- slices don't overlap
- slices don't extend beyond the Solaris2 partition
- check for size? (e.g. a minimum size?)
Looking at the code, some checking is also done in the write_vtoc(3EXT)
function (write_vtoc() is supposed to be used for writing the VTOC
structure to the disk):
- check for the magic number (will be set by TI)
- check for the maximum number of slices
- check that at least one slice has size > 0
If sanity checking fails for some reason, then for October I would
prefer (if you agree) to return an error back to the orchestrator rather
than trying to adjust to the appropriate size, which might turn out not
to be a trivial task.
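
For illustration only, here is a minimal sketch of the overlap and
boundary checks I have in mind. The ti_slice_t type and the function
name are hypothetical, not existing TI code:

#include <sys/types.h>
#include <sys/vtoc.h>           /* V_BACKUP */

typedef struct {
        uint16_t tag;           /* V_ROOT, V_SWAP, V_BACKUP, ... */
        uint32_t start;         /* first sector within the partition */
        uint32_t size;          /* size in sectors, 0 = unused */
} ti_slice_t;

/*
 * Return B_TRUE if every non-empty slice fits inside the Solaris2
 * partition and no two of them overlap. The backup slice is skipped,
 * since it legitimately covers the whole partition.
 */
static boolean_t
ti_slices_sane(const ti_slice_t *s, uint_t n, uint32_t part_size)
{
        uint_t i, j;

        for (i = 0; i < n; i++) {
                if (s[i].size == 0 || s[i].tag == V_BACKUP)
                        continue;
                /* slice must not cross the end of the partition */
                if (s[i].start >= part_size ||
                    s[i].size > part_size - s[i].start)
                        return (B_FALSE);
                for (j = i + 1; j < n; j++) {
                        if (s[j].size == 0 || s[j].tag == V_BACKUP)
                                continue;
                        /* half-open intervals [start, start + size) */
                        if (s[i].start < s[j].start + s[j].size &&
                            s[j].start < s[i].start + s[i].size)
                                return (B_FALSE);       /* overlap */
                }
        }
        return (B_TRUE);
}

If these checks pass, write_vtoc(3EXT) would still perform its own
checks (magic number, slice count, at least one non-empty slice) when
the VTOC is actually written out.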
Thank you,
Jan