Hi Jan,

My comments RE: the TI design, and some comments below on your answers 
to Sundar.

1. I see that you say only one ZFS pool will be created to contain 
all datasets. So, does that imply that /export/home, which is a dataset 
I believe we still need for Slim, will be in the root pool? Why is 
this limitation listed? Will the ZFM eventually be able to handle 
creation of multiple pools?

2. We need to clearly outline the changes required in the orchestrator 
to support ZFS. We did make allowances for ZFS in the data structures we 
defined, but I am not sure what other interfaces or changes we need. 
This needs to be accounted for in the schedule.

3. We also need to define new callback milestones in the orchestrator to 
handle the new target instantiation services, along with changes to 
utilize TI. TI will need to provide milestone data as well for the 
callbacks.
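
Just to make the milestone idea concrete, something along these lines is 
what I have in mind -- all of the names below are hypothetical, nothing 
here exists in the design yet:

/*
 * Hypothetical sketch only -- none of these names exist yet.  The idea
 * is that TI reports milestone data through a callback registered by
 * the orchestrator, which then maps it onto its own callback
 * milestones for the GUI.
 */
#include <libnvpair.h>

typedef void (*ti_milestone_cb_t)(nvlist_t *milestone_data, void *arg);

/* example milestone attributes TI might report */
#define TI_MILESTONE_NAME     "ti_milestone"     /* e.g. "zpool_created" */
#define TI_MILESTONE_PERCENT  "ti_percent_done"  /* 0 - 100 */

/* the orchestrator would register its callback before invoking TI */
extern int ti_register_milestone_cb(ti_milestone_cb_t cb, void *arg);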

4. Regarding using libzfs as compared to the CLIs, I suggest using 
libzfs. The current libdiskmgt code uses libzfs and has for some time. 
It is, I believe, stable enough. I also think we would need a contract, 
but I really think we should consider using this library before using 
the CLI commands.
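
To make the libzfs suggestion concrete, a minimal sketch of the 
pool/dataset creation step is below. The exact signatures would need to 
be verified against the current libzfs (it is Consolidation Private, 
hence the contract), and the vdev tree and dataset names are only 
examples:

#include <libzfs.h>

/*
 * Minimal sketch, assuming current libzfs signatures -- verify before
 * relying on them.  'hdl' comes from libzfs_init() and 'nvroot' is the
 * vdev tree built by the caller from the slice chosen for the pool.
 */
static int
create_root_pool(libzfs_handle_t *hdl, nvlist_t *nvroot)
{
    if (zpool_create(hdl, "rpool", nvroot, NULL) != 0)
        return (-1);

    /* datasets are created top-down; the names are just examples */
    if (zfs_create(hdl, "rpool/export", ZFS_TYPE_FILESYSTEM, NULL) != 0 ||
        zfs_create(hdl, "rpool/export/home", ZFS_TYPE_FILESYSTEM, NULL) != 0)
        return (-1);

    return (0);
}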

5. The ti_create_target() interface doesn't really take a 'type'. It 
assumes a ZFS zpool or ZFS dataset. Should we limit this to ZFS, or 
allow for a type specification?
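
For illustration, the nvlist passed to ti_create_target() could carry a 
target-type attribute so the interface isn't tied to ZFS. The attribute 
names below are made up, since the real attribute set is still to be 
defined:

#include <libnvpair.h>

/* hypothetical attribute names -- not in the design yet */
#define TI_ATTR_TARGET_TYPE     "ti_target_type"    /* "zfs_rpool", "fdisk", ... */
#define TI_ATTR_ZFS_RPOOL_NAME  "ti_zfs_rpool_name"

static nvlist_t *
build_zfs_target_attrs(void)
{
    nvlist_t *attrs = NULL;

    if (nvlist_alloc(&attrs, NV_UNIQUE_NAME, 0) != 0)
        return (NULL);
    (void) nvlist_add_string(attrs, TI_ATTR_TARGET_TYPE, "zfs_rpool");
    (void) nvlist_add_string(attrs, TI_ATTR_ZFS_RPOOL_NAME, "rpool");
    return (attrs);    /* caller would pass this to ti_create_target() */
}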

6. Some other things that we need to account for in the schedule:

    - changes in the orchestrator to support the transfer service
    - interface changes to the orchestrator in support of new GUI
      functionality



> Sundar,
>
> Sundar Yamunachari wrote:
>   
>> Jan,
>>    
>> Jan Damborsky wrote:
>>     
>>> Hi Sundar,
>>>
>>> thank you very much for your feedback. Please see my response in line.
>>>
>>> Jan
>>>
>>>
>>> Sundar Yamunachari wrote:
>>>       
>>>> jan damborsky wrote:
>>>>         
>>>>> Hi all,
>>>>>
>>>>> preliminary version of design document for Target Instantiation Service
>>>>> which is part of Slim Install project has been posted on
>>>>>
>>>>> http://opensolaris.org/os/project/caiman/files/slim_ti_design-2.pdf
>>>>>
>>>>> It is the first draft with many issues still pending and many design 
>>>>> details
>>>>> to be determined. Any comments, questions, suggestions are appreciated.
>>>>> If possible, please review the document by the COB 9/19/07.
>>>>>
>>>>> Thank you very much,
>>>>> Jan
>>>>>
>>>> Jan,
>>>>
>>>>    These are my comments. In general your design assumes a lot of 
>>>> changes in the orchestrator module. For example, the Orchestrator 
>>>> doesn't talk about ZFS. Are you planning to update the orchestrator 
>>>> design document with the changes?
>>>>         
>>> You are right that orchestrator would need to be modified, so that 
>>> target instantiation (TI) might work properly. As you are mentioning 
>>> above,
>>> for Slim, TI will be creating appropriate ZFS structures for handling 
>>> final Solaris instance. Information about ZFS configuration would need 
>>> to be
>>> provided by orchestrator. For October release, set of attributes 
>>> describing ZFS structures will be simple and will probably only 
>>> contain information
>>> about ZFS root pool and ZFS file systems to be created.
>>>       
Actually, there is some basic support for ZFS in the orchestrator data 
structures. But I would like us to clearly define the data structure 
additions/changes. I will start working on this.
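
As a starting point for that discussion, I am picturing additions 
roughly like the following -- purely hypothetical field and type names, 
just to show the kind of information the orchestrator would have to 
carry for TI:

/* hypothetical additions -- names and layout are up for discussion */
typedef struct om_zfs_fs_info {
    char    *fs_name;                  /* e.g. "/", "/usr", "/opt" */
    struct om_zfs_fs_info *next;
} om_zfs_fs_info_t;

typedef struct om_zfs_target {
    char              *rpool_name;     /* root pool to create */
    char              *rpool_device;   /* slice backing the pool */
    om_zfs_fs_info_t  *fs_list;        /* datasets to create */
} om_zfs_target_t;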

>>> Other modifications which would need to be done to orchestrator relate 
>>> to the intent not to use pfinstall engine for Slim. In Dwarf 
>>> orchestrator prepares
>>> profile file according to information received from GUI and then 
>>> invokes pfinstall in order to carry out set of tasks, which for Slim 
>>> are to be carried out by
>>> TI and transfer module. Since TI will be implemented as shared library 
>>> and will use other approach for describing final target (target will 
>>> be determined
>>> by set of attributes passed as nv list to the TI), orchestrator would 
>>> need to be modified accordingly.
>>>       
>> Please make sure that the code changes are additions, since the 
>> orchestrator needs to work the same way in the Dwarf path.
>>     
>>> For October release, the information passed from orchestrator to TI 
>>> will be limited and probably hard wired in orchestrator (slice 
>>> configuration, name of ZFS
>>> root pool, set of ZFS file systems to be created - /, /usr, /opt, ...) 
>>> and thus not configurable from GUI.
>>>
>>> I am attaching initial version of TI task list with features are 
>>> supposed to be delivered for October release and what might be the 
>>> requirement for March.
>>>
>>> As soon as it is determined, how and which information would flow 
>>> between orchestrator and TI, I think I might also update orchestrator 
>>> design document
>>> accordingly.
>>>
>>>       
>>>> Section 1: Assumptions
>>>>
>>>>    - The project assumes that ZFS root/boot is available. Can you add 
>>>> the information whether it is already available or when it is 
>>>> expected to be available?
>>>>         
>>> The ZFS structures (root pool and file systems) will be created by TI 
>>> according to information received from orchestrator. In general, I 
>>> could think about
>>> following flow of control:
>>>
>>> * orchestrator receives information about target from target discovery 
>>> service
>>> * user creates desired target configuration in GUI (limited or not 
>>> supported for October) and passes it back to the orchestrator
>>> * orchestrator passes this information to the target instantiation 
>>> service which then prepares target accordingly
>>>       

What if the user has a ZFS root zpool created already? Is this supported 
in the GUI and in Slim for us to install into?

>> Still the design document doesn't say clearly what are the targets 
>> created by Slim Install. For example does it create Solaris fdisk 
>> partition or assumes that Solaris fdisk partition is created by the user.
>>     
>
> The exact structures to be created by TI for October/March releases are 
> subject
> of discussion, I try to summarize what it has been decided for October 
> release so far:
>
> [1] If "whole disk" option is selected in GUI, TI will create
> * fdisk partition table on target disk by means of "fdisk -B" command
> * slice configuration (VTOC) by means of write_vtoc()
> * ZFS root pool
> * ZFS file systems
>
> [2] If other case, TI will suppose that there is existing Solaris 
> partition present and will create
> * slice configuration (VTOC) by means of write_vtoc() within existing 
> fdisk Solaris partition
> * ZFS root pool
> * ZFS file systems
>
>
> As soon as the final decision is made, I will add this information to 
> the document as well.
>   

According to this morning's Slim meeting it sounds like we won't allow 
changes to the existing Solaris partition for Slim. I assume then that 
the choices for Slim are whole disk or an existing partition only?
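
For the whole-disk case, the slice-configuration step Jan describes 
would look roughly like the sketch below (read_vtoc()/write_vtoc() from 
libadm, so link with -ladm). This is only to illustrate the mechanism; 
real code would have to deal with geometry, the backup slice and space 
reserved for boot, so treat the slice numbers and sizes as placeholders:

#include <sys/types.h>
#include <sys/vtoc.h>
#include <fcntl.h>
#include <unistd.h>

/*
 * Rough sketch of the VTOC step in the "whole disk" case: after
 * "fdisk -B" lays down the Solaris partition, give slice 0 the usable
 * space so the ZFS root pool can be built on it.
 */
static int
label_for_root_pool(const char *rdev, long nsectors)
{
    struct vtoc vtoc;
    int fd, ret = -1;

    if ((fd = open(rdev, O_RDWR | O_NDELAY)) < 0)
        return (-1);

    if (read_vtoc(fd, &vtoc) >= 0) {
        vtoc.v_part[0].p_start = 0;        /* placeholder layout */
        vtoc.v_part[0].p_size = nsectors;
        vtoc.v_part[0].p_tag = V_ROOT;
        vtoc.v_part[0].p_flag = 0;
        if (write_vtoc(fd, &vtoc) >= 0)
            ret = 0;
    }
    (void) close(fd);
    return (ret);
}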

>
>   
>>> * orchestrator invokes transfer module with information about created 
>>> target
>>>       
>> What is the interaction between the TI module and the transfer module? 
>> Does the transfer module expect a mounted directory? Where does this 
>> information come from?
>>     
>
> It seems to me that both TI as well as transfer module would be driven 
> by orchestrator
> and TI wouldn't be aware of transfer module existence. If this is the 
> case, firstly
> TI would be invoked from orchestrator in order to create target based on 
> information
> received from orchestrator. Then the orchestrator would invoke transfer 
> module passing
> it information about target created.
>
> To be honest, I am not sure about assumptions transfer module depends on 
> for now, it definitely
> needs to be discussed, so that we could decide, if there are more tasks 
> which need to be
> carried out by TI.
>
>   
This interaction needs to be understood so we can clearly identify the 
tasks required in the orchestrator and/or the TI module.

>>> * ...
>>>
>>> I will add appropriate information to the document.
>>>
>>>       
>>>>    - What is a basic environement? (The second bullet in the 
>>>> assumptions)
>>>>         
>>> I agree this is not quite good wording. I wanted to say that Slim 
>>> doesn't address install within zones or XEN environment. What might be 
>>> the
>>> more appropriate term here ?
>>>       
>> You can say slim is supported only on the ZFS environment.
>>     
>
> Thank you. I will change the wording.
>
>   
>>>> Section 2:
>>>>
>>>>    - What is a Manager Module (MM)? Is it same as TI manager? The 
>>>> name sounds too generic. Can you change it to refer to the actual 
>>>> functionality?
>>>>         
>>> Yes, MM is the same as TI manager. I will consolidate it to more 
>>> specific TI manager within document.
>>>
>>>       
>>>> Section 5.1.1:
>>>>
>>>>    - Design is spelled as desifn
>>>>         
>>> OK. I have corrected it.
>>>
>>>       
>>>> Section 5.2:
>>>>
>>>>    -  Are we planning to support creation of zfs pool across multiple 
>>>> disks during installation?
>>>>         
>>> It is not addressed for October release. I am not sure right now, if 
>>> we would support it for March. If I have understood correctly,
>>> only non-root ZFS pool can be spread across several disks. As far as 
>>> root pool is concerned, I think that support of mirror
>>> configuration might be taken into account.
>>>
>>>       
You should add text specifically about what type of support we are 
providing for Slim, and also what type of root pool support TI will 
provide in the longer term. At least the Slim support should be clearly 
spelled out.

>>>>    -  What is meant by "It is possible to install on unlabeled 
>>>> disks". How do we determine whether the disk doesn't have a label or 
>>>> it has a label we don't understand?
>>>>         
>>> I was thinking about the case, where sparc is shipped with unlabeled 
>>> disk. In this case it would be necessary to label disk before
>>> read_vtoc()/write_vtoc() interfaces could be used for creating VTOC 
>>> structure.
>>>       
>> My question is how do you determine whether the disk is a new disk or a 
>> disk with data. If the disk has user data, we may destroy it.
>>     
>
> This is a good question. To tell the truth, I am not sure, if there is
> reliable approach for obtaining this data.
>
> I was thinking about this also from the other perspective and for now it
> seems to me that there is possibility we wouldn't need this information.
>
> If user selects "whole disk" as target then it agrees that all data
> on the disk are to be lost.
>
> Regarding "non-whole disk" path, for October release user will prepare fdisk
> partition table (by creating Solaris partition) and for March user will 
> select
> in GUI, how/where TI should create Solaris fdisk (for x86).
>
> That said, it is still to be decided, how "sparc" path will look like and if
> we would need information about labeled disk in that case.
>
> I will continue thinking about this.
>
>   
>>> I will clarify this point in document.
>>>
>>>       
>>>>    -  I don't understand the third bullet "There should be 
>>>> possibility of utilizing unallocated disk space when creating new 
>>>> Solaris partition". Do you mean there will be an interface in MM to 
>>>> specify a special keyword to use all free space?
>>>>         
>>> I meant to say that TI will have to provide following features:
>>> * creating new Solaris partition within unallocated disk space, when 
>>> there is no Solaris partition available (not for October)
>>> * using existing Solaris partition
>>>
>>> For October release, it is supposed that there would be Solaris 
>>> partition already created before TI is invoked. User will be asked to 
>>> prepare one
>>> before installation is invoked.
>>>       
>> So for October release, does the TI module creates ZFS pools and datasets?
>>     
>
> Yes. TI module will create one ZFS root pool and ZFS file systems according
> to information received from orchestrator.
>
>   
This includes /export/home? I didn't see this mentioned.

>>> It is still not determined for now, how set of attributes describing 
>>> the target partition configuration will look like. Probably the main 
>>> issue here is to
>>> choose the right level of abstraction. For example, the orchestrator 
>>> might pass to TI name of the target device and complete partition 
>>> table to be created.
>>> On the other hand, it might also say to TI something like "Create at 
>>> least 20GB Solaris partition and then let me know where and how big it 
>>> was actually
>>> created". We would need to find appropriate balance here for March.
>>>
>>>
>>>       
>>>>    - CLI tools fdisk and format can't be used from the GUI. Why are 
>>>> these CLI tools mentioned here?
>>>>         
>>> I have meant that TI would use these CLI for creating appropriate 
>>> label/partition table on disks. If I have understood your assertion
>>> correctly, is that mean that if TI is implemented as shared library 
>>> utilized indirectly by GUI via orchestrator, it is not possible
>>> to consume these CLI by TI library ?
>>>       
>> Both fdisk and format are interactive tools. If you call those tools in 
>> your install, it will bring up shell to run those tools. It may not 
>> conform to the slim GUI and users won't like it.
>>     
>
> I have been considering to use format&fdisk in non-interactive mode in
> similar way, how spmi libraries utilize these interfaces. In 
> non-interactive mode,
> they both take as parameter file describing what tasks are supposed to 
> be carried out
> and thus no user input is required ("fdisk -F" for creating partition 
> table, "format -f").
> Please let me know, if this approach might work for our purposes.
>
>   
This brings up a point about a bug we currently have in Dwarf... so I 
am wondering when the TI changes are planned to be put back into the 
install gate? If TI does a better job of target instantiation with 
regard to fdisk than pfinstall, it would be better to use it rather 
than the private interfaces in pfinstall for the Dwarf (SXDE) fixes. 
This also brings up a point about the ti_target_create() function... it 
only creates zpools. What about the interfaces to create the fdisk 
partition data? Those would be good APIs to have exposed.
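
If the fdisk/VTOC steps were exposed, I could picture something like the 
prototypes below -- hypothetical names, nothing like this is in the 
design yet -- which the Dwarf/SXDE path could then reuse instead of the 
pfinstall private interfaces:

#include <libnvpair.h>

/* hypothetical split of the TI entry points */
extern int ti_create_fdisk_target(nvlist_t *attrs);  /* partition table */
extern int ti_create_vtoc_target(nvlist_t *attrs);   /* slice (VTOC) layout */
extern int ti_create_zfs_target(nvlist_t *attrs);    /* root pool + datasets */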

>>>>    - pfinstall and ttinstall manages fdisk partitions using the 
>>>> libspmi libraries.
>>>>         
>>> Since pfinstall is not considered to be used for Slim TI would need to 
>>> implement another approach for managing
>>> fdisk partitions.
>>>       
>> My comment meant that the real fdisk partition management is done by 
>> libspmi libraries and used by both pfinstall and ttinstall. The 
>> implementation is not completely in pfinstall.
>>     
>
> I see. I have misunderstood. Thank you for clarification.
>
>   
>>>>    - I am not sure whether it is a good idea to create a label. We 
>>>> may destroy the user data.
>>>>         
>>> Label should only be created when unlabeled disk is recognized (for 
>>> sparc). I will clarify this point.
>>>       
>> How will you differentiate no label and unknown label?
>>     
>
> Please see my response above.
>
>   
>>>>    - Are you planning to use fdisk tool to create/modify fdisk table? 
>>>> Why can't use the existing libspmi interfaces?
>>>>         
>>> Originally I was planning to use fdisk utility for this. But this 
>>> assumption might probably change, since
>>> we will probably not deal with fdisk partitions in TI for October 
>>> release and for March it is now discussed
>>> using new interfaces which could be available at that time (for 
>>> example libfdisk library might be one of candidates).
>>>
>>> In accordance with the long term goal, we would like to decrease 
>>> utilizing existing spmi interfaces, so that using
>>> of spmi libraries might be completely avoided in future.
>>>       
>> I agree with you about libspmi but if it is doing the right thing, 
>> please take a look at them first. These functions are written because 
>> the tools couldn't satisfy install needs.
>>     
>
> To be honest, I have initially taken a look at the appropriate spmi library
> (libspmisvc) which provides set of low level tasks responsible for 
> preparing target.
> It utilizes format CLI for low-level formatting, fdisk CLI for 
> manipulating fdisk
> partition table and read_vtoc()/write_vtoc() interfaces from libadm library
> for creating slice configuration.
>
> I agree that since spmi libraries are relatively complex and not so easy to
> understand, I should also take a look at the other places. Could you please
> suggest, what might the other source of information which I could also use
> for inspiration ?
>
>   
Take a look at the libzfs code. They do some fdisk stuff in there, I 
think, for creation of zpools.

>>>> Section 5.3.1:
>>>>
>>>>    - Why install needs to create non-root ZFS pool?
>>>>         
>>> This is not requirement for October, since all bits will be installed 
>>> within one root pool.
>>> But based on assumption that root pool can't be striped across several 
>>> slices, it might
>>> be useful to have possibility to define non-root pool which might for 
>>> example utilize
>>> remaining devices not consumed by root pool.
>>>       
>> My feeling is that we should not add stuff that is not needed for 
>> install. There are ZFS tools that allow creation of non-root ZFS pools.
>>     
>
> I agree that TI should only carry out set of tasks necessary for further
> installation/upgrade process. Creating non-root ZFS pool is not considered
> for October, but I am not sure about requirements for Snap Upgrade project,
> which will also utilize TI service for preparing upgrade target.
>
>
>   
We should think about the requirements for Snap Upgrade, in particular 
so we don't design ourselves into a corner with regard to what they may 
need.

>>>> Section 7:
>>>>
>>>>    - If I want to use the existing target (fdisk partition), do I 
>>>> have to use ti_target_create() from the orchestrator?
>>>>         
>>> Yes. If fdisk configuration is suitable (that would mean there is 
>>> existing Solaris partition available - this assumption is taken for 
>>> October release)
>>> orchestrator would still need to call ti_target_create() with 
>>> appropriate set of attributes passed as nv list. In this case, 
>>> attributes wouldn't contain
>>> information about fdisk configuration, but would be expected to 
>>> contain information about slice (VTOC) as well as ZFS configuration.
>>>       
>> Can we split the API in to two different api with one for fdisk and 
>> another for ZFS?
>>     
>
> I was initially thinking about approach that the target is fully 
> described by set of
> attributes passed to the one common interface. In this case, 
> orchestrator would
> be only responsible for preparing this attribute list and wouldn't be 
> aware of
> sequence of operations which need to be executed in order to create target.
>
> I might split the API, so that there is an interface for every task 
> to be carried
> out in order to create target - for now it would probably mean to create 
> separate interfaces
> for creating fdisk&slice configuration and ZFS structures.
>
>   
I would actually like this for use in the SXDE path later.

> The disadvantage might be that if this approach is taken, part of the 
> implementation logic would be transferred to the orchestrator. It would 
> then have to be aware of the fact that ZFS interfaces may be used after 
> the fdisk & slice interfaces finish their work.
>
>
>   
But, the orchestrator may have to create/modify fdisk partitions in a 
later release. It would be good to have this capability available.
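
Assuming the hypothetical split sketched earlier in this mail, the 
sequencing knowledge would then live in the orchestrator, roughly like 
this:

/* hypothetical orchestrator-side sequencing of the split TI calls */
static int
om_instantiate_target(nvlist_t *fdisk_attrs, nvlist_t *vtoc_attrs,
    nvlist_t *zfs_attrs)
{
    /* fdisk step is optional -- skipped for the existing-partition case */
    if (fdisk_attrs != NULL && ti_create_fdisk_target(fdisk_attrs) != 0)
        return (-1);
    if (ti_create_vtoc_target(vtoc_attrs) != 0)
        return (-1);
    return (ti_create_zfs_target(zfs_attrs));
}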


thanks,
sarah
> Thank you very much,
> Jan
>
>   
>> - Sundar
>>     
>>>>    - What about deleting a partition?
>>>>         
>>> This feature is not supported for October, but will probably be 
>>> required for March. It is not determined for now, how this information 
>>> will be passed
>>> from orchestrator to TI.
>>>
>>>       
>>>> That is all for now.
>>>>         
>>> Thank you very much.
>>>
>>>       
>>>> - Sundar
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>         

