Sarah,
Thank you for responding to my comments. I have added further comments below in response.
Sarah Jelinek wrote:
> Hi Sundar,
>
> Thank you so much for taking the time to review this. My comments
> inline..
>
>
>> Sarah,
>> This is a well-written document. It gives a complete picture of
>> all installation tools and modules.
>>
>> Page 2:
>> - The links to opensolaris.org documents (1, 2 and 3) don't work.
>
> It does for me. Did you ctrl-click on the link?
It could be a problem with evince. If I cut and paste the document links
into the browser, they work okay.
>
>> - The differences between 2 (Installation from pre-packaged media,
>> such as a CD, DVD, or USB flash memory stick) and 4 (Booting of the
>> installation environment from media such as CD, DVD, or USB flash
>> memory) are not clear. What do you mean by installation in this case?
>
> One describes the installation itself and the other simply notes what
> boot mechanisms we will provide. So, there isn't any specific
> installation method implied in the statement "1. Booting of the
> installation environment from media such as CD, DVD, or USB flash
> memory." It just says how we plan to provide the booting to get to the
> installation environment.
You need to boot in order to do the install, so I consider them together.
>
>> - I don't understand 9 (Sufficient system configuration support to
>> achieve a useful, running installation on the first boot). What is
>> meant by running installation on the first boot?
>
> I think this isn't clearly worded. What it is intended to say is that
> we must have sufficient system configuration support to have a
> bootable and runnable system at first boot.
Okay.
>
>
>> Page 3:
>> - The words "Install installation" look redundant in "New Solaris
>> Install installation suite".
>
> Ok, fair point. I will reword.
>
>> - Can you give an example for "a set of components that provide
>> common behaviors usable by applications" similar to the example for
>> "a set of components that are used to implement specific application
>> functionality"?
>
> sure. I will add.
>
>
>> Page 4:
>> - What is an "installation sequence?" (Section 2.1)
>
> It is the sequence of operations required to complete an install of
> OpenSolaris. Do you have a suggestion as to better wording?
This is the first place the term is used in the document. Could you add
the definition you gave here, "the sequence of operations required to
complete an install of OpenSolaris"?
>
>
>> Page 5:
>> - get_progress_estimate(): retrieves an estimate of the time
>> required to execute the checkpoint
>> My understanding of the checkpoint is that it is a milestone or
>> endpoint. How can you estimate without a starting point?
>
> This method asks the checkpoint for its estimate of time, length, or
> whatever measure we end up using for progress (this hasn't been
> designed yet). The engine has to ask all registered checkpoints for
> this, and then it must create a progress total, with the milestones,
> for giving progress to the progress receivers.
So the expectation is that each checkpoint will provide the time (or
length, or some other measure) it takes to complete the tasks defined by
the checkpoint?
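If so, just to confirm my reading, something like this is what I am
picturing (Python; the class name and the unit of measure below are only
my assumptions, not from the document):

# Illustrative only: one checkpoint providing its own work estimate.

class TransferFilesCheckpoint(object):
    def __init__(self, file_count):
        self.file_count = file_count

    def get_progress_estimate(self):
        # For example, estimated seconds at roughly 0.01s per file; the
        # actual unit of measure has not been designed yet, per above.
        return self.file_count * 0.01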
>
>> - For progress reporting, the progress receiver gets the updated
>> progress from the update() method. Who will report progress? (which
>> modules provide progress data and what is the interface?)
>
> The basic sequence is this:
> - The app registers checkpoints with the ExecutionEngine.
> - The ExecutionEngine asks all checkpoints to give progress estimates.
> - The ExecutionEngine calls update() on the ProgressReceiver with this
> initial progress data. This progress data includes: the number of
> milestones, the unit of measure for progress, and the initial overall
> estimate for completing the checkpoints.
> - The ExecutionEngine starts a checkpoint, and during checkpoint
> processing the checkpoint calls report_progress() on the
> ExecutionEngine to give a point-in-time progress update.
> - The ExecutionEngine calls update() on the ProgressReceiver.
>
> The interfaces are defined in the ExecutionEngine, Checkpoint, and
> ProgressReceiver classes.
Are the checkpoints the ones that provide/update the progress data?
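If so, here is a rough sketch of how I am reading the sequence above
(Python; only register(), get_progress_estimate(), report_progress() and
update() come from the document, everything else is made up for
illustration):

class ProgressReceiver(object):
    def update(self, completed, total, message=""):
        print("progress: %s/%s %s" % (completed, total, message))


class SleepCheckpoint(object):
    """Toy checkpoint, only here to show the push-style callbacks."""

    def get_progress_estimate(self):
        return 3                      # three units of "work"

    def execute(self, engine):
        for _ in range(3):
            engine.report_progress(1, "step done")


class ExecutionEngine(object):
    def __init__(self):
        self.checkpoints = []
        self.receivers = []
        self.total = 0
        self.completed = 0

    def register(self, checkpoint):
        self.checkpoints.append(checkpoint)

    def register_receiver(self, receiver):
        self.receivers.append(receiver)

    def report_progress(self, amount, message=""):
        # Called by a running checkpoint: "push", not "pull".
        self.completed += amount
        for r in self.receivers:
            r.update(self.completed, self.total, message)

    def execute(self):
        # Ask each registered checkpoint for its estimate, then send
        # the initial overall estimate to the receivers before starting.
        self.total = sum(cp.get_progress_estimate()
                         for cp in self.checkpoints)
        for r in self.receivers:
            r.update(0, self.total, "starting")
        for cp in self.checkpoints:
            cp.execute(self)


engine = ExecutionEngine()
engine.register(SleepCheckpoint())
engine.register_receiver(ProgressReceiver())
engine.execute()

If that matches your intent, then my question is answered.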
>
>
>> - Section 2.2, second bullet: "Provide the ability to translate
>> global installation data into a valid Automated Installer (AI) Manifest."
>> What is the reason behind this requirement? How does the DC fit
>> into the picture? How do you translate DC data into an AI manifest? Also,
>> if the user has installed from media (not using a repo), will a
>> default repo be used as the repo in the AI manifest?
>
> The reason for this requirement is that users want a way to
> 'replicate' an install easily. Providing an AI manifest as output from
> an installation gives them one option for this 'replication'.
Does it mean that all types of installation create an AI manifest at the
end of the installation? I don't know whether we could create an AI
manifest to replicate the system in all scenarios, such as a custom image
that contains things that do not come from a manifest.
>
> As for translating a DC run into an AI manifest, I am not sure we
> would want to do that, but we could. The DC and AI manifests have one
> thing in common, and that is the data specification section of the
> manifest, for things like target info and package info. DC has an
> additional section for 'execution' which is not part of AI and is not
> likely to ever be. In the case of a DC run translated to an AI
> manifest, not much would carry over from the DC manifest to the AI
> manifest. The repo could carry over, and maybe the package list, if
> the package list to install is the same as the one that was used to
> build the image. Again, I am not sure this is a scenario we want to
> support.
>
>
>
>> Page 6:
>> - Why is data being converted to XML? Why does the architecture
>> document care about data format?
>
> I am removing this. The architecture shouldn't care about data format,
> and in fact the data format should be opaque to consumers and kept
> within the data collection cache itself. The cache should only expose
> methods to get and set the data, without consumers needing to care
> about the internal format.
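That matches what I would expect. For what it is worth, something along
these lines is what I am picturing (a sketch only; the class and method
names are mine, not from the document):

# Sketch of an opaque data cache: consumers only see get/set, and the
# internal storage (a dict here, could be XML or anything else) stays
# private to the cache.

class DataCache(object):
    def __init__(self):
        self._store = {}   # internal format is private to the cache

    def set(self, key, value):
        self._store[key] = value

    def get(self, key, default=None):
        return self._store.get(key, default)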
>
>> - section 2.3 "logging and error reporting"
>> Is there any plan to support multiple logging levels?
>
> Yes, there are plans to support multiple logging levels. The design
> for logging is in progress now.
>
>> Page 7:
>> - Why does the architecture document have details about a specific
>> implementation like "Python logging"?
>
> I think it is noted because it is the default logging mechanism for
> the ExecutionEngine. I have no heartburn about taking it out if you
> want me to.
I don't have any issue if you are looking at Python logging as a model.
I want to make sure that we look at other things too before settling on
one model.
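For example, the standard Python logging module already provides
multiple levels, if that ends up being the model:

import logging

# Standard Python logging levels, shown only as one possible model.
log = logging.getLogger("install")
log.setLevel(logging.DEBUG)
log.addHandler(logging.StreamHandler())

log.debug("detailed trace for developers")
log.info("normal progress messages")
log.warning("something unexpected but recoverable")
log.error("an operation failed")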
>
>> Page 9:
>> - What about an interface to enumerate the disks on the system?
>
> The disks, as well as any other target data type, have the
> 'discover()' method, which is used to start discovery of the
> specified type. So, this would be used to enumerate disks on the
> system.
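Okay, that answers it. So enumerating disks would look roughly like this
(a sketch; only discover() and the get_id()/get_size() names come from
the document, the Disk class body is a stand-in):

# Stand-in for the document's disk target type; the real discover()
# would start discovery rather than return canned data.

class Disk(object):
    def __init__(self, dev_id, size):
        self._id, self._size = dev_id, size

    def get_id(self):
        return self._id

    def get_size(self):
        return self._size

    @classmethod
    def discover(cls):
        return [cls("c0t0d0", "149GB"), cls("c0t1d0", "149GB")]


for disk in Disk.discover():
    print("%s %s" % (disk.get_id(), disk.get_size()))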
>
>> - There are two "get_assoc_slices()". One should be
>> "set_assoc_slices()"
>
> Fixed.
>
>
>> Page 10:
>> - I am confused about whether the partition class refers to all
>> partitions or to a single partition. It looks like it refers to a
>> single partition.
>
> It refers to a single partition.
>
>> - Do you need methods that provide information about the
>> partition, like type, size, id, etc.?
>
> We have the following methods in the DeviceTarget class:
>
> get/set_size
> get/set_id
> get/set_type
>
> Are there other attributes on a partition that I should consider
> adding methods for?
Primary or logical?
Anything special for GPT?
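For example (the method names here are only suggestions, nothing in the
current document), I was thinking of something like:

# Suggested, purely illustrative accessors for the partition object.

class Partition(object):
    def is_primary(self):
        """True for a primary partition, False for a logical one."""
        raise NotImplementedError

    def get_label_scheme(self):
        """Return 'MBR' or 'GPT'; GPT may need different handling."""
        raise NotImplementedError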
>
>> - Why does the slice class need associated parts? All slices are
>> associated with either a single partition or a single disk.
>
> Right.. I was trying to keep the method names consistent across the
> data objects. In the case of a slice, it is only associated with one
> and only one partition or disk. Let me think on this a bit and see
> if I should remove it.
>
>> - I have the same questions for slices that I listed for partitions.
>
> Same answers as for partitions. If you know of any methods that are
> missing specifically for slices, please let me know.
>
>> - What are the flags in slice class?
>
> Flags for creation of a slice.
Can you elaborate? There are flags defined for slices to indicate whether
a slice is read-only, read/write, or mountable. There are also tag values
that define what type of slice it is (swap, root/boot, reserved, etc.).
Do the flags capture all of this information?
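To be concrete, these are the kinds of VTOC values I have in mind
(quoted from memory, so please double-check against sys/vtoc.h):

# Traditional VTOC flag and tag values, as I recall them -- please verify.

V_RONLY  = 0x10   # flag: slice is read-only
V_UNMNT  = 0x01   # flag: slice is not mountable

V_ROOT   = 0x02   # tag: root slice
V_SWAP   = 0x03   # tag: swap slice
V_BACKUP = 0x05   # tag: whole-disk (backup) slice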
Thanks,
Sundar
>
>> Page 14:
>> - What about GPT in target instantiation?
>>
> GPT is a partition type, so is included in the Partition class
> definition.
>
>> Page 15:
>> - Do the TD, TI and Transfer need methods to provide progress
>> reporting?
>
> No, they call the ExecutionEngine report_progress() method as needed
> to provide progress reporting. It is a 'push' and not a 'pull', so no
> methods are required on TD, TI and Transfer for progress reporting.
>
>
>> Page 32:
>> - I don't think that the server side (installadm) fits into the
>> architecture. Since we have plans to run the server-side setup on
>> systems running different operating systems, we need a
>> different design to fit those needs.
>
> This is a good point. I think for Solaris it fits into the
> architecture. But if we are going to support multiple OSes on the
> server, we need to rethink this to ensure it is portable.
>
> Thanks again for the review.
>
> sarah
> *****
>
>> Thanks,
>> Sundar
>>
>>