Hi Ethan,

Thank you for the review. My comments inline...


> Hi Sarah,
> 
> Here are my comments/questions...
> 
> 
> 2.0
> Do application-specific core engine classes that are provided for by
> sub-projects (for example the Manifest Retrieval class or the Media 
> Creation
> class from 5.2 and 5.3 respectively) become part of the engine once
> developed, or are they only part of the application?

Only a part of the application. The Core Engine classes are only these:
- ExecutionEngine
- DataCollection
- Logging

There is apparently some confusion about what is in the Core Engine. The 
Common Application classes and the Application-specific classes are 
always considered outside the Core Engine.
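As a purely illustrative sketch of that boundary (the three class names come from the document; everything else, including the ManifestRetrieval example, is a hypothetical stand-in, not the actual Caiman API):

```python
# Hypothetical sketch of the Core Engine boundary: only these three
# classes are inside; application classes use the engine from outside.

class Logging:
    def __init__(self):
        self.records = []

    def log(self, msg):
        self.records.append(msg)

class DataCollection:
    def __init__(self):
        self.store = {}

class ExecutionEngine:
    def __init__(self):
        self.logging = Logging()
        self.data = DataCollection()

# Outside the Core Engine: an application-specific class (e.g. a
# hypothetical manifest-retrieval step) that merely uses the engine.
class ManifestRetrieval:
    def run(self, engine):
        engine.logging.log("manifest retrieved")

engine = ExecutionEngine()
ManifestRetrieval().run(engine)
```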


> 
> 2.1
> Should error reporting be listed as a requirement on the execution engine
> as well?
> 
Progress reporting encapsulates error reporting. I will modify the 
requirement to make this clear.


> 2.2  The Data Collection component
> 
> What does "global installation data" include?  Is this only the data that
> represents an AI manifest, or a superset of the AI manifest?  ... or is 
> the intent
> for Data Collection to be used for random data object storage between 
> the Core
> components?

It stores everything that is calculated or provided during the course of 
an installation, most likely a superset of the AI manifest. One thing I 
can think of that it might store, but that is not captured in the AI 
manifest, is the altroot mountpoint.
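To illustrate that "superset" idea, here is a minimal sketch of the Data Collection component as a key/value store. The interface and key names are assumptions for illustration only, not the actual Caiman design:

```python
# Hypothetical sketch of Data Collection as a simple key/value store
# holding everything produced during an installation.

class DataCollection:
    """Holds everything calculated or provided during an installation."""

    def __init__(self):
        self._store = {}

    def put(self, key, value):
        self._store[key] = value

    def get(self, key, default=None):
        return self._store.get(key, default)

dc = DataCollection()
# Data that maps back to the AI manifest (key name is illustrative)...
dc.put("target.disk", "c0t0d0")
# ...plus data outside the manifest, e.g. the altroot mountpoint.
dc.put("altroot", "/a")
```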

> 


> 
> 3.2.1.1
> 
> Partition Class / Slice Class
> nit - What about entire partitions that are allocated to pools?  The
> Partition Class also seems to need a get_inuse() method

Can entire partitions, without a slice label, be allocated to a zpool? I 
thought a partition had to have a label on it, EFI at least. I am not 
sure the Partition class needs a get_inuse() method: the slices 
themselves are in use, but a partition is not generally in use on its 
own, unless I am missing something?

However, I can see needing a get_inuse() method for disks. Disks can be 
used as storage in a VMEnv, including zones, so this may be something 
that needs to be considered.
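A sketch of where get_inuse() might live in that hierarchy, per the reasoning above. The get_inuse() name comes from the discussion; everything else here is assumed for illustration:

```python
# Hypothetical sketch: get_inuse() on Disk but not Partition, since a
# partition is not generally "in use" on its own while its slices are.

class Slice:
    def __init__(self, name, inuse_by=None):
        self.name = name
        self.inuse_by = inuse_by  # e.g. "zpool:rpool" or None

class Partition:
    def __init__(self, slices):
        self.slices = slices      # in-use state lives on the slices

class Disk:
    def __init__(self, partitions, assigned_to=None):
        self.partitions = partitions
        self.assigned_to = assigned_to  # e.g. a zone or VM using the disk

    def get_inuse(self):
        """True if the disk itself is consumed, e.g. by a VMEnv/zone."""
        return self.assigned_to is not None

disk = Disk([Partition([Slice("s0", "zpool:rpool")])], assigned_to="zone1")
```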


> FileSystem Class / Dataset Class
> This is perhaps another nit, but the relationship of the Dataset Class 
> being
> a subclass of the FileSystem class doesn't align with the ZFS 
> definitions
> for those objects.  Not all datasets are filesystems, e.g. a ZFS Volume 
> and ZFS
> snapshots aren't filesystems.

A ZFS Volume is covered by the LogicalVolume class. The snapshot is a 
good point, though, and one I think I need to address. I will think on 
this a bit.
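One possible direction, sketched here purely as a possibility and not a settled design: invert the relationship so Dataset is the base class, matching the ZFS terminology, and a snapshot is a dataset without being a filesystem:

```python
# Hypothetical sketch inverting the hierarchy to match ZFS terminology:
# every filesystem and snapshot is a dataset, but not vice versa.

class Dataset:
    """Base class matching the ZFS notion of a dataset."""
    def __init__(self, name):
        self.name = name

class FileSystem(Dataset):
    """A mountable dataset."""
    mountable = True

class Snapshot(Dataset):
    """A read-only point-in-time copy; a dataset, not a filesystem."""
    mountable = False

snap = Snapshot("rpool/ROOT/be@install")
```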


> VMEnv Class / Zone Class
> WRT the R&R statement in the last paragraph of 3.2.1.1 (first paragraph on
> page 14), an assumption here seems to be that the zone content itself 
> always
> lives in the same pool as where the zone configuration is found.  That's 
> not
> always the case.  Zones (and the VMs represented by the VMEnv Class) are
> logical targets, but their translation into physical devices (for target 
> discovery)
> would seem to be dependent on another logical target -- an installed 
> Solaris
> instance (or a BootEnv seems to be the closest object defined).

It wasn't my intent to imply that the zone content itself always lives 
in the same pool. The configuration lives in the pool, and we can get 
the configuration data simply by getting the data from the pool. That's 
all this statement is intended to say. However, I take your point that 
the zone content may live outside the root pool and we might want to 
understand the zone configuration to try to 'do something' about getting 
that content during replication, as well as any devices that are 
resourced to the zone.

As for mapping logical devices to physical devices, there are cases I 
can see where we would need to 'follow' the information for a VM and 
then do more, for example with regard to replicating the content. The 
'in use' details are provided by libdiskmgt, so we don't actually have 
to 'follow' anything to get this data.

A couple of use cases come to mind that I thought I would outline here 
just to try to understand what we might need in terms of zones for Caiman.

1. Creation of a zone during initial installation of a system:
In this case, since we only create a root pool, and I believe that's all 
we plan to support during install, the zone and its contents must 
initially live in that root pool. This might include filesystems not in 
the current BE, such as /export/home or others created by the user.

2. Discovery of zones for replication:
One approach to this is what you are eluding to that is implied in the 
document. That we simply copy the zone configuration, located in 
/etc/zones, and any content we archive from the root pool will be copied 
to the new system. This can result in zones that are not fully available 
if resources for that zone are not contained in the root pool. Even 
today, when a user migrates a zone from one machine to another, it is 
possible that the zone configuration will make it such that a resource 
or network configuration will not 'match' the new system, and as a 
result the zone won't be fully available until the user does some manual 
intervention.

This approach is currently proposed as part of Sanjay's replication 
requirements. You will note he says that for replication we will only 
copy BEs; he excludes 'data' and anything outside of the root dataset. 
So anything defined outside a BE, for example /export/home, will not be 
included in the replication archive, and any zone resources that are not 
in the datasets included in the BEs will not be copied.

> (Perhaps we need to delineate between logical devices --pools, filesystems,
> zfs datasets-- vs. logical targets --solaris instances, zones, VMs-- 
> which occupy
> logical devices and/or physical devices?)
> 

In terms of mapping discovery of these logical devices to physical 
devices, I am not sure we need an extra delineation. For a zone, for 
example, we can get the zonepath, which is a mountpoint, which is 
contained in a zfs pool, which we can then map to the devices. If a zone 
has a device as a resource, then we do have extra work to do to 
understand this. In the superclass, LogicalVolume, we have a 
get_assoc_devs() method to account for this. Again, this mapping happens 
in libdiskmgt.
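The zonepath-to-devices chain described above could be sketched like this. All names here are hypothetical stand-ins; in practice the actual mapping happens in libdiskmgt, as noted:

```python
# Hypothetical sketch of following a zone down to physical devices:
# zonepath -> mountpoint -> pool -> devices.

class Pool:
    def __init__(self, name, devices, mountpoints):
        self.name = name
        self.devices = devices          # e.g. ["c0t0d0s0"]
        self.mountpoints = mountpoints  # mountpoints backed by this pool

class Zone:
    def __init__(self, name, zonepath):
        self.name = name
        self.zonepath = zonepath        # the zone's root mountpoint

def get_assoc_devs(zone, pools):
    """Return the physical devices backing a zone's zonepath."""
    for pool in pools:
        if zone.zonepath in pool.mountpoints:
            return pool.devices
    return []

rpool = Pool("rpool", ["c0t0d0s0"], ["/", "/zones/z1"])
devs = get_assoc_devs(Zone("z1", "/zones/z1"), [rpool])
```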

Maybe I am misunderstanding your concern. If I am, can you clarify?

> 4.1  Drawing 4.1
> Just a clarification question.  From 2.1, "A progress receiver 
> (typically the user
> interface) implements the following ...", gave me the impression that 
> the ui
> Application is what provides the ProgressReceiver methods, is that the 
> case?

Mostly that's the case, but it doesn't have to be. The application could 
have a separate progress receiver that gets the progress and 
communicates with the application to do what it needs to do. That's why 
the diagram shows both the InstallerApp and the ProgressReceiver classes.
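A minimal sketch of that separation, along the lines of section 2.1. The InstallerApp and ProgressReceiver class names come from the diagram; the method names and ConsoleReceiver are assumptions for illustration:

```python
# Hypothetical sketch: the ProgressReceiver need not be the UI
# application itself; the app can plug in any receiver.

class ProgressReceiver:
    """Interface the engine calls back into (names illustrative)."""
    def progress(self, percent, message):
        raise NotImplementedError

class ConsoleReceiver(ProgressReceiver):
    def __init__(self):
        self.events = []

    def progress(self, percent, message):
        self.events.append((percent, message))

class InstallerApp:
    """The app holds a receiver; it does not have to be one itself."""
    def __init__(self, receiver):
        self.receiver = receiver

app = InstallerApp(ConsoleReceiver())
app.receiver.progress(50, "transferring contents")
```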

> If so, that's not very obvious here; but perhaps that's just too much 
> detail for
> this diagram.
> 

Thanks again!

sarah

> 
> thanks,
> -ethan
> 
> 
> On 12/01/09 14:18, Sarah Jelinek wrote:
>> Hi Caimaniacs,
>>
>> I know you have been waiting for this bestseller to hit the shelves! :-).
>>
>> The Caiman team has been working on an updated architecture and we have
>> the architecture document ready for review. The opensolaris-caiman
>> project architecture page is located here(formerly known as Caiman
>> Unified Design):
>>
>> http://hub.opensolaris.org/bin/view/Project+caiman/CUD
>>
>> The Caiman architecture document open for review is located here:
>>
>> http://hub.opensolaris.org/bin/download/Project+caiman/CUD/caimanarchitecture.pdf
>>  
>>
>>
>>
>> Please send comments/questions to caiman-discuss by 12/18/09. If you
>> need more time please let us know, with the holidays coming up we may
>> have to extend the review period.
>>
>> Thank you for your time.
>>
>> Regards,
>> sarah
>> _______________________________________________
>> caiman-discuss mailing list
>> caiman-discuss at opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/caiman-discuss
