On 01/14/10 09:43, Sarah Jelinek wrote:
>>
>> 2.0
>> Do application-specific core engine classes that are provided for by
>> sub-projects (for example the Manifest Retrieval class or the Media
>> Creation
>> class from 5.2 and 5.3 respectively) become part of the engine once
>> developed, or are they only part of the application?
>
> Only a part of the application. The Core Engine classes are only these:
> -ExecutionEngine
> -DataCollection
> -Logging
>
> There is some confusion apparently about what is in the Core Engine. The
> Common Application classes and Application specific classes are always
> considered outside the Core Engine.

I think I just misinterpreted the first sentence in section 2.0.

>
>> 2.2 The Data Collection component
>>
>> What does "global installation data" include? Is this only the data that
>> represents an AI manifest, or a superset of the AI manifest? ... or is
>> the intent
>> for Data Collection to be used for random data object storage between
>> the Core
>> components?
>
> It stores everything that is calculated or provided during the course of
> an installation. Most likely a superset of the AI manifest. I am trying
> to think of things that it might store that are not captured in the AI
> manifest, and one thing I can think of is the altroot mountpoint.

I recall from previous discussion (of older iterations of the arch doc)
that the DataCollection component is what facilitates the checkpointing
ability of the core engine classes (i.e. in terms of state that can be
held).  Is this still the case?  If so, what is the extent of the state
that can be stored between classes?  It seems it can only be string-able
data.  If so, this may be important to note in the document, since it
presents a limitation/boundary on how the application-provided engine
classes communicate, and likely impacts their design.
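
To illustrate the kind of boundary I mean, here is a minimal sketch (the
put/get names are hypothetical, not from the doc) of engine classes
exchanging only string-able state through the DataCollection cache:

    class DataCollection(object):
        """Hypothetical sketch of the cache: only string-able data is held,
        so that state can be checkpointed between engine classes."""

        def __init__(self):
            self._cache = {}

        def put(self, key, value):
            # Anything stored must survive a round trip through a string
            # representation to be checkpointable.
            self._cache[key] = str(value)

        def get(self, key):
            return self._cache.get(key)


    # One engine class records a result...
    dc = DataCollection()
    dc.put("altroot_mountpoint", "/a")

    # ...and a later class (possibly after a checkpoint/restore) reads it
    # back, but only as a string; richer objects would need their own
    # (de)serialization on top of this.
    print(dc.get("altroot_mountpoint"))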


dump_cache() -- This method will dump the XML contained in the cache in
                an AI manifest format.

It would seem, then, that this functionality requires consuming an
interface from the AI application.  Perhaps this should really be
functionality provided by a class from that app?  Also, -nit- if what's
in the DataCollection object is a superset of the AI manifest, a generic
name like dump_cache() seems misleading for a method that dumps
specifically in AI manifest format.
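
Purely as an illustration of what I'm suggesting (the AIManifestWriter
class and its methods are hypothetical, not proposed interfaces), the
AI-specific formatting could live in an application class that consumes
a generic dump from DataCollection:

    class DataCollection(object):
        """Hypothetical generic cache: knows nothing about AI manifests."""

        def __init__(self, cache_xml="<cache/>"):
            self._cache_xml = cache_xml

        def dump_cache(self):
            # Generic: hand back the cached XML as-is.
            return self._cache_xml


    class AIManifestWriter(object):
        """Hypothetical AI application class that owns the AI format."""

        def dump_ai_manifest(self, data_collection):
            # The AI-specific transformation lives here, in the
            # application, rather than in the core DataCollection.
            return self._to_ai_manifest(data_collection.dump_cache())

        def _to_ai_manifest(self, xml):
            return xml  # placeholder for the real AI manifest formatting


    print(AIManifestWriter().dump_ai_manifest(DataCollection()))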


>>
>> 3.2.1.1
>>
>> Partition Class / Slice Class
>> nit - What about entire partitions that are allocated to pools? The
>> Partition Class also seems to need a get_inuse() method
>
> Can entire partitions, without a slice label be allocated to a zpool? I
> thought it had to have a label on it, EFI at least. I am not sure it
> needs a get_inuse() method. The slices themselves are in use but a
> partition is not generally in use on its own, unless I am missing
> something?

I was thinking about zfs intent log devices (disks; and on x86, a
partition) assigned to pools, but I'm actually not 100% sure whether a
label is created in that case.

Thinking about this some more, though, it might not make sense for the
Partition class to have an in_use attribute.  With partitions, mere
existence should imply some sort of usage.  Right?
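
For what it's worth, here is roughly how I picture it (hypothetical
class and method names, assuming in-use state is reported per slice,
e.g. by libdiskmgt): a Partition would derive get_inuse() from its
slices rather than store its own flag:

    class Slice(object):
        def __init__(self, name, in_use=False):
            self.name = name
            self.in_use = in_use      # e.g. as reported by libdiskmgt

        def get_inuse(self):
            return self.in_use


    class Partition(object):
        def __init__(self, slices=None):
            self.slices = slices or []

        def get_inuse(self):
            # Derived, not stored: a partition is "in use" if any of its
            # slices are; its mere existence already implies allocation.
            return any(s.get_inuse() for s in self.slices)


    p = Partition([Slice("c0t0d0s0", in_use=True), Slice("c0t0d0s1")])
    print(p.get_inuse())   # True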

>> VMEnv Class / Zone Class
>> WRT the R&R statement in the last paragraph of 3.2.1.1 (first
>> paragraph on
>> page 14), an assumption here seems to be that the zone content itself
>> always
>> lives in the same pool as where the zone configuration is found.
>> That's not
>> always the case. Zones (and the VMs represented by the VMEnv Class) are
>> logical targets, but their translation into physical devices (for
>> target discovery)
>> would seem to be dependent on another logical target -- an installed
>> Solaris
>> instance (or a BootEnv seems to be the closest object defined).
>
> It wasn't my intent to imply that the zone content itself always
> lives in the same pool. The configuration lives in the pool, and we can
> get the configuration data simply by getting the data from the pool.
> That's all this statement is intended to say. However, I get your point
> about the fact that the zone content may live outside the root pool and
> we might want to understand the zone configuration to try to 'do
> something' about getting that content during replication. As well as any
> devices that are resourced to the zone.
>
> As for mapping the logical device to physical devices, there are cases I
> can see where we would need to 'follow' the information for a VM and
> then do more with regard to replicating the content for example. The 'in
> use' details are provided by libdiskmgt, so we don't actually have to
> 'follow' anything to get this data.
>
> A couple of use cases come to mind that I thought I would outline here
> just to try to understand what we might need in terms of zones for Caiman.
>
> 1. Creation of a zone during initial installation of a system:
> In this case, since we only create a root pool, and I believe that's all
> we plan to support during install, the zone and its contents must
> initially live in that root pool. This might include filesystems not in
> the current BE, such as /export/home or others created by the user.
>
> 2. Discovery of zones for replication:
> One approach to this is what you are alluding to, which is implied in the
> document: that we simply copy the zone configuration, located in
> /etc/zones, and any content we archive from the root pool will be copied
> to the new system. This can result in zones that are not fully available
> if resources for that zone are not contained in the root pool. Even
> today, when a user migrates a zone from one machine to another, it is
> possible that the zone configuration will make it such that a resource
> or network configuration will not 'match' the new system, and as a
> result the zone won't be fully available until the user does some manual
> intervention.
>
> This approach is currently proposed as part of Sanjay's replication
> requirements. You will note he says that for replication we will only
> copy BE's; he excludes 'data' and anything outside of the root dataset.
> So, anything that is defined outside a BE, for example, /export/home,
> will not be included in the replication archive. So, any zone resources
> that are not in the datasets included in the BE's will not be copied.
>
>> (Perhaps we need to delineate between logical devices --pools,
>> filesystems,
>> zfs datasets-- vs. logical targets --solaris instances, zones, VMs--
>> which occupy
>> logical devices and/or physical devices?)
>>
>
> In terms of mapping discovery of these logical devices to physical
> devices I am not sure we need an extra delineation. For a zone for
> example, we can get the zonepath, which is a mountpoint, which is
> contained in a zfs pool, which we can then map to the devices. If a zone
> has a device as a resource, then we do have extra work to do to
> understand this. In the super class, Logical Volume, we have a
> get_assoc_devs() to account for this. Again, this mapping happens in
> libdiskmgt.
>
> Maybe I am misunderstanding your concern. If I am, can you clarify?

I think I'm okay.  As you point out, there are ways to ultimately
discover the physical devices associated with all of the logical
entities you've listed.  My comment was more around the relationship of
the VM/Zone classes to the other logical device entities, but that's not
really architectural.
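
Just to confirm I follow the zonepath -> dataset -> pool -> devices
chain you describe, here is a rough sketch (hypothetical names; the real
lookups would of course go through libdiskmgt and the zones framework):

    class Zone(object):
        """Hypothetical logical target: occupies logical devices."""

        def __init__(self, name, zonepath_dataset, assoc_devs=None):
            self.name = name
            self.zonepath_dataset = zonepath_dataset   # e.g. "rpool/zones/z1"
            self.assoc_devs = assoc_devs or []         # device resources

        def get_assoc_devs(self):
            return list(self.assoc_devs)

        def get_physical_devs(self, pool_to_devs):
            # zonepath -> dataset -> pool -> physical devices, plus any
            # devices resourced directly to the zone.
            pool = self.zonepath_dataset.split("/")[0]
            return pool_to_devs.get(pool, []) + self.get_assoc_devs()


    pool_to_devs = {"rpool": ["c0t0d0s0"], "tank": ["c0t1d0s0"]}
    z = Zone("z1", "tank/zones/z1", assoc_devs=["/dev/dsk/c0t2d0s0"])
    print(z.get_physical_devs(pool_to_devs))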

>
>> 4.1 Drawing 4.1
>> Just a clarification question. The statement in 2.1, "A progress
>> receiver (typically the user interface) implements the following ...",
>> gave me the impression that the UI Application is what provides the
>> ProgressReceiver methods; is that the case?
>
> Mostly that's the case, but it doesn't have to be. The application could
> have a separate progress receiver to get the progress and somehow
> communicate with it to do what it needs to do. That's why the diagram
> has both the InstallerApp and the ProgressReceiver classes shown.

Thanks for the clarification.
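
For my own notes, here is a minimal sketch (hypothetical method and
class names) of what I now understand Drawing 4.1 to allow: a
ProgressReceiver separate from the InstallerApp, with the UI free to
implement the receiver itself instead:

    class ProgressReceiver(object):
        """Hypothetical receiver interface; typically, but not
        necessarily, implemented by the UI application itself."""

        def progress_update(self, percent, message):
            raise NotImplementedError


    class LogFileReceiver(ProgressReceiver):
        # A receiver that is *not* the UI: it just records progress, and
        # the application reads it however it likes.
        def __init__(self):
            self.events = []

        def progress_update(self, percent, message):
            self.events.append((percent, message))


    class InstallerApp(object):
        def __init__(self, receiver):
            self.receiver = receiver   # could be the app itself, or separate

        def run_step(self):
            self.receiver.progress_update(50, "transferring contents")


    app = InstallerApp(LogFileReceiver())
    app.run_step()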


-ethan

