Hi Keith,

No worries, I can sift through. Thank you for the review. My comments inline...
> Hi Sarah,
>
> Forgive me, but I haven't as of yet had a chance to read through
> anyone else's comments or your replies to them. If any of my comments
> are addressed elsewhere, let me know.
>
> General impressions: I think this looks really solid. At least, when I
> think about what the installers might look like in this framework, it
> seems like a Good Thing (tm).
>
> From a terminology standpoint, the word "class" seemed to be used in
> multiple capacities in the document, at least to my eye. In some cases
> it seemed to refer to a literal object-oriented type class, while in
> others, "class" seemed to mean something more like 'module' or
> 'component' (i.e., a larger "grouping" of sorts).

That is a fair statement, although I would argue we have a similar
implementation of "classes" in our code now, such as ICT. The gist of
what we are saying is that this is a modular architecture that allows
common functionality to be shared easily; the term 'class' allows us to
convey this.

> Checkpointing: In section 2.1, it says the 'list of checkpoints is
> defined by the application.' Since we're talking install phases here,
> I'm assuming checkpoints mean more than they do in the DC sense. DC
> checkpoints are ZFS snapshots (plus some other bookkeeping); in a full
> install, in the early stages we don't have a zpool set up yet, so the
> checkpoints have a different definition. Am I right in assuming that
> the checkpoint definitions have to include how to "rollback" to the
> starting point? Is that part of the 'cleanup_checkpoint'
> functionality, or is that different?

They do mean more than in the DC sense. A checkpoint is a piece of
functionality, such as Target Instantiation, that runs as a unit and
cannot be stopped and restarted in the middle. Think of a checkpoint as
a container for a set of functionality.

As for 'rollback', yes, in some cases we need to be able to restart and
clean up from a previous checkpoint run. This can be managed by ZFS when
ZFS is available; for other checkpoints we need to determine whether any
cleanup is required if the user wants to roll back. For example, Target
Discovery has no cleanup of its own, other than the data stored in the
data cache, and that will be managed with the design of the data cache.

> Loggers and "Progress receivers": These two concepts seem very similar
> in function (although not in the data they receive). One thought I had
> was that, if each logging statement were composed of the following
> data, then the progress receiver(s) could be a special logger that
> only consumes certain portions of that data.
> - Current time
> - Current progress %
> - Estimated time remaining
> - Message string
> - Optionally, location in code (execution stack or file name and line
>   number, for example)

Sure, this could work. A progress receiver could be a logger; we just
have to be sure that the data can be used by the progress receiver in a
way that can be displayed.

> 2.1 Execution Engine: I know this isn't a complete API, but for the
> cancel() function, is this a "safe" cancel? Will we have a way to
> stop_at_next_checkpoint() or something similar?

Yes, a safe cancel, as much as we can make it safe. If the user cancels
the process and we cancel the checkpoint, there may be some cruft left
over. Stop at next checkpoint will be provided in some way; I haven't
figured out the API for that yet.
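To make that a bit more concrete, here is a very rough sketch of the
shape I have in mind for a checkpoint and for the safe-cancel behavior.
The class and method names below are only illustrative, not the proposed
API:

# Rough sketch only: names and signatures here are illustrative, not the
# real API.  It just shows a checkpoint as a container for one unit of
# work, a cleanup hook for rollback, and a "finish the current
# checkpoint, then stop" style of safe cancel in the engine.

class AbstractCheckpoint(object):
    """One unit of install functionality that runs as a whole."""

    def execute(self):
        """Do the work of this checkpoint (e.g. Target Instantiation)."""
        raise NotImplementedError

    def cleanup(self):
        """Undo enough of a previous run that the engine can restart from
        just before this checkpoint.  For ZFS-backed checkpoints this may
        be a snapshot rollback; Target Discovery has nothing to clean up
        beyond what it stored in the data cache."""


class Engine(object):
    """Hypothetical engine loop showing the safe-cancel semantics."""

    def __init__(self, checkpoints):
        self.checkpoints = checkpoints   # list defined by the application
        self._stop = False

    def stop_at_next_checkpoint(self):
        # Let the current checkpoint finish, then stop cleanly.
        self._stop = True

    def run(self):
        for cp in self.checkpoints:
            cp.execute()
            if self._stop:
                break

The point is just that cleanup() gives each checkpoint a place to define
its own rollback, whether that is a ZFS rollback or nothing at all.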
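And on your logger/progress receiver idea above, something along these
lines could work. This is just a sketch using Python's standard logging
module; the 'progress' and 'eta' field names are invented here purely
for illustration:

import logging
import time

class ProgressReceiver(logging.Handler):
    """A 'special logger' that only consumes the progress-related fields
    (time, % done, ETA, message) and ignores ordinary log records."""

    def emit(self, record):
        progress = getattr(record, 'progress', None)
        if progress is None:
            return          # not a progress update, nothing to display
        eta = getattr(record, 'eta', '?')
        stamp = time.strftime('%H:%M:%S', time.localtime(record.created))
        print("[%s] %3d%% done, eta %s: %s"
              % (stamp, progress, eta, record.getMessage()))

log = logging.getLogger('install')
log.addHandler(ProgressReceiver())
log.setLevel(logging.INFO)

# An engine-level log_message() could attach the extra fields like this:
log.info("Transferring packages", extra={'progress': 42, 'eta': '3 min'})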
> get_loggers() vs. log_message() - For logging, what is the expected
> usage: log_message(), or iterate through the loggers from
> get_loggers() and log things explicitly to each? (Am I getting too low
> level here?)

Good question :-). The intent is that logging goes through the engine,
so you log to each log you are interested in logging to; I imagine we
will have an 'all' flag or something. We do have a log_message() method
on both the ExecutionEngine and the Logging class. This may be
incorrect; I will take a look at it.

> 3.1 Manifest Parser: parse() - it seems like the data collection
> classes may need to implement this interface in some fashion as well,
> yes?

Why? If the parser parses the manifest and then stores the data in the
data cache, the data collection itself doesn't have to parse anything.
Am I misunderstanding what you are getting at?

> 3.2 Target Discovery: We have DeviceTarget as a base, and
> LogicalTarget as a base. Is there a common base for each (even if it
> initially provides a very minimal interface)?

Probably. And, actually, we are working on this with the data modeling
now. If we come up with a common base for each I will update the
document to reflect that.

> 3.2.1.2: Logical Device: What is the DeviceTarget equivalent of
> instantiate()?

The DeviceTargets are unique in that the Disk type is not something we
ever create. The Partition and Slice types are things we create, via a
label of the appropriate kind. If you look at the methods on
DeviceTarget you will see create_label(), which is essentially the
equivalent of the instantiate() method for the LogicalDevice class
types.
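To illustrate, here is a minimal sketch of how I picture that hanging
together, assuming we do end up with a common base class (which, as I
said, is still being worked out in the data modeling). The bodies are
placeholders, not the real API:

# Sketch only: assumes a common Target base, which is not settled yet.

class Target(object):
    """Hypothetical minimal common base for device and logical targets."""
    def __init__(self, name):
        self.name = name


class DeviceTarget(Target):
    """Base for the device side (Disk, Partition, Slice)."""

    def create_label(self):
        """Write a label of the appropriate kind.  Partition and Slice
        objects come into existence via such a label, so this plays the
        role that instantiate() plays on the logical side."""
        raise NotImplementedError


class Disk(DeviceTarget):
    """Only ever discovered, never created by the installer."""

class Partition(DeviceTarget):
    pass

class Slice(DeviceTarget):
    pass


class LogicalTarget(Target):
    def instantiate(self):
        """Create the logical object (e.g. a zpool or ZFS dataset)."""
        raise NotImplementedError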
> 3.4 Transfer Data: I'm assuming get_type() returns cpio, IPS, etc.
> Would it be better to subclass TransferData based on type?

Maybe, and that may be what we end up doing. We are starting the
Transfer design work in a couple of weeks and we will be looking at
this.

> 5.3 DC: How do DC's current 'checkpoints' relate to the 'checkpoints'
> of the install engine? Will each finalizer be an install engine
> checkpoint? Or are they different concepts?

DC's checkpoints will have to be rewritten, at least to implement the
checkpoint interface. They will also have to store and retrieve data
from the data cache as required. So, yes, they could stay as they are,
but more likely the common things, such as Transfer (im-pop in DC
land), will use the Transfer CommonApplicationClass that will be used
by the other installers. The other functionality will be in an
ApplicationClass specific to DC. It would be preferable to have DC use
as much of the common stuff as possible, but we will have to see how
that works out.

They are the same concept in terms of being a grouping of functionality
that runs as a whole and that we can stop and restart as defined by the
groupings.

sarah

> - Keith
>
> Sarah Jelinek wrote:
>> Hi Caimaniacs,
>>
>> I know you have been waiting for this bestseller to hit the shelves! :-).
>>
>> The Caiman team has been working on an updated architecture and we
>> have the architecture document ready for review. The
>> opensolaris-caiman project architecture page is located here
>> (formerly known as Caiman Unified Design):
>>
>> http://hub.opensolaris.org/bin/view/Project+caiman/CUD
>>
>> The Caiman architecture document open for review is located here:
>>
>> http://hub.opensolaris.org/bin/download/Project+caiman/CUD/caimanarchitecture.pdf
>>
>> Please send comments/questions to caiman-discuss by 12/18/09. If you
>> need more time please let us know; with the holidays coming up we may
>> have to extend the review period.
>>
>> Thank you for your time.
>>
>> Regards,
>> sarah
>> _______________________________________________
>> caiman-discuss mailing list
>> caiman-discuss at opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/caiman-discuss
