Hi Sarah,

Thanks for the explanations.
Sarah Jelinek wrote:
> Hi Keith,
>
> No worries.. I can sift through. Thank you for the review. My
> comments inline...
>
>> Hi Sarah,
>>
>> Forgive me, but I haven't as of yet had a chance to read through
>> anyone else's comments or your replies to them. If any of my
>> comments are addressed elsewhere, let me know.
>>
>> General impressions: I think this looks really solid. At least,
>> when I think about what the installers might look like in this
>> framework, it seems like a Good Thing (tm).
>>
>> From a terminology standpoint, the word "class" seemed to be used
>> in several capacities in the document, at least to my eye. In some
>> cases, it seemed to refer to a literal object-oriented type class,
>> while in others, "class" seemed to mean something more like
>> 'module' or 'component' (i.e., a larger "grouping" of sorts).
>
> That is a fair statement. Although, I would argue we have a similar
> implementation of "classes" in our code now, such as ICT. I think
> the gist of what we are saying is that this is a modular
> architecture that allows for common functionality to be shared
> easily. The term 'class' allows us to convey this.

Good explanation.

>> Checkpointing: In section 2.1, it says the 'list of checkpoints is
>> defined by the application.' Since we're talking install phases
>> here, I'm assuming checkpoints mean more than they do in the DC
>> sense. DC checkpoints are ZFS snapshots (plus some other
>> bookkeeping); in a full install, in the early stages we don't have
>> a zpool set up yet, so the checkpoints have a different definition.
>> Am I right in assuming that the checkpoint definitions have to
>> include how to "rollback" to the starting point? Is that part of
>> the 'cleanup_checkpoint' functionality, or is that different?
>
> They do mean more than in the DC sense. A checkpoint is a piece of
> functionality, such as Target Instantiation, that runs as a unit
> and cannot be stopped and restarted in the middle. Think of a
> checkpoint as a container for a set of functionality.

That makes sense.

> As for a 'rollback', yes, in some cases we need to be able to
> restart and clean up from a previous checkpoint run. This can be
> managed by zfs, when zfs is available, and for other checkpoints we
> need to determine if there is cleanup required if the user wants to
> roll back. For example, with target discovery we don't have any
> cleanup specifically, except for data stored in the data cache.
> That will be managed with the design of the data cache.

Based on this explanation, it sounds like, for example, target
instantiation wouldn't necessarily "roll back" either, but could
potentially be restarted by reading the data cache and re-creating
the partition/slice/zpool layout. Is that more or less correct?

>> Loggers and "Progress receivers": These two concepts seem very
>> similar in function (although not in the data they receive). One
>> thought I had was that, if each logging statement were composed of
>> the following data, then the progress receiver(s) could be a
>> special logger that only consumes certain portions of that data.
>> - Current time
>> - Current progress %
>> - Estimated time remaining
>> - Message string
>> - Optionally, location in code (execution stack or file name and
>>   line number, for example)
>
> Sure, this could work. A progress receiver could be a logger.. we
> just have to be sure that the data can be used by the progress
> receiver in a way that can be displayed.
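To make that concrete, here is roughly what I have in mind, using
Python's stdlib logging purely for illustration (ProgressHandler and
the 'progress_pct'/'eta_sec' field names are hypothetical, not
anything from the document):

    import logging

    class ProgressHandler(logging.Handler):
        """A 'progress receiver' as a logging handler: it ignores the
        message text and consumes only the structured fields."""

        def emit(self, record):
            # record.created (time) and record.lineno (location) come
            # for free; the progress data rides along via the 'extra'
            # dict passed at the call site below.
            pct = getattr(record, 'progress_pct', None)
            if pct is not None:
                eta = getattr(record, 'eta_sec', '?')
                print("[%d%%] eta %ss" % (pct, eta))

    log = logging.getLogger('install.engine')
    log.setLevel(logging.DEBUG)
    log.addHandler(logging.FileHandler('/tmp/install_log'))

    progress = ProgressHandler()
    progress.setLevel(logging.INFO)  # each receiver filters for itself
    log.addHandler(progress)

    # One call feeds both the log file and the progress display.
    log.info('Target instantiation complete',
             extra={'progress_pct': 40, 'eta_sec': 120})

This also ties into my comment on levels further down: each receiver
decides what it cares about via its own level, rather than the caller
deciding for it.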
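(And circling back to the checkpoint/cleanup exchange above for a
moment, the shape I'm picturing, hypothetical and not the proposed
API, is something like:

    class Checkpoint(object):
        """A unit of install functionality that runs as a whole and
        cannot be stopped and restarted in the middle."""

        def execute(self):
            raise NotImplementedError

        def cleanup(self):
            """Undo side effects so the engine can restart from the
            previous checkpoint. The default no-op fits cases like
            target discovery, whose only output lives in the data
            cache; a ZFS-backed checkpoint would roll a snapshot
            back here instead."""
            pass

so whether 'rollback' means a ZFS snapshot, re-reading the data
cache, or nothing at all becomes a per-checkpoint decision.)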
>> 2.1 Execution Engine: I know this isn't a complete API, but for
>> the cancel() function, is this a "safe" cancel? Will we have a way
>> to stop_at_next_checkpoint() or something similar?
>
> Yes, a safe cancel, as much as we can make it safe. If the user
> cancels the process and we cancel the checkpoint, there may be some
> cruft left over. Stop at next checkpoint will be provided in some
> way. Haven't figured out the API for that yet.

I think I was mixing terms in my question here, but I like the
answer I got.

>> get_loggers() vs. log_message() - For logging, what is the
>> expected usage: log_message(), or iterate through the loggers from
>> get_loggers() and log things explicitly to each? (Am I getting too
>> low level here?)
>
> Good question :-). The intent is that logging goes through the
> engine. So, you log to each log you are interested in logging to. I
> imagine we will have an 'all' flag or something. We do have a
> log_message() method on both the ExecutionEngine and the Logging
> class. This may be incorrect. I will take a look at this.

The logging systems I've dealt with (in Python and Java) have the
individual loggers manage what data they care about - a logger set
to print at the "error" level would ignore statements passed in at
the "debug" level. The proposal in this document seems to be the
reverse - the engine (or the application) retrieves the loggers and
decides which ones to log to.

I would strongly recommend the former paradigm, because it provides
a single interface for changing the logging level of one or more
loggers (that interface being provided by the base logger class).
Someone looking for debug output can set that on the specific
logger, instead of forcing the individual application to keep track
of what the current logging level is and send messages to different
loggers based on that. (If I'm misunderstanding the intent of these
API functions, or if this conversation is better had when the
logging service has a design doc, just let me know.)

>> 3.1 Manifest Parser: parse() - it seems like the data collection
>> classes may need to implement this interface in some fashion as
>> well, yes?
>
> Why? If the parser parses the manifest, then stores the data in the
> data cache, the data collection itself doesn't have to parse
> anything. Am I misunderstanding what you are getting at?

The end-to-end goal is to get a file (xml, for example) converted
into an in-memory object. It seems to me that the code that best
knows how to construct an object (including what combinations of
instantiation parameters are invalid) is the object itself. When I
saw this API function, my first thought jumped to the parser reading
in a manifest file, then breaking the file into chunks based on what
objects it needs to instantiate. It would then instantiate those
objects by passing each 'chunk' to the object's constructor, collect
the objects it has built, and pass them to the data cache. Reading
that back to myself, though, I may be splitting hairs here in terms
of definition, or delving down too far into implementation.
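Concretely, the flow I was imagining looks roughly like this (every
name here, including data_cache.store(), is hypothetical, just to
show where construction and validation would live):

    import xml.etree.ElementTree as etree

    # Maps manifest element names to the classes that know how to
    # build themselves; the classes, not the parser, own validation.
    HANDLERS = {}

    def handles(tag):
        def register(cls):
            HANDLERS[tag] = cls
            return cls
        return register

    @handles('zpool')
    class Zpool(object):
        def __init__(self, element):
            # Only the object knows which parameter combinations
            # are invalid.
            self.name = element.get('name')
            if not self.name:
                raise ValueError('zpool element requires a name')

    def parse(manifest_path, data_cache):
        """Chunk the manifest; each class constructs itself."""
        root = etree.parse(manifest_path).getroot()
        for element in root:
            cls = HANDLERS.get(element.tag)
            if cls is not None:
                data_cache.store(cls(element))  # constructor validates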
>> 3.2 Target Discovery: We have DeviceTarget as a base, and
>> LogicalTarget as a base. Is there a common base for each (even if
>> it initially provides a very minimal interface)?
>
> Probably. And, actually, we are working on this with the data
> modeling now. If we come up with a common base for each then I will
> update the document to reflect that.

Ok. Even if there's little or no functionality that's common between
them now, it almost seems like there could potentially be something
at some point in the future. I think it's worth considering the
possibility before locking ourselves into these concepts as being
separate (or together), as it may or may not be easy to converge
them later, depending on implementation.

>> 3.2.1.2: Logical Device: What is the DeviceTarget equivalent of
>> instantiate()?
>
> The DeviceTargets are unique in that the Disk type is not something
> we ever create. The Partition and Slice types are things we create,
> via a label of the appropriate kind. If you look at the methods on
> DeviceTarget you will see:
>
> create_label() is essentially the equivalent of the instantiate()
> method for the LogicalDevice class types.

Ah, thank you.

>> 3.4 Transfer Data: I'm assuming get_type() returns cpio, IPS, etc.
>> Would it be better to subclass TransferData based on type?
>
> Maybe.. And, that may be what we end up doing. We are starting the
> Transfer design work in a couple of weeks and we will be looking at
> this.

Ok. I'll keep an eye out for that then.

>> 5.3 DC: How do DC's current 'checkpoints' relate to the
>> 'checkpoints' of the install engine? Will each finalizer be an
>> install engine checkpoint? Or are they different concepts?
>
> DC's checkpoints will have to be rewritten, at least to implement
> the checkpoint interface. They will also have to store and retrieve
> data from the data cache as required. So, yes, they could stay as
> they are, but more likely the common things, such as Transfer
> (im-pop in DC land), will use the Transfer CommonApplicationClass
> that will be used by the other installers. The other functionality
> will be ApplicationClass specific to DC. It would be preferable to
> have the DC use as much of the common stuff as possible, but we
> will have to see how that works out.
>
> They are the same concept in terms of a grouping of functionality
> that runs as a whole and that we can stop and restart as defined by
> the groupings.

Thanks for the explanation!

- Keith

> sarah
>
>> - Keith
>>
>> Sarah Jelinek wrote:
>>> Hi Caimaniacs,
>>>
>>> I know you have been waiting for this bestseller to hit the
>>> shelves! :-).
>>>
>>> The Caiman team has been working on an updated architecture and
>>> we have the architecture document ready for review. The
>>> opensolaris-caiman project architecture page is located here
>>> (formerly known as Caiman Unified Design):
>>>
>>> http://hub.opensolaris.org/bin/view/Project+caiman/CUD
>>>
>>> The Caiman architecture document open for review is located here:
>>>
>>> http://hub.opensolaris.org/bin/download/Project+caiman/CUD/caimanarchitecture.pdf
>>>
>>> Please send comments/questions to caiman-discuss by 12/18/09. If
>>> you need more time please let us know; with the holidays coming
>>> up we may have to extend the review period.
>>>
>>> Thank you for your time.
>>>
>>> Regards,
>>> sarah
>>> _______________________________________________
>>> caiman-discuss mailing list
>>> caiman-discuss at opensolaris.org
>>> http://mail.opensolaris.org/mailman/listinfo/caiman-discuss
