Dave Miner wrote:
> Sundar Yamunachari wrote:
>> Dave:
>>
>> Thanks for the comments.
>>
>> Dave Miner wrote:
>
> ...
>>> page 7, section 4.1: Perhaps I missed something, but I don't recall
>>> support of upgrading instances installed on SVM as being on the
>>> planned feature list, so what are we doing with this information?
>>> Also, we should be using "GPT" here instead of "EFI", right?
>> The SVM information is recorded here so that we can provide a useful
>> explanation to the user why some instances are not upgradeable. It
>> should be GPT. I will fix that.
>
> I presume you will clarify the text regarding the usage of the SVM
> information?

Yes. I will describe the usage of the SVM information.

>
> ...
>>> page 11, 4.1.2, om_initiateTargetDiscovery: the description says
>>> that the "progress of the callback will be returned..." when it
>>> should say "progress of the discovery will be returned...". Also,
>>> the text here says, "If target discovery cannot be started the
>>> installation application must exit." I think it's fundamentally
>>> incorrect for a component to be dictating that its caller must react
>>> in any particular way to a failure, certainly not in a fashion so
>>> drastic as "must exit". Whenever possible, the installation
>>> application and its supporting functions should be written to allow
>>> for the user to attempt to re-initiate a process, not exit and tell
>>> them "type suninstall to try again" or somesuch. Finally, how does
>>> the caller obtain details on the reason for a failure?
>> Without the disk data, we can't proceed. We want to provide as much
>> information as possible to the dwarf GUI so that it can decide what
>> action needs to be taken. The document needs more polish when it
>> comes to the Orchestrator Disk Discovery functions (pointing to
>> me :-) ).
>
> I understand that we can't do an installation for this project without
> having any visible disks (though eventually that may not necessarily
> be a requirement), but I continue to object to this specification
> attempting to tell the caller that it "must exit". Saying that the
> discovery needs to be re-run and that no further progress will be
> possible until a successful discovery completes is fine; how the
> calling application chooses to deal with that state of affairs is its
> own business.

Okay, fair enough. We will fix the wording.

>
> ...
>>> om_getDiskPartitionInfo: why not just take a DiskInfo as the first
>>> argument? And what is meant by "A failure will be fatal in this case"?
>> The DiskInfo structure returned from om_getDiskInfo is a linked list
>> of DiskInfo. Why does the whole structure need to be passed back when
>> diskName is enough to get the partitions?
>
> If this were the long-term design I'd be pushing strongly to have the
> callers not directly referencing structure members but instead using
> accessor functions, because that gives us much more flexibility in
> evolving the design, so generally I'm suggesting that callers not have
> to grovel around in the structures to use results from one call to
> make another call.

The intention is not to ask the callers to use the sub-members of the
defined structures. When the caller asks for disk info, the partition
information will not be returned. When the partition information is
required, it will be treated as a separate structure. If it is too
confusing, I will define separate structures for the callers to use.

>
>>> om_getSolarisInstances: there's no way to free the space this
>>> allocates other than to commit the installation? What happens if
>>> the user goes back to the beginning of the installer, do we just
>>> leak that memory?
>> No memory leaks :-( . All allocated memory is tracked by the
>> Orchestrator. Initially we were planning not to have any free
>> functions.
>> The memory is freed when the Orchestrator detects that it is no
>> longer needed (like when the user commits to install/upgrade). There
>> are always possibilities that using these functions may not lead to
>> a commit to install/upgrade. So we decided to add a generic function
>> to free all the allocated memory.
>
> It's not at all clear to me that the orchestrator can accurately infer
> from the application's access pattern what memory it needs to keep
> around and what it can release, so providing some additional functions
> here may well be needed. I'd think you would want them internally,
> anyway, even if you're correct that the caller doesn't need to worry
> about this at all. I still don't get what actually happens with this
> if the user backs up all the way to the beginning of the installer.

Each time a caller asks for disks/partitions/Solaris instances, space
is allocated and returned to the caller. Each time the Orchestrator
allocates data, the information (pointers) is saved in the Orchestrator
so that it can be released later. When the caller calls
om_releaseTargetData(), all the data allocated for this caller will be
released. We may add a handle (as Niall suggested) to identify the data
for each caller, or we can have a free function for each type of data
(disk data and Solaris instance data).

>
> ...
>>> om_releaseTargetData: this seems to have an excessively drastic
>>> effect; shouldn't we provide some slightly less drastic ways of
>>> freeing up results that we don't need?
>> All the allocated data are either disk-specific or instance-specific.
>> I don't understand the point of providing specific functions to free
>> partition and/or slice data. This only frees the data allocated by
>> the Orchestrator on behalf of the consumer. The Orchestrator cache
>> will be freed only when there is a subsequent TargetDiscovery() call
>> or a commit to install/upgrade.
>
> Given the very large set of complaints we get about the install memory
> footprint, I think attempting to allow the app to aggressively
> minimize its memory usage is worth some small effort. And again, I
> think you'd want those functions, anyway, for internal usage, so I
> don't see there's any significant extra work here.

Okay. I will look into it.
Thanks for the suggestions,
Sundar

>
> Dave
