Sarah Jelinek wrote:
> Jean McCormack wrote:
>> Glenn Lagasse wrote:
>>> * Dave Miner (Dave.Miner at Sun.COM) wrote:
>>>> Jean McCormack wrote:
>>>>> Jens Deppe wrote:
>>>>>> Jean McCormack wrote:
>>>>>>> Jens Deppe wrote:
>>>>>>>> Jean McCormack wrote:
>>>>>>>>> Jens Deppe wrote:
>>>>>>>>>> Hi Jean,
>>>>>>>>>>
>>>>>>>>>> One comment inline...
>>>>>>>>>>
>>>>>>>>>> On 05/21/09 14:19, Jean McCormack wrote:
>>>>>>>>>>> Progress Reporting:
>>>>>>>>>>>
>>>>>>>>>>> The output to the console to reflect the progress of the 
>>>>>>>>>>> auto-install should be of the format:
>>>>>>>>>>>
>>>>>>>>>>> (pseudo progress bar) High level description of current 
>>>>>>>>>>> functionality
>>>>>>>>>>>
>>>>>>>>>>> Example (general idea, wording is not exact) :
>>>>>>>>>>>
>>>>>>>>>>> (.....................) Discovering available services
>>>>>>>>>>> (..... ) Choosing service
>>>>>>>>>>>
>>>>>>>>>>> The .'s indicate percentage completion. This means we have a 
>>>>>>>>>>> dependency upon IPS to supply size information for the 
>>>>>>>>>>> packages.
>>>>>>>>>>> A return is only implemented when the install moves from one 
>>>>>>>>>>> major block of functionality to the next. Otherwise, the 
>>>>>>>>>>> text is overwritten with updates to the dots to indicate 
>>>>>>>>>>> progress.
>>>>>>>>>>>
>>>>>>>>>>> The use of a virtual console was considered as a possibility
>>>>>>>>>>> if more detailed progress reporting is required. Preliminary
>>>>>>>>>>> investigation indicates that this is not currently in our
>>>>>>>>>>> microroot and would be too large to include there.
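[Editor's note: a minimal sketch of the overwrite-in-place progress line described above, written in Python. The function names, bar width, and sample text are illustrative placeholders, not the actual AI client code.]

import sys
import time

BAR_WIDTH = 21  # positions inside the parentheses

def report_progress(percent, description):
    # Redraw the current line; the dots grow with percentage complete.
    dots = '.' * int(BAR_WIDTH * percent / 100)
    sys.stdout.write('\r' + "(%-*s) %s" % (BAR_WIDTH, dots, description))
    sys.stdout.flush()

def end_block():
    # A newline is written only when one major block of work finishes.
    sys.stdout.write('\n')

if __name__ == '__main__':
    for pct in range(0, 101, 10):
        report_progress(pct, "Discovering available services")
        time.sleep(0.1)
    end_block()

The carriage return keeps updates on the same console line; only end_block() emits a newline, matching the "return only between major blocks" behaviour described above.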
>>>>>>>>>> Please consider enabling the log file to be 
>>>>>>>>>> retrieved/accessed remotely. Simply exposing it via an http 
>>>>>>>>>> service would be a *big* help. Especially when installing 
>>>>>>>>>> systems remotely.
>>>>>>>>> Does this meet your needs?
>>>>>>>>>
>>>>>>>>> The log file will also be written to the AI server at 
>>>>>>>>> /var/ai/client_logs/ip_address/install_log.
>>>>>>>> Not really, as we are (for now) not using the AI server to
>>>>>>>> provision clients and deliver the manifests. (We're using a
>>>>>>>> Begin service to derive the manifest.)
>>>>>>>>
>>>>>>>> So, what mechanism/protocol will be used to move the install 
>>>>>>>> log from client to server? Will it be streamed during the 
>>>>>>>> install or only sent once the install completes?
>>>>>>> Streamed during the install.
>>>>>> So is the source exposed on the client via http or some other 
>>>>>> common means whereby we could monitor it without relying on the 
>>>>>> AI server?
>>>>> At this point, there's no plan for that. You could build a custom
>>>>> AI image with ssh enabled and ssh to the system. However, be aware
>>>>> there are security concerns with doing so. See the caiman-discuss
>>>>> discussion with respect to this.
>>>>>
>>>> Recall that ssh isn't enabled because the use of well-known 
>>>> passwords makes the system essentially wide open to compromise 
>>>> during the installation period, but allowing some *controlled* 
>>>> means of monitoring the installation directly with the client seems 
>>>> worth considering.
>>>
>>> I apologize for being late to this discussion. The VM constructor project
>>> is planning to make use of an AI client to perform a hands-off automated
>>> install inside a virtual machine in order to provide pre-constructed vm
>>> images. To support this, we'll need a bootable automated installer
>>> image that doesn't require a webserver setup. That's not really
>>> relevant to this discussion; what is relevant is observability.
>>> Ideally we would like to be able to monitor and react to events going
>>> on inside the VM while the AI client is performing its installation,
>>> in a hands-off manner of course. Things like error conditions,
>>> progress reporting, and completion status, so that we can report to
>>> the user running the distribution constructor what's going on and
>>> react appropriately to problems. So we would need something like what
>>> Dave proposes for allowing some controlled means of monitoring the
>>> installation directly with the client. Perhaps that's some
>>> 'server'-like process running on the client that interested parties
>>> can connect to and receive data from, or something else. Whatever the
>>> implementation, we would need the data presented in such a way that
>>> we can easily (relative term) relay that information to the user (and
>>> the DC logs) about what's going on, as well as deal with any errors
>>> that crop up during the various steps of the installation.
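[Editor's note: a rough sketch of the kind of 'server'-like process Glenn describes, assuming a simple line-oriented TCP protocol. The port number, class name, and message format are hypothetical placeholders, not a committed design.]

import socket
import threading

MONITOR_PORT = 5555  # hypothetical well-known port for monitors

class ProgressBroadcaster(object):
    """Accept monitor connections and push progress lines to all of them."""

    def __init__(self, port=MONITOR_PORT):
        self._clients = []
        self._lock = threading.Lock()
        self._sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self._sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self._sock.bind(('', port))
        self._sock.listen(5)
        accept_thread = threading.Thread(target=self._accept_loop)
        accept_thread.daemon = True
        accept_thread.start()

    def _accept_loop(self):
        # Accept monitors in the background; the install is unaffected
        # if nobody ever connects.
        while True:
            conn, _addr = self._sock.accept()
            with self._lock:
                self._clients.append(conn)

    def publish(self, message):
        # Send one progress/error line to every connected monitor,
        # dropping monitors that have gone away.
        data = (message + '\n').encode('utf-8')
        with self._lock:
            for conn in list(self._clients):
                try:
                    conn.sendall(data)
                except socket.error:
                    self._clients.remove(conn)

The installer would call publish() at each checkpoint (progress, error, completion), and the DC or any other interested party could connect to the port and read lines as they arrive.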
>> Glenn and I talked today about this requirement.
>>
>> The basics are: in this case, the VM is the AI client and there is no
>> AI server. The DC is running the VM, but the VM knows nothing about
>> the DC. However, he would like the DC to be able to monitor what is
>> going on.
>
> So, in the DC today, there is a socket opened for the finalizer
> scripts on which data can be passed. If the AI client uses this to
> pass data to the DC as the install is progressing, would this solve
> the observability requirements?
That's a question for Glenn. Sounds to me like it would, if it's possible.
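[Editor's note: for reference, a minimal sketch of what pushing data from the AI client to a socket the DC has opened might look like. The host, port, and message format here are hypothetical; the actual finalizer-script socket interface may differ.]

import socket

def report_to_dc(host, port, message):
    """Send one line of progress/status data to the DC's listening socket."""
    conn = socket.create_connection((host, port))
    try:
        conn.sendall((message + '\n').encode('utf-8'))
    finally:
        conn.close()

# e.g., from inside the AI client at each checkpoint:
#   report_to_dc('10.0.2.2', 8888, 'PROGRESS 60 Installing packages')
#   report_to_dc('10.0.2.2', 8888, 'ERROR unable to mount target')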


>
>> The more generic
>> case of this was brought up by Jens Deppe on the caiman-discuss alias 
>> last week.
>> (Subject: AI client redesign for progress reporting and error 
>> reporting/logging)
>>
>> We talked about some general ideas:
>>
>> 1) ssh - no, due to security issues
>> 2) Have the DC start up a webserver - no, too complicated, and it
>> might not work for the generic case, which doesn't have the DC running
>> 3) Glenn's idea to have a daemon running on the client which could be
>> connected to from the outside to receive data.
>
>>
>> Glenn will send a more detailed write up on #3.
>>
>> 4) Jens broached the idea of exposing the source on the client via 
>> http or something similar.
>>
> What source are you referring to here? I haven't kept up completely
> with Jens' email thread, so I thought I would short-circuit reading it
> all and ask :-).
Here's the quote:

So is the source exposed on the client via http or some other common 
means whereby we could monitor it without relying on the AI server?

Jean
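
[Editor's note: idea 4 above, exposing the log over http, could be as small as serving the log directory with Python's stock HTTP server. A rough sketch; the log directory and port are hypothetical placeholders.]

import os
try:
    # Python 3
    from http.server import HTTPServer, SimpleHTTPRequestHandler
except ImportError:
    # Python 2
    from BaseHTTPServer import HTTPServer
    from SimpleHTTPServer import SimpleHTTPRequestHandler

LOG_DIR = '/tmp'    # hypothetical: directory holding install_log
HTTP_PORT = 8080    # hypothetical monitoring port

def serve_logs():
    # Serve LOG_DIR read-only so a remote monitor can poll the log,
    # e.g. http://<client-ip>:8080/install_log
    os.chdir(LOG_DIR)
    server = HTTPServer(('', HTTP_PORT), SimpleHTTPRequestHandler)
    server.serve_forever()

if __name__ == '__main__':
    serve_logs()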

>
> thanks,
> sarah
>>
>> We'd like to get feedback on these items and solicit input for more
>> ideas.
>>
>> Jean
>>
>>
>>
>> _______________________________________________
>> caiman-discuss mailing list
>> caiman-discuss at opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/caiman-discuss
>

