Hi William,

Those changes look good.

Thank you,
Jan


William Schumann wrote:
> Jan,
> I implemented all of your suggestions and regenerated webrev.
> William
>
> jan damborsky wrote:
>> Hi William,
>>
>> I have only nits:
>>
>> auto_install.c
>> --------------
>> 431, 458: comment says 'write' - it probably should be 'finalize'
>>
>> 426          *      there be no info about partitions for TI,
>> ->
>> 426          * there is no info about partitions for TI,
>>
>>
>> As far as dealing with slice 1 when it is dedicated to the swap
>> device is concerned, your suggestion below sounds reasonable.
>>
>> Thank you,
>> Jan
>>
>>
>> William Schumann wrote:
>>> Jan,
>>> This was put on hold for the November release, but it is high 
>>> priority for the SPARC release.
>>>
>>> Jan Damborsky wrote:
>>>> Hi William,
>>>>
>>>> In general the changes look good; I have
>>>> only a couple of nits - please see below.
>>>> Also, could you please attach the test procedures
>>>> used to validate that the new modifications
>>>> work and that no regressions were introduced?
>>>>
>>> Testing partition and slice editing:
>>> -set up AI server
>>> -ran a test copy of auto-install and liborchestrator out of /tmp
>>> -created a test manifest based on ai_combined_manifest.xml
>>> -copied ai_manifest.default.xml and ai_manifest.rng into /tmp
>>> -command line: LD_DBG_LVL=4 LD_LOAD_LIBRARY=/tmp /tmp/auto-install -p <my test manifest>
>>> -did several permutations of custom slices and partitions in the manifest
>>>
>>> Regression testing, a.k.a. does the GUI still work? :
>>> -used RC2 ISO with VirtualBox
>>> -put new builds of liborchestrator.so.1 and libti.so.1 into /tmp
>>> -ran as root "crle -e LD_LIBRARY_PATH=/tmp" to add /tmp as an ld search path
>>> -verified with ldd /usr/lib/gui-install that the test shared libraries are used
>>> -as "jack", ran LS_DBG_LVL=4 pfexec gui-install
>>> -checked /tmp/install_log - the call to TI looks fine, swap slice created when needed
>>> -used format(1M) to check slices and partitions
>>> --- saw swap slice in slice 1, slice 0 used the rest of the available space
>>> -tested with and without a predefined Solaris2 partition
>>>>
>>>> auto_td.c
>>>> ---------
>>>> om_write_partition_table(), om_write_vtoc()
>>>> - it seems that after those functions were changed,
>>>>  they no longer carry out the real target modifications -
>>>>  that happens in do_ti(). Could you please rename
>>>>  those functions, as the current names are misleading?
>>>>  I might recommend something like
>>>>  om_prepare_partition_table() and om_prepare_vtoc().
>>> Renamed several of these.
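>>>
>>> For context, the split now is roughly: the om_prepare_*() functions
>>> only build the list of TI attributes, and do_ti() is the single
>>> place that applies them to the target.  A minimal sketch, assuming
>>> libnvpair; the attribute names, signatures and do_ti() body below
>>> are illustrative, not the actual liborchestrator code:
>>>
>>> #include <libnvpair.h>
>>>
>>> /*
>>>  * Build (but do not apply) the nvlist of attributes describing the
>>>  * desired vtoc.  Nothing is written to disk here.
>>>  */
>>> nvlist_t *
>>> om_prepare_vtoc(const char *disk_name)
>>> {
>>> 	nvlist_t *attrs;
>>>
>>> 	if (nvlist_alloc(&attrs, NV_UNIQUE_NAME, 0) != 0)
>>> 		return (NULL);
>>>
>>> 	/* illustrative attribute names, not the real TI_ATTR_* strings */
>>> 	if (nvlist_add_string(attrs, "target_disk", disk_name) != 0 ||
>>> 	    nvlist_add_uint16(attrs, "slice_count", 2) != 0) {
>>> 		nvlist_free(attrs);
>>> 		return (NULL);
>>> 	}
>>> 	return (attrs);
>>> }
>>>
>>> /*
>>>  * do_ti() is the single place that actually modifies the target,
>>>  * e.g. by handing the attribute list to libti.
>>>  */
>>> int
>>> do_ti(nvlist_t *attrs)
>>> {
>>> 	/* ... call into libti with attrs here ... */
>>> 	nvlist_free(attrs);
>>> 	return (0);
>>> }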
>>>>
>>>> perform_slim_install.c
>>>> ----------------------
>>>>
>>>> It seems slim_set_slice_attrs() is no longer used.
>>>> If this is the case, could you please completely
>>>> remove that function from slim_util.c?
>>>>
>>> Removed. That left only two functions in slim_util.c - one for
>>> preparing TI attributes for slices and one for partitions - which I
>>> moved into disk_slices.c and disk_parts.c respectively, and then
>>> deleted slim_util.c.  The changes in slim_util.c pertaining to swap
>>> and dump creation were merged.  This raises the question of what the
>>> proper behavior is if the user has other plans for slice 1 but AI
>>> decides that swap and dump should be dedicated to slice 1.  The code
>>> will let customization of slice 1 take priority over using slice 1
>>> for swap and dump.
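>>>
>>> Roughly, the intended rule is: dedicate slice 1 to swap/dump only
>>> when the user has not already customized it.  A minimal sketch of
>>> that check; slice_info_t, slice_is_customized() and SWAP_DUMP_SLICE
>>> are hypothetical names, not the actual code:
>>>
>>> #define	SWAP_DUMP_SLICE	1	/* slice conventionally used for swap/dump */
>>>
>>> typedef struct {
>>> 	int	slice_id;
>>> 	int	user_defined;	/* nonzero if the manifest/GUI set it up */
>>> } slice_info_t;
>>>
>>> /* hypothetical helper: did the user customize this slice? */
>>> int
>>> slice_is_customized(const slice_info_t *slices, int nslices, int id)
>>> {
>>> 	int	i;
>>>
>>> 	for (i = 0; i < nslices; i++) {
>>> 		if (slices[i].slice_id == id && slices[i].user_defined)
>>> 			return (1);
>>> 	}
>>> 	return (0);
>>> }
>>>
>>> /*
>>>  * User customization of slice 1 takes priority: only claim it for
>>>  * swap and dump when it is not already spoken for.
>>>  */
>>> int
>>> use_slice1_for_swap_dump(const slice_info_t *slices, int nslices)
>>> {
>>> 	return (!slice_is_customized(slices, nslices, SWAP_DUMP_SLICE));
>>> }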
>>>
>>> Note that this does not have SPARC-specific code yet.
>>> William
>>>>
>>>>
>>>> William Schumann wrote:
>>>>> http://cr.opensolaris.org/~wmsch/bug-4233/
>>>>> http://defect.opensolaris.org/bz/show_bug.cgi?id=4233
>>>>>
>>>>> Calling TI twice for slices and partitions was fixed by using a 
>>>>> single calling point for each in perform_slim_install.c for GUI 
>>>>> and AI.
>>>>>
>>>>> The actual installation failure turned out to be due to a TI bug 
>>>>> in which mktemp(3c) was being called using a pointer to a constant 
>>>>> text string, which mktemp() was overwriting.  On the second call 
>>>>> to TI, the constant text area wasn't valid (did not contain "X"s) 
>>>>> and mktemp() failed.
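>>>>>
>>>>> The failing pattern reduces to something like the sketch below
>>>>> (simplified, not the actual TI source):
>>>>>
>>>>> #include <stdio.h>
>>>>> #include <stdlib.h>
>>>>>
>>>>> int
>>>>> main(void)
>>>>> {
>>>>> 	/*
>>>>> 	 * BROKEN: passing a pointer to a constant text string.
>>>>> 	 * mktemp(3C) rewrites the trailing "XXXXXX" in place, so a
>>>>> 	 * second call through the same literal finds no X's left
>>>>> 	 * and fails (the template becomes a null string).
>>>>> 	 *
>>>>> 	 * char *tmpl = "/tmp/tiXXXXXX";
>>>>> 	 */
>>>>>
>>>>> 	/* FIXED: give mktemp() a writable buffer of its own */
>>>>> 	char tmpl[] = "/tmp/tiXXXXXX";
>>>>>
>>>>> 	if (mktemp(tmpl)[0] == '\0') {
>>>>> 		(void) fprintf(stderr, "mktemp failed\n");
>>>>> 		return (1);
>>>>> 	}
>>>>> 	(void) printf("unique name: %s\n", tmpl);
>>>>> 	return (0);
>>>>> }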
>>>>
>>

