Jan:

I was able to install osol_109 with the LiveCD on line1-dellp350 successfully.



jan damborsky wrote:
> Hi Mary,
> 
> 
> On 03/26/09 01:55, Ethan Quach wrote:
>>
>>
>> mary ding wrote:
>>> Jan and Ethan:
>>>
>>> line1-hpdc5750 and line1-dellp350 hang and the install cannot continue.
>>> So we need swap for the 768 MB and 1024 MB systems.
>>
>> It's interesting that it failed with the 1024 MB system, but others
>> with less memory succeeded.  This could mean it's all just a bit
>> random to begin with.
> 
> I agree with Ethan - since the installation succeeded on a system
> with 896 MB, it might be worth taking a look at why it fails on
> line1-dellp350 with 1 GB.
> 
> After you are done with testing and this system is free, could
> I please take a closer look at what might be going on there?
> 
> Thanks for testing this!
> Jan
> 
>>
>> Thanks for trying these out.  I'll look forward to the results
>> from tonight's SPARC tests with 1.25 GB and 1.5 GB of memory.
>>
>>
>> thanks,
>> -ethan
>>
>>>
>>> I will try to set up VBox and get some data between 1.1 GB and 1.5 GB.
>>>
>>>
>>>
>>> mary ding wrote:
>>>> Jan and Ethan:
>>>>
>>>> These are the AI install results with 109 after deleting swap:
>>>>
>>>> line1-hpdc5750 - 768 MB -  IPS download still in progress
>>>> line1-hpdx2300 - 896 MB -  Install is successful and there is no 
>>>> corruption of solaris.zlib
>>>> line1-hpdc5700 - 1016 MB - Install is successful and there is no 
>>>> corruption of solaris.zlib
>>>> line1-acer6900 - 999 MB - Install is successful and there is no 
>>>> corruption of solaris.zlib
>>>> line1-hpdc7600 - 1016 MB - Install is successful and there is no 
>>>> corruption of solaris.zlib
>>>> line1-dellp350 - 1024 MB - IPS download still in progress
>>>>
>>>>
>>>> The line1-dellp350 had:
>>>>
>>>> The physical processor has 2 virtual processors (0 1)
>>>>   x86 (GenuineIntel F27 family 15 model 2 step 7 clock 3050 MHz)
>>>>     Intel(r) Pentium(r) 4 CPU 3.06GHz
>>>>
>>>> The line1-hpdc5750 had:
>>>>
>>>> The physical processor has 2 virtual processors (0 1)
>>>>   x86 (AuthenticAMD 40FB2 family 15 model 75 step 2 clock 2000 MHz)
>>>>     AMD Athlon(tm) 64 X2 Dual Core Processor 3800+
>>>>
>>>>
>>>>
>>>>
>>>> jan damborsky wrote:
>>>>> Hi Mary,
>>>>>
>>>>>
>>>>> On 03/25/09 19:30, mary ding wrote:
>>>>>> Jan:
>>>>>>
>>>>>> I am doing AI Install on the following x86 machines now with b109:
>>>>>>
>>>>>> 768 MB, 896 MB, 999 MB, 1016 MB and 1024 MB
>>>>>
>>>>> I am curious about the results - based on the observations I
>>>>> made when I tested the fix for 4166 (around build 107), I assume
>>>>> that the first two would fail.
>>>>> We will see how things have changed :-)
>>>>>
>>>>> Thanks for trying this!
>>>>> Jan
>>>>>
>>>>>>
>>>>>> I deleted the zfs swap and now it is downloading IPS packages from
>>>>>> the repo.  If I leave the zfs swap around, then corruption will
>>>>>> occur.  I did a checksum after the install to verify it is the
>>>>>> same as the solaris.zlib on the server, to find out whether I ran
>>>>>> into 6804.
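
For illustration only, a present-day Python sketch of the kind of checksum comparison described above - the archive path and the expected digest are placeholders, not the actual layout or values from this test:

    # Illustrative sketch: verify the installed solaris.zlib against a
    # known-good digest taken from the copy on the install server.
    # The path and expected digest below are placeholders.
    import hashlib

    ARCHIVE = "/a/solaris.zlib"                    # hypothetical path to the installed archive
    EXPECTED_SHA1 = "digest-recorded-from-server"  # placeholder digest string

    def sha1_of_file(path, chunk_size=1024 * 1024):
        """Return the hex SHA-1 digest of a file, read in chunks."""
        digest = hashlib.sha1()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk_size), b""):
                digest.update(block)
        return digest.hexdigest()

    if __name__ == "__main__":
        actual = sha1_of_file(ARCHIVE)
        if actual == EXPECTED_SHA1:
            print("solaris.zlib matches the server copy")
        else:
            print("solaris.zlib differs - possible corruption (bug 6804)")
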
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> jan damborsky wrote:
>>>>>>> Hi Mary,
>>>>>>>
>>>>>>>
>>>>>>> On 03/25/09 18:38, mary ding wrote:
>>>>>>>> Ethan and William:
>>>>>>>>
>>>>>>>> I agree with Ethan about this.  We shouldn't require the user to do
>>>>>>>> anything manual or customize the manifest to work around this ugly
>>>>>>>> zfs bug.
>>>>>>>>
>>>>>>>> So far, this is what I have found for SPARC and x86.  I hit this
>>>>>>>> bug every time my system had more than 700 MB, on both SPARC and
>>>>>>>> x86.  With 512 MB, the install works but it is very slow.
>>>>>>>>
>>>>>>>> With a system that has 1 GB of memory, if I delete the zfs swap,
>>>>>>>> the IPS install will fail and run out of memory.
>>>>>>>
>>>>>>> Do you happen to know if the behavior is the same
>>>>>>> for SPARC and x86 systems with 1 GB of memory?
>>>>>>>
>>>>>>> I am asking since, after the fix for bug 4166 was integrated,
>>>>>>> AI is supposed to work on x86 systems with 1 GB of memory
>>>>>>> without a swap device.  If it fails for you, might I please
>>>>>>> take a look at the x86 machine in question?
>>>>>>>
>>>>>>> Thank you,
>>>>>>> Jan
>>>>>>>
>>>>>>>>
>>>>>>>> If I do this on a system with 2 GB of memory and delete the zfs
>>>>>>>> swap, the install works.
>>>>>>>>
>>>>>>>> If you need any help testing the workaround, I will be happy to
>>>>>>>> try it out, since I have machines available for testing purposes.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Ethan Quach wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> William Schumann wrote:
>>>>>>>>>> RE: Bugzilla bug 6804 - AI fails due to solaris.zlib becoming 
>>>>>>>>>> corrupt during install
>>>>>>>>>> http://defect.opensolaris.org/bz/show_bug.cgi?id=6804
>>>>>>>>>>
>>>>>>>>>> 6817316 data corruption seen with swapping to a zvol
>>>>>>>>>> http://monaco.sfbay/detail.jsf?cr=6817316
>>>>>>>>>>
>>>>>>>>>> The problem occurs when a ZFS volume is used for swap in the 
>>>>>>>>>> Automated Installer.  There is an alternative swap solution - 
>>>>>>>>>> to use a slice for swap instead of a ZFS volume.
>>>>>>>>>>
>>>>>>>>>> Currently, AI follows the logic of creating swap on a ZFS 
>>>>>>>>>> volume if there is enough space.  If not, the target partition 
>>>>>>>>>> (x86) or disk (SPARC) slice 1 will be used for swap if there 
>>>>>>>>>> is enough space.
>>>>>>>>>>
>>>>>>>>>> What I propose here is to add a feature: a new AI manifest 
>>>>>>>>>> element <target_device_swap_slice_number>, which would force 
>>>>>>>>>> the creation of swap on the indicated slice instead of on a ZFS 
>>>>>>>>>> volume.
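
To make the proposal concrete, here is a rough Python sketch of the selection logic being described; only the element name <target_device_swap_slice_number> comes from the proposal above, while the function name, the dict-style manifest lookup, and the size threshold are hypothetical:

    # Hypothetical sketch of the swap-device selection described above.
    # MIN_SWAP_SPACE_MB is a placeholder, not the real AI sizing rule.

    MIN_SWAP_SPACE_MB = 512

    def choose_swap_device(manifest, pool_free_mb, slice1_free_mb):
        """Return a short description of where swap would be created."""
        # Proposed override: an explicit slice number in the manifest wins.
        slice_num = manifest.get("target_device_swap_slice_number")
        if slice_num is not None:
            return "slice %s of the target partition/disk" % slice_num

        # Current behavior: a ZFS volume if the pool has room, otherwise
        # slice 1 of the target partition (x86) or disk (SPARC).
        if pool_free_mb >= MIN_SWAP_SPACE_MB:
            return "ZFS volume (swap zvol)"
        if slice1_free_mb >= MIN_SWAP_SPACE_MB:
            return "slice 1 of the target partition/disk"
        return "no swap"

A manifest carrying the new element would therefore bypass the zvol path entirely and land swap on the indicated slice.
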
>>>>>>>>>
>>>>>>>>> This is a workaround, but this solution would seem to require
>>>>>>>>> that the user manually do something to work around the issue
>>>>>>>>> after hitting it.  Could we devise a workaround that works out
>>>>>>>>> of the box?  Or at least one that works out of the box for most
>>>>>>>>> scenarios?
>>>>>>>>>
>>>>>>>>> For example, for systems with XX GB of memory, we could still
>>>>>>>>> create the swap zvol, and just not add it during the microroot.
>>>>>>>>> I think Mary has found that XX so far equals 2 GB, but she
>>>>>>>>> doesn't really have any systems with anything between 1 GB and
>>>>>>>>> 2 GB of memory.  With some VBox testing perhaps we can find that
>>>>>>>>> number to be something smaller, and then our bug case would only
>>>>>>>>> be for systems with memory between 700 MB and XX GB - and only
>>>>>>>>> there would we institute some really ugly hack.  For those cases
>>>>>>>>> we could resort to what you've described above, or just move our
>>>>>>>>> 700 MB number up to XX GB.
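
To put numbers on that branching, a small hypothetical sketch; XX is still unknown at this point in the thread, so XX_MB below is just a placeholder, and 700 MB is the lower bound quoted above:

    # Hypothetical sketch of the memory-based decision discussed above.
    # XX_MB is a placeholder until VBox testing pins the real number down.

    LOW_MB = 700        # at or below this, the corruption has not been reported
    XX_MB = 2 * 1024    # placeholder for "XX"; Mary's data so far suggests ~2 GB

    def microroot_swap_plan(phys_mem_mb):
        """Decide how the install environment should treat the swap zvol."""
        if phys_mem_mb >= XX_MB:
            # Plenty of memory: still create the swap zvol for the
            # installed system, but do not add it during the microroot.
            return "create swap zvol, do not add it during install"
        if phys_mem_mb > LOW_MB:
            # The problem window: fall back to slice-based swap, or raise
            # LOW_MB once XX is known.
            return "use the slice-based workaround"
        return "keep the existing behavior"
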
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> thanks,
>>>>>>>>> -ethan
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> An advantage of this approach is that, since we are as yet 
>>>>>>>>>> uncertain of which configurations will exhibit bug 6804, there 
>>>>>>>>>> is a simple manual configuration change to work around it.  It 
>>>>>>>>>> would also serve as an AI feature (although perhaps not a 
>>>>>>>>>> terribly interesting one).
>>>>>>>>>>
>>>>>>>>>> I would estimate a day of work to code this and do unit 
>>>>>>>>>> testing.  The additional risk would be small.
>>>>>>>>>>
>>>>>>>>>> I will mention here that the swapping-to-a-zvol bug needs to 
>>>>>>>>>> be fixed, and that perhaps we shouldn't be thinking in terms 
>>>>>>>>>> of a workaround, but of a high-priority fix.
>>>>>>>>>>
>>>>>>>>>> Any feedback would be appreciated,
>>>>>>>>>> William
> 

