Hi William,

William Schumann wrote:
> I've resumed testing on a SPARC 5220 with 1GB memory.  David Comay has 
> added the slim_install cluster to ipkg.sfbay:29048, which has allowed 
> me to move forward with testing.
>
> I'm about to leave for the day, but wanted to report that it appears to 
> be running, but has been doing a pkg install of SUNWcsd for over an 
> hour now, which I'm attributing to a slow link between the lab in 
> Prague and ipkg.sfbay at this moment.

I have mirrored that repository, and it is now available at
10.18.138.30:7777, which is located in the Prague lab.

ipkg.sfbay:29049 with build 105 is in the process of being mirrored
right now - I expect the process to finish today.
Once mirroring is done, the build 105 SPARC IPS repository will be
available at 10.18.138.30:7778.
(The SPARC IPS repositories located on ipkg.sfbay, as well as the
mirrors on 10.18.138.30, are available only internally for the time
being.)
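 
If it helps with the slow link, an installed system can be pointed at
the mirror instead of ipkg.sfbay. This is just a quick sketch using the
standard pkg(1) commands - the publisher name 'opensolaris.org' is an
assumption, so use whatever publisher your image is configured with
(for AI itself, the repository URL in the manifest would need to be
changed instead):

  # point the existing publisher at the Prague mirror
  pkg set-publisher -O http://10.18.138.30:7777 opensolaris.org
  # verify which origin is now in use
  pkg publisher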

>
> The only strange thing I saw is a state in which TI ZFS release was 
> failing when I attempted to rerun auto-install.  dumpadm showed a dump 
> device, but swap -l showed no swap.

That is interesting - since, as you point out below, the ZFS volume
dedicated to swap was created, it needs to be investigated why it was
not part of the swap pool when the installer was restarted.
Is this behavior reproducible? If it is, could you please rerun the
installer in debug mode and capture the install_log file?
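 
Along with install_log, it would help to have the state of swap and
dump at the point of failure. Just a sketch of the standard commands
(rpool is the pool name from your report):

  # what the system currently uses for swap and for the dump device
  swap -l
  dumpadm
  # whether the swap and dump volumes exist in the pool
  zfs list -t volume -r rpool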

>   At Jan's suggestion, I added swap device /dev/zvol/dsk/rpool/swap, 
> then was able to manually delete dump with dumpadm -d swap, and then 
> repeat the installation.

Yes, this is the procedure recommended by the ZFS team.
If I understood correctly, 'dumpadm -d swap' is the only way the ZFS
volume dedicated to dump can be released, which is a necessary
condition for being able to destroy the ZFS pool containing the dump
ZFS volume.
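 
To summarize the manual workaround you described (the device path is
the one from your report; this is only a sketch of the workaround, not
something the installer does for you yet):

  # activate the swap zvol left over from the previous install attempt
  swap -a /dev/zvol/dsk/rpool/swap
  # redirect the dump device to swap, releasing the dedicated dump zvol
  dumpadm -d swap
  # with the dump zvol released, the installation can be repeated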

Thank you,
Jan

>
> Will resume testing tomorrow.
> William
>
>
> jan damborsky wrote:
>> Alok Aggarwal wrote:
>>>
>>> On Fri, 12 Dec 2008, William Schumann wrote:
>>>
>>>> I've started testing AI for sparc.  I have had one successful test 
>>>> getting to ICT before failing (orchestrator code 800)
>>>>
>>>> It seems to handle manifests correctly and creates and preserves 
>>>> slices as expected.
>>>
>>> Good work, William!
>>>
>>>> There is one error message I see early in the Transfer phase.  From 
>>>> stdout/stderr:
>>>>
>>>> list of packages to be installed is:
>>>> SUNWcsd
>>>>                   SUNWcs
>>>>                   slim_install
>>>> Error in atexit._run_exitfuncs:
>>>> Traceback (most recent call last):
>>>> File "/usr/lib/python2.4/atexit.py", line 24, in _run_exitfuncs
>>>>   func(*targs, **kargs)
>>>> File "/usr/lib/python2.4/threading.py", line 638, in __exitfunc
>>>>   self._Thread__delete()
>>>> File "/usr/lib/python2.4/threading.py", line 522, in __delete
>>>>   del _active[_get_ident()]
>>>> KeyError: 8
>>>>
>>>> It seems like cleanup on exiting a thread in Python may have some 
>>>> problems under SPARC.  Hope it isn't serious.
>>>
>>> I hit something similar while developing AI for
>>> 2008.11. In my case it was due to SUNWbeadm and
>>> SUNWinstall-libs not being in sync.
>>>
>>> Does your test machine have these packages in sync?
>>
>> I can also see that kind of error message in the 
>> application-auto-installer:default.log
>> file when testing AI with the rc2 image on x86. I think that in that
>> case the packages in the AI image are in sync, so there is probably
>> some other issue causing it.
>> It doesn't seem to be fatal, since installation is always successful,
>> but it will need to be investigated. I have found out that Jeffrey
>> filed a bug for tracking this:
>>
>> 4547 _run_exitfuncs error message appears in auto-installer log
>>
>> Jan
>>

