Hi all

I'm experiencing a little problem here...
Setup:
hw: X2100 M2, 2 x 250 GB disks, 5 GB RAM.
Disks: UFS roots on both disks' s0 slices; these are the BEs. s1 is dump, s3 is
swap (zvol, mirror), s4 is the zpool (mirror).
I lucreated a new BE from nv_82 and luupgraded it to nv_117.
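
Roughly, the steps were the usual ones (the install-image path below is only a
placeholder; the BE name and slice are as above):

  # create the new BE on the second disk's root slice
  lucreate -n nv_117 -m /:/dev/dsk/c1t1d0s0:ufs
  # upgrade the new BE from the nv_117 install image
  luupgrade -u -n nv_117 -s /path/to/nv_117/image
  # check the BE list, then make nv_117 the active BE for the next boot
  lustatus
  luactivate nv_117
  init 6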

Things went smoothly (a few failed pkgadds in /var/sadm, but those are all
GNOME packages - we used to have a GUI), and after luactivate the system
rebooted. This wasn't the first luupgrade on this system, so things went as usual.

...then it rebooted again... and again...

After adding the -kdv flags to the kernel boot line in GRUB, it turns out
that the root fs cannot be mounted...
Failsafe doesn't reboot, but it doesn't offer to mount this BE either; only
nv_82 is found. If I try to mount nv_117 later on, mount gives an I/O error
(it's c1t1d0s0; nv_82 is c1t0d0s0). This happens with nv_117's failsafe.
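
To be clear about that boot: I just edited the GRUB entry at boot time and
appended the flags to the kernel line, something like

  kernel$ /platform/i86pc/kernel/$ISADIR/unix -kdv

(-k loads kmdb, -d asks to enter the debugger early, -v gives verbose boot
messages), and the verbose output is where the failure to mount the root fs
shows up.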

With both nv_82's "full boot" and its failsafe install I can mount both BEs,
and read and write the filesystems.
So the data is there, and the fs is okay.
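
For the record, this is more or less what I do from nv_82 to convince myself
that the nv_117 root is intact (the mount point is arbitrary):

  # read-only sanity check of the nv_117 root
  fsck -n /dev/rdsk/c1t1d0s0
  # mount it and verify that reads and writes work
  mount -F ufs /dev/dsk/c1t1d0s0 /mnt
  ls /mnt
  touch /mnt/testfile && rm /mnt/testfile
  umount /mnt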

I have found similar problems here on opensolaris.org, but they all assume
that I can somehow boot into the failing BE so that I can check the bootpath.
Here I can't do that, but since disk 1's path is okay, and the X2100 M2 has
only two disks, I _think_ that the bootpath is okay. It seemingly is, when
checked from the other BE.
I have tried to clean up the device tree from nv_82, but no luck...
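
In case it matters, the bootpath check and the cleanup attempt from nv_82
were along these lines (/mnt is just where I mount the nv_117 root):

  mount -F ufs /dev/dsk/c1t1d0s0 /mnt
  # bootpath recorded inside the nv_117 BE
  grep bootpath /mnt/boot/solaris/bootenv.rc
  # physical path of the same disk as the running nv_82 sees it
  ls -l /dev/dsk/c1t1d0s0
  # compare driver instance numbers between the two BEs
  diff /etc/path_to_inst /mnt/etc/path_to_inst
  # prune stale device links under the nv_117 root
  devfsadm -Cv -r /mnt
  umount /mnt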

Since the bootpath should be okay, I suspect a driver issue, but I'm lost here...

Any hints, tips, help, or pointers will be greatly appreciated!!

thanks in advance!