Hi again
In the meantime I upgraded to s10u4, including the recommended patches.
Then I tried again to import the zpool, with the same behaviour.
The stack dump is exactly the same as in the previous message.
To complete the picture, here is the label print:
# zdb -lv /dev/rdsk/c2t0d0s0
LABEL
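(If the full label dump is too long to post, the fields that usually matter for
a failed import are the pool state, txg and guids recorded in each of the four
labels; assuming the same device as above, a filter along these lines pulls
just those out -- the egrep pattern is my own suggestion, not from the thread:)

  # zdb -lv /dev/rdsk/c2t0d0s0 | egrep 'state|txg|guid'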
Hello
I'm running Solaris on x86 with ZFS on a hardware RAID Areca ARC-1220 (8 * 400G,
RAID5, one 2.8T volume).
After a disk failure I replaced the disk and the RAID synchronized successfully
(the state of both the RAID and the volume shows NORMAL).
But the OS wouldn't boot anymore (boot loop).
The only solution was removing /etc
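(For anyone hitting the same reboot loop: the usual work-around, assuming the
loop comes from the cached pool being re-imported at every boot, is to move the
ZFS cache file aside, boot normally, and then attempt a manual, preferably
read-only, import. The pool name below is only a placeholder:)

  # mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad
  # reboot
  (once the system is back up)
  # zpool import               (lists importable pools without importing them)
  # zpool import -o ro mypool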
For those within Sun (particularly the SPA experts), this core is
available at:
/net/mdb.eng/cores/eschrock/christophe_kalt/*.0
- Eric
On Mon, Jul 31, 2006 at 04:04:43PM -0400, Christophe Kalt wrote:
On Jul 31, Bill Moore wrote:
| Interesting. When you do the import, try doing this:
|
| zpool import -o ro yourpool
|
| And see if that fares any better. If it works, could you send the
| output of "zpool status -v"? Also, how big is the pool in question?
Same panic.
It's a 250GB drive, s
Interesting. When you do the import, try doing this:
zpool import -o ro yourpool
And see if that fares any better. If it works, could you send the
output of "zpool status -v"? Also, how big is the pool in question?
Either access to the machine or a way to copy the crash dump would be
useful.
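(A hedged aside on getting at the dump: on Solaris 10 the panic image is
normally written out by savecore on the next boot, into the directory that
dumpadm reports; the paths below are the defaults and may differ on that
machine:)

  # dumpadm                     (shows the dump device and savecore directory)
  # cd /var/crash/`hostname`
  # ls                          (unix.N / vmcore.N pairs to copy off)
  # mdb unix.0 vmcore.0
  > ::status                    (panic string)
  > ::stack                     (stack trace of the panicking thread)
  > ::msgbuf                    (console messages leading up to the panic)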
Feeling a bit brave (I guess), I upgraded one of our systems
to Solaris 10/u2 (from u1) and moved quite a bit of data to
ZFS. This was 4 days ago. Found the system in a reboot loop
this morning.
Eventually got the system to boot (by wiping
/etc/zfs/zpool.cache), but one of the pools causes the machine to panic
whenever I try to import it.