Claus Guttesen wrote:
...

>>> Jul 25 13:15:00 malene arcmsr: [ID 419778 kern.notice] arcmsr0: scsi
>>> id=1 lun=3 ccb='0xffffff02e0ca0800' outstanding command timeout
>>> Jul 25 13:15:00 malene arcmsr: [ID 610198 kern.notice] arcmsr0: scsi
>>> id=1 lun=3 fatal error on target, device was gone
>> The command timed out because your system configuration was unexpectedly
>> changed in a manner which arcmsr doesn't support.
> 
> Are there alternative jbod-capable SAS controllers in the same range
> as the ARC-1680 that are compatible with Solaris? I chose the
> ARC-1680 since it's well-supported on FreeBSD and Solaris.

I don't know, quite probably :) Have a look at the HCL for
Solaris 10, Solaris Express and OpenSolaris 2008.05 -

http://www.sun.com/bigadmin/hcl/
http://www.sun.com/bigadmin/hcl/data/sx/
http://www.sun.com/bigadmin/hcl/data/os/

>>> /usr/sbin/zpool status
>>>  pool: ef1
>>>  state: DEGRADED
>>> status: One or more devices are faulted in response to persistent errors.
>>>        Sufficient replicas exist for the pool to continue functioning in a
>>>        degraded state.
>>> action: Replace the faulted device, or use 'zpool clear' to mark the
>>> device
>>>        repaired.
>>>  scrub: resilver in progress, 0.02% done, 5606h29m to go
>>> config:
>>>
>>>        NAME            STATE     READ WRITE CKSUM
>>>        ef1             DEGRADED     0     0     0
>>>          raidz2        DEGRADED     0     0     0
>>>            spare       ONLINE       0     0     0
>>>              c3t0d0p0  ONLINE       0     0     0
>>>              c3t1d2p0  ONLINE       0     0     0
>>>            c3t0d1p0    ONLINE       0     0     0
>>>            c3t0d2p0    ONLINE       0     0     0
>>>            c3t0d0p0    FAULTED     35 1.61K     0  too many errors
>>>            c3t0d4p0    ONLINE       0     0     0
>>>            c3t0d5p0    DEGRADED     0     0    34  too many errors
>>>            c3t0d6p0    ONLINE       0     0     0
>>>            c3t0d7p0    ONLINE       0     0     0
>>>            c3t1d0p0    ONLINE       0     0     0
>>>            c3t1d1p0    ONLINE       0     0     0
>>>        spares
>>>          c3t1d2p0      INUSE     currently in use
>>>
>>> errors: No known data errors
>> a double disk failure while resilvering - not a good state for your
>> pool to be in.
> 
> The degraded disk came after I pulled the first disk; that was not intended. :-)

That's usually the case :)


>> Can you wait for the resilver to complete? Every minute that goes
>> by tends to decrease the estimate on how long remains.
> 
> The resilver had approx. three hours remaining when the second disk
> was marked as degraded. After that the resilver process (and access
> to the raidz2 pool in general) stopped.

I think that's probably to be expected.
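
For reference, once the pool is responsive again the usual sequence would be
something along these lines - I haven't tried it against your exact config, so
treat it as a sketch and substitute the real replacement device for the
placeholder:

   /usr/sbin/zpool status -v ef1                       # watch resilver progress
   /usr/sbin/zpool replace ef1 c3t0d0p0 <new-device>   # swap out the faulted disk
   /usr/sbin/zpool clear ef1                           # reset error counters once healthy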

>> In addition, why are you using p0 devices rather than GPT-labelled
>> disks (or whole-disk s0 slices) ?
> 
> My ignorance. I'm a fairly seasoned FreeBSD administrator and had
> previously used da0, da1, da2 etc. when I defined a similar raidz2 on
> FreeBSD. But when I installed Solaris I initially only saw LUN 0 on
> targets 0 and 1, so I tried the devices that I saw. And the p0 device
> in /dev/dsk was the first to respond to my zpool create command. :^)

Not to worry - every OS handles things a little differently in
that area.
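
For what it's worth, if you hand zpool create the bare cXtYdZ names (no slice
or partition suffix), ZFS will put an EFI label on each disk and use the whole
device. A rough sketch, with illustrative names only - this is only for when
you're building a pool from scratch:

   /usr/sbin/zpool create tank raidz2 c3t0d0 c3t0d1 c3t0d2 c3t0d4 \
       spare c3t1d2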

> Modifying /kernel/drv/sd.conf made all the lun's visible.

Yes - by default the Areca will only present targets, not any
LUNs underneath, so the sd.conf modification is necessary. I'm working
on getting that fixed.
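
In case it helps anyone else reading along, the sd.conf entries look roughly
like this - one line per target/lun pair you want probed (the values below are
illustrative, not copied from a real config):

   name="sd" class="scsi" target=1 lun=1;
   name="sd" class="scsi" target=1 lun=2;
   name="sd" class="scsi" target=1 lun=3;

followed by 'update_drv -f sd' and 'devfsadm', or a reconfiguration reboot,
to get the new /dev/dsk nodes created.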




James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp       http://www.jmcp.homeunix.com/blog
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
