Moshe,
You might want to check if you have multiple paths to these disks.
- Sanjeev
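One quick way to check, assuming MPxIO is enabled (shown as an illustration
only; output varies by setup):
# mpathadm list lu
A logical unit reported with a Total Path Count above 1 is reachable through
more than one path.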
On Wed, Feb 17, 2010 at 07:59:28PM -0800, Moshe Vainer wrote:
> I have another very weird one; it looks like a recurrence of the same issue
> but with the new firmware.
>
> We have the following disks:
>
> AVAILABLE DISK SELECTIONS:
Hello Cindy,
I have received my LSI controllers and swapped them in for the Areca. The
result is stunning:
1. exported the pool (in the strange state I reported here)
2. changed the controller and re-ordered the drives as they were before I
posted about this matter (c-b-a back to a-b-c)
3. Booted Osol
4. imported pool
Resu
The links look fine, and I am pretty sure (though not 100%) that this is
related to the vdev ID assignment. What I am not sure about is whether this is
still an Areca firmware issue or an OpenSolaris issue.
ls -l /dev/dsk/c7t1d?p0
lrwxrwxrwx 1 root root 62 2010-02-08 17:43 /dev/dsk/c7t1d0p0 ->
../../d
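One way to check the vdev ID angle is to read the GUIDs straight out of the
ZFS labels and compare them across the re-ordered disks; a sketch, with the
device name as an example only:
# zdb -l /dev/dsk/c7t1d0s0 | grep -w guid
A given physical disk should report the same vdev guid in its labels no matter
which controller port it sits on.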
I have another very weird one; it looks like a recurrence of the same issue but
with the new firmware.
We have the following disks:
AVAILABLE DISK SELECTIONS:
0. c7t1d0
/p...@0,0/pci8086,3...@3/pci17d3,1...@0/d...@1,0
1. c7t1d1
/p...@0,0/pci8086,3...@3/pci17d3,1
On 3/02/10 01:31 AM, Tonmaus wrote:
Hi James,
> am I right to understand that in a nutshell the problem is that if
> page 80/83 information is present but corrupt/inaccurate/forged (name
> it as you want), zfs will not get down to the GUID?
Hi Tonmaus,
If page83 information is present, ZFS wi
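For what it's worth, the devid that Solaris derives from the page 80/83 data
can be inspected in the device tree; one way to eyeball it (assuming GNU grep,
as on OpenSolaris):
# prtconf -v | grep -A 2 devid
The value lines show the id1,... identifier strings the system built for each
disk.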
Hi James,
am I right to understand that in a nutshell the problem is that if page 80/83
information is present but corrupt/inaccurate/forged (name it as you want), zfs
will not get down to the GUID?
regards,
Tonmaus
Thanks. That fixed it.
Tonmaus
Even if the pool is created with whole disks, you'll need to
use the s* identifier as I provided in the earlier reply:
# zdb -l /dev/dsk/c#t#d#s#
Cindy
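In other words, with the slice appended it would look something like this (the
device name is an example):
# zdb -l /dev/dsk/c7t1d0s0
A successful run dumps the four labels, including the pool and vdev GUIDs.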
On 02/02/10 01:07, Tonmaus wrote:
If I run
# zdb -l /dev/dsk/c#t#d#
the result is "failed to unpack label" for any disk attached to controllers
using the ahci or arcmsr drivers.
I believe I have seen the same issue. Mine was documented as:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6843555
Areca did issue a fixed firmware, but I can't say whether that was indeed the
end of it, since we haven't done a controlled disk-mixing experiment since then.
I did fi
Good morning Cindy,
> Hi,
>
> Testing how ZFS reacts to a failed disk can be difficult to anticipate
> because some systems don't react well when you remove a disk.
I am in the process of finding that out for my systems. That's why I am doing
these tests.
> On an x4500, for example, you have to unconfigure a disk before you can
> remove it.
If I run
# zdb -l /dev/dsk/c#t#d#
the result is "failed to unpack label" for any disk attached to controllers
using the ahci or arcmsr drivers.
Cheers,
Tonmaus
Frank,
ZFS, Sun device drivers, and the MPxIO stack all work as expected.
Cindy
On 02/01/10 14:55, Frank Cusack wrote:
On February 1, 2010 4:15:10 PM -0500 Frank Cusack wrote:
On February 1, 2010 1:09:21 PM -0700 Cindy Swearingen wrote:
Whether disk swapping on the fly or a controller firmware update renumbers the
devices causes a problem really depends on the driver-->ZFS interaction and we
can't speak for all hardware.
Hi,
Testing how ZFS reacts to a failed disk can be difficult to anticipate
because some systems don't react well when you remove a disk. On an
x4500, for example, you have to unconfigure a disk before you can remove
it.
Before removing a disk, I would consult your h/w docs to see what the
recomm
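On systems that need the unconfigure step, the usual tool is cfgadm; a sketch
(the attachment point below is an example and will differ per system):
# cfgadm -al
# cfgadm -c unconfigure sata1/3
The first command lists the attachment points; the second offlines the disk so
it can be pulled safely.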
On February 1, 2010 4:15:10 PM -0500 Frank Cusack wrote:
On February 1, 2010 1:09:21 PM -0700 Cindy Swearingen wrote:
Whether disk swapping on the fly or a controller firmware update
renumbers the devices causes a problem really depends on the driver-->ZFS
interaction and we can't speak for all hardware.
Hi again,
> Follow recommended practices for replacing devices in
> a live pool.
Fair enough. On the other hand, I guess it has become clear that the pool went
offline as part of the procedure. That was partly because I am not sure about
the hotplug capabilities of the controller, partly because I wante
Hi Cindy,
> I'm still
> not sure if you physically swapped c7t11d0 for c7t9d0 or if c7t9d0 is
> still connected and part of your pool.
The latter is not the case according to zpool status; the former is definitely
the case. format reports the drive as present and correctly labelled.
> ZFS has recommended ways for swapping disks, so if the pool is exported, the
> system is shut down, and then disks are swapped, the behavior is
> unpredictable and ZFS is understandably confused about what happened.
On February 1, 2010 1:09:21 PM -0700 Cindy Swearingen wrote:
Whether disk swapping on the fly or a controller firmware update
renumbers the devices causes a problem really depends on the driver-->ZFS
interaction and we can't speak for all hardware.
With mpxio, disks are known by multiple names.
Hi Frank,
If you want to replace one disk with another disk, then physically
replace the disk and let ZFS know by using the zpool replace command
or set the autoreplace property.
Whether disk swapping on the fly or a controller firmware update renumbers the
devices causes a problem really depends on the driver-->ZFS interaction and we
can't speak for all hardware.
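As a sketch of the two options (pool and device names are examples only):
# zpool replace tank c7t9d0 c7t11d0
# zpool set autoreplace=on tank
The first tells ZFS explicitly which disk took over; with autoreplace=on, a
new device found in the same physical location is replaced automatically.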
On February 1, 2010 10:19:24 AM -0700 Cindy Swearingen wrote:
ZFS has recommended ways for swapping disks, so if the pool is exported, the
system is shut down, and then disks are swapped, the behavior is unpredictable
and ZFS is understandably confused about what happened. It might work for some
ZFS can generally detect device changes on Sun hardware, but for other
hardware, the behavior is unknown.
The most harmful pool problem I see, besides inadequate redundancy levels or
no backups, is device changes. Recovery can be difficult.
Follow recommended practices for replacing devices in a live pool.
10 disks connected in the following order:
0 1 2 3 4 5 6 7 8 9
Export pool. Remove three drives from the system:
0 1 3 4 6 7 8
Plug them back in, but into different slots:
0 1 9 3 4 2 6 7 8 5
Import the pool.
What's supposed to happen is that ZFS detects the drives, figures out where
t
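The sequence being tested, as a sketch (the pool name is an example):
# zpool export tank
(shuffle the drives between slots)
# zpool import
# zpool import tank
The bare zpool import scans the devices and shows what is importable, which is
where ZFS is supposed to sort out the new locations by GUID.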
It's Monday morning so it still doesn't make sense. :-)
I suggested putting the disks back because I'm still not sure if you
physically swapped c7t11d0 for c7t9d0 or if c7t9d0 is still connected
and part of your pool. You might try detaching the spare as described
in the docs. If you put the d
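Detaching the spare is a one-liner; a sketch, assuming the device names from
the earlier status output (use whichever disk zpool status lists as the spare):
# zpool detach pool c7t11d0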
> Hi--
>
> Were you trying to swap out a drive in your pool's raidz1 VDEV
> with a spare device? Was that your original intention?
Not really. I just wanted to see what happens if the physical controller port
changes, i.e. what practical relevance it would have if I put the disks in the
sam
Hi--
Were you trying to swap out a drive in your pool's raidz1 VDEV
with a spare device? Was that your original intention?
If so, then you need to use the zpool replace command to replace
one disk with another disk including a spare.
I would put the disks back to where they were and retry with
Hi all,
this is what I get from 'zpool status pool' after swapping 3 of the 10 members
of a zpool for testing purposes.
[i]u...@zfs2:~$ zpool status pool
pool: pool
state: ONLINE
scrub: scrub in progress for 0h8m, 4,70% done, 2h51m to go
config:
NAME STATE READ WRITE CKSUM