In message <4fc509e8.8080...@jvm.de>, Stephan Budach writes:
>If only I knew how to get the actual S11 release level of my box.
>Neither uname -a nor cat /etc/release gives me a clue, since they
>display the same data when run on different hosts that are on
>different updates.
$ p
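The reply above is truncated in the archive preview. A minimal way to
check, assuming an IPS-based Solaris 11 install, is to query the "entire"
incorporation, whose version string encodes the update/SRU level:

  # The version of the "entire" incorporation identifies the update/SRU.
  $ pkg info entire | grep -i version
  # Compact alternative:
  $ pkg list entire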
On 2012-May-29 22:04:39 +1000, Edward Ned Harvey wrote:
>If you have a drive (or two drives) with bad sectors, they will only be
>detected when the bad sectors actually get read. Given that your pool is
>less than 100% full, you might still have bad hardware going undetected,
>if you pass …
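A scrub only reads allocated blocks, so sectors under free space are never
exercised. One way to touch every sector, sketched here with an
illustrative device name, is to read the whole raw disk and then inspect
the drive's error counters:

  # Read the entire raw device; unreadable sectors surface as read errors.
  # The device name c0t1d0s2 is an example, not from this thread.
  $ dd if=/dev/rdsk/c0t1d0s2 of=/dev/null bs=1024k
  # Check the per-drive soft/hard/transport error counters afterwards:
  $ iostat -En c0t1d0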
On 29.05.12 18:59, Richard Elling wrote:
On May 29, 2012, at 8:12 AM, Cindy Swearingen wrote:
Hi--
You don't say what release this is, but I think that seeing the checksum
error accumulation on the spare was a zpool status formatting bug that
I have seen myself. This is fixed in a later Solaris release.
On May 29, 2012, at 8:12 AM, Cindy Swearingen wrote:
> Hi--
>
> You don't say what release this is, but I think that seeing the checksum
> error accumulation on the spare was a zpool status formatting bug that
> I have seen myself. This is fixed in a later Solaris release.
>
Once again, Cindy bea…
Hi--
You don't say what release this is, but I think that seeing the checksum
error accumulation on the spare was a zpool status formatting bug that
I have seen myself. This is fixed in a later Solaris release.
Thanks,
Cindy
On 05/28/12 22:21, Stephan Budach wrote:
Hi all,
just to wrap this issue up: as FMA didn't report any other error than
the one which led to the degradation of the one mirror, I detached the
original drive from the zpool …
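If the counts on the spare really are only the reporting bug described
above, the stale counters can be reset and re-verified. A sketch, with
illustrative pool and device names:

  # Clear the error counters on the spare (names are examples):
  $ zpool clear tank c2t3d0
  # The counts should now read zero:
  $ zpool status -v tank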
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Stephan Budach
>
> Now, I will run a scrub once more to verify the zpool.
If you have a drive (or two drives) with bad sectors, they will only be
detected when the bad sectors actually get read.
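A sketch of that verification pass, assuming a pool named tank (the exact
status text can vary between releases):

  # Start the scrub, wait for it to finish, then inspect the error columns:
  $ zpool scrub tank
  $ while zpool status tank | grep -q 'scrub in progress'; do sleep 60; done
  $ zpool status -v tank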
Hi Richard,
On 29.05.12 06:54, Richard Elling wrote:
On May 28, 2012, at 9:21 PM, Stephan Budach wrote:
Hi all,
just to wrap this issue up: as FMA didn't report any other error than
the one which led to the degradation of the one mirror, I detached
the original drive from the zpool, which flagged the mirror vdev as
ONLINE …
On May 28, 2012, at 9:21 PM, Stephan Budach wrote:
> Hi all,
>
> just to wrap this issue up: as FMA didn't report any other error than the one
> which led to the degradation of the one mirror, I detached the original drive
> from the zpool, which flagged the mirror vdev as ONLINE (although there
> was still a cksum error count of 23 on the spare drive).
Hi all,
just to wrap this issue up: as FMA didn't report any other error than
the one which led to the degradation of the one mirror, I detached the
original drive from the zpool, which flagged the mirror vdev as ONLINE
(although there was still a cksum error count of 23 on the spare drive).
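For anyone following along, the detach step looks roughly like this, with
illustrative pool and device names rather than the ones from this thread:

  # The spare has already taken over; detaching the original drive makes
  # the spare a permanent member of the mirror:
  $ zpool detach tank c0t2d0
  # The mirror vdev should now report ONLINE:
  $ zpool status tank
  # A leftover cksum count on the former spare can then be cleared:
  $ zpool clear tank c4t0d0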
On 28.05.12 00:35, Richard Elling wrote:
On May 27, 2012, at 12:52 PM, Stephan Budach wrote:
Hi,
today I issued a scrub on one of my zpools and after some time I
noticed that one of the vdevs became degraded due to some drive
having cksum errors. The spare kicked in and the drive got
resilvered, but why does the spare drive now also show almost the same
number of cksum errors …
On May 27, 2012, at 12:52 PM, Stephan Budach wrote:
> Hi,
>
> today I issued a scrub on one of my zpools and after some time I noticed that
> one of the vdevs became degraded due to some drive having cksum errors. The
> spare kicked in and the drive got resilvered, but why does the spare drive
> now also show almost the same number of cksum errors …
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Stephan Budach
>
> today I issued a scrub on one of my zpools and after some time I noticed that
> one of the vdevs became degraded due to some drive having cksum errors.
> The spare kicked in and the drive got resilvered …
Hi,
today I issued a scrub on one of my zpools and after some time I noticed
that one of the vdevs became degraded due to some drive having cksum
errors. The spare kicked in and the drive got resilvered, but why does
the spare drive now also show almost the same number of cksum errors as
the …
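Since FMA comes up repeatedly in this thread: the fault manager's view of
the underlying errors can be checked independently of zpool status, for
example:

  # List resources FMA currently considers faulty:
  $ fmadm faulty
  # Dump the raw error telemetry (ereports) behind any diagnosis:
  $ fmdump -eV | more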
13 matches