Yeah, I'm having a combination of this and the "resilver constantly
restarting" issue.

And there's nothing I can delete to free up space.
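
One way to confirm it really is restarting (and not just crawling) is to
watch the "resilver in progress since" timestamp that zpool status prints.
Something rough like this, substituting your pool name for data01 (the
10-minute interval is arbitrary):

    while true; do
        zpool status data01 | grep 'in progress since'
        sleep 600
    done

If that date keeps resetting to a newer time, the resilver is being kicked
back to the start rather than just running slowly.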

It was recommended to me to replace any expanders I had between the HBA and
the drives with extra HBAs, but my array doesn't have expanders.

If yours does, you may want to try that.

Otherwise, wait it out :(
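
If you do end up waiting it out, one thing that might be worth poking at
(I haven't tried this on Nexenta myself, so treat it as a sketch): the
b134-era scan code throttles resilver I/O with a couple of kernel
variables, zfs_resilver_delay and zfs_resilver_min_time_ms. You can look
at them, and adjust them on a live system, with mdb:

    # show the current values in decimal
    echo zfs_resilver_delay/D | mdb -k
    echo zfs_resilver_min_time_ms/D | mdb -k

    # example only: back the resilver off in favor of client I/O
    # (longer per-I/O delay, shorter time slice; the numbers are guesses)
    echo zfs_resilver_delay/W0t4 | mdb -kw
    echo zfs_resilver_min_time_ms/W0t1000 | mdb -kw

Going the other way (a delay of 0 and a larger min_time) is the usual trick
for making the resilver finish sooner at the expense of everything else.
Note that changes made with mdb don't survive a reboot.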

On Wed, Sep 29, 2010 at 6:37 PM, Scott Meilicke <sc...@kmclan.net> wrote:

> I should add I have 477 snapshots across all file systems. Most of them
> are hourly snaps (225 of them anyway).
>
> On Sep 29, 2010, at 3:16 PM, Scott Meilicke wrote:
>
> > This must be resilver day :)
> >
> > I just had a drive failure. The hot spare kicked in, and access to the
> > pool over NFS was effectively zero for about 45 minutes. Currently the
> > pool is still resilvering, but for some reason I can access the file
> > system now.
> >
> > Resilver speed has been beaten to death, I know, but is there a way to
> > avoid this? For example, is more enterprise-grade hardware less
> > susceptible to resilvers? This box is used for development VMs, but
> > there is no way I would consider this for production with this kind of
> > performance hit during a resilver.
> >
> > My hardware:
> > Dell 2950
> > 16G ram
> > 16 disk SAS chassis
> > LSI 3801 (I think) SAS card (1068e chip)
> > Intel x25-e SLOG off of the internal PERC 5/i RAID controller
> > Seagate 750G disks (7200.11)
> >
> > I am running Nexenta CE 3.0.3 (SunOS rawhide 5.11 NexentaOS_134f i86pc
> > i386 i86pc Solaris)
> >
> >  pool: data01
> > state: DEGRADED
> > status: One or more devices is currently being resilvered.  The pool will
> >       continue to function, possibly in a degraded state.
> > action: Wait for the resilver to complete.
> > scan: resilver in progress since Wed Sep 29 14:03:52 2010
> >    1.12T scanned out of 5.00T at 311M/s, 3h37m to go
> >    82.0G resilvered, 22.42% done
> > config:
> >
> >       NAME           STATE     READ WRITE CKSUM
> >       data01         DEGRADED     0     0     0
> >         raidz2-0     ONLINE       0     0     0
> >           c1t8d0     ONLINE       0     0     0
> >           c1t9d0     ONLINE       0     0     0
> >           c1t10d0    ONLINE       0     0     0
> >           c1t11d0    ONLINE       0     0     0
> >           c1t12d0    ONLINE       0     0     0
> >           c1t13d0    ONLINE       0     0     0
> >           c1t14d0    ONLINE       0     0     0
> >         raidz2-1     DEGRADED     0     0     0
> >           c1t22d0    ONLINE       0     0     0
> >           c1t15d0    ONLINE       0     0     0
> >           c1t16d0    ONLINE       0     0     0
> >           c1t17d0    ONLINE       0     0     0
> >           c1t23d0    ONLINE       0     0     0
> >           spare-5    REMOVED      0     0     0
> >             c1t20d0  REMOVED      0     0     0
> >             c8t18d0  ONLINE       0     0     0  (resilvering)
> >           c1t21d0    ONLINE       0     0     0
> >       logs
> >         c0t1d0       ONLINE       0     0     0
> >       spares
> >         c8t18d0      INUSE     currently in use
> >
> > errors: No known data errors
> >
> > Thanks for any insights.
> >
> > -Scott
> > --
> > This message posted from opensolaris.org
>
> Scott Meilicke
>
>
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
