[zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-25 Thread Lamp Zy
Hi,

One of my drives failed in Raidz2 with two hot spares:

# zpool status
  pool: fwgpool0
 state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online
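For anyone hitting the same symptom, a quick read-only way to confirm the state before touching anything is sketched below (the pool name is the one from this thread; your device names will differ):

# zpool status -x            # one-line health summary: reports any pool with problems
# zpool status -v fwgpool0   # full layout: the dead disk shows UNAVAIL/FAULTED,
                             # idle hot spares are listed under "spares" as AVAIL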

Re: [zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-25 Thread Brandon High
On Mon, Apr 25, 2011 at 4:56 PM, Lamp Zy wrote:
> I'd expect the spare drives to auto-replace the failed one but this is not
> happening.
>
> What am I missing?

Is the autoreplace property set to 'on'?

# zpool get autoreplace fwgpool0
# zpool set autoreplace=on fwgpool0

> I really would like to

Re: [zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-25 Thread Lamp Zy
Thanks Brandon,

On 04/25/2011 05:47 PM, Brandon High wrote:
> On Mon, Apr 25, 2011 at 4:56 PM, Lamp Zy wrote:
>> I'd expect the spare drives to auto-replace the failed one but this is not
>> happening. What am I missing?
>
> Is the autoreplace property set to 'on'?
>
> # zpool get autoreplace fwgpool0
> # zp

Re: [zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-26 Thread Nikola M.
On 04/26/11 01:56 AM, Lamp Zy wrote:
> Hi,
>
> One of my drives failed in Raidz2 with two hot spares:

What are the zpool/zfs versions? (zpool upgrade Ctrl+c, zfs upgrade Ctrl+c.) The latest zpool/zfs versions available by numerical designation in all OpenSolaris-based distributions are zpool 28 and zfs v.
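For reference, the version information can be read without risking an upgrade; a sketch (pool name taken from this thread):

# zpool upgrade -v             # list every pool version this system supports
# zfs upgrade -v               # list every zfs filesystem version this system supports
# zpool get version fwgpool0   # the version this particular pool is actually running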

Re: [zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-26 Thread Cindy Swearingen
Hi--

I don't know why the spare isn't kicking in automatically; it should.

A documented workaround is to outright replace the failed disk with one of the spares, like this:

# zpool replace fwgpool0 c4t5000C5001128FE4Dd0 c4t5000C50014D70072d0

The autoreplace pool property has nothing to do with
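Once that replace finishes resilvering, the usual follow-up (per the hot-spare procedure in the ZFS Administration Guide) looks roughly like the sketch below; c4t5000C500NEWDISKd0 is only a placeholder for whatever new disk eventually goes into the empty slot:

# zpool detach fwgpool0 c4t5000C5001128FE4Dd0     # drop the dead disk; the in-use spare
                                                  # becomes a permanent pool member
# zpool add fwgpool0 spare c4t5000C500NEWDISKd0   # put a fresh disk back into the
                                                  # spares list (placeholder name)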

Re: [zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-26 Thread Richard Elling
On Apr 26, 2011, at 8:22 AM, Cindy Swearingen wrote:
> Hi--
>
> I don't know why the spare isn't kicking in automatically, it should.

This can happen if the FMA agents aren't working properly. FYI, in NexentaStor we have added a zfs-monitor FMA agent to check the health of disks in use for ZFS
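A quick way to sanity-check the FMA side is sketched below: fmadm config lists the loaded modules (zfs-diagnosis and zfs-retire are the agents involved in diagnosing a bad disk and pulling in a spare), and svcs shows whether the fault manager service itself is online:

# svcs fmd        # state of the fault manager service (svc:/system/fmd:default)
# fmadm config    # loaded FMA modules; look for zfs-diagnosis and zfs-retire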

Re: [zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-26 Thread Paul Kraus
On Tue, Apr 26, 2011 at 4:59 PM, Richard Elling wrote:
>
> On Apr 26, 2011, at 8:22 AM, Cindy Swearingen wrote:
>
>> Hi--
>>
>> I don't know why the spare isn't kicking in automatically, it should.
>
> This can happen if the FMA agents aren't working properly.
>
> FYI, in NexentaStor we have added

Re: [zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-27 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Lamp Zy
>
> One of my drives failed in Raidz2 with two hot spares:

What zpool & zfs version are you using? What OS version? Are all the drives precisely the same size (Same make/model numb
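A sketch of how those questions are usually answered on a Solaris-derived system (all read-only; nothing here modifies the pool):

# cat /etc/release   # OS release / build identification
# uname -a           # kernel version and platform
# iostat -En         # per-disk vendor, product/model, and size, for comparing the drives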

Re: [zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-27 Thread Lamp Zy
On 04/26/2011 01:25 AM, Nikola M. wrote:
> On 04/26/11 01:56 AM, Lamp Zy wrote:
>> Hi,
>>
>> One of my drives failed in Raidz2 with two hot spares:
>
> What are zpool/zfs versions? (zpool upgrade Ctrl+c, zfs upgrade Ctrl+c.)
> Latest zpool/zfs versions available by numerical designation in all
> OpenSolaris base

Re: [zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-27 Thread Brandon High
On Wed, Apr 27, 2011 at 12:51 PM, Lamp Zy wrote:
> Any ideas how to identify which drive is the one that failed so I can
> replace it?

Try the following:

# fmdump -eV
# fmadm faulty

-B

--
Brandon High : bh...@freaks.com
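For anyone searching the archives later, the broader family of read-only FMA log commands looks like this sketch:

# fmdump          # one line per diagnosed fault, with its event UUID
# fmdump -e       # the raw error reports (ereports) that fed the diagnosis
# fmdump -eV      # full detail, including the device path of the disk in question
# fmadm faulty    # resources FMA currently considers faulted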

Re: [zfs-discuss] Spare drives sitting idle in raidz2 with failed drive

2011-04-27 Thread Paul Kraus
On Wed, Apr 27, 2011 at 3:51 PM, Lamp Zy wrote:
> Great. So, now how do I identify which drive out of the 24 in the storage
> unit is the one that failed?
>
> I looked on the Internet for help but the problem is that this drive
> completely disappeared. Even "format" and "iostat -En" show only 23
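When the dead disk no longer answers at all, one workable approach is identification by elimination: the pool configuration still remembers the missing device's name even though the OS no longer sees it. A sketch, using the pool name from this thread (mapping the name to a physical slot then depends on the HBA/enclosure tooling; with MPxIO-style cXtWWNdZ names, the target portion is the drive's world-wide name, which is often printed on the disk label):

# zpool status fwgpool0    # the vanished disk's cXtYdZ name is still listed, marked UNAVAIL
# iostat -En               # the 23 disks the OS can still see; the name present in
                           # zpool status but absent here is the dead one
# cfgadm -al               # connection/occupant state per controller slot, for cross-checking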