Re: [zfs-discuss] Replacing a failed drive

2009-10-02 Thread Cindy Swearingen

Yes, you can use the zpool replace process with any kind of drive:
failed, failing, or even healthy.
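
For instance, a minimal sketch (the pool name 'tank' and the device IDs below 
are hypothetical): a failing but still-attached disk can be replaced 
proactively, either in place or onto a different disk:

  # zpool replace tank c1t2d0            # new disk inserted in the same slot
  # zpool replace tank c1t2d0 c3t1d0     # or: resilver onto a different disk
  # zpool status tank                    # watch the resilver progress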

cs

On 10/02/09 12:15, Dan Transue wrote:
Does the same thing apply to a "failing" drive?  I have a drive that 
has not failed but, by all indications, it's about to.  Can I do the 
same thing here?


-dan

Jeff Bonwick wrote:

Yep, you got it.

Jeff

On Fri, Jun 19, 2009 at 04:15:41PM -0700, Simon Breden wrote:
  

Hi,

I have a ZFS storage pool consisting of a single RAIDZ2 vdev of 6 drives, and I 
have a question about replacing a failed drive, should one fail in the future.

If a drive fails in this double-parity vdev, am I correct in saying that I 
would need to (1) unplug the old drive once I've identified its device ID 
(c1t0d0, etc.), (2) plug in the new drive on the same SATA cable, and (3) issue a 
'zpool replace pool_name drive_id' command, at which point ZFS will 
resilver the new drive from the remaining data and parity?

Thanks,
Simon


Re: [zfs-discuss] Replacing a failed drive

2009-10-02 Thread Dan Transue
Does the same thing apply to a "failing" drive?  I have a drive that 
has not failed but, by all indications, it's about to.  Can I do the 
same thing here?


-dan

Jeff Bonwick wrote:

Yep, you got it.

Jeff

On Fri, Jun 19, 2009 at 04:15:41PM -0700, Simon Breden wrote:
  

Hi,

I have a ZFS storage pool consisting of a single RAIDZ2 vdev of 6 drives, and I 
have a question about replacing a failed drive, should one fail in the future.

If a drive fails in this double-parity vdev, am I correct in saying that I 
would need to (1) unplug the old drive once I've identified its device ID 
(c1t0d0, etc.), (2) plug in the new drive on the same SATA cable, and (3) issue a 
'zpool replace pool_name drive_id' command, at which point ZFS will 
resilver the new drive from the remaining data and parity?

Thanks,
Simon
--
 * Dan Transue *
*Sun Microsystems, Inc.*
495 S. High Street, #200
Columbus, OH 43215 US
Phone x30944 / 877-932-9964
Mobile 484-554-6951
Fax 877-932-9964
Email dan.tran...@sun.com



Re: [zfs-discuss] Replacing a failed drive

2009-06-20 Thread Simon Breden
Great, thanks a lot Jeff.

Cheers,
Simon


Re: [zfs-discuss] Replacing a failed drive

2009-06-19 Thread Jeff Bonwick
Yep, you got it.

Jeff

On Fri, Jun 19, 2009 at 04:15:41PM -0700, Simon Breden wrote:
> Hi,
> 
> I have a ZFS storage pool consisting of a single RAIDZ2 vdev of 6 drives, and 
> I have a question about replacing a failed drive, should one fail in the future.
> 
> If a drive fails in this double-parity vdev, am I correct in saying that 
> I would need to (1) unplug the old drive once I've identified its device ID 
> (c1t0d0, etc.), (2) plug in the new drive on the same SATA cable, and (3) issue 
> a 'zpool replace pool_name drive_id' command, at which point ZFS will 
> resilver the new drive from the remaining data and parity?
> 
> Thanks,
> Simon


[zfs-discuss] Replacing a failed drive

2009-06-19 Thread Simon Breden
Hi,

I have a ZFS storage pool consisting of a single RAIDZ2 vdev of 6 drives, and I 
have a question about replacing a failed drive, should one fail in the future.

If a drive fails in this double-parity vdev, am I correct in saying that I 
would need to (1) unplug the old drive once I've identified its device ID 
(c1t0d0, etc.), (2) plug in the new drive on the same SATA cable, and (3) issue a 
'zpool replace pool_name drive_id' command, at which point ZFS will 
resilver the new drive from the remaining data and parity?
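
For illustration, a minimal sketch of that sequence (hypothetical pool and 
device names, with the new drive going into the same slot as the old one):

  # zpool status pool_name        # identify the failed drive, e.g. c1t0d0
    (swap in the new drive on the same SATA cable)
  # zpool replace pool_name c1t0d0
  # zpool status pool_name        # resilver rebuilds the new drive from the surviving data and parity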

Thanks,
Simon


[zfs-discuss] Replacing a failed drive with ZFS

2007-06-04 Thread Stephen Person

DISCLAIMERS:

ZFS bits on this server are old:
# pkginfo -l SUNWzfsr |grep -i version
VERSION:  11.11,REV=2006.01.03.01.17
OS is an old build of Nevada:
SunOS 5.11 snv_31

Experts,

I have what is hopefully a simple question.  We have a ZFS pool 
(dilbert) consisting of 6 two-way mirrors.  Each mirror consists of 
2 drives on separate controllers on a D1000.  Recently one of the 
drives croaked.  I attempted to offline it, which should have worked 
since the data on it was mirrored, but the attempt failed:


# zpool status -v
  pool: dilbert
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool online' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: none requested
config:

NAME         STATE     READ WRITE CKSUM
dilbert      ONLINE       0     0     0
  mirror     ONLINE       0     0     0
    c1t0d0   ONLINE       0     0     0
    c2t8d0   ONLINE       0     0     0
  mirror     ONLINE       0     0     0
    c1t1d0   ONLINE       0     0     0
    c2t9d0   ONLINE       0     0     0
  mirror     ONLINE       0     0     0
    c1t2d0   ONLINE       0     0     0
    c2t10d0  ONLINE       0     0     0
  mirror     ONLINE       0     0     0
    c1t3d0   ONLINE       0     0     0
    c2t11d0  ONLINE      55 130.2     0
  mirror     ONLINE       0     0     0
    c1t4d0   ONLINE       0     0     0
    c2t12d0  ONLINE       0     0     0
  mirror     ONLINE       0     0     0
    c1t5d0   ONLINE       0     0     0
    c2t13d0  ONLINE       0     0     0
# zpool offline dilbert c2t11d0
cannot offline /dev/dsk/c2t11d0: no valid replicas
#

We replaced the failed drive with a good one, which started the resilvering:


# devfsadm
# zpool replace dilbert c2t11d0
# zpool status -v
  pool: dilbert
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 0.35% done, 0h28m to go
config:

NAME             STATE     READ WRITE CKSUM
dilbert          DEGRADED     0     0     0
  mirror         ONLINE       0     0     0
    c1t0d0       ONLINE       0     0     0
    c2t8d0       ONLINE       0     0     0
  mirror         ONLINE       0     0     0
    c1t1d0       ONLINE       0     0     0
    c2t9d0       ONLINE       0     0     0
  mirror         ONLINE       0     0     0
    c1t2d0       ONLINE       0     0     0
    c2t10d0      ONLINE       0     0     0
  mirror         DEGRADED     0     0     0
    c1t3d0       ONLINE       0     0     0
    replacing    DEGRADED     0     0     0
      c2t11d0s0/o  FAULTED   55 152.6     0  cannot open
      c2t11d0    ONLINE       0     0     0  4.54M resilvered
  mirror         ONLINE       0     0     0
    c1t4d0       ONLINE       0     0     0
    c2t12d0      ONLINE       0     0     0
  mirror         ONLINE       0     0     0
    c1t5d0       ONLINE       0     0     0
    c2t13d0      ONLINE       0     0     0
#

This appears to have worked fine, but now (3 days later) the pool is 
still in a degraded state, although the resilvering appears to have 
completed.


# zpool status
  pool: dilbert
 state: DEGRADED
 scrub: resilver completed with 0 errors on Fri Jun  1 10:31:39 2007
config:

NAME             STATE     READ WRITE CKSUM
dilbert          DEGRADED     0     0     0
  mirror         ONLINE       0     0     0
    c1t0d0       ONLINE       0     0     0
    c2t8d0       ONLINE       0     0     0
  mirror         ONLINE       0     0     0
    c1t1d0       ONLINE       0     0     0
    c2t9d0       ONLINE       0     0     0
  mirror         ONLINE       0     0     0
    c1t2d0       ONLINE       0     0     0
    c2t10d0      ONLINE       0     0     0
  mirror         DEGRADED     0     0     0
    c1t3d0       ONLINE       0     0     0
    replacing    DEGRADED     0     0     0
      c2t11d0s0/o  FAULTED   55 152.6     0  cannot open
      c2t11d0    ONLINE       0     0     0  3.16G resilvered
  mirror         ONLINE       0     0     0
    c1t4d0       ONLINE       0     0     0
    c2t12d0      ONLINE       0     0     0
  mirror         ONLINE       0     0     0
    c1t5d0       ONLINE       0     0     0
    c2t13d0      ONLINE       0     0     0
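
One cleanup that is sometimes suggested when a 'replacing' vdev lingers like 
this after the resilver has finished (a hedged sketch, not a verified fix on 
a build this old) is to detach the old, faulted half by hand and re-check 
the pool:

  # zpool detach dilbert c2t11d0s0/o
  # zpool status dilbert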