ZFS: drive replacement performance

2009-08-03 Thread Gabor Radnai
> On Tue, Jul 7, 2009 at 3:26 PM, Mahlon E. Smith wrote: > > On Tue, Jul 07, 2009, Freddie Cash wrote: > > > This is why we've started using glabel(8) to label our drives, and then add > > > the labels to the pool:

Re: ZFS: drive replacement performance

2009-07-08 Thread Jonathan
On 7/7/2009 8:13 PM, Mahlon E. Smith wrote: I also tried another export/import cycle, in the random hope that would stop the active replace -- no dice. *However*, on the import, now I see this flooding my console (wasn't there previously, strangely): Jul 7 16:50:15 disobedience root: ZFS: vdev

Re: glabel metadata protection (WAS: ZFS: drive replacement performance)

2009-07-08 Thread Pete French
> I would say in this case you're *not* giving the entire disk to the > pool, you're giving ZFS a geom that's one sector smaller than the disk. > ZFS never sees or can touch the glabel metadata. Is ZFS happy if the size of its disk changes underneath it? I have expanded a zpool a couple of t

Re: ZFS: drive replacement performance

2009-07-07 Thread Brooks Davis
On Wed, Jul 08, 2009 at 11:53:17AM +1000, Emil Mikulic wrote: > On Tue, Jul 07, 2009 at 03:53:58PM -0500, Brooks Davis wrote: > > I'm seeing essentially the same thing on an 8.0-BETA1 box with an 8-disk > > raidz1 pool. Every once in a while the system makes it to 0.05% done > > and gives a vaguel

Re: ZFS: drive replacement performance

2009-07-07 Thread Emil Mikulic
On Tue, Jul 07, 2009 at 03:53:58PM -0500, Brooks Davis wrote: > I'm seeing essentially the same thing on an 8.0-BETA1 box with an 8-disk > raidz1 pool. Every once in a while the system makes it to 0.05% done > and gives a vaguely reasonable rebuild time, but it quickly drops back > to reporting 0.00

Re: glabel metadata protection (WAS: ZFS: drive replacement performance)

2009-07-07 Thread Barry Pederson
Dan Naumov wrote: If I use glabel to label a disk and then create a pool using /dev/label/disklabel, won't ZFS eventually overwrite the glabel metadata in the last sector since the disk in its entirety is given to the pool? I would say in this case you're *not* giving the entire disk to the
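A quick way to check the point being made here -- that the provider glabel exposes is one sector smaller than the raw disk, so ZFS writing to label/disk01 can never reach the sector holding the glabel metadata -- is to compare sizes. A minimal sketch, assuming a hypothetical disk ad4 and a 512-byte sector size (neither taken from the thread):

    glabel label disk01 ad4
    diskinfo -v /dev/ad4 | grep mediasize            # size of the raw disk
    diskinfo -v /dev/label/disk01 | grep mediasize   # one sector smaller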

Re: ZFS: drive replacement performance

2009-07-07 Thread Mahlon E. Smith
On Tue, Jul 07, 2009, Freddie Cash wrote: > > I think (never tried) you can use "zpool scrub -s store" to stop the > resilver. If not, you should be able to re-do the replace command. Hmm. I think I may be stuck. % zpool scrub -s store % zpool status | grep scrub scrub: resilver in progres
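For reference, a hedged sketch of the two suggestions quoted above, using store as the pool name and placeholder vdev names; on Mahlon's system the first one evidently did not cancel the resilver:

    zpool scrub -s store                            # ask ZFS to stop the scrub/resilver
    zpool status store | grep -E 'scrub|resilver'   # confirm whether it actually stopped
    zpool replace store <old-vdev> <new-disk>       # otherwise, try re-issuing the replace

The <old-vdev> and <new-disk> arguments are placeholders, not devices taken from the thread.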

glabel metadata protection (WAS: ZFS: drive replacement performance)

2009-07-07 Thread Dan Naumov
>> Not to derail this discussion, but can anyone explain if the actual >> glabel metadata is protected in any way? If I use glabel to label a >> disk and then create a pool using /dev/label/disklabel, won't ZFS >> eventually overwrite the glabel metadata in the last sector since the >> disk in its

Re: ZFS: drive replacement performance

2009-07-07 Thread Brooks Davis
On Wed, Jul 08, 2009 at 08:32:12AM +1000, Andrew Snow wrote: > Mahlon E. Smith wrote: > >> Strangely, the ETA is jumping all over the place, from 50 hours to 2000+ >> hours. Never seen the percent complete over 0.01% done, but then it >> goes back to 0.00%. > > Are you taking snapshots from cron

Re: ZFS: drive replacement performance

2009-07-07 Thread Andrew Snow
Mahlon E. Smith wrote: Strangely, the ETA is jumping all over the place, from 50 hours to 2000+ hours. Never seen the percent complete over 0.01% done, but then it goes back to 0.00%. Are you taking snapshots from crontab? Older versions of the ZFS code re-started scrubbing whenever a snaps
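If cron-driven snapshots restarting the scrub is indeed the cause, it is easy to check before the resilver gets restarted again. A small sketch, using the usual FreeBSD crontab locations (nothing here is taken from the thread):

    grep 'zfs snapshot' /etc/crontab        # system crontab
    crontab -l | grep 'zfs snapshot'        # the invoking user's crontab

Any matching job can be commented out until the resilver completes.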

Re: ZFS: drive replacement performance

2009-07-07 Thread Brooks Davis
On Wed, Jul 08, 2009 at 01:40:02AM +0300, Dan Naumov wrote: > On Wed, Jul 8, 2009 at 1:32 AM, Freddie Cash wrote: > > On Tue, Jul 7, 2009 at 3:26 PM, Mahlon E. Smith wrote: > > > >> On Tue, Jul 07, 2009, Freddie Cash wrote: > >> > > >> > This is why we've started using glabel(8) to label our drive

Re: ZFS: drive replacement performance

2009-07-07 Thread Dan Naumov
On Wed, Jul 8, 2009 at 1:32 AM, Freddie Cash wrote: > On Tue, Jul 7, 2009 at 3:26 PM, Mahlon E. Smith wrote: > >> On Tue, Jul 07, 2009, Freddie Cash wrote: >> > >> > This is why we've started using glabel(8) to label our drives, and then >> add >> > the labels to the pool: >> >   # zpool create st

Re: ZFS: drive replacement performance

2009-07-07 Thread Freddie Cash
On Tue, Jul 7, 2009 at 3:26 PM, Mahlon E. Smith wrote: > On Tue, Jul 07, 2009, Freddie Cash wrote: > > > > This is why we've started using glabel(8) to label our drives, and then > add > > the labels to the pool: > > # zpool create store raidz1 label/disk01 label/disk02 label/disk03 > > > > Tha

Re: ZFS: drive replacement performance

2009-07-07 Thread Mahlon E. Smith
On Tue, Jul 07, 2009, Freddie Cash wrote: > > This is why we've started using glabel(8) to label our drives, and then add > the labels to the pool: > # zpool create store raidz1 label/disk01 label/disk02 label/disk03 > > That way, it doesn't matter where the kernel detects the drives or what the >
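A minimal sketch of the labelling approach Freddie describes, assuming three hypothetical disks ad4, ad6 and ad8 (glabel writes its metadata to the last sector of each disk and exposes the rest as /dev/label/NAME):

    glabel label disk01 ad4     # tag each physical disk with a stable name
    glabel label disk02 ad6
    glabel label disk03 ad8
    zpool create store raidz1 label/disk01 label/disk02 label/disk03

Because the pool is built on the label/ providers, it no longer matters which adN/daN numbers the kernel hands out on the next boot.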

Re: ZFS: drive replacement performance

2009-07-07 Thread Brooks Davis
On Tue, Jul 07, 2009 at 12:56:14PM -0700, Mahlon E. Smith wrote: > > I've got a 9 SATA drive raidz1 array, started at version 6, upgraded to > version 13. I had an apparent drive failure, and then at some point, a > kernel panic (unrelated to ZFS). The reboot caused the device numbers > to shuff

Re: ZFS: drive replacement performance

2009-07-07 Thread Freddie Cash
On Tue, Jul 7, 2009 at 12:56 PM, Mahlon E. Smith wrote: > I've got a 9 SATA drive raidz1 array, started at version 6, upgraded to > version 13. I had an apparent drive failure, and then at some point, a > kernel panic (unrelated to ZFS). The reboot caused the device numbers > to shuffle, so I d

ZFS: drive replacement performance

2009-07-07 Thread Mahlon E. Smith
I've got a 9 SATA drive raidz1 array, started at version 6, upgraded to version 13. I had an apparent drive failure, and then at some point, a kernel panic (unrelated to ZFS). The reboot caused the device numbers to shuffle, so I did an 'export/import' to re-read the metadata and get the array b
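A rough sketch of the recovery sequence being described, assuming the pool is named store (as elsewhere in the thread) and using placeholder device names:

    zpool export store           # drop the stale device bindings
    zpool import store           # re-scan devices and re-read the vdev metadata
    zpool status store           # note the name or numeric GUID of the missing disk
    zpool replace store <old-device-or-guid> ad10   # resilver onto the replacement

Here <old-device-or-guid> stands in for the identifier zpool status prints for the vdev whose device node is gone, and ad10 is a hypothetical replacement disk; neither value comes from the thread.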