On Tue, Jul 7, 2009 at 3:26 PM, Mahlon E. Smith mahlon at martini.nu wrote:
> On Tue, Jul 07, 2009, Freddie Cash wrote:
> > This is why we've started using glabel(8) to label our drives, and then
> > add the labels to the pool:
I would say in this case you're *not* giving the entire disk to the
pool; you're giving ZFS a geom that's one sector smaller than the disk.
ZFS never sees or can touch the glabel metadata.

Is ZFS happy if the size of its disk changes underneath it? I have
expanded a zpool a couple of times
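For reference, you can see that one-sector difference directly with
diskinfo(8); a minimal sketch, where the raw disk ad4 and the label name
disk01 are placeholders for your own devices:

% diskinfo -v /dev/ad4 | grep 'mediasize in sectors'
% diskinfo -v /dev/label/disk01 | grep 'mediasize in sectors'

The label provider should report exactly one sector less than the raw
disk, since glabel keeps its metadata in the disk's last sector.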
On 7/7/2009 8:13 PM, Mahlon E. Smith wrote:
> I also tried another export/import cycle, in the random hope that would
> stop the active replace -- no dice. *However*, on the import, now I see
> this flooding my console (wasn't there previously, strangely):
>
> Jul 7 16:50:15 disobedience root: ZFS:
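Messages logged that way also land in syslog, so one way to capture
them while poking at the pool; a sketch, assuming the default
/etc/syslog.conf:

% tail -f /var/log/messages | grep ZFS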
I've got a 9 SATA drive raidz1 array, started at version 6, upgraded to
version 13. I had an apparent drive failure, and then at some point, a
kernel panic (unrelated to ZFS). The reboot caused the device numbers
to shuffle, so I did an 'export/import' to re-read the metadata and get
the array
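For reference, the export/import cycle amounts to the two commands
below; a sketch, assuming the pool is named 'store' as elsewhere in
this thread:

# zpool export store
# zpool import store

On import, ZFS tastes all available GEOM providers and re-reads the
vdev labels, so the pool is reassembled even when device numbers have
moved around.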
On Tue, Jul 7, 2009 at 12:56 PM, Mahlon E. Smith mah...@martini.nu wrote:
> I've got a 9 SATA drive raidz1 array, started at version 6, upgraded to
> version 13. I had an apparent drive failure, and then at some point, a
> kernel panic (unrelated to ZFS). The reboot caused the device numbers
> to
On Tue, Jul 07, 2009 at 12:56:14PM -0700, Mahlon E. Smith wrote:
> I've got a 9 SATA drive raidz1 array, started at version 6, upgraded to
> version 13. I had an apparent drive failure, and then at some point, a
> kernel panic (unrelated to ZFS). The reboot caused the device numbers
> to shuffle,
On Tue, Jul 07, 2009, Freddie Cash wrote:
> This is why we've started using glabel(8) to label our drives, and then
> add the labels to the pool:
>
> # zpool create store raidz1 label/disk01 label/disk02 label/disk03
>
> That way, it doesn't matter where the kernel detects the drives or what the
On Tue, Jul 7, 2009 at 3:26 PM, Mahlon E. Smith mah...@martini.nu wrote:
> On Tue, Jul 07, 2009, Freddie Cash wrote:
> > This is why we've started using glabel(8) to label our drives, and then
> > add the labels to the pool:
> >
> > # zpool create store raidz1 label/disk01 label/disk02 label/disk03
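Spelled out end to end, the procedure is: label each disk first, then
build the pool on the label providers. A sketch; the device names ad4,
ad6, and ad8 are placeholders for the actual disks:

# glabel label disk01 ad4
# glabel label disk02 ad6
# glabel label disk03 ad8
# zpool create store raidz1 label/disk01 label/disk02 label/disk03

glabel(8) writes its metadata to the last sector of each disk and
exposes everything before it as /dev/label/diskNN.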
On Wed, Jul 8, 2009 at 1:32 AM, Freddie Cash fjwc...@gmail.com wrote:
> On Tue, Jul 7, 2009 at 3:26 PM, Mahlon E. Smith mah...@martini.nu wrote:
> > On Tue, Jul 07, 2009, Freddie Cash wrote:
> > > This is why we've started using glabel(8) to label our drives, and then
> > > add the labels to the pool:
On Wed, Jul 08, 2009 at 01:40:02AM +0300, Dan Naumov wrote:
> On Wed, Jul 8, 2009 at 1:32 AM, Freddie Cash fjwc...@gmail.com wrote:
> > On Tue, Jul 7, 2009 at 3:26 PM, Mahlon E. Smith mah...@martini.nu wrote:
> > > On Tue, Jul 07, 2009, Freddie Cash wrote:
> > > > This is why we've started using glabel(8)
Mahlon E. Smith wrote:
> Strangely, the ETA is jumping all over the place, from 50 hours to 2000+
> hours. I've never seen the percent complete get over 0.01% done, and even
> then it drops back to 0.00%.

Are you taking snapshots from crontab? Older versions of the ZFS code
re-started scrubbing whenever a snapshot was created.
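A quick way to check for that, assuming the snapshots are driven by a
plain cron job rather than periodic(8):

% crontab -l | grep 'zfs snapshot'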
On Wed, Jul 08, 2009 at 08:32:12AM +1000, Andrew Snow wrote:
> Mahlon E. Smith wrote:
> > Strangely, the ETA is jumping all over the place, from 50 hours to 2000+
> > hours. I've never seen the percent complete get over 0.01% done, and even
> > then it drops back to 0.00%.
>
> Are you taking snapshots from crontab?
Not to derail this discussion, but can anyone explain if the actual
glabel metadata is protected in any way? If I use glabel to label a
disk and then create a pool using /dev/label/disklabel, won't ZFS
eventually overwrite the glabel metadata in the last sector, since the
disk in its entirety is given to the pool?
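You can at least confirm which labels the kernel currently has active,
and on which underlying providers, with:

% glabel status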
On Tue, Jul 07, 2009, Freddie Cash wrote:
> I think (never tried) you can use "zpool scrub -s store" to stop the
> resilver. If not, you should be able to re-do the replace command.

Hmm. I think I may be stuck.

% zpool scrub -s store
% zpool status | grep scrub
 scrub: resilver in progress
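If re-doing the replace becomes necessary, it would look something like
the sketch below; the angle-bracket names are hypothetical placeholders
taken from the zpool status output:

# zpool status store
# zpool replace store <old-device> <new-device>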
Dan Naumov wrote:
> If I use glabel to label a
> disk and then create a pool using /dev/label/disklabel, won't ZFS
> eventually overwrite the glabel metadata in the last sector, since the
> disk in its entirety is given to the pool?

I would say in this case you're *not* giving the entire disk to the
pool; you're giving ZFS a geom that's one sector smaller than the disk.
On Tue, Jul 07, 2009 at 03:53:58PM -0500, Brooks Davis wrote:
> I'm seeing essentially the same thing on an 8.0-BETA1 box with an 8-disk
> raidz1 pool. Every once in a while the system makes it to 0.05% done
> and gives a vaguely reasonable rebuild time, but it quickly drops back
> to reporting 0.00%
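One way to keep an eye on the bouncing progress figures without
re-typing the command; a throwaway sketch, assuming the pool is named
'store':

% while true; do zpool status store | grep scrub; sleep 300; done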
On Wed, Jul 08, 2009 at 11:53:17AM +1000, Emil Mikulic wrote:
> On Tue, Jul 07, 2009 at 03:53:58PM -0500, Brooks Davis wrote:
> > I'm seeing essentially the same thing on an 8.0-BETA1 box with an 8-disk
> > raidz1 pool. Every once in a while the system makes it to 0.05% done
> > and gives a vaguely