[zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Robert Hartzell
In 2007 I bought six WD1600JS 160GB SATA disks, used four to create a raidz
storage pool, and shelved the other two as spares. One of the disks failed
last night, so I shut down the server and replaced it with a spare. When I
tried to zpool replace the disk I got:

zpool replace tank c10t0d0 
cannot replace c10t0d0 with c10t0d0: device is too small

The 4 original disk partition tables look like this:

Current partition table (original):
Total disk sectors available: 312560317 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm                34    149.04GB          312560350
  1 unassigned    wm                 0           0                  0
  2 unassigned    wm                 0           0                  0
  3 unassigned    wm                 0           0                  0
  4 unassigned    wm                 0           0                  0
  5 unassigned    wm                 0           0                  0
  6 unassigned    wm                 0           0                  0
  8   reserved    wm         312560351      8.00MB          312576734

Spare disk partition table looks like this:

Current partition table (original):
Total disk sectors available: 312483549 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0        usr    wm                34    149.00GB          312483582
  1 unassigned    wm                 0           0                  0
  2 unassigned    wm                 0           0                  0
  3 unassigned    wm                 0           0                  0
  4 unassigned    wm                 0           0                  0
  5 unassigned    wm                 0           0                  0
  6 unassigned    wm                 0           0                  0
  8   reserved    wm         312483583      8.00MB          312499966
 
So it seems the two spare disks are a slightly different model and are about
40 MB smaller than the original disks.
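That gap can be checked from the two "Total disk sectors available" values
above; a quick sketch, assuming 512-byte sectors as these drives report:

```python
# Sketch: size gap between the original and spare disks, computed from
# the "Total disk sectors available" values reported by format(1M).
# Assumes 512-byte sectors.
SECTOR_BYTES = 512

original_sectors = 312_560_317
spare_sectors = 312_483_549

diff_sectors = original_sectors - spare_sectors
diff_mb = diff_sectors * SECTOR_BYTES / 1_000_000  # decimal megabytes

print(diff_sectors)       # 76768
print(round(diff_mb, 1))  # 39.3 -- i.e. the "about 40 MB" gap
```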

I know I can just add a larger disk, but I would rather use the hardware I have
if possible.
1) Is there any way to replace the failed disk with one of the spares?
2) Can I recreate the zpool using three of the original disks and one of the 
slightly smaller spares? Will zpool/zfs adjust its size to the smaller disk?
3) If #2 is possible, would I still be able to use the last shelved disk 
as a spare?

If #2 is possible I would probably recreate the zpool as raidz2 instead of the 
current raidz1.
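For what it's worth, a raidz vdev is sized by its smallest member, so the
mixed set would only give up the small per-disk difference. A back-of-the-
envelope sketch (rough estimate only; a real pool also loses space to
metadata and reservations):

```python
# Rough raidz sizing: a raidz vdev is limited by its smallest member,
# so usable space is approximately (N - parity) * smallest disk.
def raidz_usable_gb(disk_sizes_gb, parity):
    return (len(disk_sizes_gb) - parity) * min(disk_sizes_gb)

disks = [149.04, 149.04, 149.04, 149.00]  # three originals + one spare

print(raidz_usable_gb(disks, parity=1))  # 447.0 (raidz1)
print(raidz_usable_gb(disks, parity=2))  # 298.0 (raidz2)
```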

Any info/comments would be greatly appreciated.

Robert
  
--   
   Robert Hartzell
b...@rwhartzell.net
 RwHartzell.Net, Inc.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Robert Hartzell

On Mar 4, 2011, at 10:01 AM, Tim Cook wrote:

 
 
 On Fri, Mar 4, 2011 at 10:22 AM, Robert Hartzell b...@rwhartzell.net wrote:
 In 2007 I bought 6 WD1600JS 160GB sata disks and used 4 to create a raidz 
 storage pool and then shelved the other two for spares. One of the disks 
 failed last night so I shut down the server and replaced it with a spare. 
 When I tried to zpool replace the disk I get:
 
 zpool replace tank c10t0d0
 cannot replace c10t0d0 with c10t0d0: device is too small
 
 [...]
 So it seems the two spare disks are a slightly different model and are about 
 40 MB smaller than the original disks.
 
 I know I can just add a larger disk, but I would rather use the hardware I 
 have if possible.
 1) Is there any way to replace the failed disk with one of the spares?
 2) Can I recreate the zpool using three of the original disks and one of the 
 slightly smaller spares? Will zpool/zfs adjust its size to the smaller disk?
 3) If #2 is possible, would I still be able to use the last shelved disk 
 as a spare?
 
 If #2 is possible I would probably recreate the zpool as raidz2 instead of 
 the current raidz1.
 
 Any info/comments would be greatly appreciated.
 
 Robert
 
 
 
 
 You cannot.  That's why I suggested two years ago that they chop off 1% from 
 the end of the disk at install time to equalize drive sizes.  That way you 
 wouldn't run into this problem trying to replace disks from a different 
 vendor or a different batch.  The response was that Sun makes sure all drives 
 are exactly the same size (although I do recall someone on this forum having 
 this issue with Sun OEM disks as well).  It's ridiculous that they don't take 
 into account the slight differences in drive sizes from vendor to vendor.  
 Forcing you to single-source your disks is a bad habit to get into, IMO.
 
 --Tim
 


Well, that sucks... So I guess the only option is to replace the disk with a 
larger one? Or are you saying that's not possible either?
I can upgrade to larger disks, but then there is no guarantee that I could even 
buy four identical disks off the shelf at any one time.

Thanks for the info

--   
   Robert Hartzell
b...@rwhartzell.net
 RwHartzell.Net, Inc.





Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Robert Hartzell

On Mar 4, 2011, at 11:46 AM, Cindy Swearingen wrote:

 Robert,
 
 Which Solaris release is this?
 
 Thanks,
 
 Cindy
 


Solaris 11 Express 2010.11

--   
   Robert Hartzell
b...@rwhartzell.net
 RwHartzell.Net, Inc.





Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Robert Hartzell

On Mar 4, 2011, at 11:19 AM, Eric D. Mudama wrote:

 On Fri, Mar  4 at  9:22, Robert Hartzell wrote:
 In 2007 I bought 6 WD1600JS 160GB sata disks and used 4 to create a raidz 
 storage pool and then shelved the other two for spares. One of the disks 
 failed last night so I shut down the server and replaced it with a spare. 
 When I tried to zpool replace the disk I get:
 
 zpool replace tank c10t0d0
 cannot replace c10t0d0 with c10t0d0: device is too small
 
 [...]
 
 So it seems the two spare disks are a slightly different model and are 
 about 40 MB smaller than the original disks.
 
 
 One comment: The IDEMA LBA01 spec size of a 160GB device is
 312,581,808 sectors.
 
 Instead of those WD models, where neither the old nor new drives
 follow the IDEMA recommendation, consider buying a drive that reports
 that many sectors.  Almost all models these days should be following
 the IDEMA recommendations due to all the troubles people have had.
 
 --eric
 
 -- 
 Eric D. Mudama
 edmud...@bounceswoosh.org
 


That's encouraging; if I have to, I would rather buy one new disk than four.
Thanks, Robert 
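
For reference, the 312,581,808 figure Eric quotes can be sanity-checked
against the published IDEMA LBA-count rule; a quick sketch, assuming the
IDEMA LBA1-02/LBA1-03 formula for 512-byte-sector drives:

```python
# IDEMA LBA count rule for 512-byte-sector drives:
#   LBA count = 97,696,368 + 1,953,504 * (advertised capacity in GB - 50)
def idema_lba_count(capacity_gb):
    return 97_696_368 + 1_953_504 * (capacity_gb - 50)

print(idema_lba_count(160))  # 312581808 -- matches the figure above
```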

--   
   Robert Hartzell
b...@rwhartzell.net
 RwHartzell.Net, Inc.





Re: [zfs-discuss] How do I Import rpool to an alternate location?

2010-08-17 Thread Robert Hartzell

On 08/16/10 10:38 PM, George Wilson wrote:

Robert Hartzell wrote:

On 08/16/10 07:47 PM, George Wilson wrote:

The root filesystem on the root pool is set to 'canmount=noauto', so you
need to manually mount it first using 'zfs mount <dataset-name>'. Then
run 'zfs mount -a'.

- George



Mounting the dataset failed because the /mnt dir was not empty, and
'zfs mount -a' failed, I guess because the first command failed.




It's possible that, as part of the initial import, one of the mount
points tried to create a directory under /mnt. You should first unmount
everything associated with that pool, then ensure that /mnt is empty and
mount the root filesystem first. Don't mount anything else until the
root is mounted.

- George


Awesome! That worked... just recovered 100GB of data! Thanks for the help

--
  Robert Hartzell
b...@rwhartzell.net
  RwHartzell.Net


[zfs-discuss] How do I Import rpool to an alternate location?

2010-08-16 Thread Robert Hartzell
I have a disk which is half of a boot-disk mirror from a failed system 
that I would like to extract some data from. So I installed the disk in a 
test system and did:


zpool import -R /mnt -f rpool bertha

which gives me:


bertha                      102G   126G    84K  /mnt/bertha
bertha/ROOT                34.3G   126G    19K  legacy
bertha/ROOT/snv_134        34.3G   126G  10.9G  /mnt
bertha/Vbox                46.9G   126G  46.9G  /mnt/export/Vbox
bertha/dump                2.00G   126G  2.00G  -
bertha/export              8.05G   126G    31K  /mnt/export
bertha/export/home         8.05G  52.0G  8.01G  /mnt/export/home
bertha/mail                1.54M  5.00G  1.16M  /mnt/var/mail
bertha/swap                   4G   130G   181M  -
bertha/zones               6.86G   126G    24K  /mnt/export/zones
bertha/zones/bz1           6.05G   126G    24K  /mnt/export/zones/bz1
bertha/zones/bz1/ROOT      6.05G   126G    21K  legacy
bertha/zones/bz1/ROOT/zbe  6.05G   126G  6.05G  legacy
bertha/zones/bz2            821M   126G    24K  /mnt/export/zones/bz2
bertha/zones/bz2/ROOT       821M   126G    21K  legacy
bertha/zones/bz2/ROOT/zbe   821M   126G   821M  legacy




cd /mnt ; ls
bertha export var
ls bertha
boot etc

Where are the rest of the file systems and data?

--
  Robert Hartzell
b...@rwhartzell.net
  RwHartzell.Net


Re: [zfs-discuss] How do I Import rpool to an alternate location?

2010-08-16 Thread Robert Hartzell

On 08/16/10 07:39 PM, Mark Musante wrote:


On 16 Aug 2010, at 22:30, Robert Hartzell wrote:



cd /mnt ; ls
bertha export var
ls bertha
boot etc

where is the rest of the file systems and data?


By default, root filesystems are not mounted. Try doing a zfs mount
bertha/ROOT/snv_134


This didn't work...


pfexec zfs mount bertha/ROOT/snv_134
cannot mount '/mnt': directory is not empty


Do I need to set the mount point to a different location?

--
  Robert Hartzell
b...@rwhartzell.net
  RwHartzell.Net


Re: [zfs-discuss] How do I Import rpool to an alternate location?

2010-08-16 Thread Robert Hartzell

On 08/16/10 07:47 PM, George Wilson wrote:

The root filesystem on the root pool is set to 'canmount=noauto', so you
need to manually mount it first using 'zfs mount <dataset-name>'. Then
run 'zfs mount -a'.

- George



Mounting the dataset failed because the /mnt dir was not empty, and 'zfs 
mount -a' failed, I guess because the first command failed.



--
  Robert Hartzell
b...@rwhartzell.net
  RwHartzell.Net