Re: [zfs-discuss] invalid vdev configuration

2009-06-04 Thread Richard Elling

Since you did not export the pool, it may be looking for the wrong
devices.  Try this:
   zpool export vault
   zpool import vault

which will clear the old entries out of the zpool.cache and look for
the new devices.
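If the re-import succeeds, a quick sanity check might look like this (a
sketch; the pool name is from your post):

   zpool status -v vault    # both sides of the mirror should show ONLINE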

More below...

Brian Leonard wrote:

I had a machine die the other day and take one of its ZFS pools with it. I booted the new machine, 
with the same disks but a different SATA controller, and the rpool was mounted but another pool, 
vault, was not.  If I try to import it I get "invalid vdev configuration".  
fmdump shows zfs.vdev.bad_label, and checking the labels with zdb I find labels 2 and 3 missing.  
How can I get my pool back?  Thanks.

snv_98

zpool import
  pool: vault
id: 196786381623412270
 state: UNAVAIL
action: The pool cannot be imported due to damaged devices or data.
config:

vault   UNAVAIL  insufficient replicas
  mirror  UNAVAIL  corrupted data
c6d1p0  ONLINE
c7d1p0  ONLINE


fmdump -eV
Jun 04 2009 07:43:47.165169453 ereport.fs.zfs.vdev.bad_label
nvlist version: 0
class = ereport.fs.zfs.vdev.bad_label
ena = 0x8ebd8837ae1
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = zfs
pool = 0x2bb202be54c462e
vdev = 0xaa3f2fd35788620b
(end detector)

pool = vault
pool_guid = 0x2bb202be54c462e
pool_context = 2
pool_failmode = wait
vdev_guid = 0xaa3f2fd35788620b
vdev_type = mirror
parent_guid = 0x2bb202be54c462e
parent_type = root
prev_state = 0x7
__ttl = 0x1
__tod = 0x4a27c183 0x9d8492d

Jun 04 2009 07:43:47.165169794 ereport.fs.zfs.zpool
nvlist version: 0
class = ereport.fs.zfs.zpool
ena = 0x8ebd8837ae1
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = zfs
pool = 0x2bb202be54c462e
(end detector)

pool = vault
pool_guid = 0x2bb202be54c462e
pool_context = 2
pool_failmode = wait
__ttl = 0x1
__tod = 0x4a27c183 0x9d84a82


zdb -l /dev/rdsk/c6d1p0
  


It is unusual to have a vdev on a partition (c6d1p0).  It is
more common to have a vdev on a slice within the partition
(e.g. c6d1s0).  The partition and the slice may map to overlapping,
but not identical, ranges of the device.  For example,
on one of my machines:
   c0t0d0p0 is physical blocks 0-976735935
   c0t0d0s0 is physical blocks 16065-308512259

If c6d1p0 and c6d1s0 start at the same block but have different sizes
on your system, then zfs may not be able to see the labels at the end
of the device (labels 2 and 3).

Above, I used slice 0 as an example; your system may use a
different slice.  But you can run zdb -l on all of them to find
the proper, complete slice.
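For example, a quick way to check every slice and partition at once is a
small loop like this (a sketch; adjust the device names for your disks):

   for d in /dev/rdsk/c[67]d1s* /dev/rdsk/c[67]d1p* ; do
       echo "== $d =="
       zdb -l "$d" | grep LABEL    # a complete device shows LABEL 0 through LABEL 3
   done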
-- richard



LABEL 0

version=13
name='vault'
state=0
txg=42243
pool_guid=196786381623412270
hostid=997759551
hostname='philo'
top_guid=12267576494733681163
guid=16901406274466991796
vdev_tree
type='mirror'
id=0
guid=12267576494733681163
whole_disk=0
metaslab_array=14
metaslab_shift=33
ashift=9
asize=1000199946240
is_log=0
children[0]
type='disk'
id=0
guid=16901406274466991796
path='/dev/dsk/c1t1d0p0'
devid='id1,s...@f3b789a3f48e44b860003d3320001/q'
phys_path='/p...@0,0/pci1043,8...@7/d...@1,0:q'
whole_disk=0
DTL=77
children[1]
type='disk'
id=1
guid=6231056817092537765
path='/dev/dsk/c1t0d0p0'
devid='id1,s...@f3b789a3f48e44b86000263f9/q'
phys_path='/p...@0,0/pci1043,8...@7/d...@0,0:q'
whole_disk=0
DTL=76

LABEL 1

version=13
name='vault'
state=0
txg=42243
pool_guid=196786381623412270
hostid=997759551
hostname='philo'
top_guid=12267576494733681163
guid=16901406274466991796
vdev_tree
type='mirror'
id=0
guid=12267576494733681163
whole_disk=0
metaslab_array=14
metaslab_shift=33
ashift=9
asize=1000199946240
is_log=0
children[0]
type='disk'
id=0
guid=16901406274466991796
path='/dev/dsk/c1t1d0p0'
devid='id1,s...@f3b789a3f48e44b860003d3320001/q'
phys_path='/p...@0,0/pci1043,8...@7/d...@1,0:q'
whole_disk=0
DTL=77
children[1]
type='disk'
id=1
   

Re: [zfs-discuss] LUN expansion

2009-06-04 Thread George Wilson

Leonid,

I will be integrating this functionality within the next week:

PSARC 2008/353 zpool autoexpand property
6475340 when lun expands, zfs should expand too

Unfortunately, they won't help you until they get pushed to OpenSolaris. 
The problem you're facing is that the partition table needs to be 
expanded to use the newly created space. This all happens automatically 
with my code changes, but if you want to do it now you'll have to change 
the partition table yourself and then export/import the pool.

Your other option is to wait until these bits show up in OpenSolaris.
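Until then, the manual route is roughly the following (a sketch only, not
tested against your setup; 'tank' is a placeholder for your pool name):

   zpool export tank           # pool name is a placeholder
   format -e c0t0d0            # interactively re-label and grow the slice that
                               # holds the pool so it covers the new capacity
   zpool import tank           # zfs sees the larger slice on import

Once the autoexpand bits land, this should reduce to something like
'zpool set autoexpand=on <pool>'.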

Thanks,
George

Leonid Zamdborg wrote:

Hi,

I have a problem with expanding a zpool to reflect a change in the underlying 
hardware LUN.  I've created a zpool on top of a 3Ware hardware RAID volume, 
with a capacity of 2.7TB.  I've since added disks to the hardware volume, 
expanding the capacity of the volume to 10TB.  This change in capacity shows up 
in format:

0. c0t0d0 <AMCC-9650SE-16M DISK-4.06-10.00TB>
/p...@0,0/pci10de,3...@e/pci13c1,1...@0/s...@0,0

When I do a prtvtoc /dev/dsk/c0t0d0, I get:

* /dev/dsk/c0t0d0 partition map
*
* Dimensions:
* 512 bytes/sector
* 21484142592 sectors
* 5859311549 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector     Count    Sector
*           34        222       255
*
*                          First       Sector        Last
* Partition  Tag  Flags    Sector       Count         Sector   Mount Directory
       0      4    00         256   5859294943   5859295198
       8     11    00  5859295199        16384   5859311582

The new capacity, unfortunately, shows up as inaccessible.  I've tried exporting and 
importing the zpool, but the capacity is still not recognized.  I kept seeing things 
online about Dynamic LUN Expansion, but how do I do this?




Re: [zfs-discuss] invalid vdev configuration

2009-06-04 Thread Brian Leonard
 Since you did not export the pool, it may be looking for the wrong
 devices.  Try this:
 zpool export vault
 zpool import vault

That was the first thing I tried, with no luck.

 Above, I used slice 0 as an example, your system may use a
 different slice.  But you can run zdb -l on all of them to find

Aha, zdb found complete label sets for the vault pool on /dev/rdsk/c6d1 and 
c7d1.  The incomplete labels were on c6d1p0 and c7d1p0.  Could I just zpool 
replace c6d1p0 with c6d1 and c7d1p0 with c7d1?


Re: [zfs-discuss] LUN expansion

2009-06-04 Thread Leonid Zamdborg
 The problem you're facing is that the partition table needs to be
 expanded to use the newly created space. This all happens automatically
 with my code changes but if you want to do this you'll have to change
 the partition table and export/import the pool.

George,

Is there a reasonably straightforward way of doing this partition table edit 
with existing tools that won't clobber my data?  I'm very new to ZFS, and 
didn't want to start experimenting with a live machine.


Re: [zfs-discuss] invalid vdev configuration

2009-06-04 Thread Richard Elling

Brian Leonard wrote:

Since you did not export the pool, it may be looking for the wrong
devices.  Try this:
zpool export vault
zpool import vault



That was the first thing I tried, with no luck.

  

Above, I used slice 0 as an example, your system may use a
different slice.  But you can run zdb -l on all of them to find



Aha, zdb found complete label sets for the vault pool on /dev/rdsk/c6d1 and 
c7d1.  The incomplete labels were c6d1p0 and c7d1p0.  Could I just zpool replace c6d1p0 
with c6d1 and c7d1p0 with c7d1?
  


hmm... export the pool again.  Then try simply 'zpool import' and
it should show the way it sees vault.  Reply with that output.
-- richard



[zfs-discuss] rpool mirroring

2009-06-04 Thread noz
I've been playing around with zfs root pool mirroring and came across some 
problems.

I have no problems mirroring the root pool if I have both disks attached during 
OpenSolaris installation (installer sees 2 disks).

The problem occurs when I only have one disk attached to the system during 
install.  After OpenSolaris installation completes, I attach the second disk 
and try to create a mirror but I cannot.

Here are the steps I go through:
1) install OpenSolaris onto 16GB disk
2) after successful install, shutdown, and attach second disk (also 16GB)
3) fdisk -B
4) partition
5) zfs attach

Step 5 fails, giving a disk too small error.
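One way to see where the space went is to compare the two slice tables
(a sketch; the device names are the ones that appear later in this thread):

   prtvtoc /dev/rdsk/c7d0s2    # slice table of the original disk
   prtvtoc /dev/rdsk/c7d1s2    # the new disk: note slice 9 and the smaller slice 0

The slice 0 'Sector Count' is effectively what zpool attach is comparing.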

What I noticed about the second disk is that it has a 9th partition called 
alternates that takes up about 15MBs.  This partition doesn't exist in the 
first disk and I believe is what's causing the problem.  I can't figure out how 
to delete this partition and I don't know why it's there.  How do I mirror the 
root pool if I don't have both disks attached during OpenSolaris installation?  
I realize I can just use a disk larger than 16GBs, but that would be a waste.


Re: [zfs-discuss] invalid vdev configuration

2009-06-04 Thread Brian Leonard
 hmm... export the pool again.  Then try simply 'zpool import'
 and it should show the way it sees vault.  Reply with that output.

zpool export vault
cannot open 'vault': no such pool


zpool import
  pool: vault
id: 196786381623412270
 state: UNAVAIL
action: The pool cannot be imported due to damaged devices or data.
config:

vault   UNAVAIL  insufficient replicas
  mirror  UNAVAIL  corrupted data
c6d1p0  ONLINE
c7d1p0  ONLINE


Re: [zfs-discuss] rpool mirroring

2009-06-04 Thread Cindy Swearingen

Hi Noz,

This problem was reported recently and this bug was filed:

6844090 zfs should be able to mirror to a smaller disk

I believe slice 9 (alternates) is an older method for providing
alternate disk blocks on x86 systems. Apparently, it can be removed by
using the format -e command. I haven't tried this though.
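For anyone who wants to try it, the interactive sequence would be roughly
(an untested sketch; the disk name is the second disk from this thread):

   format -e c7d1
   # -> partition
   # -> 9                     select the alternates slice
   # -> tag: unassigned, size: 0
   # -> label                 choose SMI, not EFI
   # -> quit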

I don't think removing slice 9 will help though if these two disks
are not identical, hence the bug.

You can work around this problem by attaching a slightly larger disk.

Cindy


noz wrote:

I've been playing around with zfs root pool mirroring and came across some 
problems.

I have no problems mirroring the root pool if I have both disks attached during 
OpenSolaris installation (installer sees 2 disks).

The problem occurs when I only have one disk attached to the system during 
install.  After OpenSolaris installation completes, I attach the second disk 
and try to create a mirror but I cannot.

Here are the steps I go through:
1) install OpenSolaris onto 16GB disk
2) after successful install, shutdown, and attach second disk (also 16GB)
3) fdisk -B
4) partition
5) zfs attach

Step 5 fails, giving a disk too small error.

What I noticed about the second disk is that it has a 9th partition called 
alternates that takes up about 15MBs.  This partition doesn't exist in the 
first disk and I believe is what's causing the problem.  I can't figure out how to delete 
this partition and I don't know why it's there.  How do I mirror the root pool if I don't 
have both disks attached during OpenSolaris installation?  I realize I can just use a 
disk larger than 16GBs, but that would be a waste.



Re: [zfs-discuss] invalid vdev configuration

2009-06-04 Thread Victor Latushkin

Brian Leonard wrote:

Since you did not export the pool, it may be looking for the wrong
devices.  Try this:
zpool export vault
zpool import vault


That was the first thing I tried, with no luck.


Above, I used slice 0 as an example, your system may use a
different slice.  But you can run zdb -l on all of them to find


Aha, zdb found complete label sets for the vault pool on
/dev/rdsk/c6d1 and c7d1.  The incomplete labels were c6d1p0 and
c7d1p0.  Could I just zpool replace c6d1p0 with c6d1 and c7d1p0 with
c7d1?


You cannot import the pool, so you cannot do any replacements with 
'zpool replace'.

Check the contents of /dev/dsk and /dev/rdsk to see if there are any 
missing links for the devices in question. You may want to run


devfsadm -c disk -sv
devfsadm -c disk -Csv

and see if it reports anything.

Try to move c6d1p0 and c7d1p0 out of /dev/dsk and /dev/rdsk and see if 
you can import the pool.
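Something along these lines (a sketch; park the links somewhere safe so they
can be put back afterwards):

   mkdir -p /var/tmp/oldlinks/dsk /var/tmp/oldlinks/rdsk
   mv /dev/dsk/c6d1p0  /dev/dsk/c7d1p0  /var/tmp/oldlinks/dsk/
   mv /dev/rdsk/c6d1p0 /dev/rdsk/c7d1p0 /var/tmp/oldlinks/rdsk/
   zpool import               # should now list vault against c6d1/c7d1
   zpool import vault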


victor


Re: [zfs-discuss] rpool mirroring

2009-06-04 Thread Richard Elling

noz wrote:

I've been playing around with zfs root pool mirroring and came across some 
problems.

I have no problems mirroring the root pool if I have both disks attached during 
OpenSolaris installation (installer sees 2 disks).

The problem occurs when I only have one disk attached to the system during 
install.  After OpenSolaris installation completes, I attach the second disk 
and try to create a mirror but I cannot.

Here are the steps I go through:
1) install OpenSolaris onto 16GB disk
2) after successful install, shutdown, and attach second disk (also 16GB)
3) fdisk -B
4) partition
  


This is a critical step and it is important that you create an SMI
label, not an EFI label.  For the exact steps, please consult the
ZFS Administration Guide, which has a section on this very
process.
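For reference, the usual x86 sequence looks roughly like this (a sketch only;
the device names match the ones noz uses later in this thread, and the
authoritative steps are in the Admin Guide):

   fdisk -B /dev/rdsk/c7d1p0                                  # one Solaris partition across the disk
   prtvtoc /dev/rdsk/c7d0s2 | fmthard -s - /dev/rdsk/c7d1s2   # copy the SMI slice table
   zpool attach -f rpool c7d0s0 c7d1s0                        # attach the mirror side
   installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c7d1s0   # make it bootable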
-- richard


5) zfs attach

Step 5 fails, giving a disk too small error.

What I noticed about the second disk is that it has a 9th partition called 
alternates that takes up about 15MBs.  This partition doesn't exist in the 
first disk and I believe is what's causing the problem.  I can't figure out how to delete 
this partition and I don't know why it's there.  How do I mirror the root pool if I don't 
have both disks attached during OpenSolaris installation?  I realize I can just use a 
disk larger than 16GBs, but that would be a waste.
  



Re: [zfs-discuss] invalid vdev configuration

2009-06-04 Thread Richard Elling

Victor took the words right out of my fingers :-) more below...

Victor Latushkin wrote:

Brian Leonard wrote:

Since you did not export the pool, it may be looking for the wrong
devices.  Try this:
zpool export vault
zpool import vault


That was the first thing I tried, with no luck.


Above, I used slice 0 as an example, your system may use a
different slice.  But you can run zdb -l on all of them to find


Aha, zdb found complete label sets for the vault pool on
/dev/rdsk/c6d1 and c7d1.  The incomplete labels were c6d1p0 and
c7d1p0.  Could I just zpool replace c6d1p0 with c6d1 and c7d1p0 with
c7d1?


You cannot import pool, so you cannot do any replacements with 'zpool 
replace'.


Check contents of /dev/dsk and /dev/rdsk to see if there are some 
missing links there for devices in question. You may want to run


devfsadm -c disk -sv
devfsadm -c disk -Csv

and see if it reports anything.

Try to move c6d1p0 and c7d1p0 out of /dev/dsk and /dev/rdsk and see if 
you can import the pool.


Another way to do this is to create a new directory and symlink
only the slices into it (after all, /dev/* is just a directory of
symlinks).  Then you can tell zpool to look only at that directory
and not at /dev.  Something like:

 mkdir /mytmpdev
 cd /mytmpdev
 for i in /dev/rdsk/c[67]d*s* ; do
   ln -s $i
 done
 zpool import -d /mytmpdev

This should show the proper slices for vault.
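If it does, the actual import would then be something like:

   zpool import -d /mytmpdev vault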
-- richard



Re: [zfs-discuss] rpool mirroring

2009-06-04 Thread noz
 I believe slice 9 (alternates) is an older method for providing
 alternate disk blocks on x86 systems. Apparently, it can be removed by
 using the format -e command. I haven't tried this though.

format -e worked!!  It is resilvering as I type this message.  Thanks!!
 
 I don't think removing slice 9 will help though if these two disks
 are not identical, hence the bug.

They are identical though.  The only difference is that s0 on the second disk is 
slightly smaller than s0 on the first disk due to s9 stealing about 15 MB of 
space.  So when I invoked zpool attach -f rpool c7d0s0 c7d1s0, I got the too 
small error.  After deleting s9, everything worked okay.

Thanks Cindy!!


Re: [zfs-discuss] rpool mirroring

2009-06-04 Thread Frank Middleton

On 06/04/09 06:44 PM, cindy.swearin...@sun.com wrote:

Hi Noz,

This problem was reported recently and this bug was filed:

6844090 zfs should be able to mirror to a smaller disk


Is this filed on bugs or defects? I had the exact same problem,
and it turned out to be a rounding error in Solaris format/fdisk.
The only way I could fix it was to use Linux (well, Fedora) sfdisk
to make both partitions exactly the same number of bytes. The
alternates partition seems to be hard-wired on older disks, and
AFAIK there's no way to use that space. sfdisk is on the Fedora
live CD if you don't have a handy Linux system to get it from.
BTW the disks were nominally the same size but had different
geometries.
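The sfdisk route boils down to dumping one disk's table and writing it to
the other, something like (a sketch; the Linux device names are placeholders):

   sfdisk -d /dev/sda > table.txt    # dump the good disk's partition table, sector-exact
   sfdisk /dev/sdb < table.txt       # write the identical table to the second disk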

Since I can't find 6844090, I have no idea what it says, but this
really seems to be a bug in fdisk, not ZFS, although I would think
ZFS should be able to mirror to a disk that is only a tiny bit
smaller...

-- Frank
 

I believe slice 9 (alternates) is an older method for providing
alternate disk blocks on x86 systems. Apparently, it can be removed by
using the format -e command. I haven't tried this though.

I don't think removing slice 9 will help though if these two disks
are not identical, hence the bug.

You can workaround this problem by attaching a slightly larger disk.

Cindy


noz wrote:

I've been playing around with zfs root pool mirroring and came across
some problems.

I have no problems mirroring the root pool if I have both disks
attached during OpenSolaris installation (installer sees 2 disks).

The problem occurs when I only have one disk attached to the system
during install. After OpenSolaris installation completes, I attach the
second disk and try to create a mirror but I cannot.

Here are the steps I go through:
1) install OpenSolaris onto 16GB disk
2) after successful install, shutdown, and attach second disk (also 16GB)
3) fdisk -B
4) partition
5) zfs attach

Step 5 fails, giving a disk too small error.

What I noticed about the second disk is that it has a 9th partition
called alternates that takes up about 15MBs. This partition doesn't
exist in the first disk and I believe is what's causing the problem. I
can't figure out how to delete this partition and I don't know why
it's there. How do I mirror the root pool if I don't have both disks
attached during OpenSolaris installation? I realize I can just use a
disk larger than 16GBs, but that would be a waste.



Re: [zfs-discuss] invalid vdev configuration

2009-06-04 Thread Brian Leonard
 Check contents of /dev/dsk and /dev/rdsk to see if there are some
 missing links there for devices in question. You may want to run

 devfsadm -c disk -sv
 devfsadm -c disk -Csv

 and see if it reports anything.

There were quite a few links it removed, all on c0.
 
 Try to move c6d1p0 and c7d1p0 out of /dev/dsk and /dev/rdsk and see if
 you can import the pool.

That worked! It was able to import the pool on c6d1 and c7d1.  Clearly I have a
little more reading to do regarding how Solaris manages disks.  Thanks!