[zfs-discuss] invalid vdev configuration

2009-06-03 Thread Brian Leonard
I had a machine die the other day and take one of its ZFS pools with it. I 
booted the new machine, with the same disks but a different SATA controller, 
and the rpool was mounted, but another pool, "vault", was not.  If I try to import 
it I get "invalid vdev configuration".  fmdump shows zfs.vdev.bad_label, and 
checking the labels with zdb I find labels 2 and 3 missing.  How can I get my 
pool back?  Thanks.

snv_98

zpool import
  pool: vault
    id: 196786381623412270
 state: UNAVAIL
action: The pool cannot be imported due to damaged devices or data.
config:

        vault       UNAVAIL  insufficient replicas
          mirror    UNAVAIL  corrupted data
            c6d1p0  ONLINE
            c7d1p0  ONLINE


fmdump -eV
Jun 04 2009 07:43:47.165169453 ereport.fs.zfs.vdev.bad_label
nvlist version: 0
class = ereport.fs.zfs.vdev.bad_label
ena = 0x8ebd8837ae1
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = zfs
pool = 0x2bb202be54c462e
vdev = 0xaa3f2fd35788620b
(end detector)

pool = vault
pool_guid = 0x2bb202be54c462e
pool_context = 2
pool_failmode = wait
vdev_guid = 0xaa3f2fd35788620b
vdev_type = mirror
parent_guid = 0x2bb202be54c462e
parent_type = root
prev_state = 0x7
__ttl = 0x1
__tod = 0x4a27c183 0x9d8492d

Jun 04 2009 07:43:47.165169794 ereport.fs.zfs.zpool
nvlist version: 0
class = ereport.fs.zfs.zpool
ena = 0x8ebd8837ae1
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = zfs
pool = 0x2bb202be54c462e
(end detector)

pool = vault
pool_guid = 0x2bb202be54c462e
pool_context = 2
pool_failmode = wait
__ttl = 0x1
__tod = 0x4a27c183 0x9d84a82


zdb -l /dev/rdsk/c6d1p0

LABEL 0

version=13
name='vault'
state=0
txg=42243
pool_guid=196786381623412270
hostid=997759551
hostname='philo'
top_guid=12267576494733681163
guid=16901406274466991796
vdev_tree
type='mirror'
id=0
guid=12267576494733681163
whole_disk=0
metaslab_array=14
metaslab_shift=33
ashift=9
asize=1000199946240
is_log=0
children[0]
type='disk'
id=0
guid=16901406274466991796
path='/dev/dsk/c1t1d0p0'
devid='id1,s...@f3b789a3f48e44b860003d3320001/q'
phys_path='/p...@0,0/pci1043,8...@7/d...@1,0:q'
whole_disk=0
DTL=77
children[1]
type='disk'
id=1
guid=6231056817092537765
path='/dev/dsk/c1t0d0p0'
devid='id1,s...@f3b789a3f48e44b86000263f9/q'
phys_path='/p...@0,0/pci1043,8...@7/d...@0,0:q'
whole_disk=0
DTL=76

LABEL 1

version=13
name='vault'
state=0
txg=42243
pool_guid=196786381623412270
hostid=997759551
hostname='philo'
top_guid=12267576494733681163
guid=16901406274466991796
vdev_tree
type='mirror'
id=0
guid=12267576494733681163
whole_disk=0
metaslab_array=14
metaslab_shift=33
ashift=9
asize=1000199946240
is_log=0
children[0]
type='disk'
id=0
guid=16901406274466991796
path='/dev/dsk/c1t1d0p0'
devid='id1,s...@f3b789a3f48e44b860003d3320001/q'
phys_path='/p...@0,0/pci1043,8...@7/d...@1,0:q'
whole_disk=0
DTL=77
children[1]
type='disk'
id=1
guid=6231056817092537765
path='/dev/dsk/c1t0d0p0'
devid='id1,s...@f3b789a3f48e44b86000263f9/q'
phys_path='/p...@0,0/pci1043,8...@7/d...@0,0:q'
whole_disk=0
DTL=76

LABEL 2

failed to unpack label 2

LABEL 3

failed to unpack label 3
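
A couple of follow-up checks that might help narrow this down, assuming c7d1p0 
really is the second half of the mirror shown by zpool import, are to dump the 
labels on the other device and to retry the import with an explicit device 
directory:

zdb -l /dev/rdsk/c7d1p0            # dump all four labels on the other mirror half
zpool import -d /dev/rdsk vault    # scan the raw-device directory explicitly

Neither is guaranteed to succeed with labels 2 and 3 missing; the output should at 
least show whether both halves of the mirror still agree on the pool configuration.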


Re: [zfs-discuss] ZFS create hanging on door call?

2009-06-03 Thread Stephen Green

Darren J Moffat wrote:

Stephen Green wrote:



stgr...@blue:~$ pgrep -lf zfs
 7471 zfs create tank/mysql
stgr...@blue:~$ pfexec truss -p 7471
door_call(7, 0x080F7008)(sleeping...)



I suspect this is probably a nameservice lookup call; running
'pfiles 7471' should confirm.


Looks like it's waiting on smbd?

stgr...@blue:~/Projects/silverton-base$ pfexec truss -p 16790
door_call(7, 0x080F7008)(sleeping...)
stgr...@blue:~/Projects/silverton-base$ pfexec pfiles 16790
16790:  zfs create tank/mybook
  Current rlimit: 256 file descriptors
[snip]
   7: S_IFDOOR mode:0644 dev:311,0 ino:40 uid:0 gid:3 size:0
  O_RDONLY  door to smbd[600]
  /var/run/smb_share_door

Perhaps it's trying to resolve its own name for sharing?  There's an entry 
for blue in /etc/hosts.


zfs destroy for a file system does not have the same problem.

Steve
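
If it really is smbd blocking the share door call, a few quick checks might 
confirm it (the host and dataset names below are just the ones from the output 
above):

svcs -l network/smb/server    # is the SMB server online, or stuck in maintenance?
getent hosts blue             # does the box resolve its own name promptly?
zfs get sharesmb tank         # is SMB sharing inherited by newly created datasets?

That's only a sketch of where to look next, not a diagnosis.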



Re: [zfs-discuss] "no pool_props" for OpenSolaris 2009.06 with old SPARC hardware

2009-06-03 Thread Aurélien Larcher
Hi,
thanks Cindy for your kind answer ;)

You're right ;) After digging into the documentation I found exactly what you 
say in the boot manpage 
(http://docs.sun.com/app/docs/doc/819-2240/boot-1m?a=view).

So I've set the bootfs property on the zpool and everything is fine now!
My good ol' Ultra 60 is now running 2009.06.
Regards,

Aurelien

PS: for the record I roughly followed the steps of this blog entry => 
http://blogs.sun.com/edp/entry/moving_from_nevada_and_live


Re: [zfs-discuss] LUN expansion

2009-06-03 Thread Leonid Zamdborg
I'm running 2008.11.


Re: [zfs-discuss] LUN expansion

2009-06-03 Thread David Magda

On Jun 3, 2009, at 19:37, Leonid Zamdborg wrote:

The new capacity, unfortunately, shows up as inaccessible.  I've  
tried exporting and importing the zpool, but the capacity is still  
not recognized.  I kept seeing things online about "Dynamic LUN  
Expansion", but how do I do this?


What OS version are you running?



[zfs-discuss] LUN expansion

2009-06-03 Thread Leonid Zamdborg
Hi,

I have a problem with expanding a zpool to reflect a change in the underlying 
hardware LUN.  I've created a zpool on top of a 3Ware hardware RAID volume, 
with a capacity of 2.7TB.  I've since added disks to the hardware volume, 
expanding the capacity of the volume to 10TB.  This change in capacity shows up 
in format:

0. c0t0d0 
/p...@0,0/pci10de,3...@e/pci13c1,1...@0/s...@0,0

When I do a prtvtoc /dev/dsk/c0t0d0, I get:

* /dev/dsk/c0t0d0 partition map
*
* Dimensions:
* 512 bytes/sector
* 21484142592 sectors
* 5859311549 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector    Count     Sector
*           34       222       255
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector    Count     Sector   Mount Directory
       0      4    00        256  5859294943  5859295198
       8     11    00  5859295199       16384  5859311582

The new capacity, unfortunately, shows up as inaccessible.  I've tried 
exporting and importing the zpool, but the capacity is still not recognized.  I 
kept seeing things online about "Dynamic LUN Expansion", but how do I do this?
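
For what it's worth, the prtvtoc output above is consistent with the label still 
describing the old volume: 5859311549 accessible sectors at 512 bytes each works 
out to about 2.7 TiB (the original volume size), while the full 21484142592 
sectors would be about 10 TiB. On builds newer than 2008.11, zpool can pick up 
the extra space itself; a rough sketch, with the caveat that these options were 
added after 2008.11 and the pool name here is only a placeholder:

zpool set autoexpand=on mypool    # let the pool grow when its LUNs grow
zpool online -e mypool c0t0d0     # or expand a single device in place

On 2008.11 itself, one commonly suggested route is to rewrite the disk label so 
it covers the new size (for example via format's auto-configure) and then export 
and re-import the pool.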


Re: [zfs-discuss] removing large files takes a really long time

2009-06-03 Thread Jon Sherwood
jsher...@host $cat /etc/release
  Solaris 10 10/08 s10s_u6wos_07b SPARC
   Copyright 2008 Sun Microsystems, Inc.  All Rights Reserved.
Use is subject to license terms.
Assembled 27 October 2008
jsher...@avon $uname -a
SunOS host 5.10 Generic_13-03 sun4u sparc SUNW,SPARC-Enterprise
jsher...@host $zpool upgrade
This system is currently running ZFS pool version 10.

All pools are formatted using this version.
jsher...@avon $zfs upgrade
This system is currently running ZFS filesystem version 3.

All filesystems are formatted with the current version.


Re: [zfs-discuss] "no pool_props" for OpenSolaris 2009.06 with old SPARC hardware

2009-06-03 Thread Cindy . Swearingen

Aurelien,

I don't think this scenario has been tested and I'm unclear about what
other steps might be missing, but I suspect that you need to set the
bootfs property on the root pool, which, depending on your ZFS BE, would
look something like this:

# zpool set bootfs=rpool/ROOT/zfsBE-name rpool

This syntax is supported on SXCE versions since build 88.
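
If you're not sure of the ZFS BE name, something like the following should show 
it and confirm that the property took (assuming your root pool is named rpool):

zfs list -r rpool/ROOT    # the boot environments live under rpool/ROOT
zpool get bootfs rpool    # verify the bootfs setting afterwards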

Cindy


Aurélien Larcher wrote:

Hi,
yesterday evening I tried to upgrade my Ultra 60 to 2009.06 from SXCE snv_98.
I can't use AI Installer because OpenPROM is version 3.27.
So I built IPS from source, then created a zpool on a spare drive and installed 
OS 2009.06 on it.

To make the disk bootable I used:

installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk 
/dev/rdsk/c0t1d0s0

using the executable from my new rpool.

But when I boot my new disk, I get the error "no pool_props" and the booting process 
returns to the prompt with "Fast Device MMU miss".

I read that OpenPROM 4.x was needed because of AI? Did I miss something?
Can you enlighten me?

Thank you,

aurelien



Re: [zfs-discuss] removing large files takes a really long time

2009-06-03 Thread Blake
On Wed, Jun 3, 2009 at 5:18 PM, Jon Sherwood  wrote:
> We have a 4TB pool where rm'ing files that are 50-150GB in size can take 5-10 
> minutes to return to a command prompt.  Does anyone know why this is?  Is this 
> a bug?  We have compression turned on for the filesystem; is this a 
> contributing factor?

What's the version of the OS, zpool, and zfs filesystem?  There was a
bug for behavior like this when removing zvols with lots of snapshots
- maybe it's related?
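
A quick way to see whether lots of snapshots could be in play (the pool name 
below is only a placeholder, since the original post doesn't give it):

zfs list -H -t snapshot -r tank | wc -l    # count snapshots under the pool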


[zfs-discuss] removing large files takes a really long time

2009-06-03 Thread Jon Sherwood
We have a 4TB pool where rm'ing files that are 50-150GB in size can take 5-10 
minutes to return to a command prompt.  Does anyone know why this is?  Is this 
a bug?  We have compression turned on for the filesystem; is this a contributing 
factor?


[zfs-discuss] "no pool_props" for OpenSolaris 2009.06 with old SPARC hardware

2009-06-03 Thread Aurélien Larcher
Hi,
yesterday evening I tried to upgrade my Ultra 60 to 2009.06 from SXCE snv_98.
I can't use AI Installer because OpenPROM is version 3.27.
So I built IPS from source, then created a zpool on a spare drive and installed 
OS 2009.06 on it.

To make the disk bootable I used:

installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk 
/dev/rdsk/c0t1d0s0

using the executable from my new rpool.

But when I boot my new disk, I get the error "no pool_props" and the booting 
process returns to the prompt with "Fast Device MMU miss".

I read that OpenPROM 4.x was needed because of AI? Did I miss something?
Can you enlighten me?

Thank you,

aurelien