Re: [zfs-discuss] bootfs ID on zfs root

2011-05-11 Thread Ketan
So why is my system not coming up? I jumpstarted the system again, but it panics just like before. How should I recover it and get it back up?

The system was booted from the network into single-user mode, rpool was imported, and the following is the listing:


# zpool list
NAME    SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
rpool    68G  4.08G  63.9G     5%  ONLINE  -
# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     9.15G  57.8G    98K  /rpool
rpool/ROOT                4.08G  57.8G    21K  /rpool/ROOT
rpool/ROOT/zfsBE_patched  4.08G  57.8G  4.08G  /
rpool/dump                3.01G  60.8G    16K  -
rpool/swap                2.06G  59.9G    16K  -
#



Dataset mos [META], ID 0, cr_txg 4, 137K, 62 objects
Dataset rpool/ROOT/zfsBE_patched [ZPL], ID 47, cr_txg 40, 4.08G, 110376 objects
Dataset rpool/ROOT [ZPL], ID 39, cr_txg 32, 21.0K, 4 objects
Dataset rpool/dump [ZVOL], ID 71, cr_txg 74, 16K, 2 objects
Dataset rpool/swap [ZVOL], ID 65, cr_txg 71, 16K, 2 objects
Dataset rpool [ZPL], ID 16, cr_txg 1, 98.0K, 10 objects


But when the system is rebooted it panics again. Is there any way to recover it? I have tried everything I know.


SunOS Release 5.10 Version Generic_142900-13 64-bit
Copyright 1983-2010 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
NOTICE: zfs_parse_bootfs: error 48
Cannot mount root on rpool/47 fstype zfs

panic[cpu0]/thread=180e000: vfs_mountroot: cannot mount root
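
For reference, below is a minimal sketch of the checks typically run from a network-booted single-user shell when zfs_parse_bootfs fails, assuming the pool is imported under a temporary altroot such as /a. The boot device c0t0d0s0 is only a placeholder for whatever the rootmirror alias points at, and none of these steps are confirmed from this thread:

# Import the root pool under a scratch altroot and mount the boot environment
zpool import -R /a rpool
zfs mount rpool/ROOT/zfsBE_patched

# Sanity-check the properties the boot code relies on
zpool get bootfs rpool
zfs get mountpoint,canmount rpool/ROOT/zfsBE_patched   # mountpoint should be /

# Rebuild the boot archive inside the BE and refresh the ZFS bootblock
bootadm update-archive -R /a
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t0d0s0

# Export cleanly before rebooting from disk
zpool export rpool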
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] bootfs ID on zfs root

2011-05-11 Thread Ketan
Hello Jim,

Thanks for the reply. The following is my output before setting the bootfs parameter:


# zpool get all rpool
NAME   PROPERTY       VALUE                     SOURCE
rpool  size           68G                       -
rpool  capacity       5%                        -
rpool  altroot        -                         default
rpool  health         ONLINE                    -
rpool  guid           8812174757237060985       default
rpool  version        22                        default
rpool  bootfs         rpool/ROOT/zfsBE_patched  local
rpool  delegation     on                        default
rpool  autoreplace    off                       default
rpool  cachefile      -                         default
rpool  failmode       continue                  local
rpool  listsnapshots  on                        default
rpool  autoexpand     off                       default
rpool  free           63.9G                     -
rpool  allocated      4.08G                     -

I ran the command anyway, but it didn't help and the system still panics:

# zpool set bootfs=rpool/ROOT/zfsBE_patched rpool

# zpool get all rpool
NAME   PROPERTY       VALUE                     SOURCE
rpool  size           68G                       -
rpool  capacity       5%                        -
rpool  altroot        -                         default
rpool  health         ONLINE                    -
rpool  guid           8812174757237060985       default
rpool  version        22                        default
rpool  bootfs         rpool/ROOT/zfsBE_patched  local
rpool  delegation     on                        default
rpool  autoreplace    off                       default
rpool  cachefile      -                         default
rpool  failmode       continue                  local
rpool  listsnapshots  on                        default
rpool  autoexpand     off                       default
rpool  free           63.9G                     -
rpool  allocated      4.08G                     -
# init 6
#
The system is being restarted.
syncing file systems... done
rebooting...
Resetting...
POST Sequence 01 CPU Check
POST Sequence 02 Banner
LSB#00 (XSB#00-0): POST 2.14.0 (2010/05/13 13:27)
POST Sequence 03 Fatal Check
POST Sequence 04 CPU Register
POST Sequence 05 STICK
POST Sequence 06 MMU
POST Sequence 07 Memory Initialize
POST Sequence 08 Memory
POST Sequence 09 Raw UE In Cache
POST Sequence 0A Floating Point Unit
POST Sequence 0B SC
POST Sequence 0C Cacheable Instruction
POST Sequence 0D Softint
POST Sequence 0E CPU Cross Call
POST Sequence 0F CMU-CH
POST Sequence 10 PCI-CH
POST Sequence 11 Master Device
POST Sequence 12 DSCP
POST Sequence 13 SC Check Before STICK Diag
POST Sequence 14 STICK Stop
POST Sequence 15 STICK Start
POST Sequence 16 Error CPU Check
POST Sequence 17 System Configuration
POST Sequence 18 System Status Check
POST Sequence 19 System Status Check After Sync
POST Sequence 1A OpenBoot Start...
POST Sequence Complete.
ChassisSerialNumber BCF080207K

Sun SPARC Enterprise M5000 Server, using Domain console
Copyright 2010 Sun Microsystems, Inc.  All rights reserved.
Copyright 2010 Sun Microsystems, Inc. and Fujitsu Limited. All rights reserved.
OpenBoot 4.24.14, 65536 MB memory installed, Serial #75515882.
Ethernet address 0:14:4f:80:47:ea, Host ID: 848047ea.



Rebooting with command: boot
Boot device: rootmirror  File and args:
SunOS Release 5.10 Version Generic_142900-13 64-bit
Copyright 1983-2010 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
NOTICE: zfs_parse_bootfs: error 48
Cannot mount root on rpool/47 fstype zfs

panic[cpu0]/thread=180e000: vfs_mountroot: cannot mount root

0180b950 genunix:vfs_mountroot+358 (800, 200, 0, 18ef800, 1918000, 
194dc00)
  %l0-3: 010c2400 010c22ec 018f5178 011f5400
  %l4-7: 011f5400 01950400 0600 0200
0180ba10 genunix:main+9c (0, 180c000, 1892260, 1833358, 1839738, 
1940800)
  %l0-3: 0180c000 0180c000 70002000 
  %l4-7: 0189c800  0180c000 0001

As for the bootfs ID, I read about it in the following OpenSolaris thread, where one user was getting the same error as mine:

http://opensolaris.org/jive/thread.jspa?messageID=315743

Can you please tell me some other way to get it right?
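
One additional check that may be worth making: error 48 is ENOTSUP on Solaris, which frequently means the pool or root dataset was upgraded to a version newer than the kernel in the boot environment understands (for example by a newer miniroot during the jumpstart). A read-only sketch, run from the network-booted shell with rpool imported:

# Versions actually on disk
zpool get version rpool
zfs get version rpool/ROOT/zfsBE_patched

# Highest versions the kernel you intend to boot can handle
zpool upgrade -v
zfs upgrade -v

# If the on-disk versions are higher than the BE's kernel supports, either patch
# the BE to a matching level or rebuild the pool at a lower version, e.g.
#   zpool create -o version=<N> rpool <device>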
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS snapshot query

2011-01-21 Thread Ketan
I saved one of my ZFS snapshots on a remote machine with the following command. Now I want to restore that snapshot to the original server; how can I receive it on the original server from the backup server?


 #zfs send rpool/ROOT/sol10_patched@preConfig | ssh x.x.x.x zfs receive 
emcpool1/sol10_patched@preConfig


On the remote server:



#zfs list -r emcpool1/sol10_patched
NAME                               USED  AVAIL  REFER  MOUNTPOINT
emcpool1/sol10_patched            5.79G   124G  5.79G  none
emcpool1/sol10_patched@preConfig      0      -  5.79G  -
I tried restoring it from the backup server but got an error:



#zfs send emcpool1/sol10_patched@preConfig | ssh 10.63.25.218 
rpool/ROOT/sol10_patched@preConfig
Password:
bash: rpool/ROOT/sol10_patched@preConfig: No such file or directory
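
That last error comes from the remote shell, not from ZFS: the pipeline above has no zfs receive after the ssh, so the remote side tries to run the snapshot name as a command. A corrected sketch is below; the scratch dataset name is only an example, since receiving straight over a live root BE dataset is not something to do casually:

# On the backup server: send the snapshot back and receive it into a scratch
# dataset on the original host (dataset name is illustrative)
zfs send emcpool1/sol10_patched@preConfig | \
    ssh 10.63.25.218 zfs receive rpool/ROOT/sol10_patched_restore

# On the original server: verify, then copy or promote the data as needed
zfs list -r rpool/ROOT/sol10_patched_restore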
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] rpool issue

2010-09-30 Thread Ketan
I tried a ZFS root install via flash archive, which was not successful; later I did a normal flash installation on my SPARC system. But now zpool import shows the following status:



zpool import
  pool: rootpool
id: 1557419723465062977
 state: UNAVAIL
status: The pool is formatted using an incompatible version.
action: The pool cannot be imported.  Access the pool on a system running newer
software, or recreate the pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-A5
config:

rootpoolUNAVAIL  newer version
  c0t1d0s2  ONLINE

  pool: rpool
id: 5084939592711816445
 state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
The pool may be active on another system, but can be imported using
the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-72
config:

rpool   FAULTED  corrupted data
  c0t0d0s2  ONLINE


I tried removing the zpool cache and rebooted, but it still shows two unavailable pools for import.


#ls -l /etc/zfs/
total 0
#

I'm on UFS-based slices.

 df -h
Filesystem            size   used  avail capacity  Mounted on
/dev/dsk/c0t1d0s0      52G   6.0G    45G    12%    /
/devices                0K     0K     0K     0%    /devices
ctfs                    0K     0K     0K     0%    /system/contract
proc                    0K     0K     0K     0%    /proc
mnttab                  0K     0K     0K     0%    /etc/mnttab
swap                    29G   1.2M    29G     1%   /etc/svc/volatile
objfs                   0K     0K     0K     0%    /system/object
sharefs                 0K     0K     0K     0%    /etc/dfs/sharetab
fd                      0K     0K     0K     0%    /dev/fd
swap                   512M     0K   512M     0%   /tmp
swap                    29G    24K    29G     1%   /var/run
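
Since the cache file is already gone, what zpool import shows here comes from the ZFS labels still sitting on those slices. If the old pools are definitely not wanted, the sketch below is the usual way to make them disappear; zpool labelclear is not present on every Solaris 10 update, the operation is destructive, and c0t1d0s2 overlaps the slices of the running UFS root, so treat this as an outline rather than a recipe:

# Read-only: confirm which slices still carry ZFS labels
zdb -l /dev/dsk/c0t0d0s2
zdb -l /dev/dsk/c0t1d0s2

# Destructive: wipe the stale labels on the disk that does NOT hold the live
# UFS root (only if the zpool binary has the labelclear subcommand)
zpool labelclear -f /dev/dsk/c0t0d0s2

# Re-partitioning with format(1M), or reusing a slice in a new pool created
# with 'zpool create -f', also overwrites the old labels.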
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS flash issue

2010-09-29 Thread Ketan
Thanks Cindy and Enda for the info.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS flash issue

2010-09-28 Thread Ketan
I have created a solaris9 ZFS root flash archive for a sun4v environment, which I'm trying to use to upgrade a Solaris 10 u8 ZFS-root-based server using Live Upgrade.

The following is my current system status:


lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      yes    yes       no     -
zfsBEu9                    yes      no     no        yes    -



When I try to upgrade with luupgrade, I get the following error:



luupgrade -f -n zfsBEu9 -s /mnt -a /flash/zfsBEu9.flar

63521 blocks
miniroot filesystem is lofs
Mounting miniroot at /mnt/Solaris_10/Tools/Boot
Validating the contents of the media /mnt.
The media is a standard Solaris media.
Validating the contents of the miniroot /mnt/Solaris_10/Tools/Boot.
Locating the flash install program.
Checking for existence of previously scheduled Live Upgrade requests.
Constructing flash profile to use.
Creating flash profile for BE zfsBEu9.
Performing the operating system flash install of the BE zfsBEu9.
CAUTION: Interrupting this process may leave the boot environment unstable or 
unbootable.
ERROR: The flash install failed; pfinstall returned these diagnostics:

ERROR: Field 2 - Invalid disk name (zfsBEu9)
The Solaris flash install of the BE zfsBEu9 failed.


What could be the reason for this? Is there anything I'm not doing right?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFSroot LiveUpgrade

2010-07-27 Thread Ketan
I have two file systems on my primary disk: / and /zones. I want to convert it to ZFS root with Live Upgrade, but when I run Live Upgrade it creates the ZFS BE and, instead of creating a separate /zones dataset, it keeps using the same /zones from the primary BE (c3t1d0s3). Is there any way to do this so that /zones is also used in the zfsBE?

/dev/dsk/c3t1d0s0       15G   8.5G   6.1G    58%    /
/dev/dsk/c3t1d0s3       39G    16G    23G    43%    /zones

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
Sol10_u7                   yes      yes    yes       no     -
 # lucreate -c Sol10_u7 -n zfsBE -p rpool
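
One possible workaround, offered only as a sketch: keep the zone roots on a dataset that lives outside rpool/ROOT, so it is shared by every BE instead of being folded into the BE's root dataset. The dataset name, temporary mountpoint and copy method below are assumptions, and zones should be halted while the data is moved:

# Create a dataset for /zones outside the boot environment hierarchy
zfs create -o mountpoint=/zones_new rpool/zones

# With all zones halted, copy the contents of the UFS /zones across
cd /zones && find . -depth -print | cpio -pdmu /zones_new

# Retire the UFS mount of /zones (vfstab), then give the dataset its real home
zfs set mountpoint=/zones rpool/zones

Whether this layout is supportable with your zone configuration and Live Upgrade patch level is worth checking before committing to it.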
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS Volume Issue

2010-07-26 Thread Ketan
I have a ZFS volume exported to one of my LDoms, but now the LDom does not see the data and complains about a missing device. Is there any way I can mount the volume or see what is in it, or check whether the volume got corrupted or whether it is some other issue?
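
A few read-only checks that can narrow this down from the service domain; the pool, volume and domain names below are placeholders, and the vdsdev/vdisk names depend on how the volume was exported in the first place:

# Is the backing zvol still there and is the pool healthy?
zfs list -t volume
zpool status -v

# Is the volume still bound to the guest domain?
ldm list-services
ldm list-bindings ldom1

# If the zvol carries a filesystem (e.g. UFS), it can be inspected read-only
# from the service domain through its zvol device nodes
fstyp /dev/zvol/rdsk/mypool/myvol
fsck -n /dev/zvol/rdsk/mypool/myvol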
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs root flash archive issue

2010-07-13 Thread Ketan
I have created a flash archive from an LDom on a T5220 with ZFS root Solaris 10 u8. But after creating the flar, its info shows content_architectures=sun4c,sun4d,sun4m,sun4u,sun4us,sun4s but not sun4v, because of which I'm unable to install this flash archive on another LDom on the same host. Is there any way to modify content_architectures? And what is the reason for sun4v missing from content_architectures?

flar -i zfsflar_ldom
archive_id=4506fd9b45fba5b2c5e042715da50f0a
files_archived_method=cpio
creation_date=20100713054458
creation_master=e-u013
content_name=zfsBE
creation_node=ezzz-u013
creation_hardware_class=sun4v
creation_platform=SUNW,SPARC-Enterprise-T5220
creation_processor=sparc
creation_release=5.10
creation_os_name=SunOS
creation_os_version=Generic_141444-09
rootpool=rootpool
bootfs=rootpool/ROOT/zfsBE
snapname=zflash.100713.00.07
files_compressed_method=none
files_archived_size=3869213825
files_unarchived_size=3869213825
content_architectures=sun4c,sun4d,sun4m,sun4u,sun4us,sun4s
type=FULL
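
If re-creating the archive is not practical, the identification section of a flar can be split out, edited and re-combined with the flar command itself. This is only a sketch; whether the installer then accepts a hand-edited content_architectures line (and whether the sun4v omission is just a flarcreate defect at this patch level) is not something this post establishes:

# Split the archive into its sections in the current directory
flar split zfsflar_ldom

# Edit the resulting identification file and extend the architecture list, e.g.
#   content_architectures=sun4c,sun4d,sun4m,sun4u,sun4us,sun4s,sun4v

# Re-combine the sections into a new archive
flar combine -d . zfsflar_ldom.sun4v.flar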
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs root flash archive issue

2010-07-13 Thread Ketan
OK, thanks, I will do that.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Legacy MountPoint for /rpool/ROOT

2010-07-06 Thread Ketan
I have two different servers with ZFS root, but they have different mountpoints for rpool/ROOT: one is /rpool/ROOT and the other is legacy. What is the difference between the two, and which one should we keep?

And why are there three different ZFS datasets: rpool, rpool/ROOT and rpool/ROOT/zfsBE?

Can anyone explain this to me? Thanks in advance.

# zfs list -r rpool
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             24.1G   110G    98K  /rpool
rpool/ROOT        6.08G   110G    21K  /rpool/ROOT
rpool/ROOT/zfsBE  6.08G   110G  6.08G  /

**

zfs list -r rpool
NAME                 USED  AVAIL  REFER  MOUNTPOINT
rpool               34.0G  99.9G    94K  /rpool
rpool/ROOT          27.9G  99.9G    18K  legacy
rpool/ROOT/s10s_u6  27.9G  99.9G  27.9G  /
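
For what it is worth, rpool/ROOT is only a container for boot environments and is never used as a filesystem in its own right, so whether its mountpoint reads /rpool/ROOT or legacy mostly reflects which installer or Live Upgrade release laid the pool out; both layouts boot fine. If you want the two servers to match, a small sketch (optional, and harmless only because nothing is stored directly in rpool/ROOT):

# Inspect the container and its BEs
zfs list -r rpool/ROOT

# Align with the 'legacy' style seen on the other server
zfs set mountpoint=legacy rpool/ROOT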
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] rpool issue

2010-07-02 Thread Ketan
I built one of my systems from a flash archive that was created from a system running ZFS root. But since it is update 6, that didn't work with the flash archive. After the system was built from the same flash archive, the system is up, but I get the following error for rpool.

How can I remove this rpool?

# zpool status -v rpool
  pool: rpool
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         UNAVAIL      0     0     0  insufficient replicas
          mirror      UNAVAIL      0     0     0  corrupted data
            c1t0d0s0  ONLINE       0     0     0
            c1t5d0s0  UNAVAIL      0     0     0  cannot open
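
Assuming the running root is actually UFS (i.e. this rpool only came along inside the flash archive and nothing boots from it), a sketch of clearing it out; confirm with df and zpool list first, because destroying the pool the system really boots from would be fatal:

# Make sure / is not served by this pool before touching it
df -k /
zpool list

# If rpool is only a stale leftover imported from the archive's source system
zpool destroy -f rpool

# If destroy refuses because the pool cannot be opened, exporting it also stops
# it from appearing at boot; the blunt fallback of removing /etc/zfs/zpool.cache
# and rebooting forgets every pool, which then need an explicit zpool import.
zpool export -f rpool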
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS ARC cache issue

2010-06-03 Thread Ketan
We have a server running ZFS root with 64 GB of RAM, and the system has 3 zones running an Oracle Fusion application. The ZFS cache is using 40 GB of memory according to kstat zfs:0:arcstats:size, and the system shows only 5 GB of memory free; the rest is taken by the kernel and the 2 remaining zones.

Now my problem is that the Fusion people get a "not enough memory" message while starting their application, because top and vmstat show only 5 GB as free memory. But I have read that the ZFS cache releases memory as applications require it, so why is the Fusion application not starting up? Is there something we can do to decrease the ARC cache usage on the fly, without rebooting the global zone?
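
For completeness, the knobs usually mentioned in this situation are sketched below. The permanent cap requires /etc/system and a reboot; the live adjustment pokes the ARC target through mdb on a running kernel and carries exactly the risk the later replies in this thread allude to. The 16 GB figure is an arbitrary example for a 64 GB machine, not a recommendation:

# Current ARC size, target and ceiling
kstat -p zfs:0:arcstats:size
kstat -p zfs:0:arcstats:c
kstat -p zfs:0:arcstats:c_max

# Permanent cap, e.g. 16 GB, by adding this to /etc/system and rebooting:
#   set zfs:zfs_arc_max=17179869184

# On-the-fly adjustment (at your own risk) of the ARC ceiling and target
echo "arc_c_max/Z 0x400000000" | mdb -kw
echo "arc_c/Z 0x400000000" | mdb -kw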
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS ARC cache issue

2010-06-03 Thread Ketan
Thanks Rick, but this guide does not offer any method to reduce the ARC cache size on the fly, without rebooting the system. The system's memory utilization has been running very high for 2 weeks now and only 5 GB of memory is free, while the ARC cache shows 40 GB of usage, and it is not decreasing, only increasing.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS ARC cache issue

2010-06-03 Thread Ketan
So you want me to run this on a production global zone running 3 other production applications? :-)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove the exported zpool

2009-07-06 Thread Ketan
Already tried that ... :-( 


# zpool destroy -f emcpool2
cannot open 'emcpool2': no such pool
#
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Remove the exported zpool

2009-07-05 Thread Ketan
I had a pool that was exported, and due to some issues on my SAN I was never able to import it again. Can anyone tell me how I can destroy the exported pool to free up the LUN? I tried to create a new pool on the same LUN, but it gives me the following error:

# zpool create emcpool4 emcpower0c
cannot create 'emcpool4': invalid argument for this pool operation

Can anyone tell me how I can remove the old pool and create a new pool on the same LUN?
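
The usual order of attack, sketched under the assumption that the data on the LUN is expendable; the pool name emcpool2 is taken from the follow-up above, labelclear only exists on newer zpool binaries, and both it and the forced create are destructive:

# A pool can only be destroyed once it is imported, so try that first
zpool import -f emcpool2      # 'zpool import -D' lists previously destroyed pools
zpool destroy emcpool2

# If the import is impossible, wipe the old labels so the LUN can be reused
zpool labelclear -f /dev/dsk/emcpower0c

# ...or force the new pool over the top
zpool create -f emcpool4 emcpower0c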
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Copy data from zfs datasets

2009-07-02 Thread Ketan
I have a few datasets in my ZFS pool that have been exported to non-global zones, and I want to copy the data on those datasets/file systems to datasets in a new pool mounted in the global zone. How can I do that?
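
A minimal sketch of one way to do this from the global zone with snapshots and a local send/receive; pool and dataset names are placeholders, and the recursive variant needs a zfs release that supports send -R and receive -d:

# Snapshot the dataset that is delegated to the zone (run in the global zone)
zfs snapshot oldpool/zonedata@copy1

# Copy it into the new pool
zfs send oldpool/zonedata@copy1 | zfs receive newpool/zonedata

# For a whole subtree, use a recursive snapshot and a replication stream
zfs snapshot -r oldpool/zonedata@copy1
zfs send -R oldpool/zonedata@copy1 | zfs receive -d newpool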
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool import issue

2009-06-29 Thread Ketan
That didn't help. Here is what I tried:


r...@essapl020-u006 # zpool import
  pool: emcpool1
id: 5596268873059055768
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
config:

emcpool1  UNAVAIL  insufficient replicas
  emcpower0c  UNAVAIL  cannot open
r...@essapl020-u006 # zpool upgrade -v
This system is currently running ZFS pool version 10.

The following versions are supported:

VER  DESCRIPTION
---  
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
For more information on a particular version, including supported releases, see:

http://www.opensolaris.org/os/community/zfs/version/N

Where 'N' is the version number.
r...@essapl020-u006 # zpool upgrade emcpool1
This system is currently running ZFS pool version 10.

cannot open 'emcpool1': no such pool
r...@essapl020-u006 # zpool upgrade
This system is currently running ZFS pool version 10.

All pools are formatted using this version.
r...@essapl020-u006 #
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool import issue

2009-06-29 Thread Ketan
And I just found out that one of the disks in the pool is showing missing labels:

r...@essapl020-u006 # zdb -l /dev/dsk/emcpower0c

LABEL 0

version=4
name='emcpool1'
state=0
txg=6973090
pool_guid=5596268873059055768
hostid=2228473662
hostname='essapl020-u006'
top_guid=3858675847091731383
guid=3858675847091731383
vdev_tree
type='disk'
id=0
guid=3858675847091731383
path='/dev/dsk/emcpower0c'
phys_path='/pseudo/e...@0:c,blk'
whole_disk=0
metaslab_array=14
metaslab_shift=32
ashift=9
asize=477788372992
is_log=0

LABEL 1

version=4
name='emcpool1'
state=0
txg=6973090
pool_guid=5596268873059055768
hostid=2228473662
hostname='essapl020-u006'
top_guid=3858675847091731383
guid=3858675847091731383
vdev_tree
type='disk'
id=0
guid=3858675847091731383
path='/dev/dsk/emcpower0c'
phys_path='/pseudo/e...@0:c,blk'
whole_disk=0
metaslab_array=14
metaslab_shift=32
ashift=9
asize=477788372992
is_log=0

LABEL 2

failed to read label 2

LABEL 3

failed to read label 3

Is there any way to recover it?
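
ZFS keeps four label copies: labels 0 and 1 in the first 512 KB of the device and labels 2 and 3 in the last 512 KB, so losing only 2 and 3 usually means the end of the device moved or became unreadable (a resized LUN, a changed partition table, or a pseudo device presenting a different extent). A couple of read-only checks, with the native path and slice below being a guess at what sits behind emcpower0c:

# Compare the slice's current size with the asize recorded in the surviving
# labels (asize=477788372992 above); a shrunken device would explain the
# missing end-of-device labels
prtvtoc /dev/rdsk/emcpower0c

# Check the labels through the native paths as well, in case the PowerPath
# pseudo device is not presenting the same extent as the underlying LUN
zdb -l /dev/dsk/c3t5006016841E0A08Dd0s2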
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] unable to import zfs pool

2009-06-25 Thread Ketan
Hi, I had a ZFS pool that I exported before our SAN maintenance and PowerPath upgrade, but now, after the PowerPath upgrade and maintenance, I'm unable to import the pool. It gives the following errors:

 # zpool import
  pool: emcpool1
id: 5596268873059055768
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
config:

emcpool1  UNAVAIL  insufficient replicas
  emcpower0c  UNAVAIL  cannot open
 # zpool import -f emcpool1
cannot import 'emcpool1': invalid vdev configuration

Any idea what could be the reason for this?
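
A couple of things worth trying before assuming the LUN is gone; everything here is read-only apart from the import attempt, and the device names are taken from (or, in the case of the native slice, guessed from) the format output shown later in this thread:

# The PowerPath upgrade may have renumbered the pseudo devices, so check the
# labels on every candidate before concluding the device is missing
zdb -l /dev/dsk/emcpower0c
zdb -l /dev/dsk/emcpower1c
zdb -l /dev/dsk/c3t5006016841E0A08Dd0s2   # native path, slice is a guess

# If a device with intact labels turns up, point the import straight at it
zpool import -d /dev/dsk emcpool1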
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unable to import zfs pool

2009-06-25 Thread Ketan
No idea whether the path changed or not, but the following is the output from format, and nothing has changed:

AVAILABLE DISK SELECTIONS:
   0. c1t0d0 SUN146G cyl 14087 alt 2 hd 24 sec 848
  /p...@0/p...@0/p...@2/s...@0/s...@0,0
   1. c1t1d0 SUN146G cyl 14087 alt 2 hd 24 sec 848
  /p...@0/p...@0/p...@2/s...@0/s...@1,0
   2. c3t5006016841E0A08Dd0 DGC-RAID5-0326 cyl 65533 alt 2 hd 16 sec 890
  
/p...@0/p...@0/p...@8/p...@0/p...@2/SUNW,q...@0/f...@0,0/s...@w5006016841e0a08d,0
   3. c3t5006016041E0A08Dd0 DGC-RAID5-0326 cyl 65533 alt 2 hd 16 sec 890
  
/p...@0/p...@0/p...@8/p...@0/p...@2/SUNW,q...@0/f...@0,0/s...@w5006016041e0a08d,0
   4. c3t5006016041E0A08Dd1 DGC-RAID5-0326 cyl 51198 alt 2 hd 256 sec 16
  
/p...@0/p...@0/p...@8/p...@0/p...@2/SUNW,q...@0/f...@0,0/s...@w5006016041e0a08d,1
   5. c3t5006016841E0A08Dd1 DGC-RAID5-0326 cyl 51198 alt 2 hd 256 sec 16
  
/p...@0/p...@0/p...@8/p...@0/p...@2/SUNW,q...@0/f...@0,0/s...@w5006016841e0a08d,1
   6. c5t5006016141E0A08Dd0 DGC-RAID5-0326 cyl 65533 alt 2 hd 16 sec 890
  
/p...@0/p...@0/p...@8/p...@0/p...@a/SUNW,q...@0/f...@0,0/s...@w5006016141e0a08d,0
   7. c5t5006016941E0A08Dd0 DGC-RAID5-0326 cyl 65533 alt 2 hd 16 sec 890
  
/p...@0/p...@0/p...@8/p...@0/p...@a/SUNW,q...@0/f...@0,0/s...@w5006016941e0a08d,0
   8. c5t5006016141E0A08Dd1 DGC-RAID5-0326 cyl 51198 alt 2 hd 256 sec 16
  
/p...@0/p...@0/p...@8/p...@0/p...@a/SUNW,q...@0/f...@0,0/s...@w5006016141e0a08d,1
   9. c5t5006016941E0A08Dd1 DGC-RAID5-0326 cyl 51198 alt 2 hd 256 sec 16
  
/p...@0/p...@0/p...@8/p...@0/p...@a/SUNW,q...@0/f...@0,0/s...@w5006016941e0a08d,1
  10. emcpower0a DGC-RAID5-0326 cyl 65533 alt 2 hd 16 sec 890
  /pseudo/e...@0
  11. emcpower1a DGC-RAID5-0326 cyl 51198 alt 2 hd 256 sec 16
  /pseudo/e...@1
Specify disk (enter its number): Specify disk (enter its number):
r...@essapl020-u006 #
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unable to import zfs pool

2009-06-25 Thread Ketan
That's the problem: this system has just 2 LUNs assigned, and both are present, as you can see from the format output:

10. emcpower0a DGC-RAID5-0326 cyl 65533 alt 2 hd 16 sec 890
/pseudo/e...@0
11. emcpower1a DGC-RAID5-0326 cyl 51198 alt 2 hd 256 sec 16
/pseudo/e...@1
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unable to import zfs pool

2009-06-25 Thread Ketan
The zpool cache is in /etc/zfs/zpool.cache, or it can be viewed with zdb -C.

But in my case it's blank. :-(
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unable to import zfs pool

2009-06-25 Thread Ketan
And regarding the path, my other system has the same path and it's working fine; see the output below:

 # zpool status
  pool: emcpool1
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
pool will no longer be accessible on older software versions.
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
emcpool1  ONLINE   0 0 0
  emcpower0c  ONLINE   0 0 0

errors: No known data errors
  10. emcpower0a DGC-RAID5-0326 cyl 65533 alt 2 hd 16 sec 890
  /pseudo/e...@0
  11. emcpower1a DGC-RAID 5-0326-300.00GB
  /pseudo/e...@1
Specify disk (enter its number): Specify disk (enter its number):
r...@essapl020-u008 #
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] unable to import zfs pool

2009-06-25 Thread Ketan
Thanks to all for your efforts. I was able to import the zpool after disabling the first HBA card; I do not know the reason for this, but now the pool is imported and no disks were lost. :-)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss