Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
zdb -l /dev/rdsk/c22t2d0s0

LABEL 0

failed to unpack label 0

LABEL 1

failed to unpack label 1

LABEL 2

failed to unpack label 2

LABEL 3

failed to unpack label 3

> -Original Message-
> From: Fred Liu
> Sent: Tuesday, September 20, 2011 4:06
> To: 'Richard Elling'
> Cc: zfs-discuss@opensolaris.org
> Subject: RE: [zfs-discuss] remove wrongly added device from zpool
> 
> 
> 
> > -Original Message-
> > From: Richard Elling [mailto:richard.ell...@gmail.com]
> > Sent: Tuesday, September 20, 2011 3:57
> > To: Fred Liu
> > Cc: zfs-discuss@opensolaris.org
> > Subject: Re: [zfs-discuss] remove wrongly added device from zpool
> >
> > more below…
> >
> > On Sep 19, 2011, at 9:51 AM, Fred Liu wrote:
> >
> > Is this disk supposed to be available?
> > You might need to check the partition table, if one exists, to
> > determine if
> > s0 has a non-zero size.
> >
> 
> Yes. I used format to write an EFI label to it. Now that error is gone.
> But all four labels fail to unpack under "zdb -l" now.
> 
> 
> >
> > This is a bad sign, but can be recoverable, depending on how you got
> > here. zdb is saying
> > that it could not find labels at the end of the disk. Label 2 and
> label
> > 3 are 256KB each, located
> > at the end of the disk, aligned to 256KB boundary. zpool import is
> > smarter than zdb in these
> > cases, and can often recover from it -- up to the loss of all 4
> labels,
> > but you need to make sure
> > that the partition tables look reasonable and haven't changed.
> >
> 
> I have tried zpool import -fFX cn03, but it core-dumps and reboots
> about an hour later.
> 
> >
> > Unless I'm mistaken, these are ACARD SSDs that have an optional CF
> > backup. Let's hope
> > that the CF backup worked.
> 
> Yes. It is ACARD. Do you mean I should push the "restore from CF" button
> to see what happens?
> 
> 
> Thanks for your nice help!
> 
> 
> Fred
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu


> -Original Message-
> From: Richard Elling [mailto:richard.ell...@gmail.com]
> Sent: Tuesday, September 20, 2011 3:57
> To: Fred Liu
> Cc: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] remove wrongly added device from zpool
> 
> more below…
> 
> On Sep 19, 2011, at 9:51 AM, Fred Liu wrote:
> 
> Is this disk supposed to be available?
> You might need to check the partition table, if one exists, to
> determine if
> s0 has a non-zero size.
> 

Yes. I used format to write an EFI label to it. Now that error is gone.
But all four labels fail to unpack under "zdb -l" now.


> 
> This is a bad sign, but can be recoverable, depending on how you got
> here. zdb is saying
> that it could not find labels at the end of the disk. Label 2 and label
> 3 are 256KB each, located
> at the end of the disk, aligned to 256KB boundary. zpool import is
> smarter than zdb in these
> cases, and can often recover from it -- up to the loss of all 4 labels,
> but you need to make sure
> that the partition tables look reasonable and haven't changed.
> 

I have tried zpool import -fFX cn03, but it core-dumps and reboots about
an hour later.

> 
> Unless I'm mistaken, these are ACARD SSDs that have an optional CF
> backup. Let's hope
> that the CF backup worked.

Yes. It is ACARD. Do you mean I should push the "restore from CF" button to see
what happens?


Thanks for your nice help!


Fred


Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Richard Elling
more below…

On Sep 19, 2011, at 9:51 AM, Fred Liu wrote:

>> 
>> No, but your pool is not imported.
>> 
> 
> YES. I see.
>> and look to see which disk is missing"?
>> 
>> The label, as displayed by "zdb -l", contains the hierarchy of the
>> expected pool config.
>> The contents are used to build the output you see in the "zpool import"
>> or "zpool status"
>> commands. zpool is complaining that it cannot find one of these disks,
>> so look at the
>> labels on the disks to determine what is or is not missing. The next
>> steps depend on
>> this knowledge.
> 
> zdb -l /dev/rdsk/c22t2d0s0
> cannot open '/dev/rdsk/c22t2d0s0': I/O error

Is this disk supposed to be available?
You might need to check the partition table, if one exists, to determine if
s0 has a non-zero size.
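One quick way to check this is to parse the partition table. A sketch: on a live system you would pipe real `prtvtoc /dev/rdsk/c22t2d0s0` output, but since that needs the disk present, the awk test below runs on a sample partition line with hypothetical sizes:

```shell
# prtvtoc partition lines have the fields: partition, tag, flags,
# first sector, sector count, last sector. Field 5 (sector count)
# tells you whether slice 0 has a non-zero size.
sample='       0      4    00         34   3707462   3707495'
echo "$sample" | awk '$1 == 0 { if ($5 > 0) print "s0 has a non-zero size"; else print "s0 is empty" }'
# -> s0 has a non-zero size
```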

> root@cn03:~# zdb -l /dev/rdsk/c22t3d0s0
> 
> LABEL 0
> 
>version: 22
>name: 'cn03'
>state: 0
>txg: 18269872
>pool_guid: 1907858070511204110
>hostid: 13564652
>hostname: 'cn03'
>top_guid: 11074483144412112931
>guid: 11074483144412112931
>vdev_children: 6
>vdev_tree:
>type: 'disk'
>id: 1
>guid: 11074483144412112931
>path: '/dev/dsk/c22t3d0s0'
>devid: 
> 'id1,sd@s4154412020202020414e53393031305f324e4e4e324e4e4e202020202020202035363238363739005f31/a'
>phys_path: '/pci@0,0/pci15d9,400@1f,2/disk@3,0:a'
>whole_disk: 1
>metaslab_array: 37414
>metaslab_shift: 24
>ashift: 9
>asize: 1895563264
>is_log: 0
>create_txg: 18269863
> 
> LABEL 1
> 
>version: 22
>name: 'cn03'
>state: 0
>txg: 18269872
>pool_guid: 1907858070511204110
>hostid: 13564652
>hostname: 'cn03'
>top_guid: 11074483144412112931
>guid: 11074483144412112931
>vdev_children: 6
>vdev_tree:
>type: 'disk'
>id: 1
>guid: 11074483144412112931
>path: '/dev/dsk/c22t3d0s0'
>devid: 
> 'id1,sd@s4154412020202020414e53393031305f324e4e4e324e4e4e202020202020202035363238363739005f31/a'
>phys_path: '/pci@0,0/pci15d9,400@1f,2/disk@3,0:a'
>whole_disk: 1
>metaslab_array: 37414
>metaslab_shift: 24
>ashift: 9
>asize: 1895563264
>is_log: 0
>create_txg: 18269863
> 
> LABEL 2
> 
> failed to unpack label 2
> 
> LABEL 3
> 
> failed to unpack label 3

This is a bad sign, but can be recoverable, depending on how you got here. zdb 
is saying
that it could not find labels at the end of the disk. Label 2 and label 3 are 
256KB each, located
at the end of the disk, aligned to a 256KB boundary. zpool import is smarter than 
zdb in these
cases, and can often recover from it -- up to the loss of all 4 labels, but you 
need to make sure 
that the partition tables look reasonable and haven't changed.

> c22t2d0 and c22t3d0 are the devices I physically removed and connected back 
> to the server.
> How can I fix them?

Unless I'm mistaken, these are ACARD SSDs that have an optional CF backup. 
Let's hope
that the CF backup worked.
 -- richard

-- 

ZFS and performance consulting
http://www.RichardElling.com
VMworld Copenhagen, October 17-20
OpenStorage Summit, San Jose, CA, October 24-27
LISA '11, Boston, MA, December 4-9 


Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
> 
> No, but your pool is not imported.
> 

YES. I see.
> and look to see which disk is missing"?
> 
> The label, as displayed by "zdb -l", contains the hierarchy of the
> expected pool config.
> The contents are used to build the output you see in the "zpool import"
> or "zpool status"
> commands. zpool is complaining that it cannot find one of these disks,
> so look at the
> labels on the disks to determine what is or is not missing. The next
> steps depend on
> this knowledge.

zdb -l /dev/rdsk/c22t2d0s0
cannot open '/dev/rdsk/c22t2d0s0': I/O error

root@cn03:~# zdb -l /dev/rdsk/c22t3d0s0

LABEL 0

version: 22
name: 'cn03'
state: 0
txg: 18269872
pool_guid: 1907858070511204110
hostid: 13564652
hostname: 'cn03'
top_guid: 11074483144412112931
guid: 11074483144412112931
vdev_children: 6
vdev_tree:
type: 'disk'
id: 1
guid: 11074483144412112931
path: '/dev/dsk/c22t3d0s0'
devid: 
'id1,sd@s4154412020202020414e53393031305f324e4e4e324e4e4e202020202020202035363238363739005f31/a'
phys_path: '/pci@0,0/pci15d9,400@1f,2/disk@3,0:a'
whole_disk: 1
metaslab_array: 37414
metaslab_shift: 24
ashift: 9
asize: 1895563264
is_log: 0
create_txg: 18269863

LABEL 1

version: 22
name: 'cn03'
state: 0
txg: 18269872
pool_guid: 1907858070511204110
hostid: 13564652
hostname: 'cn03'
top_guid: 11074483144412112931
guid: 11074483144412112931
vdev_children: 6
vdev_tree:
type: 'disk'
id: 1
guid: 11074483144412112931
path: '/dev/dsk/c22t3d0s0'
devid: 
'id1,sd@s4154412020202020414e53393031305f324e4e4e324e4e4e202020202020202035363238363739005f31/a'
phys_path: '/pci@0,0/pci15d9,400@1f,2/disk@3,0:a'
whole_disk: 1
metaslab_array: 37414
metaslab_shift: 24
ashift: 9
asize: 1895563264
is_log: 0
create_txg: 18269863

LABEL 2

failed to unpack label 2

LABEL 3

failed to unpack label 3


c22t2d0 and c22t3d0 are the devices I physically removed and connected back to 
the server.
How can I fix them?

Thanks.

Fred


Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Richard Elling
On Sep 19, 2011, at 9:16 AM, Fred Liu wrote:
>> 
>> For each disk, look at the output of "zdb -l /dev/rdsk/DISKNAMEs0".
>> 1. Confirm that each disk provides 4 labels.
>> 2. Build the vdev tree by hand and look to see which disk is missing
>> 
>> This can be tedious and time consuming.
> 
> Do I need to export the pool first?

No, but your pool is not imported.

> Can you give more details about #2 -- " Build the vdev tree by hand and look 
> to see which disk is missing"?

The label, as displayed by "zdb -l", contains the hierarchy of the expected pool 
config.
The contents are used to build the output you see in the "zpool import" or 
"zpool status"
commands. zpool is complaining that it cannot find one of these disks, so look 
at the
labels on the disks to determine what is or is not missing. The next steps 
depend on
this knowledge.
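To make the by-hand comparison less tedious, the identity fields can be pulled out of each label. A sketch: it runs on sample text abbreviated from the label shown later in this thread; on a live system you would feed it the output of `zdb -l /dev/rdsk/<disk>s0` for each disk:

```shell
# Extract the fields needed to rebuild the vdev tree by hand
# (pool/top/child guid, child id, and device path).
label='    pool_guid: 1907858070511204110
    top_guid: 11074483144412112931
    guid: 11074483144412112931
    id: 1
    path: /dev/dsk/c22t3d0s0'
printf '%s\n' "$label" | awk '$1 ~ /^(pool_guid|top_guid|guid|id|path):$/ { print $1, $2 }'
```

Comparing these fields across all disks against the `vdev_children` count shows which child guid has no disk claiming it.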
-- richard



Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu

> 
> For each disk, look at the output of "zdb -l /dev/rdsk/DISKNAMEs0".
> 1. Confirm that each disk provides 4 labels.
> 2. Build the vdev tree by hand and look to see which disk is missing
> 
> This can be tedious and time consuming.

Do I need to export the pool first?
Can you give more details about #2 -- " Build the vdev tree by hand and look to 
see which disk is missing"?


Thanks.

Fred


Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Richard Elling
On Sep 19, 2011, at 8:34 AM, Fred Liu wrote:

> I have made some progress, as follows:
> 
> zpool import
>  pool: cn03
>id: 1907858070511204110
> state: UNAVAIL
> status: One or more devices are missing from the system.
> action: The pool cannot be imported. Attach the missing
>devices and try again.
>   see: http://www.sun.com/msg/ZFS-8000-6X
> config:
> 
>cn03   UNAVAIL  missing device
>  raidz2-0 ONLINE
>c4t5000C5000970B70Bd0  ONLINE
>c4t5000C5000972C693d0  ONLINE
>c4t5000C500097009DBd0  ONLINE
>c4t5000C500097040BFd0  ONLINE
>c4t5000C5000970727Fd0  ONLINE
>c4t5000C50009707487d0  ONLINE
>c4t5000C50009724377d0  ONLINE
>c4t5000C50039F0B447d0  ONLINE
>  c22t3d0  ONLINE
>  c4t50015179591C238Fd0ONLINE
>logs
>  c22t4d0  ONLINE
>  c22t5d0  ONLINE
> 
>Additional devices are known to be part of this pool, though their
>exact configuration cannot be determined.
> 
> Any suggestions?

For each disk, look at the output of "zdb -l /dev/rdsk/DISKNAMEs0". 
1. Confirm that each disk provides 4 labels.
2. Build the vdev tree by hand and look to see which disk is missing

This can be tedious and time consuming.
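Step 1 can be scripted: every label that unpacks prints a `version:` line, so counting those gives the number of readable labels. A sketch (the demo uses output shaped like the listings in this thread, since `zdb -l` itself needs the real disks):

```shell
# Count readable labels in saved "zdb -l" output; a healthy disk shows 4.
count_labels() {
    grep -c 'version:' "$1"
}

# On a live system, capture real output first, e.g.:
#   zdb -l /dev/rdsk/c22t3d0s0 > /tmp/c22t3d0.label
# Demo with output shaped like this thread's (labels 0 and 1 good, 2 and 3 bad):
printf 'version: 22\nversion: 22\nfailed to unpack label 2\nfailed to unpack label 3\n' > /tmp/sample.label
echo "$(count_labels /tmp/sample.label) of 4 labels readable"
```

For the sample this prints `2 of 4 labels readable`; any disk reporting fewer than 4 needs a closer look.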
 -- richard

> 
> Thanks.
> 
> Fred
> 
>> -Original Message-
>> From: Fred Liu
>> Sent: Monday, September 19, 2011 22:28
>> To: 'Richard Elling'
>> Cc: zfs-discuss@opensolaris.org
>> Subject: RE: [zfs-discuss] remove wrongly added device from zpool
>> 
>> 
>>> 
>>> You don't mention which OS you are using, but for the past 5 years of
>>> [Open]Solaris
>>> releases, the system prints a warning message and will not allow this
>>> to occur
>>> without using the force option (-f).
>>>  -- richard
>>> 
>> Yes. There is a warning message, I used zpool add -f.
>> 
>> Thanks.
>> 
>> Fred



Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
And:

format
Searching for disks...done

c22t2d0: configured with capacity of 1.77GB


AVAILABLE DISK SELECTIONS:
   0. c4t5000C5003AC39D5Fd0 
  /scsi_vhci/disk@g5000c5003ac39d5f
   1. c4t5000C50039F0B447d0 
  /scsi_vhci/disk@g5000c50039f0b447
   2. c4t5000C5000970B70Bd0 
  /scsi_vhci/disk@g5000c5000970b70b
   3. c4t5000C5000972C693d0 
  /scsi_vhci/disk@g5000c5000972c693
   4. c4t5000C500097009DBd0 
  /scsi_vhci/disk@g5000c500097009db
   5. c4t5000C500097040BFd0 
  /scsi_vhci/disk@g5000c500097040bf
   6. c4t5000C5000970727Fd0 
  /scsi_vhci/disk@g5000c5000970727f
   7. c4t5000C50009724377d0 
  /scsi_vhci/disk@g5000c50009724377
   8. c4t5000C50009707487d0 
  /scsi_vhci/disk@g5000c50009707487
   9. c4t50015179591C238Fd0 
  /scsi_vhci/disk@g50015179591c238f
  10. c4t500151795910D221d0 
  /scsi_vhci/disk@g500151795910d221
  11. c22t2d0 
  /pci@0,0/pci15d9,400@1f,2/disk@2,0
  12. c22t3d0 
  /pci@0,0/pci15d9,400@1f,2/disk@3,0
  13. c22t4d0 
  /pci@0,0/pci15d9,400@1f,2/disk@4,0
  14. c22t5d0 
  /pci@0,0/pci15d9,400@1f,2/disk@5,0

> -Original Message-
> From: Fred Liu
> Sent: Monday, September 19, 2011 23:35
> To: Fred Liu; Richard Elling
> Cc: zfs-discuss@opensolaris.org
> Subject: RE: [zfs-discuss] remove wrongly added device from zpool
> 
> I have made some progress, as follows:
> 
> zpool import
>   pool: cn03
> id: 1907858070511204110
>  state: UNAVAIL
> status: One or more devices are missing from the system.
> action: The pool cannot be imported. Attach the missing
> devices and try again.
>see: http://www.sun.com/msg/ZFS-8000-6X
> config:
> 
> cn03   UNAVAIL  missing device
>   raidz2-0 ONLINE
> c4t5000C5000970B70Bd0  ONLINE
> c4t5000C5000972C693d0  ONLINE
> c4t5000C500097009DBd0  ONLINE
> c4t5000C500097040BFd0  ONLINE
> c4t5000C5000970727Fd0  ONLINE
> c4t5000C50009707487d0  ONLINE
> c4t5000C50009724377d0  ONLINE
> c4t5000C50039F0B447d0  ONLINE
>   c22t3d0  ONLINE
>   c4t50015179591C238Fd0ONLINE
> logs
>   c22t4d0  ONLINE
>   c22t5d0  ONLINE
> 
> Additional devices are known to be part of this pool, though
> their
> exact configuration cannot be determined.
> 
> Any suggestions?
> 
> Thanks.
> 
> Fred
> 
> > -Original Message-
> > From: Fred Liu
> > Sent: Monday, September 19, 2011 22:28
> > To: 'Richard Elling'
> > Cc: zfs-discuss@opensolaris.org
> > Subject: RE: [zfs-discuss] remove wrongly added device from zpool
> >
> >
> > >
> > > You don't mention which OS you are using, but for the past 5 years
> of
> > > [Open]Solaris
> > > releases, the system prints a warning message and will not allow
> this
> > > to occur
> > > without using the force option (-f).
> > >   -- richard
> > >
> >  Yes. There is a warning message, I used zpool add -f.
> >
> > Thanks.
> >
> > Fred


Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
I have made some progress, as follows:

zpool import
  pool: cn03
id: 1907858070511204110
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:

cn03   UNAVAIL  missing device
  raidz2-0 ONLINE
c4t5000C5000970B70Bd0  ONLINE
c4t5000C5000972C693d0  ONLINE
c4t5000C500097009DBd0  ONLINE
c4t5000C500097040BFd0  ONLINE
c4t5000C5000970727Fd0  ONLINE
c4t5000C50009707487d0  ONLINE
c4t5000C50009724377d0  ONLINE
c4t5000C50039F0B447d0  ONLINE
  c22t3d0  ONLINE
  c4t50015179591C238Fd0ONLINE
logs
  c22t4d0  ONLINE
  c22t5d0  ONLINE

Additional devices are known to be part of this pool, though their
exact configuration cannot be determined.

Any suggestions?

Thanks.

Fred

> -Original Message-
> From: Fred Liu
> Sent: Monday, September 19, 2011 22:28
> To: 'Richard Elling'
> Cc: zfs-discuss@opensolaris.org
> Subject: RE: [zfs-discuss] remove wrongly added device from zpool
> 
> 
> >
> > You don't mention which OS you are using, but for the past 5 years of
> > [Open]Solaris
> > releases, the system prints a warning message and will not allow this
> > to occur
> > without using the force option (-f).
> >   -- richard
> >
>  Yes. There is a warning message, I used zpool add -f.
> 
> Thanks.
> 
> Fred


Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu

> 
> You don't mention which OS you are using, but for the past 5 years of
> [Open]Solaris
> releases, the system prints a warning message and will not allow this
> to occur
> without using the force option (-f).
>   -- richard
> 
 Yes. There was a warning message; I used zpool add -f.

Thanks.

Fred


Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
I use opensolaris b134.

Thanks.

Fred

> -Original Message-
> From: Richard Elling [mailto:richard.ell...@gmail.com]
> Sent: Monday, September 19, 2011 22:21
> To: Fred Liu
> Cc: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] remove wrongly added device from zpool
> 
> On Sep 19, 2011, at 12:10 AM, Fred Liu  wrote:
> 
> > Hi,
> >
> > Through my carelessness, I added two disks into a raid-z2 zpool as normal
> > data disks, but in fact
> > I wanted to make them ZIL devices.
> 
> You don't mention which OS you are using, but for the past 5 years of
> [Open]Solaris
> releases, the system prints a warning message and will not allow this
> to occur
> without using the force option (-f).
>   -- richard
> 



Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Richard Elling
On Sep 19, 2011, at 12:10 AM, Fred Liu  wrote:

> Hi,
> 
> Through my carelessness, I added two disks into a raid-z2 zpool as normal data 
> disks, but in fact
> I wanted to make them ZIL devices.

You don't mention which OS you are using, but for the past 5 years of 
[Open]Solaris
releases, the system prints a warning message and will not allow this to occur
without using the force option (-f). 
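For reference, the distinction that caused the mistake is the `log` keyword. A sketch using the pool and device names from this thread, shown for illustration only (not to be run against the damaged pool):

```shell
# What "zpool add -f" did: the disks joined the pool as top-level data
# vdevs, which cannot be removed afterwards.
#   zpool add -f cn03 c22t2d0 c22t3d0
# What was intended: attach them as separate log (ZIL) devices, which
# recent builds allow you to remove again with "zpool remove".
#   zpool add cn03 log c22t2d0 c22t3d0
```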
  -- richard



Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
The core dump:

r10: ff19a5592000 r11:0 r12:0   
r13:0 r14:0 r15: ff00ba4a5c60   
fsb: fd7fff172a00 gsb: ff19a5592000  ds:0   
 es:0  fs:0  gs:0   
trp:e err:0 rip: f782f81a   
 cs:   30 rfl:10246 rsp: ff00b9bf0a40   
 ss:   38   

ff00b9bf0830 unix:die+10f ()
ff00b9bf0940 unix:trap+177b ()  
ff00b9bf0950 unix:cmntrap+e6 () 
ff00b9bf0ab0 procfs:prchoose+72 ()  
ff00b9bf0b00 procfs:prgetpsinfo+2b ()   
ff00b9bf0ce0 procfs:pr_read_psinfo+4e ()
ff00b9bf0d30 procfs:prread+72 ()
ff00b9bf0da0 genunix:fop_read+6b () 
ff00b9bf0f00 genunix:pread+22c ()   
ff00b9bf0f10 unix:brand_sys_syscall+20d ()  

syncing file systems... done
dumping to /dev/zvol/dsk/rpool/dump, offset 65536, content: kernel  
 0:17 100% done 
100% done: 1041082 pages dumped, dump succeeded 
rebooting...

> -Original Message-
> From: Fred Liu
> Sent: Monday, September 19, 2011 22:00
> To: Fred Liu; 'Edward Ned Harvey'; 'Krunal Desai'
> Cc: 'zfs-discuss@opensolaris.org'
> Subject: RE: [zfs-discuss] remove wrongly added device from zpool
> 
> I also tried zpool import -fFX cn03 on b134 and b151a (via an SX11 live
> CD). It resulted in a core dump and a reboot after about 15 min.
> I can see all the LEDs on the HDDs blinking during those 15 min.
> Could replacing the empty ZIL devices help?
> 
> Thanks.
> 
> Fred
> > -Original Message-
> > From: Fred Liu
> > Sent: Monday, September 19, 2011 21:54
> > To: 'Edward Ned Harvey'; 'Krunal Desai'
> > Cc: zfs-discuss@opensolaris.org
> > Subject: RE: [zfs-discuss] remove wrongly added device from zpool
> >
> > >
> > > I'll tell you what does not help.  This email.  Now that you know
> > what
> > > you're trying to do, why don't you post the results of your "zpool
> > > import" command?  How about an error message, and how you're trying
> > to
> > > go about fixing your pool?  Nobody here can help you without
> > > information.
> > >
> > >
> > User tty   login@  idle   JCPU   PCPU  what
> > root console   9:25pm  w
> > root@cn03:~# df
> > Filesystem   1K-blocks  Used Available Use% Mounted on
> > rpool/ROOT/opensolaris
> >   94109412   6880699  87228713   8% /
> > swap 108497952   344 108497608   1%
> > /etc/svc/volatile
> > /usr/lib/libc/libc_hwcap1.so.1
> >   94109412   6880699  87228713   8%
> /lib/libc.so.1
> > swap 108497616 8 108497608   1% /tmp
> > swap 10849768880 108497608   1% /var/run
> > rpool/export 4686423 46841   1% /export
> > rpool/export/home4686423 46841   1% /export/home
> > rpool/export/home/fred
> >  48710  5300 43410  11%
> > /export/home/fred
> > rpool10215515880 102155078   1% /rpool
> > root@cn03:~# !z
> > zpool import cn03
> > cannot import 'cn03': one or more devices is currently unavailable
> > Destroy and re-create the pool from
> > a backup source.
> >
> > Thanks.
> >
> > Fred
> >
> >
> >


Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
I also tried zpool import -fFX cn03 on b134 and b151a (via an SX11 live CD). It 
resulted in a core dump and a reboot after about 15 min.
I can see all the LEDs on the HDDs blinking during those 15 min.
Could replacing the empty ZIL devices help?

Thanks.

Fred
> -Original Message-
> From: Fred Liu
> Sent: Monday, September 19, 2011 21:54
> To: 'Edward Ned Harvey'; 'Krunal Desai'
> Cc: zfs-discuss@opensolaris.org
> Subject: RE: [zfs-discuss] remove wrongly added device from zpool
> 
> >
> > I'll tell you what does not help.  This email.  Now that you know
> what
> > you're trying to do, why don't you post the results of your "zpool
> > import" command?  How about an error message, and how you're trying
> to
> > go about fixing your pool?  Nobody here can help you without
> > information.
> >
> >
> User tty   login@  idle   JCPU   PCPU  what
> root console   9:25pm  w
> root@cn03:~# df
> Filesystem   1K-blocks  Used Available Use% Mounted on
> rpool/ROOT/opensolaris
>   94109412   6880699  87228713   8% /
> swap 108497952   344 108497608   1%
> /etc/svc/volatile
> /usr/lib/libc/libc_hwcap1.so.1
>   94109412   6880699  87228713   8% /lib/libc.so.1
> swap 108497616 8 108497608   1% /tmp
> swap 10849768880 108497608   1% /var/run
> rpool/export 4686423 46841   1% /export
> rpool/export/home4686423 46841   1% /export/home
> rpool/export/home/fred
>  48710  5300 43410  11%
> /export/home/fred
> rpool10215515880 102155078   1% /rpool
> root@cn03:~# !z
> zpool import cn03
> cannot import 'cn03': one or more devices is currently unavailable
> Destroy and re-create the pool from
> a backup source.
> 
> Thanks.
> 
> Fred
> 
> 
> 


Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
> 
> I'll tell you what does not help.  This email.  Now that you know what
> you're trying to do, why don't you post the results of your "zpool
> import" command?  How about an error message, and how you're trying to
> go about fixing your pool?  Nobody here can help you without
> information.
> 
> 
User tty   login@  idle   JCPU   PCPU  what 
root console   9:25pm  w
root@cn03:~# df 
Filesystem   1K-blocks  Used Available Use% Mounted on  
rpool/ROOT/opensolaris  
  94109412   6880699  87228713   8% /   
swap 108497952   344 108497608   1% /etc/svc/volatile   
/usr/lib/libc/libc_hwcap1.so.1  
  94109412   6880699  87228713   8% /lib/libc.so.1  
swap 108497616 8 108497608   1% /tmp
swap 10849768880 108497608   1% /var/run
rpool/export 4686423 46841   1% /export 
rpool/export/home4686423 46841   1% /export/home
rpool/export/home/fred  
 48710  5300 43410  11% /export/home/fred   
rpool10215515880 102155078   1% /rpool  
root@cn03:~# !z 
zpool import cn03   
cannot import 'cn03': one or more devices is currently unavailable  
Destroy and re-create the pool from 
a backup source.

Thanks.

Fred  


Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Edward Ned Harvey
> From: Krunal Desai [mailto:mov...@gmail.com]
> 
> On Mon, Sep 19, 2011 at 9:29 AM, Fred Liu  wrote:
> > Yes. I have connected them back to server. But it does not help.
> > I am really sad now...

I'll tell you what does not help.  This email.  Now that you know what you're 
trying to do, why don't you post the results of your "zpool import" command?  
How about an error message, and how you're trying to go about fixing your pool?
Nobody here can help you without information.


> I cringed a little when I read the thread title. I did this by
> accident once as well, but "lucky" for me, I had enough scratch
> storage around in various sizes to cobble together a JBOD (risky) and
> use it as a holding area for my data while I remade the pool.
> 
> I'm a home user and only have around 21TB or so, so it was feasible
> for me. Probably not so feasible for you enterprise guys with 1000s of
> users and 100s of filesystems!

No enterprise guys with 1000s of users and 100s of filesystems are making this 
mistake.  Even if it does happen, on a pool that significant, the obvious 
response is to add redundancy instead of recreating the pool.



Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Krunal Desai
On Mon, Sep 19, 2011 at 9:29 AM, Fred Liu  wrote:
> Yes. I have connected them back to server. But it does not help.
> I am really sad now...

I cringed a little when I read the thread title. I did this by
accident once as well, but "lucky" for me, I had enough scratch
storage around in various sizes to cobble together a JBOD (risky) and
use it as a holding area for my data while I remade the pool.

I'm a home user and only have around 21TB or so, so it was feasible
for me. Probably not so feasible for you enterprise guys with 1000s of
users and 100s of filesystems!

--khd


Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
> 
> So...  You accidentally added non-redundant disks to a pool.  They were
> not
> part of the raidz2, so the redundancy in the raidz2 did not help you.
> You
> removed the non-redundant disks, and now the pool is faulted.
> 
> The only thing you can do is:
> Add the disks back to the pool (re-insert them to the system).  Then
> you
> should be able to import the pool.
> 
> Now, you don't want these devices in the pool.  You must either destroy
> &
> recreate your pool, or add redundancy to your non-redundant devices.
> 

Yes. I have connected them back to the server, but it does not help.
I am really sad now...


Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Fred Liu
> 
> Through my carelessness, I added two disks into a raid-z2 zpool as normal data
> disks, 

> -Original Message-
> From: Fred Liu [mailto:fred_...@issi.com]
>
> I also made another huge mistake, which has caused me real pain:
> I physically removed those two added devices because I thought raidz2 could
> tolerate it.

So...  You accidentally added non-redundant disks to a pool.  They were not
part of the raidz2, so the redundancy in the raidz2 did not help you.  You
removed the non-redundant disks, and now the pool is faulted.

The only thing you can do is:
Add the disks back to the pool (re-insert them to the system).  Then you
should be able to import the pool.

Now, you don't want these devices in the pool.  You must either destroy &
recreate your pool, or add redundancy to your non-redundant devices.
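The "add redundancy" option is one `zpool attach` per stray disk. A sketch with hypothetical new-device names, only applicable once the pool imports again:

```shell
# Turn each stray top-level disk into a two-way mirror by attaching
# a new device to it (the <new-disk-N> names are placeholders).
#   zpool attach cn03 c22t2d0 <new-disk-1>
#   zpool attach cn03 c22t3d0 <new-disk-2>
```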



Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread David Magda
On Mon, September 19, 2011 08:07, Edward Ned Harvey wrote:

> This one missing feature of ZFS, IMHO, does not result in "a long way for
> zfs to go" in relation to netapp.  I shut off my netapp 2 years ago in
> favor of ZFS, because ZFS performs so darn much better, and has such
> immensely greater robustness.  Try doing ndmp, cifs, nfs, iscsi on netapp
> (all extra licenses).  Try experimenting with the new version of netapp to
> see how good it is (you can't unless you buy a whole new box.)

As another datum, at $WORK we're going to Isilon. Our NetApp is being
retired by the end of the year as it just can't handle the load of HPC. We
also have the regular assortment of web, mail, code repositories, etc.,
VMs that also live on Isilon. We're quite happy, especially with the more
recent Isilon hardware that uses SSDs to store/cache metadata. NFS and
CIFS are quite good, but we haven't really tried their iSCSI stuff yet;
they don't have FC at all.

We also have a bunch of BlueArc, but find it much more finicky than
Isilon. Perhaps Hitachi will help them stabilize things a bit.

As for experimenting with NetApp, they do have a "simulator" that you can
run in a VM if you wish (or actual hardware AFAICT).


A bit more on topic: bp* rewrite has been a long time coming, and AFAICT,
it won't be in Solaris 11. As it stands, I don't care much about changing
RAID levels, but not being able to remove a mistakenly added device is
something that is becoming more and more conspicuous. For better or worse I'm
not doing as much Solaris stuff (esp. with the new Ellison pricing model),
but still pay attention to what's going on, and this (missing) feature is
one of those "WTF?" things that is the fly in the otherwise very tasty
soup that is ZFS.




Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
> 
> You can add mirrors to those lonely disks.
> 

Can it repair the pool?

Thanks.

Fred


Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
> 
> This one missing feature of ZFS, IMHO, does not result in "a long way
> for
> zfs to go" in relation to netapp.  I shut off my netapp 2 years ago in
> favor
> of ZFS, because ZFS performs so darn much better, and has such
> immensely
> greater robustness.  Try doing ndmp, cifs, nfs, iscsi on netapp (all
> extra
> licenses).  Try experimenting with the new version of netapp to see how
> good
> it is (you can't unless you buy a whole new box.)  Try mirroring a
> production box onto a lower-cost secondary backup box (there is no such
> thing).  Try storing your backup on disk and rotating your disks
> offsite.
> Try running any "normal" utilities - iostat, top, wireshark - you can't.
> Try backing up with commercial or otherwise modular (agent-based)
> backup
> software.  You can't.  You have to use CIFS/NFS/NDMP.
> 
> Just try finding a public mailing list like this one where you can even
> so
> much as begin such a conversation about netapp...  Been there done that,
> it's not even in the same ballpark.
> 
> etc etc.  (end rant.)  I hate netapp.
> 
> 

Yeah, it is kind of a touchy topic; we can discuss it more in the future.
For now I want to focus on how to repair my pool. ;-(

> 
> Um...
> 
> Wanna post your "zpool status" and "cat /etc/release" and "zpool
> upgrade"
> 

I exported the pool because I wanted to use zpool import -F to fix it.
But now I get "one or more devices is currently unavailable. Destroy
and re-create the pool from a backup source."

I am running OpenSolaris b134 with zpool version 22.


Thanks.

Fred


Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Edward Ned Harvey
> From: Fred Liu [mailto:fred_...@issi.com]
> 
> Yeah, I also realized this when I sent out this message. In NetApp, it
> is so easy to change the raid group size. There is still a long way for
> ZFS to go. Hope I can see that in the future.

This one missing feature of ZFS, IMHO, does not result in "a long way for
zfs to go" in relation to netapp.  I shut off my netapp 2 years ago in favor
of ZFS, because ZFS performs so darn much better, and has such immensely
greater robustness.  Try doing ndmp, cifs, nfs, iscsi on netapp (all extra
licenses).  Try experimenting with the new version of netapp to see how good
it is (you can't unless you buy a whole new box.)  Try mirroring a
production box onto a lower-cost secondary backup box (there is no such
thing).  Try storing your backup on disk and rotating your disks offsite.
Try running any "normal" utilities - iostat, top, wireshark - you can't.
Try backing up with commercial or otherwise modular (agent-based) backup
software.  You can't.  You have to use CIFS/NFS/NDMP.  

Just try finding a public mailing list like this one where you can even so
much as begin such a conversation about netapp...  Been there done that,
it's not even in the same ballpark.

etc etc.  (end rant.)  I hate netapp.


> I also made another huge mistake, which has really put me in deep pain:
> I physically removed the two added devices, thinking raidz2 could afford
> it. But now the whole pool is corrupted. I don't know where to go from
> here ...
> Any help will be tremendously appreciated.

Um...

Wanna post your "zpool status" and "cat /etc/release" and "zpool upgrade"



Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Tomas Forsman
On 19 September, 2011 - Fred Liu sent me these 0,9K bytes:

> > 
> > That's a huge bummer, and it's the main reason why device removal has
> > been a
> > priority request for such a long time...  There is no solution.  You
> > can
> > only destroy & recreate your pool, or learn to live with it that way.
> > 
> > Sorry...
> > 
> 
> Yeah, I also realized this when I sent out this message. In NetApp, it is so
> easy to change the raid group size. There is still a long way for ZFS to go.
> Hope I can see that in the future.
> 
> I also made another huge mistake, which has really put me in deep pain:
> I physically removed the two added devices, thinking raidz2 could afford it.
> But now the whole pool is corrupted. I don't know where to go from here ...
> Any help will be tremendously appreciated.

You can add mirrors to those lonely disks.
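In zpool terms, adding mirrors to those lonely disks means `zpool attach`,
which converts a single-disk top-level vdev into a two-way mirror (a sketch;
the pool name cn03 is from this thread, and all device names here are
hypothetical placeholders):

```shell
# Attach a spare disk to each accidentally-added single-disk vdev.
# "zpool attach pool existing-device new-device" turns the existing
# device into one side of a mirror and resilvers onto the new one.
zpool attach cn03 c22t2d0 c22t4d0   # mirror the first lonely disk
zpool attach cn03 c22t3d0 c22t5d0   # mirror the second

# Watch resilver progress and the new mirror vdevs appear.
zpool status cn03
```

This adds redundancy going forward; it cannot restore data already lost
while the disks were physically absent.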

/Tomas
-- 
Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se


Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
> 
> That's a huge bummer, and it's the main reason why device removal has
> been a
> priority request for such a long time...  There is no solution.  You
> can
> only destroy & recreate your pool, or learn to live with it that way.
> 
> Sorry...
> 

Yeah, I also realized this when I sent out this message. In NetApp, it is so
easy to change the raid group size. There is still a long way for ZFS to go.
Hope I can see that in the future.

I also made another huge mistake, which has really put me in deep pain:
I physically removed the two added devices, thinking raidz2 could afford it.
But now the whole pool is corrupted. I don't know where to go from here ...
Any help will be tremendously appreciated.

Thanks.

Fred


Re: [zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Fred Liu
> 
> Through my carelessness, I added two disks to a raidz2 zpool as normal
> data disks, when in fact I wanted to make them ZIL devices.

That's a huge bummer, and it's the main reason why device removal has been a
priority request for such a long time...  There is no solution.  You can
only destroy & recreate your pool, or learn to live with it that way.

Sorry...



[zfs-discuss] remove wrongly added device from zpool

2011-09-19 Thread Fred Liu
Hi,

Through my carelessness, I added two disks to a raidz2 zpool as normal
data disks, when in fact I wanted to make them ZIL devices.

Is there any remedy?
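For anyone following along, the difference between the intended and the
actual command is a single keyword: without `log`, `zpool add` grows the
pool with permanent top-level data vdevs (a sketch; the pool name and
device names are illustrative, not from the original command):

```shell
# Intended: add the disks as separate intent log (ZIL) devices.
# Log devices can later be removed with "zpool remove".
zpool add cn03 log c22t2d0 c22t3d0

# What was actually run (the "log" keyword omitted): the disks became
# non-redundant top-level data vdevs, which zpool v22 cannot remove.
# zpool add cn03 c22t2d0 c22t3d0
```

Using `zpool add -n` first shows the resulting layout without changing the
pool, which guards against exactly this mistake.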


Many thanks.

Fred