[zfs-discuss] How to get rid of phantom pool ?

2011-02-15 Thread Alxen4
I had a pool on an external drive. Recently the drive failed, but the pool still
shows up when I run 'zpool status'.

Any attempt to remove/delete/export the pool hangs. (The system itself is still
up and running perfectly; it's just that this specific command hangs, so I have
to open a new ssh session.)

'zpool status' shows state: UNAVAIL

When I try 'zpool clear' I get "cannot clear errors for backup: I/O error".

Please help me get rid of this phantom pool.


Many, many thanks.
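
One approach sometimes suggested for a pool whose only device is physically gone, sketched below; the pool name 'backup' comes from the error message above, and everything else (including whether it works on this build) is an assumption:

# Forced export/destroy sometimes succeeds, sometimes hangs just like the
# other commands when the device is truly gone:
zpool export -f backup
zpool destroy -f backup

# Heavier workaround: stop ZFS from auto-importing the dead pool at boot by
# setting the cache file aside, rebooting, then re-importing only healthy pools.
mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak
reboot
zpool import tank     # re-import each remaining healthy data pool by name (hypothetical name)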


Re: [zfs-discuss] Please help destroy pool.

2010-08-18 Thread Alxen4
Thanks, Cindy.

I just needed to delete all the LUNs first:

sbdadm delete-lu 600144F00800270514BC4C1E29FB0001

itadm delete-target -f iqn.1986-03.com.sun:02:f38e0b34-be30-ca29-dfbd-d1d28cd75502

And then I was able to destroy the ZFS dataset itself.
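
A minimal sketch of how the GUID and IQN used above can be looked up in the first place, assuming a COMSTAR/iSCSI setup like the one described (neither command is from the original thread):

sbdadm list-lu           # shows each logical unit's GUID and its backing zvol/file
itadm list-target -v     # shows the configured iSCSI targets and their IQNs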


Re: [zfs-discuss] Solaris startup script location

2010-08-18 Thread Alxen4
Thanks. Everything is clear now.


Re: [zfs-discuss] Solaris startup script location

2010-08-18 Thread Alxen4
Thanks... Now I think I understand.

Let me summarize it, and let me know if I'm wrong.

Disabling the ZIL effectively turns all synchronous writes into asynchronous
ones, which makes ZFS acknowledge data before it has actually reached stable
storage. That improves performance, but it risks losing recently acknowledged
writes (and therefore corrupting application data) if the server crashes.

Is that correct?

In my case I'm having serious performance issues with NFS over ZFS.
My NFS client is ESXi, so the major question is: is there a risk of corruption
for the VMware images if I disable the ZIL?


Thanks.
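
For reference, a rough sketch of how this behavior is usually toggled; which knob exists depends on the build, and the dataset name below is purely illustrative:

# Builds with the per-dataset 'sync' property:
zfs set sync=disabled tank/nfs     # acknowledge writes before they reach stable storage
zfs set sync=standard tank/nfs     # restore normal synchronous semantics

# Older builds (around b134) only have a global tunable in /etc/system:
#   set zfs:zil_disable = 1
# which takes effect after a reboot and affects every dataset on the host.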


Re: [zfs-discuss] Solaris startup script location

2010-08-18 Thread Alxen4
Any reasoning as to why?


[zfs-discuss] Solaris startup script location

2010-08-18 Thread Alxen4
Is there any way to run a start-up script before a non-root pool is mounted?

For example, I'm trying to use a ramdisk as a ZIL device (ramdiskadm).
So I need to create the ramdisk before the actual pool is mounted, otherwise it
complains that the log device is missing :)

Of course I can remove and re-add it by script and put that script in the
regular rc2.d location... I'm just looking for a more elegant way to do it.


Thanks a lot.
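
A minimal sketch of the remove/re-add workaround described above; the ramdisk name, size, and pool name are made up for illustration, and the exact zpool steps may differ depending on how the pool comes up with its log device missing:

#!/bin/sh
# Recreate the ramdisk ZIL and swap it back into the pool.
/usr/sbin/ramdiskadm -a ramzil 1g                  # create a 1 GB ramdisk
/usr/sbin/zpool remove tank /dev/ramdisk/ramzil    # drop the stale log entry (may need its GUID from 'zpool status')
/usr/sbin/zpool add tank log /dev/ramdisk/ramzil   # attach the fresh ramdisk as the log device

The tidier packaging of the same commands would be an SMF service ordered before the pool's datasets are mounted, though how early that can run relative to the pool import is exactly the open question in this thread.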


[zfs-discuss] Please help destroy pool.

2010-08-17 Thread Alxen4
I have a pool with a zvol (OpenSolaris b134).

When I try 'zpool destroy tank' I get "pool is busy":

# zpool destroy -f tank
cannot destroy 'tank': pool is busy

When I try to destroy the zvol first I get "dataset is busy":

# zfs destroy -f tank/macbook0-data
cannot destroy 'tank/macbook0-data': dataset is busy

# zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
tank/fs1             134K  16.7T  44.7K  /tank/fs1
tank/fs2             135K  16.7T  44.7K  /tank/fs2
tank/macbook0-data  4.13T  20.3T   522G  -
tank/fs3             145G  16.7T   145G  /tank/fs3


What should I try next?

Please help.

Thanks in advance.
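
A few things worth checking when a zvol reports "dataset is busy"; this is a sketch, not from the thread, and the device path is just the standard zvol location for this pool:

sbdadm list-lu                            # is the zvol backing a COMSTAR logical unit?
stmfadm list-lu -v                        # more detail on any logical units found
swap -l                                   # is the zvol in use as a swap device?
dumpadm                                   # ...or as the dump device?
fuser /dev/zvol/rdsk/tank/macbook0-data   # any process holding the device node open?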


Re: [zfs-discuss] Reconfigure zpool

2010-08-06 Thread Alxen4
Thank you very much for the answer.

Yeah, that's what I was afraid of.

There is something I really cannot understand about zpool structuring...

What role do these 4 drives play in the tank pool with the current
configuration? If they are not part of the raidz3 array, what is the point of
Solaris accepting that configuration?

I realize that I made a mistake by doing 'zpool add' instead of 'zpool attach',
but still...

Say I write data into the tank pool. How would the data be distributed across
the pool? Would it go to the raidz3 part of it AND to the 4 stand-alone drives,
or to the raidz3 part only?

If it goes to the raidz3 part only, why is it not possible to remove these 4
drives from the pool?

Also, could you please provide some examples of pools with multiple raidz
vdevs?

I'm really new to the ZFS world :)

Again, thank you very much.
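
Since examples were asked for, here is a sketch of a pool built from multiple raidz vdevs; the pool and device names are illustrative, and this is not a command to run against the existing tank pool:

# Two 8-disk raidz2 top-level vdevs in one pool: ZFS stripes writes across the
# two vdevs, and each vdev carries its own parity (raidz3 works the same way).
zpool create tank2 \
    raidz2 c6t0d0 c6t1d0 c6t2d0 c6t3d0 c6t4d0 c6t5d0 c6t6d0 c6t7d0 \
    raidz2 c7t0d0 c7t1d0 c7t2d0 c7t3d0 c7t4d0 c7t5d0 c7t6d0 c7t7d0

# Another raidz vdev can be added to an existing pool later:
zpool add tank2 raidz2 c8t0d0 c8t1d0 c8t2d0 c8t3d0 c8t4d0 c8t5d0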


[zfs-discuss] Reconfigure zpool

2010-08-06 Thread Alxen4
I have a zpool laid out like this:

  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz3-0  ONLINE       0     0     0
            c6t0d0  ONLINE       0     0     0
            c6t1d0  ONLINE       0     0     0
            c6t2d0  ONLINE       0     0     0
            c6t3d0  ONLINE       0     0     0
            c6t4d0  ONLINE       0     0     0
            c6t5d0  ONLINE       0     0     0
            c6t6d0  ONLINE       0     0     0
            c6t7d0  ONLINE       0     0     0
            c7t0d0  ONLINE       0     0     0
            c7t1d0  ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
          c7t4d0    ONLINE       0     0     0
          c7t5d0    ONLINE       0     0     0
          c7t6d0    ONLINE       0     0     0
          c7t7d0    ONLINE       0     0     0


In my understanding, the last 4 drives (c7t4d0, c7t5d0, c7t6d0, c7t7d0) are part
of the tank zpool but not part of the raidz3 array.

How do I move them so they become part of the raidz3?

Thanks a lot.


[zfs-discuss] Help identify failed drive

2010-07-18 Thread Alxen4
This is the situation:

I've got an error on one of the drives in 'zpool status' output:

 zpool status tank

  pool: tank
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t2d0  ONLINE       0     0     0
            c2t3d0  ONLINE       1     0     0
            c2t4d0  ONLINE       0     0     0
            c2t5d0  ONLINE       0     0     0
            c2t7d0  ONLINE       0     0     0

So I would like to replace 'c2t3d0'.

I know for a fact that the pool has 7 physical drives: 5 Seagate and 2 WD.

I want to know whether 'c2t3d0' is a Seagate or a WD.

If I run 'iostat -En' it shows all the c*t*d0 drives as Seagate and sd11/sd12
as WD.

This totally confuses me...
Why are there two different kinds of device names in the iostat output, c*t*d0
and sd*?
How come all the c*t*d0 drives appear as Seagate when I know for sure that two
of them are WD?
Why do the WD drives appear as sd* and not as c*t*d0?

Please help.


--

# iostat -En


c1t1d0   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: ST3500320AS  Revision: SD15 Serial No:
Size: 500.11GB <500107862016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 54 Predictive Failure Analysis: 0

c2t0d0   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: ST3500320AS  Revision: SD15 Serial No:
Size: 500.11GB <500107862016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0

c2t1d0   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: ST3500320AS  Revision: SD15 Serial No:
Size: 500.11GB <500107862016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0

c2t2d0   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: ST3500320AS  Revision: SD15 Serial No:
Size: 500.11GB <500107862016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0

c2t3d0   Soft Errors: 0 Hard Errors: 9 Transport Errors: 9
Vendor: ATA  Product: ST3500320AS  Revision: SD15 Serial No:
Size: 500.11GB <500107862016 bytes>
Media Error: 7 Device Not Ready: 0 No Device: 2 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0

c2t4d0   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: ST3500320AS  Revision: SD15 Serial No:
Size: 500.11GB <500107862016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0

c2t5d0   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: ST3500320AS  Revision: SD15 Serial No:
Size: 500.11GB <500107862016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0

c2t6d0   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: ST3500320AS  Revision: SD15 Serial No:
Size: 500.11GB <500107862016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0

c2t7d0   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: ST3500320AS  Revision: SD15 Serial No:
Size: 500.11GB <500107862016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0

sd11 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: WDC WD5001AALS-0 Revision: 1D05 Serial No:
Size: 500.11GB <500107862016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0

sd12 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: WDC WD5001AALS-0 Revision: 0K05 Serial No:
Size: 500.11GB <500107862016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0





Thanks a lot.
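
One way to tie an sd instance number back to a c#t#d# name, sketched below with illustrative paths: the sd instance maps to a physical device path in /etc/path_to_inst, and the /dev/dsk and /dev/rdsk names are symlinks to those same physical paths, so the two can be matched up.

# 1. Find the physical path behind instance 11 of the sd driver:
grep ' 11 "sd"' /etc/path_to_inst
#    e.g.  "/pci@0,0/pci1022,7458@1/disk@3,0" 11 "sd"          (illustrative)

# 2. See which c#t#d# name points at that physical path:
ls -l /dev/rdsk/c2t3d0s0
#    e.g.  ... -> ../../devices/pci@0,0/.../disk@3,0:a,raw     (illustrative)

# 3. 'format' also lists every drive with both its c#t#d# name and its
#    vendor/model string, which is often the quickest way to match them up:
format < /dev/null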


[zfs-discuss] Help destroying phantom clone (zfs filesystem)

2010-07-01 Thread Alxen4
It looks like I have some leftovers of old clones that I cannot delete:

The clone name is tank/WinSrv/Latest.

I'm trying:

zfs destroy -f -R tank/WinSrv/Latest
cannot unshare 'tank/WinSrv/Latest': path doesn't exist: unshare(1M) failed

Please help me to get rid of this garbage.

Thanks a lot.
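
One workaround sometimes tried for this kind of "cannot unshare ... path doesn't exist" failure is to turn sharing off on the dataset before destroying it; a sketch, not verified against this setup:

zfs set sharenfs=off tank/WinSrv/Latest    # stop ZFS from trying to unshare over NFS
zfs set sharesmb=off tank/WinSrv/Latest    # ...and over SMB, if it was ever shared that way
zfs destroy -R tank/WinSrv/Latest          # then retry the recursive destroy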