Re: [zfs-discuss] b134 pool borked!

2010-05-04 Thread Michael Mattsson
90 reads and not a single comment? Not the slightest hint of what's going on? -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] replaced disk...copy back completed but spare is in use

2010-05-04 Thread Brad
Thanks!

Re: [zfs-discuss] replaced disk...copy back completed but spare is in use

2010-05-04 Thread Ian Collins
On 05/5/10 11:09 AM, Brad wrote: I yanked a disk to simulate failure to the test pool to test hot spare failover - everything seemed fine until the copy back completed. The hot spare is still showing in use... do we need to remove the spare from the pool to get it to detach? Once the
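A minimal sketch of the usual fix, assuming the pool name from the original post (the spare's device name below is a hypothetical placeholder): once the copy-back has finished, the spare has to be detached manually to return it to the AVAIL list.

```shell
# Confirm the resilver/copy-back is complete before touching the spare.
zpool status ZPOOL.TEST

# Detach the in-use hot spare (c1t5d0 is a placeholder device name).
zpool detach ZPOOL.TEST c1t5d0

# The spare should now show AVAIL again under the "spares" section.
zpool status ZPOOL.TEST
```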

[zfs-discuss] Migrating ZFS/data pool to new pool on the same system

2010-05-04 Thread Jonathan
Can anyone confirm my action plan is the proper way to do this? The reason I'm doing this is I want to create 2xraidz2 pools instead of expanding my current 2xraidz1 pool. So I'll create a 1xraidz2 vdev, migrate my current 2xraidz1 pool over, destroy that pool and then add it as a 1xraidz2 vde
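The plan described above can be sketched as a send/receive migration; this is a hedged outline, with "oldpool", "newpool", and all device names as placeholders, not the poster's actual configuration:

```shell
# 1. Create the new pool with the first raidz2 vdev.
zpool create newpool raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

# 2. Snapshot the old pool recursively and replicate everything over.
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs recv -Fd newpool

# 3. Verify the data, then destroy the old pool and reuse its disks
#    as the second raidz2 vdev.
zpool destroy oldpool
zpool add newpool raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0
```

Note that `zfs send -R` preserves snapshots, clones, and dataset properties, which is why it is preferred over copying file trees for a whole-pool move.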

Re: [zfs-discuss] Sharing with zfs

2010-05-04 Thread Frank Middleton
On 05/4/10 05:37 PM, Vadim Comanescu wrote: I'm wondering, is there a way to actually delete a zvol ignoring the fact that it has an attached LU? You didn't say what version of what OS you are running. As of b134 or so it seems to be impossible to delete a zfs iscsi target. You might look at the th
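A hedged sketch of the usual COMSTAR workaround, assuming the zvol is named tank/myzvol (hypothetical): the logical unit has to be removed from the STMF framework before ZFS will destroy the backing volume.

```shell
# Find the GUID of the LU that is backed by the zvol.
stmfadm list-lu -v

# Delete that LU (the GUID below is a truncated placeholder - use your own).
stmfadm delete-lu 600144F0...

# With the LU gone, the zvol is no longer "busy" and can be destroyed.
zfs destroy tank/myzvol
```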

[zfs-discuss] replaced disk...copy back completed but spare is in use

2010-05-04 Thread Brad
I yanked a disk to simulate failure to the test pool to test hot spare failover - everything seemed fine until the copy back completed. The hot spare is still showing in use... do we need to remove the spare from the pool to get it to detach? # zpool status pool: ZPOOL.TEST state: ONLINE

[zfs-discuss] Sharing with zfs

2010-05-04 Thread Vadim Comanescu
Hello, I'm new to this discussion list so I hope I'm posting in the right place. I started using zfs not too long ago. I'm trying to figure out the ISCSI and NFS sharing for the moment. For the ISCSI sharing at the moment I'm using COMSTAR. I created the appropriate target, also a LU corresponding to t

Re: [zfs-discuss] diff between sharenfs and sharesmb

2010-05-04 Thread Cindy Swearingen
Hi Dick, Experts on the cifs-discuss list could probably advise you better. You might even check the cifs-discuss archive because I hear that the SMB/NFS sharing scenario has been covered previously on that list. Thanks, Cindy On 05/04/10 03:06, Dick Hoogendijk wrote: I have some ZFS datasets

Re: [zfs-discuss] Replacement brackets for Supermicro UIO SAS cards....

2010-05-04 Thread Travis Tabbal
Thanks! I might just have to order a few for the next time I take the server apart. Not that my bent up versions don't work, but I might as well have them be pretty too. :)

Re: [zfs-discuss] Performance of the ZIL

2010-05-04 Thread Robert Milkowski
On 04/05/2010 18:19, Tony MacDoodle wrote: How would one determine if I should have a separate ZIL disk? We are using ZFS as the backend of our Guest Domains boot drives using LDoms, and we are seeing very slow write performance. if you can disable ZIL and compare the performance to when
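The disable-and-compare experiment suggested above can be sketched as follows. This is a hedged, test-rig-only outline for b134-era builds (the per-dataset sync property came later); disabling the ZIL risks losing synchronous writes on a crash, so never do this on a production pool.

```shell
# Flip the zil_disable kernel tunable live (takes effect on remount).
echo zil_disable/W0t1 | mdb -kw

# Remount the dataset under test, rerun the write workload, record timings...

# Then re-enable the ZIL and compare against the baseline numbers.
echo zil_disable/W0t0 | mdb -kw
```

If the workload is dramatically faster with the ZIL off, it is synchronous-write bound and a dedicated (preferably SSD) log device is likely to help.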

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-04 Thread Michael Sullivan
Ok, thanks. So, if I understand correctly, it will just remove the device from the VDEV and continue to use the good ones in the stripe. Mike --- Michael Sullivan michael.p.sulli...@me.com http://www.kamiogi.net/ Japan Mobile: +81-80-3202-2599 US Phone: +1-561-283-2034 On 5

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-04 Thread Marc Nicholas
The L2ARC will continue to function. -marc On 5/4/10, Michael Sullivan wrote: > Hi, > > I have a question I cannot seem to find an answer to. > > I know I can set up a stripe of L2ARC SSD's with say, 4 SSD's. > > I know if I set up ZIL on SSD and the SSD goes bad, then the ZIL will be > relocated

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-04 Thread Freddie Cash
On Tue, May 4, 2010 at 12:16 PM, Michael Sullivan <michael.p.sulli...@mac.com> wrote: > I have a question I cannot seem to find an answer to. > > I know I can set up a stripe of L2ARC SSD's with say, 4 SSD's. > > I know if I set up ZIL on SSD and the SSD goes bad, then the ZIL will be > relocated

Re: [zfs-discuss] [storage-discuss] iscsitgtd failed request to share on zpool import after upgrade from b104 to b134

2010-05-04 Thread Przemyslaw Ceglowski
Anybody has an idea what I can do about it? On 04/05/2010 16:43, "eXeC001er" wrote: > Perhaps the problem is that the old version of pool have shareiscsi, but new > version have not this option, and for share LUN via iscsi you need to make > lun-mapping. > > > > 2010/5/4 Przemyslaw Ceglowski

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-04 Thread Tomas Ögren
On 05 May, 2010 - Michael Sullivan sent me these 0,9K bytes: > Hi, > > I have a question I cannot seem to find an answer to. > > I know I can set up a stripe of L2ARC SSD's with say, 4 SSD's. > > I know if I set up ZIL on SSD and the SSD goes bad, then the ZIL will > be relocated back to the spo

[zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-04 Thread Michael Sullivan
Hi, I have a question I cannot seem to find an answer to. I know I can set up a stripe of L2ARC SSD's with say, 4 SSD's. I know if I set up ZIL on SSD and the SSD goes bad, then the ZIL will be relocated back to the pool. I'd probably have it mirrored anyway, just in case. However you cannot
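For context, a minimal sketch of the setup being asked about, with "tank" and all device names as placeholders: cache (L2ARC) devices are always striped and cannot be mirrored, and losing one only degrades read caching, not pool integrity.

```shell
# Add four SSDs as striped L2ARC cache devices.
zpool add tank cache c4t0d0 c4t1d0 c4t2d0 c4t3d0

# Per-device statistics, including the cache devices.
zpool iostat -v tank

# Cache devices can be removed (or lost) at any time without data loss;
# reads simply fall back to the main pool until the device is replaced.
zpool remove tank c4t2d0
```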

Re: [zfs-discuss] zpool rename?

2010-05-04 Thread Cindy Swearingen
No, beadm doesn't take care of all the steps that I provided previously and included below. Cindy You can use the OpenSolaris beadm command to migrate a ZFS BE over to another root pool, but you will also need to perform some manual migration steps, such as - copy over your other rpool datasets
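A hedged sketch of the combined procedure described above, assuming a new root pool named "rpool2" and a BE named "osolBE" (both hypothetical); the exact extra datasets (export, dump, swap) vary per system:

```shell
# Create the new root pool on an SMI-labeled slice, then migrate the BE.
zpool create rpool2 c1t1d0s0
beadm create -p rpool2 osolBE

# Copy over the non-BE datasets that beadm does not handle.
zfs snapshot -r rpool/export@move
zfs send -R rpool/export@move | zfs recv -d rpool2

# Make the new pool bootable (x86; SPARC uses installboot instead).
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
```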

Re: [zfs-discuss] Performance of the ZIL

2010-05-04 Thread Brandon High
On Tue, May 4, 2010 at 10:19 AM, Tony MacDoodle wrote: > How would one determine if I should have a separate ZIL disk? We are using > ZFS as the backend of our Guest Domains boot drives using LDom's. And we are > seeing bad/very slow write performance? There's a dtrace script that Richard Elling

[zfs-discuss] Performance of the ZIL

2010-05-04 Thread Tony MacDoodle
How would one determine if I should have a separate ZIL disk? We are using ZFS as the backend of our Guest Domains boot drives using LDom's. And we are seeing bad/very slow write performance? Thanks ___ zfs-discuss mailing list zfs-discuss@opensolaris.or

[zfs-discuss] Replacement brackets for Supermicro UIO SAS cards....

2010-05-04 Thread Trey Palmer
I just wanted to share this useful info as I haven't seen it anywhere. My scrounging-genius colleague, Lawrence, found standard PCI-e replacement brackets for the justifiably popular Supermicro AOC-USAS-L8i cards. They cost a few bucks each, fit perfectly and allow us to use these cards exte

Re: [zfs-discuss] zpool rename?

2010-05-04 Thread Brandon High
On Tue, May 4, 2010 at 7:19 AM, Cindy Swearingen wrote: > Using beadm to migrate your BEs to another root pool (and then > performing all the steps to get the system to boot) is different > than just outright renaming your existing root pool on import. Does beadm take care of all the other steps

[zfs-discuss] b134 pool borked!

2010-05-04 Thread Michael Mattsson
My pool panic'd while updating to Lucid Lynx hosted inside an iSCSI LUN. And now it won't come back up. I have dedup and compression on. These are my current findings: * iostat -En won't list 8 of my disks * zdb lists all my disks except my cache device * The following commands panics the box in
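One hedged avenue for a pool in this state, assuming a b128-or-later build and "tank" as a placeholder pool name: the import-time rewind option can sometimes roll the pool back to an earlier, consistent txg, at the cost of the last few seconds of writes.

```shell
# Dry run: report what a recovery import would discard, changing nothing.
zpool import -nF tank

# If the reported loss is acceptable, attempt the actual rewind import.
zpool import -F tank
```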

Re: [zfs-discuss] [storage-discuss] iscsitgtd failed request to share on zpool import after upgrade from b104 to b134

2010-05-04 Thread eXeC001er
Perhaps the problem is that the old version of the pool had shareiscsi, but the new version does not have this option, and to share a LUN via iscsi you need to create a lun-mapping. 2010/5/4 Przemyslaw Ceglowski > Jim, > > On May 4, 2010, at 3:45 PM, Jim Dunham wrote: > > >> > >> On May 4, 2010, at 2:43 PM, Ri

Re: [zfs-discuss] [storage-discuss] iscsitgtd failed request to share on zpool import after upgrade from b104 to b134

2010-05-04 Thread Przemyslaw Ceglowski
Jim, On May 4, 2010, at 3:45 PM, Jim Dunham wrote: >> >> On May 4, 2010, at 2:43 PM, Richard Elling wrote: >> >> >On May 4, 2010, at 5:19 AM, Przemyslaw Ceglowski wrote: >> > >> >> It does not look like it is: >> >> >> >> r...@san01a:/export/home/admin# svcs -a | grep iscsi >> >> online

Re: [zfs-discuss] zpool rename?

2010-05-04 Thread Cindy Swearingen
Brandon, Using beadm to migrate your BEs to another root pool (and then performing all the steps to get the system to boot) is different than just outright renaming your existing root pool on import. Since pool renaming isn't supported, I don't think we have identified all the boot/mount-at-boot

Re: [zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore

2010-05-04 Thread Bob Friesenhahn
On Mon, 3 May 2010, Richard Elling wrote: This is not a problem on Solaris 10. It can affect OpenSolaris, though. That's precisely the opposite of what I thought. Care to explain? In Solaris 10, you are stuck with LiveUpgrade, so the root pool is not shared with other boot environments. R

Re: [zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore

2010-05-04 Thread Bob Friesenhahn
On Mon, 3 May 2010, Edward Ned Harvey wrote: That's precisely the opposite of what I thought. Care to explain? If you have a primary OS disk, and you apply OS Updates ... in order to access those updates in Sol10, you need a registered account and login, with paid solaris support. Then, if yo

Re: [zfs-discuss] [storage-discuss] iscsitgtd failed request to share on zpool import after upgrade from b104 to b134

2010-05-04 Thread Jim Dunham
Przem, > On May 4, 2010, at 2:43 PM, Richard Elling wrote: > >> On May 4, 2010, at 5:19 AM, Przemyslaw Ceglowski wrote: >> >>> It does not look like it is: >>> >>> r...@san01a:/export/home/admin# svcs -a | grep iscsi >>> online May_01 svc:/network/iscsi/initiator:default >>> online

Re: [zfs-discuss] [storage-discuss] iscsitgtd failed request to share on zpool import after upgrade from b104 to b134

2010-05-04 Thread Przemyslaw Ceglowski
On May 4, 2010, at 2:43 PM, Richard Elling wrote: >On May 4, 2010, at 5:19 AM, Przemyslaw Ceglowski wrote: > >> It does not look like it is: >> >> r...@san01a:/export/home/admin# svcs -a | grep iscsi >> online May_01 svc:/network/iscsi/initiator:default >> online May_01 svc:/ne

Re: [zfs-discuss] [storage-discuss] iscsitgtd failed request to share on zpool import after upgrade from b104 to b134

2010-05-04 Thread Richard Elling
On May 4, 2010, at 5:19 AM, Przemyslaw Ceglowski wrote: > It does not look like it is: > > r...@san01a:/export/home/admin# svcs -a | grep iscsi > online May_01 svc:/network/iscsi/initiator:default > online May_01 svc:/network/iscsi/target:default This is COMSTAR. > _ > Przem
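The point being made is that two separate iSCSI target stacks exist, and shareiscsi only drives the legacy one. A hedged sketch for checking which stack is in play (the legacy FMRI below is an assumption; verify it on your build):

```shell
# List every iSCSI-related service; COMSTAR's target is
# svc:/network/iscsi/target:default, per the output quoted above.
svcs -a | grep -i iscsi

# COMSTAR depends on the STMF framework service.
svcs svc:/system/stmf:default

# If the legacy iscsitgtd is also present, disable it to avoid both
# stacks fighting over the same zvols (assumed legacy FMRI).
svcadm disable svc:/system/iscsitgt:default
```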

Re: [zfs-discuss] Consolidating a huge stack of DVDs using ZFS dedup: automation?

2010-05-04 Thread Scott Steagall
On 05/04/2010 09:29 AM, Kyle McDonald wrote: > On 3/2/2010 10:15 AM, Kjetil Torgrim Homme wrote: >> "valrh...@gmail.com" writes: >> >> >>> I have been using DVDs for small backups here and there for a decade >>> now, and have a huge pile of several hundred. They have a lot of >>> overlapping co

Re: [zfs-discuss] Consolidating a huge stack of DVDs using ZFS dedup: automation?

2010-05-04 Thread Kyle McDonald
On 3/2/2010 10:15 AM, Kjetil Torgrim Homme wrote: > "valrh...@gmail.com" writes: > > >> I have been using DVDs for small backups here and there for a decade >> now, and have a huge pile of several hundred. They have a lot of >> overlapping content, so I was thinking of feeding the entire stack

Re: [zfs-discuss] Mirroring USB Drive with Laptop for Backup purposes

2010-05-04 Thread Peter Karlsson
Hi Matt, Don't know if it's recommended or not, but I've been doing it for close to 3 years on my OpenSolaris laptop, it saved me a few times like last week when my internal drive died :) /peter On 2010-05-04 20.33, Matt Keenan wrote: Hi, Just wondering whether mirroring a USB drive with m

[zfs-discuss] Mirroring USB Drive with Laptop for Backup purposes

2010-05-04 Thread Matt Keenan
Hi, Just wondering whether mirroring a USB drive with main laptop disk for backup purposes is recommended or not. Current setup, single root pool set up on 200GB internal laptop drive : $ zpool status pool: rpool state: ONLINE scrub: none requested config : NAMESTATE RE
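A hedged sketch of the attach itself, with placeholder device names (the internal disk and USB disk slices will differ per machine); the USB disk needs an SMI label and its own boot blocks if it is ever to boot the system:

```shell
# Attach the USB disk to the existing root-pool device, forming a mirror.
zpool attach rpool c0t0d0s0 c5t0d0s0

# Watch the resilver; the mirror is not redundant until it completes.
zpool status rpool

# Make the USB half bootable too (x86).
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t0d0s0
```

The main caveat is that detaching and re-attaching the USB disk forces a resilver each time, and an accidental unplug mid-write leaves the pool running non-redundant until it is reattached.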

Re: [zfs-discuss] [storage-discuss] iscsitgtd failed request to share on zpool import after upgrade from b104 to b134

2010-05-04 Thread Przemyslaw Ceglowski
It does not look like it is: r...@san01a:/export/home/admin# svcs -a | grep iscsi online May_01 svc:/network/iscsi/initiator:default online May_01 svc:/network/iscsi/target:default _ Przem > > > >From: Rick McNeal [ramcn...@gmail.com] >

[zfs-discuss] iscsitgtd failed request to share on zpool import after upgrade from b104 to b134

2010-05-04 Thread Przemyslaw Ceglowski
Hi, I am posting my question to both storage-discuss and zfs-discuss as I am not quite sure what is causing the messages I am receiving. I have recently migrated my zfs volume from b104 to b134 and upgraded it from zfs version 14 to 22. It consists of two zvols, 'vol01/zvol01' and 'vol01/zvol02

Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-05-04 Thread Victor Latushkin
On May 4, 2010, at 2:02 PM, Robert Milkowski wrote: > On 16/02/2010 21:54, Jeff Bonwick wrote: >>> People used fastfs for years in specific environments (hopefully >>> understanding the risks), and disabling the ZIL is safer than fastfs. >>> Seems like it would be a useful ZFS dataset parameter.

Re: [zfs-discuss] zpool rename?

2010-05-04 Thread Richard L. Hamilton
[...] > To answer Richard's question, if you have to rename a > pool during > import due to a conflict, the only way to change it > back is to > re-import it with the original name. You'll have to > either export the > conflicting pool, or (if it's rpool) boot off of a > LiveCD which > doesn't use

Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-05-04 Thread Robert Milkowski
On 16/02/2010 21:54, Jeff Bonwick wrote: People used fastfs for years in specific environments (hopefully understanding the risks), and disabling the ZIL is safer than fastfs. Seems like it would be a useful ZFS dataset parameter. We agree. There's an open RFE for this: 6280630 zil synch
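Until that RFE landed, the only switch was pool-wide. A hedged sketch of the era's persistent workaround (appropriate only for scratch systems, since it silently drops synchronous-write guarantees across the whole machine):

```shell
# Disable the ZIL globally via a kernel tunable; requires a reboot.
echo 'set zfs:zil_disable = 1' >> /etc/system
```

The per-dataset control the RFE asked for eventually arrived as the zfs "sync" property, which made this system-wide tunable unnecessary.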

[zfs-discuss] diff between sharenfs and sharesmb

2010-05-04 Thread Dick Hoogendijk
I have some ZFS datasets that are shared through CIFS/NFS. So I created them with sharenfs/sharesmb options. I have full access from windows (through cifs) to the datasets, however, all files and directories are created with (UNIX) permissions of (--)/(d--). So, although I can access th
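One likely explanation, offered as an assumption rather than a diagnosis: files created over SMB carry full NFSv4 ACLs, and plain ls collapses those to near-empty mode bits. A hedged sketch for investigating, with "tank/share" as a placeholder dataset:

```shell
# Show the full NFSv4 ACL instead of the collapsed mode-bit summary.
ls -V /tank/share/somefile

# Let created files inherit ACLs unmodified rather than being chmod-masked
# (property values assumed appropriate for this era's builds).
zfs set aclinherit=passthrough tank/share
zfs set aclmode=passthrough tank/share
```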