Re: [zfs-discuss] Intel X25-E SSD in x4500 followup

2009-09-11 Thread Eric Schrock
On Sep 11, 2009, at 8:48 PM, Paul B. Henson wrote: x4500s have Marvell SATA controllers, not LSI. My issue with Intel SSDs being marked faulty in X4500s has yet to be resolved. The last time I rebooted it, FMA started marking the SSD as failed again due to invalid self-check log data. I had so
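
A minimal sketch of how one might inspect and clear this kind of FMA-driven disk fault on Solaris (the pool and device names are placeholders, and clearing is only appropriate if the drive really is healthy):

  fmadm faulty                 # list resources FMA currently considers faulty
  fmdump -eV | less            # inspect the underlying error telemetry (e.g. the self-check log events)
  fmadm repair <FMRI-or-UUID>  # after confirming the drive is fine, mark the fault repaired
  zpool clear tank c1t4d0      # clear ZFS's own error counters for the device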

Re: [zfs-discuss] Intel X25-E SSD in x4500 followup

2009-09-11 Thread Paul B. Henson
On Thu, 10 Sep 2009, Alex Li wrote: > We finally resolved this issue by changing the LSI driver. For details, please > refer here: > http://enginesmith.wordpress.com/2009/08/28/ssd-faults-finally-resolved/ I believe you hijacked my thread ;). x4500s have Marvell SATA controllers, not LSI. My issue

[zfs-discuss] ZFS Export, Import = Windows sees wrong groups in ACLs

2009-09-11 Thread Owen Davies
I had an OpenSolaris server running basically as a file server for all my Windows machines. The CIFS server was running in WORKGROUP mode. I had several users defined on the server to match my Windows users. I had these users in a few groups (the most important being Parents and Kids). For var
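
A minimal sketch of how one might compare what ZFS actually stores against what Windows displays after the import (paths are placeholders; whether identity mapping is the culprit here is only an assumption):

  ls -V /tank/share    # compact listing of the NFSv4 ACL as ZFS stores it
  ls -v /tank/share    # the same ACL entries in verbose form
  idmap dump           # identity mappings the CIFS service is currently using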

Re: [zfs-discuss] sync replication easy way?

2009-09-11 Thread Maurice Volaski
At 8:25 PM +0300 9/11/09, Markus Kovero wrote: I believe failover is best done manually, just to be sure the active node is really dead before importing the pool on another node; otherwise there could be serious issues, I think. I believe there are many users of Linux-HA, aka Heartbeat, who do fa

Re: [zfs-discuss] deduplication

2009-09-11 Thread C. Bergström
Brandon High wrote: On Fri, Jul 17, 2009 at 11:42 AM, Brandon High wrote: The keynote was given on Wednesday. Any more willingness to discuss dedup on the list now? Two months and still no word on deduplication. Is there anything to announce? Can we make a FAQ on this somewhere?

Re: [zfs-discuss] deduplication

2009-09-11 Thread Brandon High
On Fri, Jul 17, 2009 at 11:42 AM, Brandon High wrote: > The keynote was given on Wednesday. Any more willingness to discuss > dedup on the list now? Two months and still no word on deduplication. Is there anything to announce? -B -- Brandon High : bh...@freaks.com If violence doesn't solve you

Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-11 Thread Tim Cook
On Fri, Sep 11, 2009 at 4:46 PM, Chris Du wrote: > You can optimize for better IOPS or for transfer speed. NS2 SATA and SAS > share most of the design, but they are still different: cache, interface and > firmware all differ. > > Then by "much better", I don't mean just IOPS, it's all three, be

Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-11 Thread Eric D. Mudama
On Fri, Sep 11 at 16:15, Tim Cook wrote: The question wasn't about consumer vs. enterprise drives. He said the SAS interface improves IOPS. Please don't change the topic of discussion mid-thread. Sorry, wasn't trying to derail, but most people don't make the distinctions you do. I thin

Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-11 Thread Chris Du
You can optimize for better IOPS or for transfer speed. NS2 SATA and SAS share most of the design, but they are still different: cache, interface and firmware all differ. Then by "much better", I don't mean just IOPS; it's all three: better IOPS, command queueing and error recovery, etc. -- Th

Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-11 Thread Tim Cook
On Fri, Sep 11, 2009 at 3:20 PM, Eric D. Mudama wrote: > On Fri, Sep 11 at 13:14, Tim Cook wrote: > >> Better IOPS? Do you have some numbers to back that claim up? I've never >> heard of anyone getting "much better" IOPS out of a drive by simply >> changing the interface from SATA to SAS. Or

[zfs-discuss] NFS export issue

2009-09-11 Thread Thomas Uebermeier
Hello, I have a ZFS filesystem structure which is basically like this: /foo /foo/bar /foo/baz. All are from one pool, and /foo only contains the other directories/mounts (no other files). When I try to export /foo via dfstab, I can see the directories bar and baz, but these are empty. Can I
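
This is the usual nested-filesystem behaviour: sharing /foo does not automatically carry the child filesystems with it, so clients see empty mountpoints for bar and baz. A minimal sketch of the sharenfs-based alternative to dfstab, with placeholder dataset names (each child is still a separate export that clients must mount, unless they use NFSv4 mirror mounts):

  zfs set sharenfs=on pool/foo     # children inherit the sharenfs property
  zfs get -r sharenfs pool/foo     # confirm the parent and children are all shared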

Re: [zfs-discuss] Raid-Z Issue

2009-09-11 Thread Frank Middleton
On 09/11/09 03:20 PM, Brandon Mercer wrote: They are so well known that simply by asking if you were using them suggests that they suck. :) There are actually pretty hit-or-miss issues with all 1.5 TB drives, but that particular manufacturer has had a few more than others. FWIW I have a few of

Re: [zfs-discuss] Raid-Z Issue

2009-09-11 Thread Volker A. Brandt
Brandon Mercer writes: > On Fri, Sep 11, 2009 at 2:57 PM, Volker A. Brandt wrote: > >> Seagate 1.5 TB drives? > > > > This sounds somewhat ominous. Are there known problems? > > They are so well known that simply by asking if you were using them > suggests that they suck. :) There are actually

Re: [zfs-discuss] snv_121 zfs issue

2009-09-11 Thread Greg
I have tried to unmount the zfs volume and remount it. However, this does not help the issue.

Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-11 Thread Eric D. Mudama
On Fri, Sep 11 at 13:14, Tim Cook wrote: Better IOPS? Do you have some numbers to back that claim up? I've never heard of anyone getting "much better" IOPS out of a drive by simply changing the interface from SATA to SAS. Or SATA to FATA for that matter. A 7200RPM drive is limited by
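
The arithmetic behind that point, using typical figures rather than measurements from this thread:

  7200 RPM = 120 rev/s, so ~8.3 ms per revolution and ~4.2 ms average rotational latency
  add a typical ~8.5 ms average seek: ~12.7 ms per random I/O
  1 / 12.7 ms is roughly 80 random IOPS, regardless of the SATA/SAS/FC interface in front of the platters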

Re: [zfs-discuss] Raid-Z Issue

2009-09-11 Thread Brandon Mercer
On Fri, Sep 11, 2009 at 2:57 PM, Volker A. Brandt wrote: >> Seagate 1.5 TB drives? > > This sounds somewhat ominous. Are there known problems? They are so well known that simply by asking if you were using them suggests that they suck. :) There are actually pretty hit-or-miss issues with all 1

Re: [zfs-discuss] Raid-Z Issue

2009-09-11 Thread David E. Anderson
some time out if they don't have updated firmware On Fri, Sep 11, 2009 at 11:57 AM, Volker A. Brandt wrote: > > Seagate 1.5 TB drives? > > This sounds somewhat ominous. Are there known problems? > > > Thanks -- Volker

Re: [zfs-discuss] Raid-Z Issue

2009-09-11 Thread Volker A. Brandt
> Seagate 1.5 TB drives? This sounds somewhat ominous. Are there known problems? Thanks -- Volker -- Volker A. Brandt Consulting and Support for Sun Solaris Brandt & Brandt Computer GmbH

Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-11 Thread Tim Cook
On Fri, Sep 11, 2009 at 12:48 PM, Chris Du wrote: > >>Can you use SATA drives with expanders at all? (I have to stick to > enterprise/nearline SATA (100 EUR/TByte vs. 60 EUR/TByte consumer SATA) for > cost reasons). > > Yes, you can with the E1 model. The E1 is the single-path model, which supports both SAS > an

Re: [zfs-discuss] sync replication easy way?

2009-09-11 Thread Ross Walker
On Fri, Sep 11, 2009 at 12:53 PM, Richard Elling wrote: > On Sep 11, 2009, at 5:05 AM, Markus Kovero wrote: >> Hi, I was just wondering about the following idea; I guess somebody mentioned >> something similar and I'd like some thoughts on this. >> >> 1. create an iSCSI volume on Node-A and mount it lo

Re: [zfs-discuss] This is the scrub that never ends...

2009-09-11 Thread Will Murnane
On Thu, Sep 10, 2009 at 13:06, Will Murnane wrote: > On Wed, Sep 9, 2009 at 21:29, Bill Sommerfeld wrote: >>> Any suggestions? >> >> Let it run for another day. > I'll let it keep running as long as it wants this time. scrub: scrub completed after 42h32m with 0 errors on Thu Sep 10 17:20:19 2009

Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-11 Thread Chris Du
>>Can you use SATA drives with expanders at all? (I have to stick to >>enterprise/nearline SATA (100 EUR/TByte vs. 60 EUR/TByte consumer SATA) for >>cost reasons). Yes, you can with the E1 model. The E1 is the single-path model, which supports both SAS and SATA. You need to know what you are buying. The Superm

Re: [zfs-discuss] snv_121 zfs issue

2009-09-11 Thread Greg
This also occurs when I do a zfs destroy. Thanks!

Re: [zfs-discuss] sync replication easy way?

2009-09-11 Thread Markus Kovero
I believe failover is best done manually, just to be sure the active node is really dead before importing the pool on another node; otherwise there could be serious issues, I think. Yours Markus Kovero -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@o
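
For reference, the manual step being described is just an import on the surviving node; a minimal sketch, assuming a pool named tank (the -f override is exactly why you want to be sure the other node is really dead):

  zpool import          # list pools visible on the shared storage
  zpool import -f tank  # force the import, overriding the other host's ownership mark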

Re: [zfs-discuss] sync replication easy way?

2009-09-11 Thread Markus Kovero
This also makes failover easier, as volumes are already shared via iSCSI on both nodes. I have to poke at it next week to see performance numbers; I imagine it plays within expected iSCSI performance, or at least it should. Yours Markus Kovero -Original Message- From: Richard Elling

[zfs-discuss] snv_121 zfs issue

2009-09-11 Thread Greg
Hello all, I am having a problem: when I do a zfs promote or a zfs rollback, I get a "dataset is busy" error. I am now doing an image update to see if there was an issue with the image I have. Does anyone have an idea as to how to fix this? Thanks, Greg
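
A few things that commonly keep a dataset "busy" and are worth checking, sketched with placeholder names (whether any of them applies to this report is not known from the post):

  fuser -c /pool/fs                        # processes with files open on the mountpoint
  zfs holds pool/fs@snap                   # user holds on the snapshots involved
  zfs list -t all -o name,origin -r pool   # clones still originating from a snapshot being rolled back past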

Re: [zfs-discuss] sync replication easy way?

2009-09-11 Thread Richard Elling
On Sep 11, 2009, at 5:05 AM, Markus Kovero wrote: Hi, I was just wondering about the following idea; I guess somebody mentioned something similar and I'd like some thoughts on this. 1. create an iSCSI volume on Node-A and mount it locally with iscsiadm 2. create a pool with this local iscsi-s

Re: [zfs-discuss] Raid-Z Issue

2009-09-11 Thread Richard Elling
Seagate 1.5 TB drives? -- richard On Sep 11, 2009, at 5:40 AM, Mads Skipper wrote: I am using an ASRock motherboard and an LSI MegaRAID controller. I wanted to connect 4 drives to my LSI RAID controller and 1 drive to my motherboard. This would make it possible for me to run 2 x 5 drives in Raid-

Re: [zfs-discuss] sync replication easy way?

2009-09-11 Thread Maurice Volaski
This method also allows one to nest mirroring or some RAID-Z level with mirroring. When I tested it with an older build a while back, I found performance really poor, about 1-2 MB/second, but my environment was also constrained. A major showstopper had been the infamous 3-minute iSCSI timeout,

[zfs-discuss] Raid-Z Issue

2009-09-11 Thread Mads Skipper
I am using an ASRock motherboard and an LSI MegaRAID controller. I wanted to connect 4 drives to my LSI RAID controller and 1 drive to my motherboard. This would make it possible for me to run 2 x 5 drives in Raid-Z. But when I do this and begin copying to the Raid-Z, it will copy some GBs before th
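
When copies die after a few GB like this, the usual first step is to see whether the pool or the drives are logging errors; a minimal sketch with a placeholder pool name:

  zpool status -v tank     # per-device read/write/checksum error counters
  iostat -En               # per-drive hard/soft/transport error totals from the driver
  fmdump -eV | tail -100   # recent error telemetry (timeouts, resets) collected by FMA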

[zfs-discuss] possibilities of AFP ever making it into ZFS like NFS and CIFS did

2009-09-11 Thread Brian Hechinger
As I sit here building netatalk (assuming it will actually build), it occurs to me that maybe AFP could be the next protocol to be merged directly into ZFS the way NFS and CIFS have been. Any thoughts/opinions on this? I think this would be a great way to get ZFS out there into OS X shops by way of
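
For context, the integration being suggested would presumably follow the existing share properties; a sketch of today's pattern, with the AFP line being purely hypothetical (no such property exists):

  zfs set sharenfs=on tank/home          # NFS sharing driven from the dataset
  zfs set sharesmb=name=home tank/home   # CIFS sharing driven from the dataset
  # zfs set shareafp=on tank/home        # hypothetical AFP equivalent, illustration only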

Re: [zfs-discuss] De-duplication before SXCE EOL ?

2009-09-11 Thread BJ Quinn
Personally I don't care about SXCE EOL, but what about before 2010.02?

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-09-11 Thread Rich Morris
On 09/10/09 16:22, en...@businessgrade.com wrote: Quoting Bob Friesenhahn : On Thu, 10 Sep 2009, Rich Morris wrote: On 07/28/09 17:13, Rich Morris wrote: On Mon, Jul 20, 2009 at 7:52 PM, Bob Friesenhahn wrote: Sun has opened internal CR 6859997. It is now in Dispatched state at High prio

[zfs-discuss] sync replication easy way?

2009-09-11 Thread Markus Kovero
Hi, I was just wondering about the following idea; I guess somebody mentioned something similar and I'd like some thoughts on this. 1. create an iSCSI volume on Node-A and mount it locally with iscsiadm 2. create a pool with this local iSCSI share 3. create an iSCSI volume on Node-B and share
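
A rough sketch of how steps like these look as commands, assuming COMSTAR and placeholder names throughout (the older shareiscsi=on property would be the pre-COMSTAR alternative; this is the shape of the idea, not a tested recipe):

  # on the node exporting storage: carve a zvol and publish it over iSCSI
  zfs create -V 100g tank/mirrorvol
  sbdadm create-lu /dev/zvol/rdsk/tank/mirrorvol
  stmfadm add-view <GUID-reported-by-sbdadm>
  itadm create-target
  # on the node building the pool: log in to the target and mirror against it
  iscsiadm add discovery-address <other-node-ip>
  iscsiadm modify discovery --sendtargets enable
  devfsadm -i iscsi
  zpool create syncpool mirror <local-device> <iscsi-device>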

Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-11 Thread Joseph L. Casale
>Can you use SATA drives with expanders at all? (I have to stick >to enterprise/nearline SATA (100 EUR/TByte vs. 60 EUR/TByte >consumer SATA) for cost reasons). Yes, the expander has nothing to do with the drive in front of it. I have several SAS expanders with SATA drives on them. >What is the a

Re: [zfs-discuss] De-duplication before SXCE EOL ?

2009-09-11 Thread Darren J Moffat
Andre Lue wrote: Can anyone answer if we will get ZFS de-duplication before SXCE EOL? If possible also answer the same on encryption? Why do you care whether it happens before SXCE EOL or not? -- Darren J Moffat

Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-11 Thread Eugen Leitl
On Thu, Sep 10, 2009 at 11:54:16AM -0700, Chris Du wrote: > Why do you need 3x LSI SAS3081E-R? The backplane has an LSI SAS x36 expander, so > you only need 1x 3081E. If you want multipathing, you need the E2 model. Can you use SATA drives with expanders at all? (I have to stick to enterprise/nearline S

Re: [zfs-discuss] zfs cksum calculation

2009-09-11 Thread P. Anil Kumar
Hi, Thanks for the prompt response. I tried using digest with sha256 to calculate the uberblock checksum. Now, digest gives me a 65-character output, while zdb -uuu pool-name gives me only a 49-character output. How can this be accounted for? I'm trying to understand how the checksum is calculated and dis
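
The 65 characters at least are easy to account for: SHA-256 is 256 bits = 32 bytes = 64 hex digits, and the 65th byte is the trailing newline. Why zdb -uuu shows 49 characters is a separate formatting question that this arithmetic does not settle.

  $ printf 'abc' | digest -a sha256 | wc -c
        65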

Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-11 Thread Markus Kovero
A couple of months; nope. I guess there is a DOS utility provided by WD that allows you to change the TLER settings. Having TLER disabled can be a problem: faulty disks time out randomly and ZFS doesn't always want to mark them as failed, though sometimes it does. Yours Markus Kovero -Original Message--

Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-11 Thread Tristan Ball
How long have you had them in production? Were you able to adjust the TLER settings from within Solaris? Thanks, Tristan. -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Markus Kovero Sent: Friday, 11 Septembe

Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-11 Thread Markus Kovero
We've been using Caviar Black 1 TB drives with disk configurations consisting of 64 disks or more. They are working just fine. Yours Markus Kovero -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Eugen Leitl Sent: 11 September