Re: [zfs-discuss] Does ZFS work with SAN-attached devices?

2009-10-10 Thread Shawn Joy
If you don't give ZFS any redundancy, you risk losing your pool if there is data corruption. Is this the same risk for data corruption as UFS on hardware-based LUNs? If we present one LUN to ZFS and choose not to ZFS mirror or do a raidz pool of that LUN, is ZFS able to handle disk or RAID
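One partial mitigation that comes up in threads like this: even on a single LUN, the copies property gives ZFS extra block copies to self-heal corruption with, though it does not protect against losing the whole device. A minimal sketch, assuming a pool named tank (the name is illustrative):

```shell
# Store two copies of each newly written block on the single LUN,
# letting ZFS repair isolated corrupt blocks without mirror/raidz.
# Note: this does NOT protect against loss of the LUN itself.
zfs set copies=2 tank

# Confirm the setting took effect.
zfs get copies tank
```

Existing data is not rewritten; only blocks written after the change get the extra copies.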

Re: [zfs-discuss] Does ZFS work with SAN-attached devices?

2009-10-10 Thread Erik Trimble
Shawn Joy wrote: If you don't give ZFS any redundancy, you risk losing your pool if there is data corruption. Is this the same risk for data corruption as UFS on hardware-based LUNs? It's a tradeoff. ZFS has more issues with loss of connectivity to the underlying LUN than UFS, while UFS has

Re: [zfs-discuss] SSD over 10gbe not any faster than 10K SAS over GigE

2009-10-10 Thread Bob Friesenhahn
On Fri, 9 Oct 2009, Derek Anderson wrote: I created an NFS filesystem for VMware by using: zfs create SSD/vmware. I had to set permissions for VMware (anon=0), but that's it. Below is what zpool iostat reads: File copy 10GbE to SSD - 40M max My clients here do better than that over

Re: [zfs-discuss] How to use ZFS on x4270

2009-10-10 Thread Bob Friesenhahn
On Fri, 9 Oct 2009, tak ar wrote: Hi! I bought x4270 servers for a (write-heavy) mail server, and am waiting for delivery. They have two Intel X25-E SSDs (for ZIL) and HDDs. The x4270 servers have a hardware RAID card based on Adaptec's RAID 5805 adapter, which has 256MB of BBWC. The SSD has a write cache and
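The setup being discussed, two SSDs dedicated to the ZIL, is configured as a separate log (slog) device. A hedged sketch, where the pool name tank and the device names are assumptions for illustration:

```shell
# Attach the two X25-E SSDs as a mirrored slog so a single
# SSD failure does not lose in-flight synchronous writes.
zpool add tank log mirror c2t0d0 c2t1d0

# The log vdev now appears under "logs" in the pool layout.
zpool status tank
```

Mirroring the slog matters here because the ZIL holds recently acknowledged synchronous writes that have not yet reached the main pool.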

Re: [zfs-discuss] SSD over 10gbe not any faster than 10K SAS over GigE

2009-10-10 Thread Derek Anderson
Thank you so much for the detail. The 10GbE is attached to a 10GbE port on a VMware ESX server. I am trying to use NFS for VMware. When I bought the SSDs I was after low seek time, not necessarily total bandwidth. I can add devices over time to get the bandwidth up. I am puzzled why even my

Re: [zfs-discuss] Does ZFS work with SAN-attached devices?

2009-10-10 Thread Richard Elling
On Oct 10, 2009, at 8:19 AM, Erik Trimble wrote: Shawn Joy wrote: If you don't give ZFS any redundancy, you risk losing your pool if there is data corruption. Is this the same risk for data corruption as UFS on hardware-based LUNs? It's a tradeoff. ZFS has more issues with loss of

[zfs-discuss] Keep track of meta data on each zfs

2009-10-10 Thread Harry Putnam
As just a home tinkerer with small needs... I've already run into situations where I've created a zfs fs for some purpose... and months later forgotten what it was for or supposed to do or hold. I may recognize the files and directories... but have forgotten why it's in this particular fs as
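ZFS user properties are one built-in way to attach notes like this to a filesystem: any property name containing a colon (conventionally reverse-DNS-style) is stored with the dataset and survives send/receive. A sketch with illustrative names:

```shell
# Record why this filesystem exists; "org.example:purpose" and
# the dataset name are made-up examples.
zfs set org.example:purpose="photo backups from the laptop" tank/photos

# Read the note back later.
zfs get org.example:purpose tank/photos

# List the property across every filesystem in the pool.
zfs get -r org.example:purpose tank
```

The property is inherited by child datasets unless overridden, which can be handy or surprising depending on your layout.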

Re: [zfs-discuss] Does ZFS work with SAN-attached devices?

2009-10-10 Thread David Magda
On Oct 10, 2009, at 01:26, Erik Trimble wrote: That is, there used to be an issue in this scenario: (1) zpool constructed from a single LUN on a SAN device (2) SAN experiences temporary outage, while ZFS host remains running. (3) zpool is permanently corrupted, even if no I/O occurred during

Re: [zfs-discuss] Does ZFS work with SAN-attached devices?

2009-10-10 Thread Victor Latushkin
Erik Trimble wrote: ZFS no longer has the issue where loss of a single device (even intermittently) causes pool corruption. That's been fixed. Erik, it does not help at all when you talk about some issue being fixed but do not provide the corresponding CR number. It does not allow

Re: [zfs-discuss] How to use ZFS on x4270

2009-10-10 Thread tak ar
Hi! Thanks for the reply. The BBWC is much more useful than the write cache on the X25-E, since the X25-E's write cache is volatile and therefore may cause harm to your data. According to reports I have seen, the X25-E's write IOPS drop by a factor of five when its write cache is

[zfs-discuss] use zpool directly w/o create zfs

2009-10-10 Thread Hua
I understand that usually a zfs filesystem needs to be created inside a zpool to store files/data. However, a quick test shows that I actually can put files directly inside a mounted zpool without creating any zfs. After zpool create -f tank c0d1, I actually can copy/delete any files in /tank. I can also
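The behavior described above is expected: zpool create makes a root dataset with the same name as the pool and mounts it at /<pool>, so it is a normal filesystem from day one. Child filesystems are mainly useful for per-dataset properties, quotas, and snapshots. A sketch using the thread's pool name (the child dataset name is illustrative):

```shell
# Creates the pool AND its root dataset "tank", mounted at /tank.
zpool create -f tank c0d1

# Optional child filesystem with its own mountpoint /tank/projects,
# which can get its own compression, quota, snapshots, etc.
zfs create tank/projects

# Both the root dataset and the child show up here.
zfs list -r tank
```

Files written straight into /tank live in the root dataset; the common advice to create children is about manageability, not necessity.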

Re: [zfs-discuss] How to use ZFS on x4270

2009-10-10 Thread Bob Friesenhahn
On Sat, 10 Oct 2009, tak ar wrote: Use the BBWC to maintain high IOPS when the X25-E's write cache is disabled? It should certainly help. Note that in this case your relatively small battery-backed memory is accepting writes for both the X25-E and for the disk storage, so the BBWC memory

Re: [zfs-discuss] use zpool directly w/o create zfs

2009-10-10 Thread Andrew Gabriel
Hua wrote: I understand that usually a zfs filesystem needs to be created inside a zpool to store files/data. However, a quick test shows that I actually can put files directly inside a mounted zpool without creating any zfs. After zpool create -f tank c0d1, I actually can copy/delete any files into

Re: [zfs-discuss] How to use ZFS on x4270

2009-10-10 Thread Richard Elling
On Oct 10, 2009, at 4:11 PM, Bob Friesenhahn wrote: On Sat, 10 Oct 2009, tak ar wrote: I think IOPS is important for a mail server, so the ZIL is useful. The server has 48GB RAM and two (ZFS or hardware mirror) X25-E (32GB) for the ZIL (slog). I understand the ZIL needs half of RAM. There is a

[zfs-discuss] You really do need ECC RAM

2009-10-10 Thread Richard Elling
You really do need ECC RAM, but for the naysayers: http://www.cs.toronto.edu/%7Ebianca/papers/sigmetrics09.pdf -- richard

[zfs-discuss] 2009.06 cifs/acl/windows-sid weirdness

2009-10-10 Thread HUGE | David Stahl
I'm not sure if this is a bug or something. I tried researching but have come up dry; it's hard to come up with the right keywords. Anyway, we have been using OSOL 2008.11 as a file server just fine, using instructions very similar to this

Re: [zfs-discuss] You really do need ECC RAM

2009-10-10 Thread Dennis Clarke
You really do need ECC RAM, but for the naysayers: http://www.cs.toronto.edu/%7Ebianca/papers/sigmetrics09.pdf There are people who still question that? Really? From section 3.2, Errors per DIMM, in that paper: The mean number of correctable errors per DIMM are more comparable,

Re: [zfs-discuss] Does ZFS work with SAN-attached devices?

2009-10-10 Thread Erik Trimble
Victor Latushkin wrote: Erik Trimble wrote: ZFS no longer has the issue where loss of a single device (even intermittently) causes pool corruption. That's been fixed. Erik, it does not help at all when you talk about some issue being fixed but do not provide the corresponding CR number.