Re: [zfs-discuss] RAID Failure Calculator (for 8x 2TB RAIDZ)

2011-02-14 Thread Paul Kraus
On Mon, Feb 7, 2011 at 7:53 PM, Richard Elling richard.ell...@gmail.com wrote: On Feb 7, 2011, at 1:07 PM, Peter Jeremy wrote: On 2011-Feb-07 14:22:51 +0800, Matthew Angelo bang...@gmail.com wrote: I'm actually leaning more towards running a simple 7+1 RAIDZ1. Running this with 1TB is not a

Re: [zfs-discuss] RAID Failure Calculator (for 8x 2TB RAIDZ)

2011-02-14 Thread Nico Williams
On Feb 14, 2011 6:56 AM, Paul Kraus p...@kraus-haus.org wrote: P.S. I am measuring the number of objects via `zdb -d` as that is faster than trying to count files and directories and, I expect, is a much better measure of what the underlying zfs code is dealing with (a particular dataset may have
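For anyone wanting to reproduce the measurement, a minimal sketch of the approach Paul describes, assuming a hypothetical pool named tank:

    # List each dataset with its internal object count; this reflects what
    # the ZFS code itself tracks, and is much faster than walking the tree.
    zdb -d tank

    # For comparison, a userland count of files and directories:
    find /tank | wc -l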

[zfs-discuss] existing performance data for on-disk dedup?

2011-02-14 Thread Janice Chang
Hello. I am looking to see if performance data exists for on-disk dedup. I am currently in the process of setting up some tests based on input from Roch, but before I get started, thought I'd ask here. Thanks for the help, Janice
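As a rough sketch of the kind of test setup under discussion (pool and dataset names are hypothetical), dedup is enabled per dataset and the resulting ratio shows up pool-wide:

    # Enable deduplication on a scratch dataset.
    zfs create tank/dedup-test
    zfs set dedup=on tank/dedup-test

    # After writing test data, the DEDUP column of zpool list shows the
    # pool-wide dedup ratio on dedup-capable builds.
    zpool list tank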

Re: [zfs-discuss] Very bad ZFS write performance. Ok Read.

2011-02-14 Thread ian W
Thanks for the responses. I found the issue. It was due to power management, and probably a bug with event-driven power management states; changing cpupm enable to cpupm enable poll-mode in /etc/power.conf fixed the issue for me. Back up to 110MB/sec+ now.
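For reference, the fix ian describes is a one-line change in /etc/power.conf, applied with pmconfig:

    # Before: event-driven CPU power management
    cpupm enable

    # After: poll-mode sidesteps the suspected event-driven bug
    cpupm enable poll-mode

    # Make powerd reread the file
    pmconfig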

Re: [zfs-discuss] how to destroy a pool by id?

2011-02-14 Thread chris
I have old pool skeletons with vdevs that no longer exist. Can't import them, can't destroy them, can't even rename them to something obvious like junk1. What do I do to clean up?

Re: [zfs-discuss] Very bad ZFS write performance. Ok Read.

2011-02-14 Thread Krunal Desai
On Sat, Feb 12, 2011 at 3:14 AM, ian W dropbears...@yahoo.com.au wrote: Thanks for the responses. I found the issue. It was due to power management, and probably a bug with event-driven power management states; changing cpupm enable to cpupm enable poll-mode in /etc/power.conf fixed

Re: [zfs-discuss] existing performance data for on-disk dedup?

2011-02-14 Thread Jim Dunham
Hi Janice, Hello. I am looking to see if performance data exists for on-disk dedup. I am currently in the process of setting up some tests based on input from Roch, but before I get started, thought I'd ask here. I find it somewhat interesting that you are asking this question on behalf

Re: [zfs-discuss] how to destroy a pool by id?

2011-02-14 Thread Cindy Swearingen
Hi Chris, Yes, this is a known problem and a CR is filed. I haven't tried these in a while, but consider one of the following workarounds below. #1 is the most drastic, so make sure you've got the right device name. No sanity checking is done by the dd command. Other experts can comment on a
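To make the warning concrete: the drastic workaround clears the ZFS labels on the stale device with dd. A sketch only, with a hypothetical device name; this is irreversible, and note that ZFS also keeps copies of the labels at the end of the device:

    # DESTRUCTIVE: overwrites the front of the device, including the
    # two leading ZFS labels. Triple-check the device name first.
    dd if=/dev/zero of=/dev/rdsk/c1t2d0s0 bs=1024k count=100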

[zfs-discuss] One LUN per RAID group

2011-02-14 Thread Gary Mills
With ZFS on a Solaris server using storage on a SAN device, is it reasonable to configure the storage device to present one LUN for each RAID group? I'm assuming that the SAN and storage device are sufficiently reliable that no additional redundancy is necessary on the Solaris ZFS server. I'm

Re: [zfs-discuss] ZFS read/write fairness algorithm for single pool

2011-02-14 Thread Richard Elling
Hi Nathan, comments below... On Feb 13, 2011, at 8:28 PM, Nathan Kroenert wrote: On 14/02/2011 4:31 AM, Richard Elling wrote: On Feb 13, 2011, at 12:56 AM, Nathan Kroenert nat...@tuneunix.com wrote: Hi all, Exec summary: I have a situation where I'm seeing lots of large reads starving

Re: [zfs-discuss] One LUN per RAID group

2011-02-14 Thread Paul Kraus
On Mon, Feb 14, 2011 at 2:38 PM, Gary Mills mi...@cc.umanitoba.ca wrote: I realize that it is possible to configure more than one LUN per RAID group on the storage device, but doesn't ZFS assume that each LUN represents an independent disk, and schedule I/O accordingly? In that case,

Re: [zfs-discuss] ACL for .zfs directory

2011-02-14 Thread Cindy Swearingen
Hi Ian, You are correct. Previous Solaris releases displayed older POSIX ACL info on this directory. It was changed to the new ACL style with the integration of this CR: 6792884 Vista clients cannot access .zfs Thanks, Cindy On 02/13/11 19:30, Ian Collins wrote: While scanning filesystems
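The new-style ACL Cindy mentions can be inspected directly; a small example with a hypothetical dataset path:

    # -V prints the ACL in the newer NFSv4/ZFS style; -d lists the
    # directory itself rather than its contents.
    ls -Vd /tank/fs/.zfs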

Re: [zfs-discuss] ACL for .zfs directory

2011-02-14 Thread Ian Collins
On 02/15/11 10:14 AM, Cindy Swearingen wrote: Hi Ian, You are correct. Previous Solaris releases displayed older POSIX ACL info on this directory. It was changed to the new ACL style with the integration of this CR: 6792884 Vista clients cannot access .zfs Thanks Cindy. Unfortunately

[zfs-discuss] ZFS and Virtual Disks

2011-02-14 Thread Mark Creamer
Hi, I wanted to get some expert advice on this. I have an ordinary hardware SAN from Promise Tech that presents the LUNs via iSCSI. I would like to use that if possible with my VMware environment, where I run several Solaris / OpenSolaris virtual machines. My question is regarding the virtual disks.

Re: [zfs-discuss] ZFS and Virtual Disks

2011-02-14 Thread Fajar A. Nugraha
On Tue, Feb 15, 2011 at 5:47 AM, Mark Creamer white...@gmail.com wrote: Hi I wanted to get some expert advice on this. I have an ordinary hardware SAN from Promise Tech that presents the LUNs via iSCSI. I would like to use that if possible with my VMware environment where I run several Solaris

Re: [zfs-discuss] ZFS and Virtual Disks

2011-02-14 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Mark Creamer 1. Should I create individual iSCSI LUNs and present those to the VMware ESXi host as iSCSI storage, and then create virtual disks from there on each Solaris VM? - or -
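As an illustration of the second option (the Solaris VM attaching a LUN itself rather than going through a VMware virtual disk), the guest-side steps might look like this; the discovery address and device name are hypothetical:

    # Point the initiator at the array and enable SendTargets discovery.
    iscsiadm add discovery-address 192.168.1.50:3260
    iscsiadm modify discovery --sendtargets enable

    # Create device nodes for the new LUNs, then give a whole disk to ZFS.
    devfsadm -i iscsi
    zpool create tank c0t600A0B8000261B6Ad0   # hypothetical device name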

Re: [zfs-discuss] One LUN per RAID group

2011-02-14 Thread Gary Mills
On Mon, Feb 14, 2011 at 03:04:18PM -0500, Paul Kraus wrote: On Mon, Feb 14, 2011 at 2:38 PM, Gary Mills mi...@cc.umanitoba.ca wrote: Is there any reason not to use one LUN per RAID group? [...] In other words, if you build a zpool with one vdev of 10GB and another with two vdevs each
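Gary's comparison, sketched with hypothetical device names: ZFS treats each LUN as an independent disk and runs a separate I/O queue per vdev, whether or not the LUNs share spindles on the array.

    # One LUN from one RAID group: a single vdev, a single I/O queue.
    zpool create pool1 c3t0d0

    # Two LUNs carved from the same RAID group: ZFS stripes across them
    # and queues I/O to each independently, though the spindles are shared.
    zpool create pool2 c3t1d0 c3t2d0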

Re: [zfs-discuss] ZFS read/write fairness algorithm for single pool

2011-02-14 Thread Nathan Kroenert
Thanks for all the thoughts, Richard. One thing that still sticks in my craw is that I'm not wanting to write intermittently. I'm wanting to write flat out, and those writes are being held up... Seems to me that zfs should know and do something about that without me needing to tune
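For context, the tuning Nathan would rather not have to do usually means capping the per-vdev queue depth, e.g. the zfs_vdev_max_pending tunable of that era; the value below is illustrative:

    # Reduce the per-vdev queue depth at runtime (not persistent across
    # reboot) so large reads cannot monopolize the device queues.
    echo zfs_vdev_max_pending/W0t10 | mdb -kw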

Re: [zfs-discuss] Very bad ZFS write performance. Ok Read.

2011-02-14 Thread ian W
Hello, my power.conf is as follows; any recommendations for improvement?

device-dependency-property removable-media /dev/fb
autopm enable
autoS3 enable
cpu-threshold 1s
# Auto-Shutdown   Idle(min)   Start/Finish(hh:mm)   Behavior
autoshutdown      30          0:00 0:00             noshutdown
S3-support enable

Re: [zfs-discuss] One LUN per RAID group

2011-02-14 Thread Erik Trimble
On 2/14/2011 3:52 PM, Gary Mills wrote: On Mon, Feb 14, 2011 at 03:04:18PM -0500, Paul Kraus wrote: On Mon, Feb 14, 2011 at 2:38 PM, Gary Mills mi...@cc.umanitoba.ca wrote: Is there any reason not to use one LUN per RAID group? [...] In other words, if you build a zpool with one vdev of

Re: [zfs-discuss] Very bad ZFS write performance. Ok Read.

2011-02-14 Thread Richard Elling
On Feb 14, 2011, at 4:49 PM, ian W wrote: Hello my power.conf is as follows; any recommendations for improvement? For best performance, disable power management. For certain processors and BIOSes, some combinations of power management (below the OS) are also known to be toxic. At Nexenta,
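A sketch of what "disable power management" looks like in /etc/power.conf (again applied with pmconfig):

    # Turn off CPU and automatic device power management.
    cpupm disable
    autopm disable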