Re: [zfs-discuss] VDI iops with caching

2013-01-04 Thread Geoff Nordli
On 13-01-04 02:08 PM, Richard Elling wrote: All of these IOPS-per-VDI-user guidelines are wrong. The problem is that the variability of response time is too great for an HDD. The only hope we have of getting the back-of-the-napkin calculations to work is to reduce the variability by using a

Re: [zfs-discuss] VDI iops with caching

2013-01-03 Thread Geoff Nordli
Thanks Richard, Happy New Year. On 13-01-03 09:45 AM, Richard Elling wrote: On Jan 2, 2013, at 8:45 PM, Geoff Nordli geo...@gnaa.net wrote: I am looking at the performance numbers for the Oracle VDI admin guide. http://docs.oracle.com/html/E26214_02/performance

[zfs-discuss] VDI iops with caching

2013-01-02 Thread Geoff Nordli
I am looking at the performance numbers in the Oracle VDI admin guide. http://docs.oracle.com/html/E26214_02/performance-storage.html From my calculations, for 200 desktops running a Windows 7 knowledge-user workload (15 IOPS each) with a 30/70 read/write split, it comes to 5100 IOPS. Using 7200 rpm disks the
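For what it's worth, the 5100 figure falls out of the arithmetic if you assume a mirrored pool, i.e. a 2x write penalty (the mirroring assumption is mine, not stated in the guide):

  # 200 desktops x 15 IOPS; 30% reads pass through, 70% writes doubled for mirroring
  echo $(( (200*15*30/100) + (200*15*70/100)*2 ))   # prints 5100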

Re: [zfs-discuss] zvol access rights - chown zvol on reboot / startup / boot

2012-11-16 Thread Geoff Nordli
On 12-11-16 03:02 AM, Jim Klimov wrote: On 2012-11-15 21:43, Geoff Nordli wrote: Instead of using VDI files, I use COMSTAR targets and then use the VBox built-in iSCSI initiator. Out of curiosity: in this case are there any devices whose ownership might get similarly botched, or you've tested

Re: [zfs-discuss] zvol access rights - chown zvol on reboot / startup / boot

2012-11-15 Thread Geoff Nordli
On 12-11-15 11:57 AM, Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) wrote: When I google around for anyone else who cares and may have already solved the problem before I came along - it seems we're all doing the same thing for the same reason. If by any chance you are running

Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-08 Thread Geoff Nordli
Dan, If you are going to do the all-in-one with VBox, you probably want to look at: http://sourceforge.net/projects/vboxsvc/ It manages the starting/stopping of VBox VMs via SMF. Kudos to Jim Klimov for creating and maintaining it. Geoff On Thu, Nov 8, 2012 at 7:32 PM, Dan Swartzendruber

[zfs-discuss] defer_destroy property set on snapshot creation

2012-09-13 Thread Geoff Nordli
I am running NexentaOS_134f. This is really weird, but for some reason the defer_destroy property is being set on new snapshots and I can't turn it off. Normally it should only be set when using the zfs destroy -d command. The property doesn't seem to be inherited from anywhere. It seems
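For reference, defer_destroy is a read-only snapshot property that normally only flips to "on" via zfs destroy -d on a busy snapshot. The expected behaviour, with hypothetical dataset names:

  zfs snapshot tank/data@t1
  zfs get -H defer_destroy tank/data@t1   # expected: off
  zfs hold keep tank/data@t1              # add a user hold
  zfs destroy -d tank/data@t1             # held, so destruction is deferred
  zfs get -H defer_destroy tank/data@t1   # now: on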

[zfs-discuss] SAS world-wide name

2012-03-01 Thread Geoff Nordli
I am trying to figure out a reliable way to identify drives so that I pull the right drive when there is a failure. These will be smaller installations (16 drives). I am pretty sure the WWN on a SAS device is preassigned like a MAC address, but I just want to make sure. Is there any
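SAS WWNs are indeed assigned by the manufacturer, MAC-style. Two stock commands that help map a WWN-named device back to a physical drive (no extra tooling assumed):

  iostat -En       # vendor, product and serial number per device
  echo | format    # non-interactive list of attached cXtWWNdN device names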

Re: [zfs-discuss] Using Solaris iSCSI target in VirtualBox iSCSI Initiator

2011-02-25 Thread Geoff Nordli
-Original Message- From: Thierry Delaitre Sent: Wednesday, February 23, 2011 4:42 AM To: zfs-discuss@opensolaris.org Subject: [zfs-discuss] Using Solaris iSCSI target in VirtualBox iSCSI Initiator Hello, I'm using ZFS to export some iSCSI targets for the VirtualBox iSCSI initiator.
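For anyone landing here from a search: recent VirtualBox releases can attach an iSCSI target straight from the CLI. A sketch, with the VM name, controller name, server address and IQN all hypothetical:

  VBoxManage storageattach "vm01" --storagectl "SATA" --port 0 --type hdd \
    --medium iscsi --server 192.168.1.10 --target iqn.2010-09.org.example:target0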

Re: [zfs-discuss] a single nfs file system shared out twice with different permissions

2010-12-20 Thread Geoff Nordli
From: Edward Ned Harvey Sent: Monday, December 20, 2010 9:25 AM Subject: RE: [zfs-discuss] a single nfs file system shared out twice with different permissions From: Richard Elling zfs create tank/snapshots zfs set sharenfs=on tank/snapshots "on" by default sets the NFS share parameters to:

Re: [zfs-discuss] a single nfs file system shared out twice with different permissions

2010-12-20 Thread Geoff Nordli
From: Richard Elling Sent: Monday, December 20, 2010 8:14 PM Subject: Re: [zfs-discuss] a single nfs file system shared out twice with different permissions On Dec 20, 2010, at 11:26 AM, Geoff Nordli geo...@gnaa.net wrote: From: Edward Ned Harvey Sent: Monday, December 20, 2010 9:25 AM

Re: [zfs-discuss] a single nfs file system shared out twice with different permissions

2010-12-20 Thread Geoff Nordli
From: Darren J Moffat Sent: Monday, December 20, 2010 4:15 AM Subject: Re: [zfs-discuss] a single nfs file system shared out twice with different permissions On 18/12/2010 07:09, Geoff Nordli wrote: I am trying to configure a system where I have two different NFS shares which point to the same

Re: [zfs-discuss] a single nfs file system shared out twice with different permissions

2010-12-18 Thread Geoff Nordli
-Original Message- From: Edward Ned Harvey [mailto:opensolarisisdeadlongliveopensola...@nedharvey.com] Sent: Saturday, December 18, 2010 6:13 AM To: 'Geoff Nordli'; zfs-discuss@opensolaris.org Subject: RE: [zfs-discuss] a single nfs file system shared out twice with different permissions

[zfs-discuss] a single nfs file system shared out twice with different permissions

2010-12-17 Thread Geoff Nordli
I am trying to configure a system where I have two different NFS shares which point to the same directory. The idea is that if you come in via one path, you have read-only access and can't delete any files; if you come in via the 2nd path, you have read/write access. For example, create
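One approach that avoids exporting the same directory twice is a single share with per-network access lists, which share_nfs supports natively. It doesn't give two paths, but it covers the common read-only vs. read-write split; dataset and subnets hypothetical:

  zfs create tank/images
  # read-only for one network, read-write for another:
  zfs set sharenfs=ro=@192.168.1.0/24,rw=@10.0.0.0/24 tank/images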

[zfs-discuss] zfs list takes a long time to return

2010-11-02 Thread Geoff Nordli
I am running the latest version of Nexenta Core 3.0 (b134 + extra backports). The time to run zfs list is increasing as the number of datasets grows; it now takes almost 30 seconds to return ~1500 datasets. r...@zfs1:/etc# time zfs list -t all | wc -l 1491 real
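Much of zfs list's cost is fetching every default column (used, avail, refer, mountpoint) for every dataset; restricting the output to the name column is a common mitigation, though the speedup will vary:

  time zfs list -H -o name -t all | wc -l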

[zfs-discuss] way to find out if a dataset has children

2010-09-27 Thread Geoff Nordli
Is there a way to find out if a dataset has children or not using zfs properties or another scriptable method? I am looking for a more efficient way to delete datasets after they are finished being used. Right now I use a custom property to set delete=1 on a dataset, and then I have a script that
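One scriptable check, assuming a build where zfs list supports the -d depth flag (b134 does): list the dataset at depth 1 and count the lines; more than one line means child datasets exist. Dataset name hypothetical; add -t all to count snapshots too:

  n=$(zfs list -H -o name -r -d 1 tank/data | wc -l)
  [ "$n" -gt 1 ] && echo "tank/data has children"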

Re: [zfs-discuss] way to find out if a dataset has children

2010-09-27 Thread Geoff Nordli
From: Darren J Moffat Sent: Monday, September 27, 2010 11:03 AM On 27/09/2010 18:14, Geoff Nordli wrote: Is there a way to find out if a dataset has children or not using zfs properties or other scriptable method? I am looking for a more efficient way to delete datasets after

Re: [zfs-discuss] way to find out if a dataset has children

2010-09-27 Thread Geoff Nordli
From: Richard Elling Sent: Monday, September 27, 2010 1:01 PM On Sep 27, 2010, at 11:54 AM, Geoff Nordli wrote: Are there any properties I can set on the clone side? Each clone records its origin snapshot in the origin property. $ zfs get origin syspool/rootfs-nmu-001 NAME
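The origin property also works in the other direction: scanning it across all datasets finds every clone that still depends on a given snapshot. A sketch with a hypothetical golden snapshot:

  zfs list -H -o name,origin -t filesystem,volume | grep 'tank/data@golden'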

[zfs-discuss] stmf corruption and dealing with dynamic lun mapping

2010-09-01 Thread Geoff Nordli
I am running Nexenta NCP 3.0 (134f). My stmf configuration was corrupted. I was getting errors like these in /var/adm/messages: Sep 1 10:32:04 llift-zfs1 svc-stmf[378]: [ID 130283 user.error] get property view_entry-0/all_hosts failed - entity not found Sep 1 10:32:04 llift-zfs1 svc.startd[9]: [ID

Re: [zfs-discuss] PowerEdge R510 with PERC H200/H700 with ZFS

2010-08-08 Thread Geoff Nordli
From: Edward Ned Harvey [mailto:sh...@nedharvey.com] Sent: Sunday, August 08, 2010 8:34 PM On Behalf Of Geoff Nordli Anyone have any experience with an R510 with the PERC H200/H700 controller with ZFS? My perception is that Dell doesn't play well with OpenSolaris. I

[zfs-discuss] PowerEdge R510 with PERC H200/H700 with ZFS

2010-08-07 Thread Geoff Nordli
Anyone have any experience with an R510 with the PERC H200/H700 controller with ZFS? My perception is that Dell doesn't play well with OpenSolaris. Thanks, Geoff

Re: [zfs-discuss] PowerEdge R510 with PERC H200/H700 with ZFS

2010-08-07 Thread Geoff Nordli
-Original Message- From: Brian Hechinger [mailto:wo...@4amlunch.net] Sent: Saturday, August 07, 2010 8:10 AM To: Geoff Nordli Subject: Re: [zfs-discuss] PowerEdge R510 with PERC H200/H700 with ZFS On Sat, Aug 07, 2010 at 08:00:11AM -0700, Geoff Nordli wrote: Anyone have any experience

[zfs-discuss] block align SSD for use as a l2arc cache

2010-07-09 Thread Geoff Nordli
I have an Intel X25-M 80GB SSD. For optimum performance I need to block-align the SSD device, but I am not sure exactly how I should do it. If I run format and use the fdisk option, it allows me to partition on cylinder boundaries, but I don't think that is sufficient. Can someone tell me how
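The usual way to sidestep manual alignment on Solaris-derived systems is to hand ZFS the whole disk so it writes its own EFI label, rather than carving a cylinder-aligned fdisk partition. A sketch with hypothetical pool and device names:

  zpool add tank cache c2t0d0   # whole disk: ZFS labels it itself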

Re: [zfs-discuss] block align SSD for use as a l2arc cache

2010-07-09 Thread Geoff Nordli
-Original Message- From: Erik Trimble Sent: Friday, July 09, 2010 6:45 PM Subject: Re: [zfs-discuss] block align SSD for use as a l2arc cache On 7/9/2010 5:55 PM, Geoff Nordli wrote: I have an Intel X25-M 80GB SSD. For optimum performance, I need to block align the SSD

Re: [zfs-discuss] Dedup RAM requirements, vs. L2ARC?

2010-07-01 Thread Geoff Nordli
Actually, I think the rule of thumb is 270 bytes per DDT entry. It's 200 bytes of ARC for every L2ARC entry; the DDT doesn't count toward this ARC space usage. E.g.: I have 1TB of 4k files that are to be deduped, and it turns out that I have about a 5:1 dedup ratio. I'd also like to see
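Running the thread's own numbers: 1 TB of 4k blocks is 2^40/4096 = 268,435,456 blocks; a 5:1 ratio leaves ~53.7M unique entries, and at 270 bytes each that is roughly 13.5 GB of DDT. As shell arithmetic:

  echo $(( (2**40 / 4096) / 5 * 270 / 1024 / 1024 ))   # ~13824 MiB of DDT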

Re: [zfs-discuss] OCZ Vertex 2 Pro performance numbers

2010-06-25 Thread Geoff Nordli
From: Arne Jansen Sent: Friday, June 25, 2010 3:21 AM Now the test for the Vertex 2 Pro. This was fun. For more explanation please see the thread "Crucial RealSSD C300 and cache flush?" This time I made sure the device is attached via 3GBit SATA. This is also only a short test. I'll retest after

Re: [zfs-discuss] Please trim posts

2010-06-18 Thread Geoff Nordli
-Original Message- From: Linder, Doug Sent: Friday, June 18, 2010 12:53 PM Try doing inline quoting/response with Outlook, where you quote one section, reply, quote again, etc. It's impossible. You can't split up the quoted section to add new text - no way, no how. Very infuriating.

Re: [zfs-discuss] Dedup... still in beta status

2010-06-15 Thread Geoff Nordli
From: Fco Javier Garcia Sent: Tuesday, June 15, 2010 11:21 AM Realistically, I think people are overly enamored with dedup as a feature - I would generally only consider it worthwhile in cases where you get significant savings. And by significant, I'm talking an order of magnitude space

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-09 Thread Geoff Nordli
On Behalf Of Joe Auty Sent: Tuesday, June 08, 2010 11:27 AM I'd love to use VirtualBox, but right now it (3.2.2 commercial, which I'm evaluating; I haven't been able to compile OSE on the CentOS 5.5 host yet) is giving me kernel panics on the host while starting up VMs, which are obviously

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-09 Thread Geoff Nordli
Brandon High wrote: On Tue, Jun 8, 2010 at 10:33 AM, besson3c j...@netmusician.org wrote: What VM software are you using? There are a few knobs you can turn in VBox which will help with slow storage. See http://www.virtualbox.org/manual/ch12.html#id2662300 for instructions on reducing the
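If memory serves, one of the knobs that manual section covers is the per-LUN flush interval. A sketch using the IDE controller path from the manual; the VM name and interval are hypothetical, and the device path differs for SATA (ahci) controllers:

  VBoxManage setextradata "vm01" \
    "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/FlushInterval" 1000000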

Re: [zfs-discuss] iSCSI slow

2010-05-26 Thread Geoff Nordli
-Original Message- From: Matt Connolly Sent: Wednesday, May 26, 2010 5:08 AM I've set up an iSCSI volume on OpenSolaris (snv_134) with these commands: sh-4.0# zfs create rpool/iscsi sh-4.0# zfs set shareiscsi=on rpool/iscsi sh-4.0# zfs create -s -V 10g rpool/iscsi/test The underlying
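shareiscsi=on uses the legacy target; the usual recommendation in this era was to move to the COMSTAR stack instead. A minimal COMSTAR sketch for the same zvol (the GUID placeholder is whatever sbdadm reports; requires the iscsi/target SMF service):

  zfs create -s -V 10g rpool/iscsi/test
  sbdadm create-lu /dev/zvol/rdsk/rpool/iscsi/test
  stmfadm add-view <lu-guid-from-sbdadm>
  itadm create-target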

Re: [zfs-discuss] can you recover a pool if you lose the zil (b134+)

2010-05-17 Thread Geoff Nordli
-Original Message- From: Edward Ned Harvey [mailto:solar...@nedharvey.com] Sent: Monday, May 17, 2010 6:29 AM I was messing around with a ramdisk on a pool and I forgot to remove it before I shut down the server. Now I am not able to mount the pool. I am not concerned with the data

[zfs-discuss] can you recover a pool if you lose the zil (b134+)

2010-05-16 Thread Geoff Nordli
I was messing around with a ramdisk on a pool and I forgot to remove it before I shut down the server. Now I am not able to mount the pool. I am not concerned with the data in this pool, but I would like to try to figure out how to recover it. I am running Nexenta 3.0 NCP (b134+). I have
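For the archives: builds that support it have zpool import -m, which imports a pool whose separate log device is missing (any uncommitted ZIL records are discarded):

  zpool import -m tank   # assumes the pool is named tank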

Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-12 Thread Geoff Nordli
From: James C. McPherson [mailto:james.mcpher...@oracle.com] Sent: Wednesday, May 12, 2010 2:28 AM On 12/05/10 03:18 PM, Geoff Nordli wrote: I have been wondering what the compatibility is like on OpenSolaris. My perception is basic network driver support is decent, but storage

Re: [zfs-discuss] ZFS and Comstar iSCSI BLK size

2010-05-11 Thread Geoff Nordli
-Original Message- From: Brandon High [mailto:bh...@freaks.com] Sent: Monday, May 10, 2010 5:56 PM On Mon, May 10, 2010 at 3:53 PM, Geoff Nordli geo...@gnaa.net wrote: Doesn't this alignment have more to do with aligning writes to the stripe/segment size of a traditional storage array

Re: [zfs-discuss] Opteron 6100? Does it work with opensolaris?

2010-05-11 Thread Geoff Nordli
On Behalf Of James C. McPherson Sent: Tuesday, May 11, 2010 5:41 PM On 12/05/10 10:32 AM, Michael DeMan wrote: I agree on the motherboard and peripheral chipset issue. This, and the last generation AMD quad/six core motherboards all seem to use the AMD SP56x0/SP5100 chipset, which I can't

Re: [zfs-discuss] ZFS and Comstar iSCSI BLK size

2010-05-10 Thread Geoff Nordli
-Original Message- From: Brandon High [mailto:bh...@freaks.com] Sent: Monday, May 10, 2010 9:55 AM On Sun, May 9, 2010 at 9:42 PM, Geoff Nordli geo...@gnaa.net wrote: I am looking at using 8K block size on the zfs volume. 8k is the default for zvols. You are right, I didn't look

[zfs-discuss] ZFS and Comstar iSCSI BLK size

2010-05-09 Thread Geoff Nordli
I am using ZFS as the backing store for an iSCSI target running a virtual machine. I am looking at using an 8K block size on the zfs volume. I was looking at the COMSTAR iSCSI settings and there is also a blk size configuration, which defaults to 512 bytes. That would make me believe that
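A sketch of how the two settings fit together, with hypothetical names: volblocksize pins the zvol's block size at creation, while the COMSTAR LU advertises its own block size to the initiator through the blk property (512 being the default the poster is seeing):

  zfs create -V 20g -o volblocksize=8k tank/vm01
  stmfadm create-lu -p blk=512 /dev/zvol/rdsk/tank/vm01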

Re: [zfs-discuss] Snapshots and Data Loss

2010-04-23 Thread Geoff Nordli
-Original Message- From: Ross Walker [mailto:rswwal...@gmail.com] Sent: Friday, April 23, 2010 7:08 AM We are currently porting over our existing Learning Lab Infrastructure platform from MS Virtual Server to VBox + ZFS. When students connect into their lab environment it dynamically

Re: [zfs-discuss] Snapshots and Data Loss

2010-04-22 Thread Geoff Nordli
From: Ross Walker [mailto:rswwal...@gmail.com] Sent: Thursday, April 22, 2010 6:34 AM On Apr 20, 2010, at 4:44 PM, Geoff Nordli geo...@grokworx.com wrote: If you combine the hypervisor and storage server and have students connect to the VMs via RDP or VNC or XDM then you will have

Re: [zfs-discuss] Snapshots and Data Loss

2010-04-21 Thread Geoff Nordli
From: matthew patton [mailto:patto...@yahoo.com] Sent: Tuesday, April 20, 2010 12:54 PM Geoff Nordli geo...@grokworx.com wrote: With our particular use case we are going to do a save state on their virtual machines, which is going to write 100-400 MB per VM via CIFS or NFS, then we take

Re: [zfs-discuss] Snapshots and Data Loss

2010-04-20 Thread Geoff Nordli
or they will be joining San Jose on the sidelines. With Ottawa, Montreal on the way out too, it could be a tough spring for Canadian hockey fans. On Apr 18, 2010, at 11:21 PM, Geoff Nordli wrote: Hi Richard. Can you explain in a little bit more detail how this process works? Let's say you are writing

Re: [zfs-discuss] Snapshots and Data Loss

2010-04-19 Thread Geoff Nordli
On Apr 13, 2010, at 5:22 AM, Tony MacDoodle wrote: I was wondering if any data was lost while doing a snapshot on a running system? ZFS will not lose data during a snapshot. Does it flush everything to disk or would some stuff be lost? Yes, all ZFS data will be committed to disk and then the