Re: [zfs-discuss] ZFS on solid state as disk rather than L2ARC...

2010-09-15 Thread Richard Jahnel
14x 256 GB MLC SSDs in a raidz2 array have worked fine for us. Performance seems to be mostly limited by the RAID controller operating in JBOD mode. Raidz2 allows sufficient redundancy to replace any MLC drives that develop issues, and when you have that many consumer-level SSDs, some will develop them.
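A minimal sketch of that layout, with a hypothetical pool name and device names, assuming a shell with brace expansion:
  zpool create -f tank raidz2 c3t{0..13}d0   # 14-wide raidz2 of SSDs exposed as JBOD disks by the controller
  zpool replace tank c3t5d0                  # swap in a replacement when one of the SSDs starts to fail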

Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-15 Thread Richard Jahnel
FWIW I'm making a significant bet that Nexenta plus Illumos will be the future for the space in which I operate. I had already begun the process of migrating my 134 boxes over to Nexenta before Oracle's cunning plans became known. This just reaffirms my decision.

Re: [zfs-discuss] ZFS and VMware

2010-08-12 Thread Richard Jahnel
We are using ZFS-backed fibre targets for ESXi 4.1 (and previously 4.0) and have had good performance with no issues. The fibre LUNs were formatted with VMFS by the ESXi boxes. SQLIO benchmarks from a guest system running on a fibre-attached ESXi host: File Size MB | Threads | Read/Write | Duration

Re: [zfs-discuss] zpool 'stuck' after failed zvol destory and reboot

2010-08-06 Thread Richard Jahnel
For ARC reasons if no other, I would max it out to the 8 GB regardless.

Re: [zfs-discuss] zpool 'stuck' after failed zvol destory and reboot

2010-08-05 Thread Richard Jahnel
Assuming there are no other volumes sharing slices of those disks, why import? Just overwrite the disks with a new pool using the -f flag during creation. I'm just saying, since you were destroying the volume anyway, I presume there is no data we are trying to preserve here.
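A minimal sketch of the forced re-creation suggested above, with a hypothetical pool name and device names:
  zpool create -f newpool raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0   # -f overwrites the old labels on disks that belonged to the stuck pool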

Re: [zfs-discuss] ZFS read performance terrible

2010-07-29 Thread Richard Jahnel
> Hi r2ch
> The operations column shows about 370 operations for read - per spindle (between 400-900 for writes)
> How should I be measuring iops?
It seems to me then that your spindles are going about as fast as they can and you're just moving small block sizes. There are lots of ways to test for

Re: [zfs-discuss] ZFS read performance terrible

2010-07-28 Thread Richard Jahnel
How many IOPS per spindle are you getting? A rule of thumb I use is to expect no more than 125 IOPS per spindle for regular HDDs. SSDs are a different story of course. :)
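One way to check per-spindle IOPS on Solaris (interval and pool name below are just examples):
  iostat -xn 5             # r/s and w/s columns show read/write ops per second for each device
  zpool iostat -v tank 5   # per-vdev ops for a hypothetical pool named tank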

Re: [zfs-discuss] zfs send to remote any ideas for a faster way than ssh?

2010-07-20 Thread Richard Jahnel
I'll try an export/import and scrub of the receiving pool and see what that does. I can't take the sending pool offline to try that stuff though.

Re: [zfs-discuss] zfs send to remote any ideas for a faster way than ssh?

2010-07-20 Thread Richard Jahnel
On the receiver:
/opt/csw/bin/mbuffer -m 1G -I Ostor-1:8000 | zfs recv -F e...@sunday
in @ 0.0 kB/s, out @ 0.0 kB/s, 43.7 GB total, buffer 100% full
cannot receive new filesystem stream: invalid backup stream
mbuffer: error: outputThread: error writing to at offset 0xaedf6a000: Broken pipe
sum

Re: [zfs-discuss] zfs send to remote any ideas for a faster way than ssh?

2010-07-19 Thread Richard Jahnel
FWIW I found netcat over at CSW. http://www.opencsw.org/packages/CSWnetcat/
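A hedged sketch of using netcat for the transfer (host name, port, and dataset names are assumptions, and the traditional -l -p listen syntax is assumed):
  On the receiver: nc -l -p 8023 | zfs recv -F tank/backup
  On the sender:   zfs send tank/fs@snap | nc receiver-host 8023
The stream is unencrypted, so this only makes sense on a trusted link.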

Re: [zfs-discuss] zfs send to remote any ideas for a faster way than ssh?

2010-07-19 Thread Richard Jahnel
Using SunOS X 5.11 snv_133 i86pc i386 i86pc. So the network thing that was fixed in 129 shouldn't be the issue.

Re: [zfs-discuss] zfs send to remote any ideas for a faster way than ssh?

2010-07-19 Thread Richard Jahnel
> I've used mbuffer to transfer hundreds of TB without a problem in mbuffer itself. You will get disconnected if the send or receive prematurely ends, though.
mbuffer itself very specifically ends with a broken pipe error: very quickly with -s set to 128, or after some time with -s set over 1024.

Re: [zfs-discuss] zfs send to remote any ideas for a faster way than ssh?

2010-07-19 Thread Richard Jahnel
> If this is across a trusted link, have a look at the HPN patches to SSH. There are three main benefits to these patches:
> - increased (and dynamic) buffers internal to SSH
> - adds a multi-threaded AES cipher
> - adds the NONE cipher for non-encrypted data transfers (authentication is still encrypted)

Re: [zfs-discuss] zfs send to remote any ideas for a faster way than ssh?

2010-07-19 Thread Richard Jahnel
> Any idea why? Does the zfs send or zfs receive bomb out part way through?
I have no idea why mbuffer fails. Changing the -s from 128 to 1536 made it take longer to occur and slowed it down by about 20%, but didn't resolve the issue. It just meant I might get as far as 2.5 GB before mbuffer bombed out.

[zfs-discuss] zfs send to remote any ideas for a faster way than ssh?

2010-07-19 Thread Richard Jahnel
I've tried ssh with blowfish and scp with arcfour; both are CPU-limited long before the 10G link is. I've also tried mbuffer, but I get broken pipe errors part way through the transfer. I'm open to ideas for faster ways to either zfs send directly or go through a compressed file of the zfs send output.
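For reference, a minimal sketch of the mbuffer approach discussed in this thread (host name, port, buffer sizes, and dataset names are assumptions):
  On the receiver: mbuffer -s 128k -m 1G -I 8000 | zfs recv -F tank/backup
  On the sender:   zfs send tank/fs@snap | mbuffer -s 128k -m 1G -O receiver-host:8000
Start the receiver first; like netcat, the stream is unencrypted, so keep it on a trusted link.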

Re: [zfs-discuss] does sharing an SSD as slog and l2arc reduces its life span?

2010-06-20 Thread Richard Jahnel
TBH write amplification was not considered, but since I've never heard of a write amp over 1.5, for my purposes the 256 GB drives still last well over the required 5-year life span. Again, it does hurt a lot when you're using smaller drives that have less space available for wear leveling. I suppose for

Re: [zfs-discuss] does sharing an SSD as slog and l2arc reduces its life span?

2010-06-19 Thread Richard Jahnel
Well, pretty much by definition any writes shorten the drive's life; the more writes, the shorter it is. That said, here is some interesting math that I did before I built my first MLC array. For a certain brand of Indilinx drive I calculated the life span in the following way. Based on the maxim
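The preview is cut off, but the usual back-of-the-envelope form of this calculation, with purely assumed numbers, looks like:
  life in days = (capacity x rated P/E cycles) / (daily writes x write amplification)
  e.g. (256 GB x 3000 cycles) / (100 GB/day x 1.5) = 5120 days, roughly 14 years
Any of those inputs (cycle rating, write rate, write amp) will vary per drive and workload.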

Re: [zfs-discuss] OCZ Devena line of enterprise SSD

2010-06-17 Thread Richard Jahnel
The EX specs page does list the supercap. The Pro specs page does not.

Re: [zfs-discuss] Crucial RealSSD C300 and cache flush?

2010-06-11 Thread Richard Jahnel
I'm interested in the answer to this as well.

Re: [zfs-discuss] ssd pool + ssd cache ?

2010-06-07 Thread Richard Jahnel
Do you lose the data if you lose that 9V feed at the same time the computer loses power?

Re: [zfs-discuss] ssd pool + ssd cache ?

2010-06-07 Thread Richard Jahnel
And a very nice device it is indeed. However, for my purposes it doesn't work, as it doesn't fit into a 2.5" slot and use SATA/SAS connections. Unfortunately all my PCI Express slots are in use: 2 RAID controllers, 1 Fibre HBA, and 1 10 Gb Ethernet card.

Re: [zfs-discuss] ssd pool + ssd cache ?

2010-06-07 Thread Richard Jahnel
I'll have to take your word on the Zeus drives. I don't see anything in their literature that explicitly states that cache flushes are obeyed or otherwise protected against power loss. As for OCZ, they cancelled the Vertex 2 Pro, which was to be the one with the supercap. For the moment they a

Re: [zfs-discuss] ssd pool + ssd cache ?

2010-06-06 Thread Richard Jahnel
FWIW, I use 4 Intel 32 GB SSDs as read cache for each pool of 10 Patriot Torx drives, which are running in a raidz2 configuration. No slogs, as I haven't seen a compliant SSD drive yet. I am pleased with the results. The bottleneck really turns out to be the 24-port RAID card they are plugged int
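Adding read-cache SSDs like that is done with cache vdevs; a minimal sketch, with hypothetical pool and device names:
  zpool add tank cache c4t0d0 c4t1d0 c4t2d0 c4t3d0   # four SSDs become L2ARC for pool 'tank'
  zpool iostat -v tank                               # the devices show up under a separate 'cache' section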

Re: [zfs-discuss] why both dedup and compression?

2010-05-05 Thread Richard Jahnel
Hmm... to clarify: every discussion or benchmark that I have seen always shows both off, compression only, or both on. Why never compression off and dedup on? After some further thought... perhaps it's because compression works at the byte level and dedup is at the block level. Perhaps I hav
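For what it's worth, the two features are set independently per dataset, so the "compression off, dedup on" case is easy to test; a sketch with a hypothetical dataset:
  zfs set compression=off tank/data
  zfs set dedup=on tank/data
  zfs get compressratio tank/data   # compression savings for the dataset
  zpool list tank                   # the DEDUP column shows the pool-wide dedup ratio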

[zfs-discuss] why both dedup and compression?

2010-05-05 Thread Richard Jahnel
I've googled this for a bit, but can't seem to find the answer. What does compression bring to the party that dedupe doesn't cover already? Thank you for your patience and answers.

Re: [zfs-discuss] Thoughts on drives for ZIL/L2ARC?

2010-04-27 Thread Richard Jahnel
For the L2ARC you want IOPS, pure and simple. For this I think the Intel SSDs are still king. The slog, however, has a gotcha: you want IOPS, but you also want something that doesn't say it's done writing until the write is safely nonvolatile. The Intel drives fail in this regard. So far I'm thin
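For reference, a sketch of attaching both device types to an existing pool (pool and device names are assumptions):
  zpool add tank log mirror c5t0d0 c5t1d0   # mirrored slog, since losing an unmirrored slog device can hurt
  zpool add tank cache c5t2d0               # L2ARC device; losing it only costs cached reads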

Re: [zfs-discuss] ZFS deduplication ratio on Server 2008 backup VHD files

2010-04-23 Thread Richard Jahnel
You might note, dedupe only dedupes data that is written after the flag is set. It does not retroactively dedupe already written data.
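One hedged way to get existing data deduped is to rewrite it after enabling the property, e.g. by replicating the dataset (names are hypothetical):
  zfs set dedup=on tank/backups
  zfs snapshot tank/backups@rewrite
  zfs send tank/backups@rewrite | zfs recv tank/backups-deduped   # the received copy is written with dedup on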

Re: [zfs-discuss] dedup screwing up snapshot deletion

2010-04-15 Thread Richard Jahnel
Thank you for the corrections. Also I forgot about using an SSD to assist. My bad. =)

Re: [zfs-discuss] dedup screwing up snapshot deletion

2010-04-14 Thread Richard Jahnel
This sounds like the known issue about the dedupe map not fitting in RAM. When blocks are freed, dedupe scans the whole map to ensure each block is not in use before releasing it. This takes a veeery long time if the map doesn't fit in RAM. If you can, try adding more RAM to the system.
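A rough way to gauge whether the dedup table fits in RAM (pool name assumed; sizes vary):
  zpool status -D tank   # prints a DDT histogram with entry counts and on-disk/in-core sizes
  zdb -DD tank           # more detailed dedup table statistics
Multiplying the in-core size per entry by the entry count gives an estimate of the RAM the DDT wants.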

Re: [zfs-discuss] Areca ARC-1680 on OpenSolaris 2009.06?

2010-04-10 Thread Richard Jahnel
Any hints as to where you read that? I'm working on another system design with LSI controllers and being able to use SAS expanders would be a big help.

Re: [zfs-discuss] Areca ARC-1680 on OpenSolaris 2009.06?

2010-04-10 Thread Richard Jahnel
Just as an FYI, not all drives like SAS expanders. As an example, we had a lot of trouble with Indilinx MLC-based SSDs. The systems had Adaptec 52445 controllers and Chenbro SAS expanders. In the end we had to remove the SAS expanders and put a second 52445 in each system to get them to work properly.

Re: [zfs-discuss] about backup and mirrored pools

2010-04-09 Thread Richard Jahnel
Mirrored sets do protect against disk failure, but most of the time you'll find proper backups are better, as most issues are more on the order of "oops" than "blowed up, sir". Perhaps mirrored sets with daily snapshots and a knowledge of how to mount snapshots as clones, so that you can pull a copy of what you need.
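A minimal sketch of the clone-and-copy approach mentioned above (dataset and snapshot names are hypothetical):
  zfs clone tank/home@daily-20100409 tank/restore   # writable clone of the snapshot
  cp /tank/restore/lost-file /tank/home/            # pull out the copy you need
  zfs destroy tank/restore                          # drop the clone when done
Snapshots are also reachable read-only under the dataset's .zfs/snapshot directory without cloning.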

Re: [zfs-discuss] zfs send hangs

2010-04-09 Thread Richard Jahnel
I had some issues with direct send/receives myself. In the end I elected to send to a gz file, then scp that file across and receive from the file on the other side. This has been working fine 3 times a day for about 6 months now. Two sets of systems doing this so far, a set running b111b

Re: [zfs-discuss] VMware client solaris 10, RAW physical disk and zfs snapshots problem - all created snapshots are equal to zero.

2010-03-30 Thread Richard Jahnel
What size is the gz file if you do an incremental send to a file? Something like: zfs send -i sn...@vol sn...@vol | gzip > /somplace/somefile.gz

Re: [zfs-discuss] b134 - Mirrored rpool won't boot unless both mirrors are present

2010-03-29 Thread Richard Jahnel
Exactly where in the menu.lst would I put the -r ? Thanks in advance.
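For reference, in a typical OpenSolaris menu.lst the boot flag goes on the kernel$ line, after the path to unix; a sketch (entry names and paths as commonly seen, not taken from this thread):
  title OpenSolaris
  findroot (pool_rpool,0,a)
  bootfs rpool/ROOT/opensolaris
  kernel$ /platform/i86pc/kernel/$ISADIR/unix -r -B $ZFS-BOOTFS
  module$ /platform/i86pc/$ISADIR/boot_archive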

Re: [zfs-discuss] ZFS RaidZ to RaidZ2

2010-03-26 Thread Richard Jahnel
zfs send s...@oldpool | zfs receive newpool
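For a whole-pool move to a new raidz2 pool, a recursive replication sketch (snapshot name is an assumption):
  zfs snapshot -r oldpool@migrate
  zfs send -R oldpool@migrate | zfs receive -F -d newpool   # -R carries datasets, snapshots and properties; -d recreates the dataset layout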

Re: [zfs-discuss] zfs send/receive - actual performance

2010-03-25 Thread Richard Jahnel
BTW, if you download the Solaris drivers for the 52445 from Adaptec, you can use JBOD instead of simple volumes.

Re: [zfs-discuss] RAIDZ2 configuration

2010-03-25 Thread Richard Jahnel
Well, the thing I like about raidz3 is that even with 1 drive out you have 3 copies of all the blocks. So if you encounter bit rot, not only can checksums be used to find the good data, you can still get a best-2-out-of-3 vote on which data is correct. As to performance, all I can say is test te

Re: [zfs-discuss] pool use from network poor performance

2010-03-25 Thread Richard Jahnel
Awesome, glad to hear that you got it figured out. :)

Re: [zfs-discuss] RAIDZ2 configuration

2010-03-25 Thread Richard Jahnel
I think I would do 3x raidz3 with 8 disks each and 0 hot spares. That way you have a better chance of resolving bit-rot issues that might become apparent during a rebuild.
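A sketch of that layout, with 24 hypothetical device names:
  zpool create tank \
    raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
    raidz3 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
    raidz3 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0
Each 8-disk raidz3 vdev gives 5 disks of usable space and survives three failures.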

Re: [zfs-discuss] ZFS send and receive corruption across a WAN link?

2010-03-23 Thread Richard Jahnel
Not quite brave enough to put dedup into production here. Concerned about the issues some folks have had when releasing large numbers of blocks in one go.

Re: [zfs-discuss] pool use from network poor performance

2010-03-23 Thread Richard Jahnel
What does prstat show? We had a lot of trouble here using iSCSI and zvols due to the CPU capping out at speeds less than 20 MB/sec. After simply switching to QLogic fibre HBAs and a file-backed LU, we went to 160 MB/sec on that same test platform.
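A quick way to see whether a single thread is pegging a core (interval and count are just examples):
  prstat -mL 5 5   # per-thread microstate view; a thread near 100% in USR or SYS points at a CPU bottleneck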

Re: [zfs-discuss] ZFS send and receive corruption across a WAN link?

2010-03-19 Thread Richard Jahnel
No, but I'm slightly paranoid that way. ;)

Re: [zfs-discuss] ZFS send and receive corruption across a WAN link?

2010-03-19 Thread Richard Jahnel
The way we do this here is:
zfs snapshot voln...@snapnow
# code to break on error and email not shown
zfs send -i voln...@snapbefore voln...@snapnow | pigz -p4 -1 > file
# code to break on error and email not shown
scp /dir/file u...@remote:/dir/file
# code to break on error and
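The preview cuts off before the receive side; presumably it looks something like this (file path and target dataset are assumptions):
pigz -d -c /dir/file | zfs recv -F voln...
# code to break on error and email not shown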

Re: [zfs-discuss] Painfully slow RAIDZ2 as fibre channel COMSTAR export

2010-02-22 Thread Richard Jahnel
You might try this and see what you get. Changing to a file-backed fibre target resulted in a 3x performance boost for me.
locad...@storage1:~# touch /bigpool/uberdisk/vol1
locad...@storage1:~# sbdadm create-lu -s 10700G /bigpool/uberdisk/vol1
locad...@storage1:~# stmfadm add-view 600144f0383cc50