>> On 6 January 2011 22:26, Chris Forgeron <cforge...@acsi.ca> wrote:
>> > You know, these days I'm not as happy with SSDs for ZIL. I may blog about 
>> > some of the speed results I've been getting over the 6-12 months that 
>> > I've been running them with ZFS. I think people should be using hardware 
>> > RAM drives. You can get old Gigabyte i-RAM drives with 4 GB of memory for 
>> > the cost of a 60 GB SSD, and they will trounce the SSD for speed.
>> >

(I'm updating my previous comment. Sorry for the topic drift, but I think 
this is important to consider.)

I decided to run some tests comparing my Gigabyte i-RAM against an OCZ Vertex 2 
SSD. The two are very similar for random 4K-aligned write speed (around 17,000 
IOPS on both, with slightly better access latency on the i-RAM). But if you're 
talking 512-byte-aligned writes (which is what ZFS issues unless you've tweaked 
the ashift value), the i-RAM wins: the OCZ drops to ~6,000 IOPS for 512-byte 
random writes.
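
If you want to check what alignment a pool is actually using, something like 
this works (pool and device names here are just examples):

  # show the ashift for each vdev in pool "tank"
  # (ashift=9 means 512-byte writes, ashift=12 means 4K)
  zdb -C tank | grep ashift

  # the usual FreeBSD trick to force 4K alignment at pool creation:
  # layer a 4K-sector gnop(8) provider over the disk first
  gnop create -S 4096 /dev/ada0
  zpool create tank ada0.nop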

Please note that's on a used Vertex 2. A fresh Vertex 2 was giving me 28,000 
IOPS on 4K-aligned writes - faster than the i-RAM - but with more time it will 
fall below the i-RAM due to SSD fade.

I'm seriously considering trading in my ZIL SSDs for i-RAM devices. They cost 
about the same, if you can still find them, and they won't degrade the way an 
SSD does. The ZIL doesn't need much storage space: I think 12 GB (three 
i-RAMs) would do nicely, and would give me aggregate IOPS close to a DDRdrive 
for under $500.
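
For reference, wiring them in as log devices would look something like this 
(pool and device names are placeholders):

  # stripe three i-RAMs as ZIL (log) vdevs for pool "tank"
  zpool add tank log /dev/ada1 /dev/ada2 /dev/ada3

  # or, safer, mirror a pair so a single dead device can't lose the ZIL
  zpool add tank log mirror /dev/ada1 /dev/ada2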

I did some testing on SSD fade recently; here's the link to my blog post if 
anyone wants more detail:
http://christopher-technicalmusings.blogspot.com/2011/01/ssd-fade-its-real-and-why-you-may-not.html

I'm still using SSDs for my ZIL, but I think I'll be switching over to some 
sort of RAM device shortly. I wish the 3.5" i-RAM had proper SATA power 
connectors on the back so it could plug into my SAS backplane the way the 
OCZ 3.5" SSDs do. As it stands, I'd have to rig something, since my SAN head 
doesn't have any PCI slots for the other i-RAM form factor.


-----Original Message-----
From: owner-freebsd-sta...@freebsd.org 
[mailto:owner-freebsd-sta...@freebsd.org] On Behalf Of Markiyan Kushnir
Sent: Friday, January 07, 2011 8:10 AM
To: Jeremy Chadwick
Cc: Chris Forgeron; freebsd-stable@freebsd.org; Artem Belevich; Jean-Yves 
Avenard
Subject: Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011/1/7 Jeremy Chadwick <free...@jdc.parodius.com>:
> On Fri, Jan 07, 2011 at 12:29:17PM +1100, Jean-Yves Avenard wrote:
>> On 6 January 2011 22:26, Chris Forgeron <cforge...@acsi.ca> wrote:
>> > You know, these days I'm not as happy with SSDs for ZIL. I may blog about 
>> > some of the speed results I've been getting over the 6-12 months that 
>> > I've been running them with ZFS. I think people should be using hardware 
>> > RAM drives. You can get old Gigabyte i-RAM drives with 4 GB of memory for 
>> > the cost of a 60 GB SSD, and they will trounce the SSD for speed.
>> >
>> > I'd use your SSD as L2ARC (cache) instead.
>>
>> Where do you find those, though?
>>
>> I've looked and looked, and all the references I could find were to that
>> battery-backed RAM card Sun used in their test setup, but it's not
>> publicly available.
>
> DDRdrive:
>  http://www.ddrdrive.com/
>  http://www.engadget.com/2009/05/05/ddrdrives-ram-based-ssd-is-snappy-costly/
>
> ACard ANS-9010:
>  http://techreport.com/articles.x/16255
>
> GC-RAMDISK (i-RAM) products:
>  http://us.test.giga-byte.com/Products/Storage/Default.aspx
>
> Be aware that these products are absurdly expensive for what they offer,
> and in some cases they're bottlenecked by a SATA-150 interface.  I'm also
> not sure all of them offer BBU capability.
>
> In some respects you might be better off just buying more RAM for your
> system and making md(4) memory disks to use as L2ARC (cache).  I've
> suggested this in the past (specifically back when the ARC piece of ZFS
> on FreeBSD was causing havoc, I asked whether one could work around the
> complexity by using L2ARC on md(4) devices instead).
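>
> Something along those lines, say (md unit and size are only examples):
>
>   # create a 4 GB malloc-backed memory disk as /dev/md1
>   mdconfig -a -t malloc -s 4g -u 1
>   # add it to pool "tank" as an L2ARC (cache) device; L2ARC contents
>   # are disposable, so losing them at reboot is harmless
>   zpool add tank cache /dev/md1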
>

Once you've got the extra RAM, why not just reserve it directly for the ARC
(via vm.kmem_size[_max] and vfs.zfs.arc_max)?
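
Something like the following in /boot/loader.conf, say (the values are only
examples and need tuning to your system's RAM):

  vm.kmem_size="8G"
  vm.kmem_size_max="8G"
  vfs.zfs.arc_max="6G"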

Markiyan.

> I tried this, but couldn't get rc.d/mdconfig2 to do what I wanted at
> startup with regard to the above.
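>
> For reference, the rc.conf knob involved is roughly this (md unit and
> size are examples):
>
>   mdconfig_md1="-t malloc -s 4g"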
>
> --
> | Jeremy Chadwick                                   j...@parodius.com |
> | Parodius Networking                       http://www.parodius.com/ |
> | UNIX Systems Administrator                  Mountain View, CA, USA |
> | Making life hard for others since 1977.               PGP 4BD6C0CB |
>
_______________________________________________
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
