On Mon, Apr 26, 2010 at 8:01 AM, Travis Tabbal <tra...@tabbal.net> wrote:
> At the end of my OP I mentioned that I was interested in L2ARC for dedupe. It 
> sounds like the DDT can get bigger than RAM and slow things to a crawl. Not 
> that I expect a lot from using an HDD for that, but I thought it might help. 
> I'd like to get a nice SSD or two for this stuff, but that's not in the 
> budget right now.

A large DDT means a lot of small random reads, which spinning disks
handle poorly. Plus, 10k disks are loud and hot.
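
If you want to see how big the table actually is before spending money,
zdb will tell you (the pool name "tank" below is just a placeholder):

    # DDT entry counts plus per-entry size, on disk and in core
    zdb -DD tank

    # simulate dedup on a pool that isn't deduped yet
    zdb -S tank

Rough math only, but at a few hundred bytes per entry in core, each TB
of unique 128K blocks is on the order of 8 million entries, so the
table outgrows a small amount of RAM pretty quickly.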

You can get a 30-40GB SSD for about $100 these days. It doesn't matter
whether a disk used for the L2ARC honors cache flushes, etc. Regardless
of whether the host is shut down cleanly or not, the L2ARC starts cold.
It doesn't matter if the data on it is corrupted either, because a
failed checksum just sends the read back to the data disks.
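
Adding one as cache is a one-liner (device name made up, use your own):

    zpool add tank cache c2t1d0

And cache devices can be pulled back out with zpool remove if you
change your mind later.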

As for using 10k disks for a slog, it depends on what kind of drives
are in your pool and how it's laid out. If you have a wide raidz stripe
on slow disks, just about anything will help. If you've got striped
mirrors on fast disks, it probably won't help much, especially for what
sounds like a server with a small number of clients.
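
A quick look at the layout will tell you which case you're in:

    zpool status tank

One wide raidz/raidz2 vdev is the "anything helps" case; a bunch of
mirror vdevs on fast disks is the one where a slog probably isn't
worth it.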

I've got an OCZ Vertex 30GB drive with a 1GB slice used for the slog
and the rest used for the L2ARC, which for ~$100 has been a nice boost
to NFS writes.
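
In case it's useful, that's just two slices on the one SSD added
separately (slice names below are whatever your partitioning produced,
not literal):

    zpool add tank log c3t0d0s0      # ~1GB slice for the slog
    zpool add tank cache c3t0d0s1    # the rest for L2ARC

The slog slice can stay small since the ZIL only ever holds a few
seconds' worth of uncommitted writes.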

-B

-- 
Brandon High : bh...@freaks.com