Re: [zfs-discuss] slog/L2ARC on a hard drive and not SSD?

2010-07-22 Thread Richard Elling
On Jul 21, 2010, at 7:56 AM, Hernan F wrote:

 Hi,
 Out of pure curiosity, I was wondering, what would happen if one tries to use 
 a regular 7200RPM (or 10K) drive as slog or L2ARC (or both)?

My rule of thumb is that if the latency of the slog (write latency) or L2ARC
(random read) is 10x better than that of the pool devices, then go for it.
There are some cases where a fast HDD can have lower read or write latency
than a slow HDD (or Marty's USB disk :-)
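
For instance (pool and device names here are made up), you could compare
per-device service times under your real workload and then attach the device:

    # asvc_t is the average service time in ms; if the candidate device's
    # asvc_t is roughly 10x lower than the pool disks', it may be worth
    # dedicating
    iostat -xn 5

    # e.g. attach it as a dedicated log to a pool named "tank"
    zpool add tank log c2t0d0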
 -- richard

-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422
Enterprise class storage for everyone
www.nexenta.com





Re: [zfs-discuss] slog/L2ARC on a hard drive and not SSD?

2010-07-21 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Hernan F
 
 Hi,
 Out of pure curiosity, I was wondering, what would happen if one tries
 to use a regular 7200RPM (or 10K) drive as slog or L2ARC (or both)?

I tested it once, for the same reason as you: curiosity.  I never said
anything about it because it wasn't interesting.  In some cases it speeds
things up a little; in other cases it slows things down.  It basically
depends on the characteristics of your workload, but overall it was a net
zero gain.  Or perhaps a net loss.

Generally speaking, people won't sacrifice the hardware or disk slots
unless there's a clear gain.  So that's the conclusion, obvious though it
may be: there is no clear gain, so just don't do it.

Offhand, I think the only situation where there was a gain was this: the
primary pool (6 disks) is being absolutely hammered by async operations,
and then you benchmark some sync writes simultaneously, with and without
the extra disk as a dedicated log.  Since the dedicated log is otherwise
idle, a sync write can land on it immediately and then become just another
async write going along with the crowd.  Without the dedicated log, the
sync write competes with the async operations for access to the main pool,
slowing down both itself and the async work.  With the dedicated log, you
have 6 disks active with async operations and a 7th disk that sits idle
during purely async load ... but when you mix sync and async operations,
you're suddenly able to leverage all 7 disks and see a net gain, because
your primary pool is not dorking around with tiny little sync operations.
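
If anyone wants to reproduce that, something along these lines should do it
(pool and device names are examples only):

    # give the pool a dedicated log on the spare 7th spindle
    zpool add tank log c1t6d0

    # then run mixed sync+async load and watch per-vdev activity;
    # the log device should soak up the sync writes while the six
    # main disks stay busy with the async stream
    zpool iostat -v tank 5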

Like I said: unimpressive, not interesting.  No general-purpose benefit,
and generally no gain.  Don't do it.

Just buy the SSD instead.



Re: [zfs-discuss] slog/L2ARC on a hard drive and not SSD?

2010-07-21 Thread Marty Scholes
 Hi,
 Out of pure curiosity, I was wondering, what would
 happen if one tries to use a regular 7200RPM (or 10K)
 drive as slog or L2ARC (or both)?

I have done both with success.

At one point my backup pool was a collection of USB-attached drives (please
keep the laughter down) with dedup=verify.  Solaris' slow USB performance,
coupled with slow drives and dedup reads, gave abysmal write speeds, so much
so that at times it had trouble keeping the snapshots synchronized.  To help
it along, I took an unused small, fast SCSI disk and made it the L2ARC,
which significantly improved write performance on the pool.
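
For reference, adding it was a one-liner, something like this (pool and
device names changed):

    # add the spare SCSI disk as a cache (L2ARC) device; with
    # dedup=verify, caching the dedup-table reads is what makes
    # the writes faster
    zpool add backup cache c3t0d0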

During testing of some iSCSI applications, I ran into a scenario where a
client was performing many small synchronous writes to a zvol in a wide
RAIDZ3 stripe.  Since synchronous writes can double the write activity (once
for the ZIL and once for the actual pool), actual throughput from the client
was below 2MB/s, even though the pool could sustain 200MB/s on sequential
writes.  As above, I added a mirrored slog made of two small, fast SCSI
drives.  While I expected the throughput to double, it actually went up by a
factor of 4, to 8MB/s.  Even though 8MB/s wasn't exactly mind-blowing, it
was close to saturating the client's 100Mb Ethernet link, so it was OK.

I think the reason the slog improved things so much is that the slog disks
were not bothered with other I/O and were doing very little seeking.
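
In case it helps anyone, the slog was added roughly like so (device names
are placeholders), and you can check that the log disks aren't seeking much
by watching their service times:

    # mirrored slog from the two small, fast SCSI drives
    zpool add tank log mirror c4t0d0 c4t1d0

    # the log disks' asvc_t should stay low and steady, since the
    # zil writes to them are mostly sequential
    iostat -xn 5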


Re: [zfs-discuss] slog/L2ARC on a hard drive and not SSD?

2010-07-21 Thread Garrett D'Amore
On Wed, 2010-07-21 at 07:56 -0700, Hernan F wrote:
 Hi,
 Out of pure curiosity, I was wondering, what would happen if one tries to use 
 a regular 7200RPM (or 10K) drive as slog or L2ARC (or both)?
 
 I know these are designed with SSDs in mind, and I know it's possible to use 
 anything you want as cache. So would ZFS benefit from it? Would it be the 
 same? Would it slow down?
 
 I guess it would slow things down, because it would be trying to read/write 
 from a single spindle instead of a multidisk array, right? I haven't found 
 any articles discussing this, only ones talking about SSD-based slogs/caches.
 
 Thanks,
 Hernan


I think yes, it would probably slow things down, at least for typical
usage.  However, there is a small chance it might improve things by
offloading that work from the main spindle(s) to separate ones.  But I
think you'd be better off expanding a stripe than using a disk in
this way.
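
For example (hypothetical device names), rather than

    zpool add tank log c5t0d0

you would likely get more out of widening the stripe, which helps all I/O
rather than just the synchronous writes:

    zpool add tank mirror c5t0d0 c5t1d0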

- Garrett




Re: [zfs-discuss] slog/L2ARC on a hard drive and not SSD?

2010-07-21 Thread Scott Meilicke
Another data point: I used three 15K disks striped using my RAID controller
as a slog for the ZIL, and performance went down.  I had three RAIDZ SATA
vdevs holding the data, and my load was VMs, i.e. a fair amount of small,
random I/O (60% random, 50% write, ~16k in size).

Scott