Quoting Jeremy Chadwick <free...@jdc.parodius.com> (from Mon, 15 Feb 2010 01:07:56 -0800):

> On Mon, Feb 15, 2010 at 10:49:47AM +0200, Dan Naumov wrote:
> > > I had a feeling someone would bring up L2ARC/cache devices.  This gives
> > > me the opportunity to ask something that's been on my mind for quite
> > > some time now:
> > >
> > > Aside from the capacity difference (e.g. 40GB vs. 1GB), is there a
> > > benefit to adding a dedicated RAM disk (e.g. md(4)) to a pool for
> > > L2ARC/cache?  The ZFS documentation explicitly states that cache
> > > device content is considered volatile.

> > Using a ramdisk as an L2ARC vdev doesn't make any sense at all. If you
> > have RAM to spare, it should be used by regular ARC.

> ...except that it's already been proven on FreeBSD that the ARC getting
> out of control can cause kernel panics[1], horrible performance until

There are other ways (not related to ZFS) to shoot yourself in the foot too. I'm tempted to say that this is
 a) a documentation bug
and
 b) a lack of sanity checking of the values... anyone out there with a good algorithm for something like this?

Normally you do some testing with the values you use, so once you have resolved the issues, the system should be stable.
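
For reference, such values normally go into /boot/loader.conf. A minimal sketch (the numbers are only placeholders, not a recommendation; adapt them to the amount of RAM in the machine):

  # /boot/loader.conf -- example values only, adjust to your RAM
  vm.kmem_size="1536M"
  vm.kmem_size_max="1536M"
  vfs.zfs.arc_max="512M"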

> ZFS has had its active/inactive lists flushed[2], and brings into

Someone needs to sit down and play a little bit with ways to tell the ARC that there is free memory. The mail you reference already suggests that the inactive/cached lists should maybe be taken into account too (I didn't have a look at this part of the ZFS code).
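
If someone wants to experiment with this: the counters in question are visible from userland, e.g. (just a way to look at them, not a patch):

  # what the VM considers free/inactive/cached, in pages
  sysctl vm.stats.vm.v_free_count vm.stats.vm.v_inactive_count vm.stats.vm.v_cache_count
  # page size, to convert the counts into bytes
  sysctl hw.pagesize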

> question how proper tuning is to be established and what the effects are
> on the rest of the system[3].  There are still reports of people

That's what I was talking about regarding b) above. If you specify an arc_max which is too big (arc_max > kmem_size - SOME_SAFE_VALUE), there should be a message from the kernel and the value should be adjusted to a safe amount.
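
To illustrate the kind of check I mean (a userland sketch only; the real check would have to happen in the kernel when the tunable is set, and the 512M margin is an arbitrary assumption):

  #!/bin/sh
  # complain if vfs.zfs.arc_max is set too close to (or above) vm.kmem_size
  kmem=$(sysctl -n vm.kmem_size)
  arc_max=$(sysctl -n vfs.zfs.arc_max)
  margin=$((512 * 1024 * 1024))          # arbitrary "safe" reserve
  if [ "$arc_max" -gt $((kmem - margin)) ]; then
      echo "arc_max ($arc_max) > kmem_size - margin ($((kmem - margin)))"
      echo "consider lowering vfs.zfs.arc_max"
  fi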

Until the problems are fixed, an MD for L2ARC may be a viable alternative (if you have enough memory to spare for this). Feel free to provide benchmark numbers, but in general I see this just as a workaround for the current issues.
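
For completeness, the workaround itself is only two commands (the swap-backed MD and the pool name "tank" are just examples; the cache vdev has to be re-added after every reboot, as the MD does not survive one):

  # create a 1 GB swap-backed memory disk and add it as a cache (L2ARC) vdev
  mdconfig -a -t swap -s 1g              # prints the device name, e.g. md0
  zpool add tank cache md0
  # to get rid of it again
  zpool remove tank md0
  mdconfig -d -u 0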

> disabling ZIL "for stability reasons" as well.

For the ZIL you definitely do not want to have an MD. If you do not specify a log vdev for the pool, the ZIL is written somewhere on the disks of the pool. When the data hits the ZIL, it really has to be on non-volatile storage. If you lose the ZIL, you lose data.
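
So if you want a separate ZIL, use a real (and ideally mirrored) device, e.g. (device names are just examples, and as far as I know the pool versions we currently have do not allow removing a log vdev again, so think twice before adding one):

  # add a mirrored log (ZIL) vdev on non-volatile devices
  # NOTE: check whether your pool version supports removing log vdevs first
  zpool add tank log mirror ada2 ada3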

The "Internals" section of Brendan Gregg's blog[4] outlines where the
L2ARC sits in the scheme of things, or if the ARC could essentially
be disabled by setting the minimum size to something very small (a few
megabytes) and instead using L2ARC which is manageable.

At least in 7-stable, 8-stable and 9-current, arc_max now really corresponds to a maximum value, so it is more a matter of providing a safe arc_max than a minimal one. And no matter how you construct the L2ARC, ARC access will be faster than L2ARC access.
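
If someone wants to try the "tiny ARC + L2ARC" experiment anyway, these are the knobs (example values, untested):

  # /boot/loader.conf -- shrink the ARC (example values, untested)
  vfs.zfs.arc_min="64M"
  vfs.zfs.arc_max="128M"

  # at runtime, watch what the ARC actually uses (bytes)
  sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max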

> [1]: http://lists.freebsd.org/pipermail/freebsd-questions/2010-January/211009.html
> [2]: http://lists.freebsd.org/pipermail/freebsd-stable/2010-January/053949.html
> [3]: http://lists.freebsd.org/pipermail/freebsd-stable/2010-February/055073.html
> [4]: http://blogs.sun.com/brendan/entry/test

Bye,
Alexander.

--
BOFH excuse #439:

Hot Java has gone cold

http://www.Leidinger.net    Alexander @ Leidinger.net: PGP ID = B0063FE7
http://www.FreeBSD.org       netchild @ FreeBSD.org  : PGP ID = 72077137
