On Sun, Jun 5, 2011 at 9:56 AM, Edward Ned Harvey <
opensolarisisdeadlongliveopensola...@nedharvey.com> wrote:

> > From: Richard Elling [mailto:richard.ell...@gmail.com]
> > Sent: Saturday, June 04, 2011 9:10 PM
> > > Instant Poll : Yes/No ?
> >
> > No.
> >
> > Methinks the MRU/MFU balance algorithm adjustment is more fruitful.
>
> Operating under the assumption that cache hits can be predicted, I agree
> with RE.  However, that's not always the case.  If you have a random
> workload with enough RAM to hold the whole DDT, but not enough RAM to
> hold your whole storage pool, then dedup hurts your performance
> dramatically.  Your only option is to set primarycache=metadata, and
> simply give up hope that you could *ever* have a userdata cache hit.
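>
> (For reference, that knob is the per-dataset primarycache property; the
> dataset name below is only an example:
>
>     zfs set primarycache=metadata tank/data
>
> Valid values are all, none, and metadata, with all being the default.)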
>
> The purpose for starting this thread is to suggest it might be worthwhile
> (particularly with dedup enabled) to at least have the *option* of always
> keeping the metadata in cache, while still allowing userdata to be cached
> too, up to the size of c_max.  Just in case you might ever see a userdata
> cache hit.  ;-)
>
> And as long as we're taking a moment to think outside the box, it's worth
> suggesting that this doesn't have to be a binary, all-or-nothing decision.
> One way to implement such an idea would be to assign a relative weight to
> metadata versus userdata.  Dan and Roch suggested a value of 128x seems
> appropriate.  I'm sure some people would suggest infinite metadata weight
> (which is equivalent to the aforementioned primarycache=metadata, plus the
> ability to cache userdata in the remaining unused ARC space).
>
>
I'd go with allowing both a weighted option and a forced option.  I agree,
though, that with primarycache=metadata the system should still attempt to
cache userdata if there is additional space remaining.
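
To make the weighted option concrete, here's a rough sketch of how such a
metadata weight might bias ARC eviction.  This is illustrative only, not
actual arc.c code; the names and structure are made up, and 128 is just the
ratio Dan and Roch suggested:

#include <stdint.h>
#include <stdbool.h>

#define ARC_META_WEIGHT 128     /* Dan and Roch's suggested ratio */

typedef struct arc_buf_info {
    uint64_t last_access;       /* timestamp of the most recent hit */
    bool     is_metadata;
} arc_buf_info_t;

/*
 * Effective age used to rank eviction candidates (larger = evict
 * sooner).  Userdata ages at full speed; metadata ages at
 * 1/ARC_META_WEIGHT of that, so an equally recent userdata buffer is
 * evicted roughly 128x sooner.  An "infinite" weight degenerates to
 * primarycache=metadata behavior, except userdata can still occupy
 * otherwise unused ARC space, which is exactly the option being asked
 * for here.
 */
static uint64_t
arc_evict_age(const arc_buf_info_t *buf, uint64_t now)
{
    uint64_t age = now - buf->last_access;

    return (buf->is_metadata ? age / ARC_META_WEIGHT : age);
}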

--Tim
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
