G'Day,

On Sat, Feb 13, 2010 at 09:02:58AM +1100, Daniel Carosone wrote:
> On Fri, Feb 12, 2010 at 11:26:33AM -0800, Richard Elling wrote:
> > Mathing around a bit, for a 300 GB L2ARC (apologies for the tab separation):
> >     size (GB)               300
> >     size (sectors)          585937500
> >     labels (sectors)        9232
> >     available sectors       585928268
> >     bytes/L2ARC header      200
> >
> >     recordsize   recordsize   L2ARC capacity   Header size
> >     (sectors)    (kBytes)     (records)        (MBytes)
> >         1          0.5         585928268        111,760
> >         2          1           292964134         55,880
> >         4          2           146482067         27,940
> >         8          4            73241033         13,970
> >        16          8            36620516          6,980
> >        32         16            18310258          3,490
> >        64         32             9155129          1,750
> >       128         64             4577564            870
> >       256        128             2288782            440
> > 
> > So, depending on the data, you need somewhere between 440 MBytes and
> > 111 GBytes to hold the L2ARC headers.  For a rule of thumb, somewhere
> > between 0.15% and 40% of the total used size.  Ok, that rule really
> > isn't very useful...
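
For anyone who wants to regenerate those numbers, here is a minimal Python
sketch (an illustration only, assuming 512-byte sectors and the 200-byte
header size quoted above):

    # Header overhead for a 300 GB L2ARC device, per the figures above.
    avail = 300 * 10**9 // 512 - 9232     # available sectors after the labels
    for rs in (1, 2, 4, 8, 16, 32, 64, 128, 256):    # recordsize in sectors
        records = avail // rs             # records the device can hold
        hdr_mb = records * 200 / 2.0**20  # one 200-byte header each, in MBytes
        print("%5.1f KB records: %9d records, %9.0f MB of headers"
              % (rs * 0.5, records, hdr_mb))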
> 
> All that precision up-front for such a broad conclusion...  bummer :)
> 
> I'm interested in a better rule of thumb, for rough planning
> purposes.  As previously noted, I'm especially interested in the

I use 2.5% for an 8 Kbyte record size; i.e., for every 1 Gbyte of L2ARC, about
25 Mbytes of ARC is consumed (a quick check follows the list below).  I don't
recommend other record sizes since:

- the L2ARC is currently intended for random I/O workloads.  Such workloads
  usually have small record sizes, such as 8 Kbytes.  Larger record sizes (such
  as the 128 Kbyte default) are better for streaming workloads.  The L2ARC
  doesn't currently touch streaming workloads (l2arc_noprefetch=1).

- The best performance from SSDs is with smaller I/O sizes, not larger.  I get
  about 3200 x 8 Kbyte read I/O from my current L2ARC devices, yet only about
  750 x 128 Kbyte read I/O from the same devices.

- record sizes smaller than 4 Kbytes lead to a lot of ARC headers and worse
  streaming performance.  I wouldn't tune it smaller unless I had to for
  some reason.

So, from the table above I'd only really consider the 4 to 32 Kbyte size range.
4 Kbytes if you really wanted a smaller record size, and 32 Kbytes if you had
limited DRAM you wanted to conserve (at the trade-off of SSD performance).
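
As a quick sanity check of that 2.5% figure, using the 200-byte header size
from the table above:

    200 bytes of header / 8192 bytes of record  ~= 2.44%
    1 Gbyte of L2ARC / 8 Kbytes                  = 131072 records
    131072 records x 200 bytes                   = 25 Mbytes of ARC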

Brendan


> combination with dedup, where DDT entries need to be cached.  What's
> the recordsize for L2ARC-of-on-disk-DDT, and how does that bias the
> overhead %age above?
> 
> I'm also interested in a more precise answer to a different question,
> later on.  Let's say I already have an L2ARC, running and warm.  How do
> I tell how much is being used?  Presumably, if it's not full, RAM 
> to manage it is the constraint - how can I confirm that and how can I
> tell how much RAM is currently used?
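
One place to look is the arcstats kstats; a rough sketch, assuming the
l2_size and l2_hdr_size fields exported by the ARC and the kstat(1M) CLI:

    # Read how much data the L2ARC holds and how much ARC DRAM its headers use.
    import subprocess

    def arcstat(field):
        # "kstat -p zfs:0:arcstats:FIELD" prints "zfs:0:arcstats:FIELD<tab>value"
        out = subprocess.check_output(["kstat", "-p", "zfs:0:arcstats:" + field])
        return int(out.decode().split()[-1])

    l2_size = arcstat("l2_size")      # bytes cached on the L2ARC devices
    l2_hdrs = arcstat("l2_hdr_size")  # bytes of ARC DRAM holding L2ARC headers
    print("L2ARC: %d MB cached, %d MB of headers (%.2f%%)"
          % (l2_size // 2**20, l2_hdrs // 2**20,
             100.0 * l2_hdrs / max(l2_size, 1)))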
> 
> If I can observe these figures, I can tell if I'm wasting ssd space
> that can't be used.  Either I can reallocate that space or know that
> adding RAM will have an even bigger benefit (increasing both primary
> and secondary cache sizes).  Maybe I can even decide that L2ARC is not
> worth it for this box (especially if it can't fit any more RAM).
> 
> Finally, how smart is L2ARC at optimising this usage? If it's under
> memory pressure, does it prefer to throw out smaller records in favour
> of larger more efficient ones? 
> 
> My current rule of thumb for all this, absent better information, is
> that you should just have gobs of RAM (no surprise there) but that if
> you can't, then dedup seems to be most worthwhile when the pool itself
> is on ssd, no l2arc. Say, a laptop.  Here, you care most about saving
> space and the IO overhead costs least.
> 
> We need some thumbs in between these extremes.  :-(
> 
> --
> Dan.


-- 
Brendan Gregg, Fishworks                       http://blogs.sun.com/brendan
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
