It's looking like a slab_chunk_max of 512k will sit better as a default
until stitching is done...

It doesn't create slab classes (of, say, 770k) that still take 1MB of
space because the slab mover needs consistent page sizes.

It also avoids the low-end efficiency hole between the 16k and ~80k slab
sizes that the 16k default had.

With -I 2m under the old code plus the page mover, anything above 1M uses
2M of space; with -I 20m that scales up to 10m -> 20m. With 512k chunks, a
15m item under -I 20m would actually take ~15m of space... and, as
mentioned above, the low end isn't damaged. This is with the default
growth factor of 1.25.
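
To make the arithmetic concrete, here's a rough sketch of the accounting
(plain Python, not memcached code; it ignores per-item header overhead and
the real slab class table, and assumes the old path rounds a large item up
to a full page):

    PAGE_20M = 20 * 1024 * 1024   # page size with -I 20m
    CHUNK_512K = 512 * 1024       # proposed slab_chunk_max default

    def whole_page_usage(item_bytes, page=PAGE_20M):
        # old behavior for a large item: rounded up to a full page
        return page

    def chunked_usage(item_bytes, chunk=CHUNK_512K):
        # chunked behavior: ceil(item / chunk) fixed-size chunks
        n_chunks = -(-item_bytes // chunk)   # ceiling division
        return n_chunks * chunk

    item = 15 * 1024 * 1024       # the 15m item from above
    print(whole_page_usage(item) / 2**20)  # 20.0 (MB)
    print(chunked_usage(item) / 2**20)     # 15.0 (MB)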

Anyone have any thoughts? I'm going to hold this change until Monday,
since there's a chance I could roll it out with the new crawler changes.

I've fixed the recommendation for slab_chunk_max in the release notes to
be 512k instead of 1m. Hopefully that tides things over.
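
For anyone copying settings out of this thread: with the revised
recommendation, the test command line from below would become something
like this (same -m/-I/-c values as used in this thread, with the explicit
-f workaround kept after the -o options; 524288 bytes = 512k):

`-C -m 10240 -I 20m -c 4096 -o modern,slab_chunk_max=524288 -f 1.25`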

Sorry, again. Some simple math would've avoided this situation; it's a
complicated change to get right on one's own.

On Sat, 13 Aug 2016, Dormando wrote:

> what about without the slab_chunk_max change? (just bare modern) is usage 
> better?
>
> could I get a stats snapshot from the one that filled?
>
> On Aug 13, 2016, at 9:35 AM, andr...@vimeo.com wrote:
>
> > The "STAT bytes" leveled out at 8.1GB for the 1.4.30 instance (with
> > -C -m 10240 -I 20m -c 4096 -o modern,slab_chunk_max=1048576 -f 1.25),
> > vs. 9.4GB for 1.4.25, and STAT curr_items is 120k vs. 136k. So it
> > still seems to be making worse use of memory, but it's far better
> > than any of the previous tries with .29/.30.
> >
> > On Friday, August 12, 2016 at 8:46:41 PM UTC-4, Dormando wrote:
> > > still running ok?
> > >
> > > On Aug 12, 2016, at 1:10 PM, dormando <dorm...@rydia.net> wrote:
> > > >
> > > > Ok. So I think I can narrow the change to explicitly set -f 1.08
> > > > if the slab_chunk_max is actually 16k... instead of just if
> > > > `-o modern` is on... I was careful about filling out a lot of the
> > > > new values after all of the parsing is done but missed some spots.
> > > >
> > > > Thanks for trying it out. I'll wait a few hours in case you find
> > > > anything else.. or I think of anything else.
> > > >
> > > > Much appreciated.
> > > >
> > > > On Fri, 12 Aug 2016, and...@vimeo.com wrote:
> > > > >
> > > > > That one seems to work okay ― again, I've gotten past 2GB and
> > > > > the hit-rate is within a few points of where it belongs. I
> > > > > don't have numbers for the same situation on .29 but IIRC it
> > > > > was very bad. So I guess .30 is an improvement there.
> > > > >
> > > > > On Friday, August 12, 2016 at 3:34:00 PM UTC-4, Dormando wrote:
> > > > > > Also, just for completeness:
> > > > > >
> > > > > > Does:
> > > > > >
> > > > > > `-C -m 10240 -I 20m -c 4096 -o modern`
> > > > > >
> > > > > > also fail under .30? (without the slab_chunk_max change)
> > > > > >
> > > > > > On Fri, 12 Aug 2016, dormando wrote:
> > > > > > >
> > > > > > > FML.
> > > > > > >
> > > > > > > Please let me know how it goes. I'm going to take a hard
> > > > > > > look at this and see about another bugfix release...
> > > > > > > there're a couple things I forgot from .30 anyway.
> > > > > > >
> > > > > > > Your information will be very helpful though. Thanks again
> > > > > > > for testing it. All of my testing recently was with
> > > > > > > explicit configuration options, so I didn't notice the
> > > > > > > glitch with -o modern :(
> > > > > > >
> > > > > > > On Fri, 12 Aug 2016, and...@vimeo.com wrote:
> > > > > > > >
> > > > > > > > It will take a while to fill up entirely, but I passed
> > > > > > > > 2GB with 0 evictions, so it looks like that probably
> > > > > > > > does the job.
> > > > > > > >
> > > > > > > > On Friday, August 12, 2016 at 3:02:47 PM UTC-4, Dormando
> > > > > > > > wrote:
> > > > > > > > > Ahhhh crap, I think I see it.
> > > > > > > > >
> > > > > > > > > Can you add: `-f 1.25` *after* the -o stuff?
> > > > > > > > >
> > > > > > > > > like this:
> > > > > > > > >
> > > > > > > > > `-C -m 10240 -I 20m -c 4096 -o modern,slab_chunk_max=1048576 -f 1.25`
> > > > > > > > >
> > > > > > > > > And test that out, please? I might have to back out
> > > > > > > > > some over-aggressive switches... and I keep thinking
> > > > > > > > > of making this particular problem (which I'll talk
> > > > > > > > > about if confirmed) a startup error :(
> > > > > > > > >
> > > > > > > > > On Fri, 12 Aug 2016, and...@vimeo.com wrote:
> > > > > > > > > >
> > > > > > > > > > Here you go.
> > > > > > > > > >
> > > > > > > > > > Yes, 1.4.25 is running with `-C -m 10240 -I 20m -c 4096 -o maxconns_fast,hash_algorithm=murmur3,lru_maintainer,lru_crawler,slab_reassign,slab_automove`.
> > > > > > > > > >
> > > > > > > > > > 1.4.30 is running with `-C -m 10240 -I 20m -c 4096 -o modern,slab_chunk_max=1048576`.
> > > > > > > > > >
> > > > > > > > > > On Friday, August 12, 2016 at 2:32:59 PM UTC-4,
> > > > > > > > > > Dormando wrote:
> > > > > > > > > > > Hey,
> > > > > > > > > > >
> > > > > > > > > > > any chance I could see `stats slabs` output as
> > > > > > > > > > > well? a lot of the data's in there. Need all
> > > > > > > > > > > three: stats, stats items, stats slabs
> > > > > > > > > > >
> > > > > > > > > > > Also, did you try 1.4.30 with
> > > > > > > > > > > `-o slab_chunk_max=1048576` as well?
> > > > > > > > > > >
> > > > > > > > > > > thanks
> > > > > > > > > > >
> > > > > > > > > > > On Fri, 12 Aug 2016, and...@vimeo.com wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > Thanks! That's an improvement. It's still worse
> > > > > > > > > > > > than older versions, but it's better than
> > > > > > > > > > > > 1.4.29. This time it made it up to about
> > > > > > > > > > > > 1.75GB/10GB used before it started evicting; I
> > > > > > > > > > > > left it running for another 8 hours and it got
> > > > > > > > > > > > up to 2GB, but no higher.
> > > > > > > > > > > >
> > > > > > > > > > > > Here's some stats output from the old and new
> > > > > > > > > > > > versions, in case you can puzzle anything out
> > > > > > > > > > > > of it.
> > > > > > > > > > > >
> > > > > > > > > > > > Thanks,
> > > > > > > > > > > >
> > > > > > > > > > > > Andrew
> > > > > > > > > > > >
> > > > > > > > > > > > On Thursday, August 11, 2016 at 6:14:26 PM
> > > > > > > > > > > > UTC-4, Dormando wrote:
> > > > > > > > > > > > > Hi,
> > > > > > > > > > > > >
> > > > > > > > > > > > > https://github.com/memcached/memcached/wiki/ReleaseNotes1430
> > > > > > > > > > > > >
> > > > > > > > > > > > > Can you please try this? And let me know how
> > > > > > > > > > > > > it goes either way :)
> > > > > > > > > > > > >
> > > > > > > > > > > > > On Wed, 10 Aug 2016, dormando wrote:
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Hey,
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Thanks and sorry about that. I just found a
> > > > > > > > > > > > > > bug this week where the new code is
> > > > > > > > > > > > > > over-allocating (though 30MB out of 10G
> > > > > > > > > > > > > > limit seems odd?)
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > ie: with -I 2m, it would allocate 2
> > > > > > > > > > > > > > megabytes of memory and then only use up to
> > > > > > > > > > > > > > 1mb of it. A one-line fix for a missed
> > > > > > > > > > > > > > variable conversion.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Will likely do a bugfix release later
> > > > > > > > > > > > > > tonight with that and a few other things.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Will take a look at your data in hopes it's
> > > > > > > > > > > > > > the same issue at least, thanks!
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > On Wed, 10 Aug 2016, and...@vimeo.com wrote:
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > I decided to give this a try on a
> > > > > > > > > > > > > > > production setup that has a very bimodal
> > > > > > > > > > > > > > > size distribution (about a 50/50 split of
> > > > > > > > > > > > > > > 10k-100k values and 1M-10M values) and
> > > > > > > > > > > > > > > lots of writes, where we've been running
> > > > > > > > > > > > > > > with "-I 10m -m 10240" for a while. It
> > > > > > > > > > > > > > > didn't go so great. Almost immediately
> > > > > > > > > > > > > > > there were lots and lots of evictions,
> > > > > > > > > > > > > > > even though the used memory was only
> > > > > > > > > > > > > > > about 30MB of the 10GB limit, and the
> > > > > > > > > > > > > > > number of active keys grew very slowly.
> > > > > > > > > > > > > > > "-o slab_chunk_max=1048576" may have had
> > > > > > > > > > > > > > > some effect, but it didn't really seem
> > > > > > > > > > > > > > > like it.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Setting "slabs automove 2" (usually 1)
> > > > > > > > > > > > > > > reduced evictions about 50% but it still
> > > > > > > > > > > > > > > wasn't enough to get acceptable
> > > > > > > > > > > > > > > performance.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > I've rolled back to 1.4.25 for the
> > > > > > > > > > > > > > > moment, but I'm attaching a log with
> > > > > > > > > > > > > > > "stats" and "stats items" from yesterday.
> > > > > > > > > > > > > > > "stats sizes" wasn't available due to -C,
> > > > > > > > > > > > > > > and the log isn't from as long after
> > > > > > > > > > > > > > > startup as I would like, but it's what I
> > > > > > > > > > > > > > > got, sorry.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Let me know if there's anything else I
> > > > > > > > > > > > > > > can do to help.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Thanks,
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Andrew
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > On Wednesday, July 13, 2016 at 8:08:49 PM
> > > > > > > > > > > > > > > UTC-4, Dormando wrote:
> > > > > > > > > > > > > > > > https://github.com/memcached/memcached/wiki/ReleaseNotes1429
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > enjoy.

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
