Ok. So I think I can narrow the change to explicitly set -f 1.08 if the
slab_chunk_max is actually 16k... instead of just when `-o modern` is on.
I was careful about filling out a lot of the new values after all of the
parsing is done, but I missed some spots.
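
Roughly this shape, run after all of the parsing is done (just a sketch --
the struct and field names below are illustrative, not the actual source, and
"factor_was_set" stands in for however the real code tracks an explicit -f):

    struct settings_sketch {          /* stand-in for the real settings struct */
        int slab_chunk_size_max;
        double factor;
    };

    /* Post-parse fixup: key off the effective chunk size instead of the
     * "modern" alias, so an explicit slab_chunk_max=16384 gets the same
     * default growth factor, and an explicit -f still wins. */
    static void fixup_factor_default(struct settings_sketch *s, int factor_was_set) {
        if (s->slab_chunk_size_max == 16 * 1024 && !factor_was_set) {
            s->factor = 1.08;
        }
    }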

Thanks for trying it out. I'll wait a few hours in case you find anything
else... or I think of anything else.

Much appreciated.

On Fri, 12 Aug 2016, andr...@vimeo.com wrote:

> That one seems to work okay — again, I've gotten past 2GB and the hit rate is
> within a few points of where it belongs. I don't have numbers for the same
> situation on .29, but IIRC it was very bad. So I guess .30 is an improvement there.
>
> On Friday, August 12, 2016 at 3:34:00 PM UTC-4, Dormando wrote:
>       Also, just for completeness:
>
>       Does:
>
>       `-C -m 10240 -I 20m -c 4096 -o modern`
>
>       also fail under .30? (without the slab_chunk_max change)
>
>       On Fri, 12 Aug 2016, dormando wrote:
>
>       > FML.
>       >
>       > Please let me know how it goes. I'm going to take a hard look at this and
>       > see about another bugfix release... there're a couple things I forgot from
>       > .30 anyway.
>       >
>       > Your information will be very helpful though. Thanks again for testing it.
>       > All of my testing recently was with explicit configuration options, so I
>       > didn't notice the glitch with -o modern :(
>       >
>       > On Fri, 12 Aug 2016, and...@vimeo.com wrote:
>       >
>       > > It will take a while to fill up entirely, but I passed 2GB with 0
>       > > evictions, so it looks like that probably does the job.
>       > >
>       > > On Friday, August 12, 2016 at 3:02:47 PM UTC-4, Dormando wrote:
>       > >       Ahhhh crap, I think I see it.
>       > >
>       > >       Can you add: `-f 1.25` *after* the -o stuff?
>       > >
>       > >       like this:
>       > >
>       > >       `-C -m 10240 -I 20m -c 4096 -o modern,slab_chunk_max=1048576 -f 1.25`
>       > >
>       > >       And test that out, please? I might have to back out some over-aggressive
>       > >       switches... and I keep thinking of making this particular problem (which
>       > >       I'll talk about if confirmed) a startup error :(
>       > >
>       > >       On Fri, 12 Aug 2016, and...@vimeo.com wrote:
>       > >
>       > >       > Here you go.
>       > >       > Yes, 1.4.25 is running with `-C -m 10240 -I 20m -c 4096 -o
>       > >       > maxconns_fast,hash_algorithm=murmur3,lru_maintainer,lru_crawler,slab_reassign,slab_automove`.
>       > >       > 1.4.30 is running with `-C -m 10240 -I 20m -c 4096 -o modern,slab_chunk_max=1048576`.
>       > >       >
>       > >       >
>       > >       > On Friday, August 12, 2016 at 2:32:59 PM UTC-4, Dormando wrote:
>       > >       >       Hey,
>       > >       >
>       > >       >       any chance I could see `stats slabs` output as well? a lot of the
>       > >       >       data's in there. Need all three: stats, stats items, stats slabs
>       > >       >
>       > >       >       Also, did you try 1.4.30 with `-o slab_chunk_max=1048576` as well?
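>       > >       >
>       > >       >       (If it's easier, something like
>       > >       >       `printf 'stats\nstats items\nstats slabs\n' | nc <host> 11211 > stats-dump.txt`
>       > >       >       should capture all three in one pass -- that assumes a netcat which exits
>       > >       >       on stdin EOF; add `-q 1` or equivalent if yours hangs.)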
>       > >       >
>       > >       >       thanks
>       > >       >
>       > >       >       On Fri, 12 Aug 2016, and...@vimeo.com wrote:
>       > >       >
>       > >       >       > Thanks! That's an improvement. It's still worse than older versions, but
>       > >       >       > it's better than 1.4.29. This time it made it up to about 1.75GB/10GB used
>       > >       >       > before it started evicting; I left it running for another 8 hours and it
>       > >       >       > got up to 2GB, but no higher.
>       > >       >       > Here's some stats output from the old and new versions, in case you can
>       > >       >       > puzzle anything out of it.
>       > >       >       >
>       > >       >       > Thanks,
>       > >       >       >
>       > >       >       > Andrew
>       > >       >       >
>       > >       >       >
>       > >       >       > On Thursday, August 11, 2016 at 6:14:26 PM UTC-4, Dormando wrote:
>       > >       >       >       Hi,
>       > >       >       >
>       > >       >       >       https://github.com/memcached/memcached/wiki/ReleaseNotes1430
>       > >       >       >
>       > >       >       >       Can you please try this? And let me know how it goes either way :)
>       > >       >       >
>       > >       >       >       On Wed, 10 Aug 2016, dormando wrote:
>       > >       >       >
>       > >       >       >       > Hey,
>       > >       >       >       >
>       > >       >       >       > Thanks and sorry about that. I just found a bug this week where the new
>       > >       >       >       > code is over-allocating (though 30MB out of a 10G limit seems odd?)
>       > >       >       >       >
>       > >       >       >       > ie: with -I 2m, it would allocate 2 megabytes of memory and then only use
>       > >       >       >       > up to 1mb of it. A one-line fix for a missed variable conversion.
>       > >       >       >       >
>       > >       >       >       > Will likely do a bugfix release later tonight with that and a few other
>       > >       >       >       > things.
>       > >       >       >       >
>       > >       >       >       > Will take a look at your data in hopes it's the same issue at least,
>       > >       >       >       > thanks!
>       > >       >       >       >
>       > >       >       >       > On Wed, 10 Aug 2016, and...@vimeo.com wrote:
>       > >       >       >       >
>       > >       >       >       > > I decided to give this a try on a production setup that has a very
>       > >       >       >       > > bimodal size distribution (about a 50/50 split of 10k-100k values and
>       > >       >       >       > > 1M-10M values) and lots of writes, where we've been running with
>       > >       >       >       > > "-I 10m -m 10240" for a while. It didn't go so great. Almost immediately
>       > >       >       >       > > there were lots and lots of evictions, even though the used memory was
>       > >       >       >       > > only about 30MB of the 10GB limit, and the number of active keys grew
>       > >       >       >       > > very slowly. "-o slab_chunk_max=1048576" may have had some effect, but
>       > >       >       >       > > it didn't really seem like it.
>       > >       >       >       > > Setting "slabs automove 2" (usually 1) reduced evictions about 50%, but
>       > >       >       >       > > it still wasn't enough to get acceptable performance.
>       > >       >       >       > > I've rolled back to 1.4.25 for the moment, but I'm attaching a log with
>       > >       >       >       > > "stats" and "stats items" from yesterday. "stats sizes" wasn't available
>       > >       >       >       > > due to -C, and the log isn't from as long after startup as I would like,
>       > >       >       >       > > but it's what I got, sorry.
>       > >       >       >       > >
>       > >       >       >       > > Let me know if there's anything else I can do to help.
>       > >       >       >       > >
>       > >       >       >       > > Thanks,
>       > >       >       >       > >
>       > >       >       >       > > Andrew
>       > >       >       >       > >
>       > >       >       >       > > On Wednesday, July 13, 2016 at 8:08:49 PM UTC-4, Dormando wrote:
>       > >       >       >       > >       https://github.com/memcached/memcached/wiki/ReleaseNotes1429
>       > >       >       >       > >
>       > >       >       >       > >       enjoy.
>       > >       >       >       > >