It's looking like slab_chunk_max of 512k will sit better as a default
until stitching is done...
It doesn't create slab classes of, say, 770k that still take 1MB of space
due to the slab mover needing consistent page sizes. It also avoids the
low-end efficiency hole between 16k and ~80k slab sizes.
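To illustrate the 770k case (a rough sketch, not memcached's actual sizing code; the 96-byte base and 1.25 growth factor here are assumptions): chunk sizes grow geometrically up to the cap, and the classes that land just past a fraction of the 1MB page strand the remainder of every page.

```shell
# Rough model of slab class sizing: classes grow by factor 1.25 from an
# assumed 96-byte base, capped at the 1MB page size. The remainder of
# each page (page % class_size) is dead space for that class.
awk 'BEGIN {
  page = 1048576
  for (size = 96; size < page; size = int(size * 1.25)) {
    waste = page % size
    if (waste > page / 5)   # show classes wasting over 20% of every page
      printf "class %d: %d per page, %d bytes wasted\n",
             size, int(page / size), waste
  }
}'
```

In this model the ~570k-720k classes fit only one or two chunks per 1MB page; capping chunk_max at 512k means items that size get built from smaller chunks instead.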
What about without the slab_chunk_max change (just bare modern)? Is usage
better?
could I get a stats snapshot from the one that filled?
On Aug 13, 2016, at 9:35 AM, andr...@vimeo.com wrote:
The "STAT bytes" leveled out at 8.1GB for the 1.4.30 instance (with -C -m
10240 -I 20m -c 4096 -o modern,slab_chunk_max=1048576 -f 1.25), vs. 9.4GB
for 1.4.25, and STAT curr_items is 120k vs. 136k. So it still seems to be
making worse use of memory, but it's far better than any of the previous
attempts.
still running ok?
On Aug 12, 2016, at 1:10 PM, dormando wrote:
Ok. So I think I can narrow the change to explicitly set -f 1.08 if the
slab_chunk_max is actually 16k... instead of just if `-o modern` is on...
I was careful about filling out a lot of the new values after all of the
parsing is done but missed some spots.
Thanks for trying it out. I'll wait a while.
That one seems to work okay — again, I've gotten past 2GB and the hit-rate
is within a few points of where it belongs. I don't have numbers for the
same situation on .29 but IIRC it was very bad. So I guess .30 is an
improvement there.
On Friday, August 12, 2016 at 3:34:00 PM UTC-4, Dormando wrote:
Also, just for completeness:
Does:
`-C -m 10240 -I 20m -c 4096 -o modern`
also fail under .30? (without the slab_chunk_max change)
On Fri, 12 Aug 2016, dormando wrote:
FML.
Please let me know how it goes. I'm going to take a hard look at this and
see about another bugfix release... there're a couple things I forgot from
.30 anyway.
Your information will be very helpful though. Thanks again for testing it.
All of my testing recently was with explicit
It will take a while to fill up entirely, but I passed 2GB with 0
evictions, so it looks like that probably does the job.
On Friday, August 12, 2016 at 3:02:47 PM UTC-4, Dormando wrote:
Ah crap, I think I see it.
Can you add: `-f 1.25` *after* the -o stuff?
like this:
`-C -m 10240 -I 20m -c 4096 -o modern,slab_chunk_max=1048576 -f 1.25`
And test that out, please? I might have to back out some over-aggressive
switches... and I keep thinking of making this particular problem
Here you go.
Yes, 1.4.25 is running with `-C -m 10240 -I 20m -c 4096 -o
maxconns_fast,hash_algorithm=murmur3,lru_maintainer,lru_crawler,slab_reassign,slab_automove`.
1.4.30 is running with `-C -m 10240 -I 20m -c 4096 -o
modern,slab_chunk_max=1048576`.
On Friday, August 12, 2016 at 2:32:59 PM
Hey,
any chance I could see `stats slabs` output as well? a lot of the data's
in there. Need all three: stats, stats items, stats slabs
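For reference, one way to pull all three in one go (assuming a local instance on the default port 11211; host and port here are placeholders, adjust to your instance):

```shell
# Dump "stats", "stats items", and "stats slabs" from a running memcached.
for cmd in "stats" "stats items" "stats slabs"; do
  printf '%s\r\nquit\r\n' "$cmd" | nc -w 2 localhost 11211
done
```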
Also, did you try 1.4.30 with `-o slab_chunk_max=1048576` as well?
thanks
On Fri, 12 Aug 2016, andr...@vimeo.com wrote:
Thanks! That's an improvement. It's still worse than older versions, but
it's better than 1.4.29. This time it made it up to about 1.75GB/10GB used
before it started evicting; I left it running for another 8 hours and it
got up to 2GB, but no higher.
Here's some stats output from the old and new instances.
Hi,
https://github.com/memcached/memcached/wiki/ReleaseNotes1430
Can you please try this? And let me know how it goes either way :)
On Wed, 10 Aug 2016, dormando wrote:
Hey,
Thanks and sorry about that. I just found a bug this week where the new
code is over-allocating (though 30MB out of 10G limit seems odd?)
ie: with -I 2m, it would allocate 2MB of memory and then only use up to
1MB of it. A one-line fix for a missed variable conversion.
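As arithmetic, the per-page cost of that bug (the 2MB/1MB figures come straight from the description above):

```shell
# With -I 2m: each slab page gets allocated at 2MB, but chunks are only
# carved from the first 1MB, so half of every page sits idle.
page=$((2 * 1024 * 1024))
used=$((1 * 1024 * 1024))
echo "wasted per page: $((page - used)) of $page bytes"
```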
Will likely
I decided to give this a try on a production setup that has a very bimodal
size distribution (about a 50/50 split of 10k-100k values and 1M-10M
values) and lots of writes, where we've been running with "-I 10m -m 10240"
for a while. It didn't go so great. Almost immediately there were lots and
Thanks for the clarification, that's much clearer ^_^
On Saturday, July 16, 2016 at 2:52:25 AM UTC+10, Dormando wrote:
Hi,
I updated the release notes to be a little more clear. You use the -I
option, don't touch slab_chunk_max at all unless you really know what
you're doing.
All you have to do is:
-I 2m
ie:
-I 2m -o modern
... and you have a modern startup option with a 2m item limit.
On Fri, 15 Jul 2016,
Ah, units are in KB, so `-o slab_chunk_max=2048`?
How is it passed on the command line with the modern flag too?
`-o modern,slab_chunk_max=2048`?
On Friday, July 15, 2016 at 11:06:35 PM UTC+10, Centmin Mod George Liu
wrote:
So to clarify: if I want to raise the max item size to 2MB, I'd set
`-o slab_chunk_max=2097152`?
On Thursday, July 14, 2016 at 10:08:49 AM UTC+10, Dormando wrote:
https://github.com/memcached/memcached/wiki/ReleaseNotes1429
enjoy.
--
---
You received this message because you are subscribed to the Google Groups
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to memcached+unsubscr...@googlegroups.com.
https://github.com/memcached/memcached/pull/181
proper, this time. hoping to be done by friday.