On Mon, 14 Nov 2016, Bill Moseley wrote:
> Thanks for the response.
>
> Sorry for all the questions. I'm trying to fully understand before upgrading.
> On Sun, Nov 13, 2016 at 11:52 PM, dormando <dorma...@rydia.net> wrote:
>
> > * If all items in the slab are
ed? Or do all items in a
> given page first need to expire before it can be reclaimed?
See above.
To answer your earlier questions: -I 4m should work a lot better with the
new code. It doesn't screw up the smaller slab classes.
>
> On Fri, Nov 11, 2016 at 10:44 PM, Bill Moseley <mose...@
I think this is fixed in 1.4.32. Been broken for a very long time
unfortunately.
On Sun, 25 Sep 2016, Bill Moseley wrote:
> If I understand the documentation correctly, evicted_time should show the
> seconds since the last access for the most recently evicted item.
>
> I'm seeing evicted_time
>
>
> On Monday, October 3, 2016 at 9:17:00 PM UTC-4, Dormando wrote:
>
>
> On Mon, 3 Oct 2016, yuantao peng wrote:
>
> > Hi, -- I am reading memcached source code and got a question on
> this function: do_slabs_alloc_chunked, it is called by do
https://github.com/memcached/memcached/wiki/ReleaseNotes1432
didn't quite get everything I wanted in there... but the LRU fix is pretty
significant.
-Dormando
--
---
You received this message because you are subscribed to the Google Groups
"memcached" group.
To unsubscribe from
>
>
> On Tuesday, October 4, 2016 at 12:31:52 AM UTC-4, Dormando wrote:
> > >
> >
> > I'll need to check more carefully. If that's true, the tests should
> show
> > data corruption (and they did a few times during development)
all said; are you looking into a particular bug or weirdness or
anything? What's gotten you into this?
-Dormando
On Mon, 3 Oct 2016, yuantao peng wrote:
> Hi, -- I am reading memcached source code and got a question on this
> function: do_slabs_alloc_chunked, it is called by do_slabs_alloc if the
> request size is larger than
> the slabclass size. I am curious why we don't just move to a
Hi,
Unfortunately there is no supported version of memcached that runs on
windows.
On Wed, 21 Sep 2016, Ajinkya Aher wrote:
> Hii,
>
> I am using Memcached for Windows. Can anyone tell me what is the maximum size
> per item that can be stored in Memcached? When I use the " -I 129M " command
>
You cannot, no. you can only fetch exact keys.
On Tue, 30 Aug 2016, Sonia wrote:
> Hello,
>
> Is there any functionality to retrieve an array of key-value pairs in
> libmemcached using wildcard characters. I have read about memcached_get()
> function but there is no mention
> of whether we can
2016 at 3:05 PM, Joseph Grasser <jgrasser@gmail.com>
> wrote:
> So, when I compare the total pages with the unfetched evictions I do
> notice skew. We should probably reallocate the pages to better fit our usage
> pattern.
>
> On Sat, Aug 27, 2016 at 2:46
t, is that 6% utilization of data in
> cache?
>
> On Sat, Aug 27, 2016 at 1:35 PM, dormando <dorma...@rydia.net> wrote:
> You could comb through stats looking for things like evicted_unfetched,
> unbalanced slab classes, etc.
>
> 1.4.31 with `-o
of makes sense with the mem pressure ). I
> think the application is some kind of report generation tool - it's hard to
> say, my visibility into
> the team stuff is pretty low right now as I am a new hire.
>
> On Sat, Aug 27, 2016 at 12:34 PM, dormando <dorma...@rydia.net>
ctions yet. I'm going to dig into it
> on Monday though.
>
>
> On Aug 27, 2016 1:55 AM, "Ripduman Sohan" <ripduman.so...@gmail.com> wrote:
>
> On Aug 27, 2016 1:46 AM, "dormando" <dorma...@rydia.net> w
involvement with this though so it's just conjecture
> on my part. I
> guess sa...@solarflare.com knows more, I can find out if it helps.
>
>
> On 26 August 2016 at 23:37, dormando <dorma...@rydia.net> wrote:
> Is that still using a modified codebase?
Is that still using a modified codebase?
On Fri, 26 Aug 2016, Ripduman Sohan wrote:
> Some more
> numbers:https://www.solarflare.com/Media/Default/PDFs/Solutions/Solarflare-Accelerating-Memcached-Using-Flareon-Ultra-server-IO-adapter.pdf
>
> On 26 August 2016 at 07:08, Henrik Schröder
ain. Some simple math would've avoided this situation. This is a
complicated change to do on one's own.
On Sat, 13 Aug 2016, Dormando wrote:
> what about without the slab_chunk_max change? (just bare modern) is usage
> better?
>
> could I get a stats snapshot from the one that filled
; 10240 -I 20m -c 4096 -o modern,slab_chunk_max=1048576 -f 1.25), vs. 9.4GB for
> 1.4.25, and STAT curr_items is 120k vs. 136k. So it still seems to be making
> worse use of memory, but it's far better than any of the previous tries with
> .29/.30.
>
>> On Friday, August 12,
still running ok?
> On Aug 12, 2016, at 1:10 PM, dormando <dorma...@rydia.net> wrote:
>
> Ok. So I think I can narrow the change to explicitly set -f 1.08 if the
> slab_chunk_max is actually 16k... instead of just if `-o modern` is on...
> I was careful about filling out a
e same
> situation on .29 but
> IIRC it was very bad. So I guess .30 is an improvement there.
>
> On Friday, August 12, 2016 at 3:34:00 PM UTC-4, Dormando wrote:
> Also, just for completeness:
>
> Does:
>
> `-C -m 10240 -I 20m -c 4096 -o modern`
>
Also, just for completeness:
Does:
`-C -m 10240 -I 20m -c 4096 -o modern`
also fail under .30? (without the slab_chunk_max change)
On Fri, 12 Aug 2016, dormando wrote:
> FML.
>
> Please let me know how it goes. I'm going to take a hard look at this and
> see about another bu
configuration options, so I
didn't notice the glitch with -o modern :(
On Fri, 12 Aug 2016, andr...@vimeo.com wrote:
> It will take a while to fill up entirely, but I passed 2GB with 0 evictions,
> so it looks like that probably does the job.
>
> On Friday, August 12, 2016 at 3:02:47 PM UTC
0 is running with `-C -m 10240 -I 20m -c 4096 -o
> modern,slab_chunk_max=1048576`.
>
>
> On Friday, August 12, 2016 at 2:32:59 PM UTC-4, Dormando wrote:
> Hey,
>
> any chance I could see `stats slabs` output as well? a lot of the data's
> in there. Need all th
in case you can
> puzzle anything out of it.
>
> Thanks,
>
> Andrew
>
>
> On Thursday, August 11, 2016 at 6:14:26 PM UTC-4, Dormando wrote:
> Hi,
>
> https://github.com/memcached/memcached/wiki/ReleaseNotes1430
>
> Can you please try this? And
Hi,
https://github.com/memcached/memcached/wiki/ReleaseNotes1430
Can you please try this? And let me know how it goes either way :)
On Wed, 10 Aug 2016, dormando wrote:
> Hey,
>
> Thanks and sorry about that. I just found a bug this week where the new
> code is over-allocating (tho
d like, but it's what I got, sorry.
>
> Let me know if there's anything else I can do to help.
>
> Thanks,
>
> Andrew
>
> On Wednesday, July 13, 2016 at 8:08:49 PM UTC-4, Dormando wrote:
> https://github.com/memcached/memcached/wiki/ReleaseNotes1429
>
> enj
Your app is putting the key back. make it stop doing that?
On Mon, 25 Jul 2016, Babu G wrote:
> Hi,
>
> After deleting the key successfully again key existing after some time. is
> there any way to delete key permanently.
>
> syntax : delete
>
am, you will need a txt file containing a list of all the
> servers. You can make changes to this file accordingly.
> If you execute the code with a -h option you will get details of the command
> line args.
>
> On Thursday, July 14, 2016 at 2:29:06 PM UTC-5, Dormando wrote:
ore than 8 megabytes of
RAM if you use more than one slab class. there is a minimum of 1mb of
memory assignable per slab class.
-Dormando
On Fri, 15 Jul 2016, Centmin Mod George Liu wrote:
> with /usr/local/bin/memcached -d -m 8 -l 127.0.0.1 -p 11211 -c 2048 -b 2048
> -R 200 -t 4 -n 72 -f 1
ote:
> so to clarify if i want to raise max item size to 2MB i'd set -o
> slab_chunk_max=2097152 ?
> On Thursday, July 14, 2016 at 10:08:49 AM UTC+10, Dormando wrote:
> https://github.com/memcached/memcached/wiki/ReleaseNotes1429
>
> enjoy.
Hey,
the delete_misses and delete_hits counters only tick when a delete command
is run. you'll need to temporarily enable either logging via your app or
via memcached to see where those delete commands are coming from.
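To illustrate how those counters are typically used (a sketch, not from the thread: the snapshot dicts below are made-up sample data, and only the delete_hits/delete_misses field names are real memcached stats), you can diff two `stats` snapshots taken some interval apart:

```python
def delete_counter_delta(before, after):
    """How many delete hits/misses occurred between two stats snapshots."""
    return {
        "delete_hits": int(after["delete_hits"]) - int(before["delete_hits"]),
        "delete_misses": int(after["delete_misses"]) - int(before["delete_misses"]),
    }

# Made-up sample snapshots taken a minute apart:
before = {"delete_hits": "10", "delete_misses": "3"}
after = {"delete_hits": "42", "delete_misses": "17"}
print(delete_counter_delta(before, after))  # {'delete_hits': 32, 'delete_misses': 14}
```

A nonzero delta confirms delete commands really are arriving during that window, which tells you when to turn logging on.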
On Thu, 14 Jul 2016, Utkarsh Awasthi wrote:
> To be more specific Memcached
aviour.
>
> On Wednesday, July 13, 2016 at 11:22:07 PM UTC-5, Dormando wrote:
> I'm not sure why. you can validate the settings via the `stats settings`
> command.
>
> On Wed, 13 Jul 2016, Sonia wrote:
>
> > I tried inserting 10 values of size 100
he misses for the
> first 7 key-value pairs.Is there a flag that we have to set in the memcached
> configuration file (I currently have the '-m' option set to 2048)
>
> On Wednesday, July 13, 2016 at 7:07:44 PM UTC-5, Dormando wrote:
> Hi,
>
> You're trying to
Do you have a specific question?
Given the subject, possibly your app is issuing delete's for stuff that
isn't in the cache for some reason?
On Wed, 13 Jul 2016, Utkarsh Awasthi wrote:
> Following are the stats of Memcached:
> STAT pid 18323
> STAT uptime 384753
> STAT time 1468390067
> STAT
https://github.com/memcached/memcached/wiki/ReleaseNotes1429
enjoy.
nd the output of
> the stats command in the attached file.
> I really appreciate the help. Thank you.
>
> On Wednesday, July 13, 2016 at 2:39:43 PM UTC-5, Dormando wrote:
> Can you give more detail as to what exactly is failing? what error
> message
> are you getti
Can you give more detail as to what exactly is failing? what error message
are you getting, what client are you using, what is the `stats` output
from some of your memcached instances, etc?
100,000 1 meg values are going to take at least 100 gigabytes of RAM. if
you have 16 2G servers, you only
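A quick back-of-envelope check of that math (a sketch; the flat 1 MiB value size ignores per-item overhead):

```python
items = 100_000
value_bytes = 1024 * 1024  # ~1 meg per value, ignoring item overhead
total_gib = items * value_bytes / (1024 ** 3)
print(total_gib)  # 97.65625 GiB, i.e. roughly 100 gigabytes

# 16 servers with 2G of cache each give only 32G in total:
print(16 * 2)  # 32
```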
https://github.com/memcached/memcached/pull/181
proper, this time. hoping to be done by friday.
https://github.com/memcached/memcached/wiki/ReleaseNotes1428
bugfixes. latest feature I'm working on is proper "large item support".
Has a triple-whammy of allowing more slab classes at the lower end, having
better memory efficiency for largeish objects, and allowing increasing the
limit above
+1.
On Mon, 27 Jun 2016, 'Jay Grizzard' via memcached wrote:
> This is really heavily dependent on the exact hashing algorithm your library uses.
> Some are (relatively) good about losing a server, others are complete trash
> about it. I don’t
> know what the perl lib does off the top of my head.
>
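A small simulation of why the distribution algorithm matters when a server drops out (my own sketch under stated assumptions: this naive modulo placement is not what any particular client library does, and the server/key names are made up):

```python
import hashlib

def bucket_mod(key, servers):
    """Naive modulo placement: hash the key, pick servers[h % n]."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return servers[h % len(servers)]

servers = [f"cache{i}" for i in range(10)]
keys = [f"key{i}" for i in range(10_000)]

before = {k: bucket_mod(k, servers) for k in keys}
after = {k: bucket_mod(k, servers[:-1]) for k in keys}  # one server lost
moved = sum(1 for k in keys if before[k] != after[k])
print(moved / len(keys))  # ~0.9: nearly every key maps to a new server
```

With modulo placement, losing 1 of 10 servers remaps about 90% of keys; a consistent-hashing scheme would remap only about 10%, which is what "good about losing a server" means in practice.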
On Mon, 27 Jun 2016, Geoff Galitz wrote:
> Hi...
> We're working on a project to migrate from one set of memcache servers to
> newer larger ones which are behind a mcrouter.
>
> One option on the table is to take the current memcache_servers array in our
> perl app and replace a single
Hey,
http://memcached.org/tutorial - memcached isn't something you integrate
with a database, so much as something your application uses alongside
your database.
On Tue, 21 Jun 2016, Kishan Ashra wrote:
> I want to use distributed caching with my MarkLogic Database. So, I want to
> know how can I
https://github.com/memcached/memcached/pull/169
in case anyone wants to review/follow along.
r the wall of text. You have a few options before having to modify
the thing. Let's be absolutely sure it's what you need, and that you can
get by with the most minimal change.
thanks,
-Dormando
On Mon, 20 Jun 2016, 'Vu Tuan Nguyen' via memcached wrote:
> We'd like to get the expiration time with the v
Hey,
Take a look at the doc/new_lru.txt file included in the source tarball.
(https://github.com/memcached/memcached/blob/master/doc/new_lru.txt for
the lazy)
The LRU-ish algorithm was updated a few versions ago. This documents its
behavior thoroughly.
On Sun, 19 Jun 2016, Hong Yeol Lim wrote:
done a lot of testing so far and things are going well. It's a really
handy feature and I hope people get good use out of it.
-Dormando
at least :)
> -j
>
> On Fri, Jun 17, 2016 at 12:50 PM, dormando <dorma...@rydia.net> wrote:
>
>
> On Fri, 17 Jun 2016, Dagobert Michelsen wrote:
>
> > Hi Dormando,
> >
> > Am 17.06.2016 um 10:39 schrieb dormando <dorma.
On Fri, 17 Jun 2016, Dagobert Michelsen wrote:
> Hi Dormando,
>
> Am 17.06.2016 um 10:39 schrieb dormando <dorma...@rydia.net>:
> > https://github.com/memcached/memcached/pull/127 is now "done", as much as
> > it'll be done for this release. More work in
etween now and
then :)
It's been tough getting feedback on these things. Hard to tell if I'm
doing it well enough or nobody has the time to really look. Hopefully the
former and people get some good use out of this new thing.
thanks,
-Dormando
can
provide people with a huge amount of insight into what their server is
doing. It does this without having to manage STDOUT/STDERR piped into
anything.
Thanks,
-Dormando
r's ability to process anything.
> user = memcache.get(filename)
> if user and os.path.isfile(filename):
> # I try my best to show the user processing it.
> print "Being processed by %s" % user
> else:
> # But
Hey,
Memcached can't do that easily right now. You can use the STDOUT logging
but that requires reading everything the server is doing directly.
I started a branch for a better logging situation a few months ago, and am
picking it up to finish over the next few weeks
in
the link I gave you several times in the issue. Please use it.
Thanks,
-Dormando
On Fri, 3 Jun 2016, Nishant Varma wrote:
> Can anyone help me peer review this script
> https://gist.github.com/varmanishant/0129286d41038cc21471652a6460a5ff that
> demonstrate potential problems wit
Hey,
Try telnetting to the port and running 'stats'. I don't know what program
you're using, but it looks like it's not printing all of the counters.
you're also on a very old version, which won't have nearly as many
statistics as a newer one.
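For anyone reading the raw `stats` response by hand: it is just `STAT <name> <value>` lines terminated by `END`. A minimal parser sketch (the sample payload below is made up; real output has many more counters):

```python
def parse_stats(payload):
    """Parse the raw text of a `stats` response into a name -> value dict."""
    stats = {}
    for line in payload.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "STAT":
            stats[parts[1]] = parts[2]
    return stats

raw = "STAT pid 18323\r\nSTAT uptime 384753\r\nSTAT curr_items 120000\r\nEND\r\n"
print(parse_stats(raw)["curr_items"])  # 120000
```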
On Wed, 13 Apr 2016, Ross Peetoom wrote:
> When I
Hey,
What version are you running? Is this the amazon hosted memcached cloud
thing or are you actually installing and managing memcached yourself?
On Wed, 24 Feb 2016, raysmithvic1...@gmail.com wrote:
> HiUsing memcached in AWS using to store our application web sessions. Several
> days
Thanks. I'm trying to find some time to deal with the wiki having gone
away.
On Wed, 3 Feb 2016, Dan Madere wrote:
> Just a heads up that on http://memcached.org/ there's a broken link when you
> click "API" in the sentence "Its API is available for most popular languages."
>
$ cat memcached-1.4.25.tar.gz.sha1
7fd0ba9283c61204f196638ecf2e9295688b2314 ./memcached-1.4.25.tar.gz
$ sha1sum ./memcached-1.4.25.tar.gz
7fd0ba9283c61204f196638ecf2e9295688b2314 ./memcached-1.4.25.tar.gz
that's from the original copy on my local disk. Did your browser
decompress the .gz file
4329]: segfault at 10 ip
> 0037b4a0f89e sp 7fa7e6fb8b60 error 4 in
> libsasl2.so.2.0.23[37b4a0+19000]
> Looks like its memcache + sasl issue.
>
> Regards,
> Prasad
>
> On Wed, Dec 30, 2015 at 4:43 PM, dormando <dorma...@rydia.net> wrote:
> Hi,
>
> D
assigned. Meanwhile slab class 2 has 5000 pages and never
evicts.
1.4.4 isn't supported, so if you continue to have trouble I highly
recommend trying the new code (and start options) first.
-Dormando
On Wed, 9 Dec 2015, Bill Moseley wrote:
> Thank you Denis for the explanation.
>
> On W
Hey,
What are your startup arguments for memcached?
In .24 and up the maximum slab class count is 63 instead of 255. The
default number of slab classes is 42, so you'd have to set your scaling
factor pretty low to get it to go that high.
Would that cause problems for you?
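A rough model of the relationship between the growth factor (-f) and the number of slab classes (my own illustrative loop, not memcached's exact sizing algorithm; real classes start from a minimum chunk size that includes item header overhead and are 8-byte aligned):

```python
def slab_class_count(factor, chunk_min=96, chunk_max=1024 * 1024):
    """Count chunk sizes from chunk_min, growing by `factor`, up to chunk_max."""
    size, classes = chunk_min, 0
    while size <= chunk_max:
        classes += 1
        size = max(int(size * factor), size + 8)  # grow by at least 8 bytes
    return classes

print(slab_class_count(1.25))  # 42 with these illustrative parameters
print(slab_class_count(1.08))  # well over the 63-class cap
```

The default factor lands around 42 classes; shrinking the factor toward 1.0 inflates the class count quickly, which is how you'd bump into the 63 limit.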
On Thu, 26 Nov 2015,
Hey,
https://github.com/memcached/memcached - I've dumped what I hope are all
of the commits for 1.4.25 into master. If you're listening, would you mind
attempting to build and run the tests on your platform of choice? Hoping
to get a few different kinds since I still don't have a reliable
https://github.com/memcached/memcached/wiki/ReleaseNotes1425
Much, much thanks to the netflix crew for their feedback in developing
the slab automove improvements. I apologize for the bizarre month delay
between its near completion and release to the public. Please enjoy.
Yo,
I'd be super thankful if anyone out there in the ether could help review
and test a few things:
1) https://github.com/memcached/memcached/pull/84
https://github.com/dormando/memcached/tree/fix_flags - is my branch doing
similar.
If anyone could give me feedback on what clients do when
On Tue, 10 Nov 2015, Terry Hu wrote:
> when I install memcached-1.4.23 and
> run make && make test, it tells me
>
> testapp: testapp.c:725: safe_recv: Assertion `nr != 0' failed.
You'll need to include more of the test output. it's unclear which part of
the tests it failed in.
>
> anybody can help
That assert() is checking that the item you're about to link isn't already
in there as the head. It's a basic double-linking prevention check.
that code is adding the link, not removing it.
On Sat, 7 Nov 2015, Song Zhigang wrote:
> i was reading memcached source code these days. but i got
ample:
> * Key : 2c
> * Val : 28 bytes
> * Flg : 0 (1bytes)
>
> turns into:
> * Key : 3b
>
> => key number characters + 1
> * Val : 30b
>
> => 28 bytes + 2 bytes ("\r\n")
> * Hdr : 4b + 3b == 7b
>
> => What are
red in Slab1
>
> ok for the \r\n ... should take 4 bytes no?
>
> So, if we count 56 bytes for CAS : 56(cas)+31(key+value+flags)+4(\r\n)= 91
>
> Not good... :(
>
> where I'm wrong ??
>
> On Saturday, November 14, 2015 at 11:55:12 PM UTC+1, Dormando wrote:
> The mysql docs d
The mysql docs don't speak for the main tree... that's their own thing.
the "sizes" binary that comes with the source tree tells you how many
bytes an item will use (though I intend to add this output to the 'stats'
output somewhere). With CAS this is 56 bytes.
56 + 2 + 30 == 88. Class 1 by
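The arithmetic above, spelled out (a sketch of how I read it: 56 bytes is the per-item header with CAS enabled, 2 bytes the "2c" key from the example, and 30 bytes the stored value including its trailing "\r\n"):

```python
header = 56      # per-item overhead with CAS enabled
key = 2          # the two-character key from the example
value = 28 + 2   # 28 data bytes plus the trailing "\r\n"
total = header + key + value
print(total)  # 88
```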
memcached.org is still right. the "source and development repos" link on
the main page links to a wiki page which links to github, though the text
is out of date.
I'll be migrating things when I have time. apologies for the scary
"archived" stuff everywhere.
On Wed, 11 Nov 2015, Teo Tei wrote:
not super sure what the quote would be for? you're looking for someone who
uses this roles thing?
On Tue, 27 Oct 2015, Matthew Miller wrote:
> Hey all. The Fedora Server operating system has a feature called "roles",
> which allow push-button deployment of selected configs. This can be
>
. More like something from the
> memcached project saying that it's cool/useful/whatever that Fedora is
> providing memcached in this way.
>
> On Oct 28, 2015 3:31 AM, "dormando" <dorma...@rydia.net> wrote:
> not super sure what the quote would be for? you're lo
any luck?
> On Oct 6, 2015, at 12:23 AM, Dormando <dorma...@rydia.net> wrote:
>
> ah. I pushed two more changes earlier. should fix mem_requested. just
> cosmetic stuff though
>
>> On Oct 6, 2015, at 12:13 AM, Scott Mansfield <smansfi...@netflix.com> wrote:
are you using dns to resolve the IP's of the clients?
On Tue, 6 Oct 2015, Jim Horning wrote:
> I have two conditions that seem to make memcached fail. I have a memcached
> server and a few clients all on the same subnet: 192.168.1.X, where these
> devices use NAT to get to the Internet.
> 1.
ame plan as before.
>
>> On Monday, October 5, 2015 at 4:38:00 PM UTC-7, Dormando wrote:
>> Looking forward to the results. Thanks for getting on this so quickly.
>>
>> I think there's still a bug in tracking requested memory, and I want to
>> move the stats counters to
It took a day of running torture tests which took 30-90 minutes to fail,
but along with a bunch of house chores I believe I've found the problem:
https://github.com/dormando/memcached/tree/slab_rebal_next - has a new
commit, specifically this:
https://github.com/dormando/memcached/commit
know what I find.
>
> On Monday, October 5, 2015 at 1:29:03 AM UTC-7, Dormando wrote:
> It took a day of running torture tests which took 30-90 minutes to fail,
> but along with a bunch of house chores I believe I've found the problem:
>
> https://gith
+1d000]
>
> [46545.316351] memcached[2789]: segfault at 0 ip 0040e007 sp
> 00007f362ceedeb0 error 4 in memcached[40+1d000]
>
> [102076.523474] memcached[29833]: segfault at 0 ip 0040e007 sp
> 7f3c89b9ebe0 error 4 in memcached[40+1d000]
>
> [55
362ceedeb0 error 4 in memcached[40+1d000]
>
>
> I can possibly supply the binary file if needed, though we didn't do anything
> besides the standard setup and compile.
>
>
>
> On Tuesday, September 29, 2015 at 10:27:59 PM UTC-7, Dormando wrote:
> If you look
412b9d
> sp:7fc0700dbdd0 error:0 in memcached[40+1d000]
>
>
> addr2line shows:
>
> $ addr2line -e memcached 412b9d
>
> /mnt/builds/slave/workspace/TL-SYS-memcached-slab_rebal_next/build/memcached-1.4.24-slab-rebal-next/assoc.c:119
>
>
>
> On Thursday, October 1
u_crawler,hash_algorithm=murmur3 -I 4m -m 56253
>
> On Thursday, October 1, 2015 at 12:41:06 PM UTC-7, Dormando wrote:
> Were lru_maintainer/lru_crawler/etc enabled though? even if slab mover
> is
> off, those two were the big changes in .24
>
> On Thu, 1 O
lightly different per server, as we
> calculate on startup how much we give. It's in the same ballpark, though (~56
> gigs).
>
> On Thursday, October 1, 2015 at 12:11:35 PM UTC-7, Dormando wrote:
> Just before I sit in and try to narrow this down: have you run any host
>
Any chance you could describe (perhaps privately?) in very broad strokes
what the write load looks like? (they're getting only writes, too?).
otherwise I'll have to devise arbitrary torture tests. I'm sure the bug's
in there but it's not obvious yet
On Thu, 1 Oct 2015, dormando wrote:
> perf
e detail offline if you need it.
>
>
> On Thursday, October 1, 2015 at 2:32:53 PM UTC-7, Dormando wrote:
> Any chance you could describe (perhaps privately?) in very broad strokes
> what the write load looks like? (they're getting only writes, too?).
> otherwise I'll ha
quot;, but I didn't
> get a screenshot. That number is directly from the stats slabs output.
>
>
>
> On Thursday, October 1, 2015 at 4:21:42 PM UTC-7, Dormando wrote:
> ok... slab class 12 claims to have 2 in "total_pages", yet 14g in
> mem_requested. is this s
your full startup args, though?
On Thu, 1 Oct 2015, Scott Mansfield wrote:
> The commit was the latest in slab_rebal_next at the time:
> https://github.com/dormando/memcached/commit/bdd688b4f20120ad844c8a4803e08c6e03cb061a
>
> addr2line gave me this output:
>
> $ addr2line -e
at 10:23:32 AM UTC-7, Scott Mansfield wrote:
> I'm working on getting a test going internally. I'll let you know how
> it goes.
>
>
> Scott Mansfield
> On Mon, Sep 7, 2015 at 2:33 PM, dormando wrote:
> Yo,
>
> https://github.com/dormando/memcached/com
t? I'm about to take a look at the diffs as well.
>
> On Tuesday, September 29, 2015 at 12:37:45 PM UTC-7, Dormando wrote:
> excellent. if automove=2 is too aggressive you'll see that come in in a
> hit ratio reduction.
>
> the new branch works with automove=
> latency et. al. from the client side, though network normally dwarfs
> memcached time.
>
> On Tuesday, September 29, 2015 at 3:10:03 AM UTC-7, Dormando wrote:
> That's unfortunate.
>
> I've done some more work on the branch:
> https://github.com/memcached/
Yo,
https://github.com/dormando/memcached/commits/slab_rebal_next - would you
mind playing around with the branch here? You can see the start options in
the test.
This is a dead simple modification (a restoration of a feature that was
already there...). The test very aggressively writes
to see how it behaves.
On Monday, August 3, 2015 at 1:15:06 AM UTC-7, Dormando wrote:
You sure that's 1.4.24? None of those fail for me :(
On Mon, 3 Aug 2015, Scott Mansfield wrote:
The command line I've used that will start
for an event where
memory is needed at that instant.
It's a change in approach, from reactive to proactive. What do you think?
On Monday, July 13, 2015 at 5:54:11 PM UTC-7, Dormando wrote:
First, more detail for you:
We are running 1.4.24 in production and haven't noticed any bugs as
of yet. The new LRUs seem to be working well, though we nearly always run
memcached scaled to hold all data without evictions. Those
correct, it does not free the pages
On Jul 17, 2015, at 1:48 AM, Denis Tataurov sineed...@gmail.com wrote:
Hi! I use memcached in my project and I want to know will memcached free
their pages or not.
A page has 1 Mb size. Here is the statistics of the biggest slabs in my
memcached:
the rebalancer. It's pretty
easy to run one config to load up 10k objects, then flip to the other
using the same key namespace.
Thanks,
Scott
On Saturday, July 11, 2015 at 12:05:54 PM UTC-7, Dormando wrote:
Hey,
On Fri, 10 Jul 2015, Scott Mansfield wrote:
We've seen
seen any
instability? I'm currently torn between fighting a few bugs and start on
improving the slab rebalancer.
-Dormando
What happens with more than one connection? A lot of things changed to
increase the scalability of it but single thread might be meh.
What're the memcached start args? What exactly is that test doing?
gets/sets? Is .24 any better?
On Tue, 7 Jul 2015, Denis Samoylov wrote:
hi,
Does anybody
What version of memcached are you using?
On Wed, 13 May 2015, Miaolong Zhang wrote:
Something went wrong when I used the memcached flush_all command.
I use memcached to store CSV files. Every time I change the CSV files
I then flush memcached using flush_all, but
https://code.google.com/p/memcached/wiki/ReleaseNotes1423
After a long delay I fixed the last known bugs in the lru_rework branch
and decided to release it. If you run a large cluster, this could have a
huge positive impact on your hit ratio. Hit ratio is life. All for the hit
ratio!
-Dormando
Hello Dormando, thanks for detailed and constructive reply.
On Tuesday, 17 February 2015 01:39:02 UTC+7, Dormando wrote:
Yo,
On Mon, 16 Feb 2015, Roman Leventov wrote:
Hello Memcached users and developers,
I've analyzed Memcached implementation and suggested
On Tuesday, 17 February 2015 03:38:55 UTC+7, Dormando wrote:
Again, in actual benchmarks I've not been able to prove them to be a
problem. In one of the gist links I provided before I show an all miss
case, which acquires/releases a bucket lock but does not have