version is very old, but I
doubt that's what's causing your flushes to happen :)
-Dormando
On Apr 26, 10:15 AM, dormando dorma...@rydia.net wrote:
We can't support crashes in old versions.
Please upgrade to the latest (1.4.13) and file again if you still have a
crash.
On Wed, 25 Apr 2012, lee wrote:
memcached-1.4.5
I can't reproduce the bug
2.6.30 x86_64 CentOS
What version of memcached is this?
How are you reproducing the bug?
What OS/version are you on?
On Wed, 25 Apr 2012, lee wrote:
the memcached main process can't accept connections when the
assoc_maintenance_thread function runs; here is the gdb bt result:
we have 4 threads:
|-memcached,28059 -d
Hi all,
I compiled and installed Memcached 1.4.3 from source:
http://memcached.googlecode.com/files/memcached-1.4.13.tar.gz
The configuration is quite simple: ./configure --prefix=/usr/local/
memcached
It works if I install it on a dedicated Ubuntu 11 server. However, it
does not work on
think.
On Apr 25, 11:05 pm, dormando dorma...@rydia.net wrote:
Hi all,
I compiled and installed Memcached 1.4.3 from source:
http://memcached.googlecode.com/files/memcached-1.4.13.tar.gz
The configuration is quite simple: ./configure --prefix=/usr/local/
memcached
It works
We can't support crashes in old versions.
Please upgrade to the latest (1.4.13) and file again if you still have a
crash.
On Wed, 25 Apr 2012, lee wrote:
memcached-1.4.5
I can't reproduce the bug
2.6.30 x86_64 CentOS
On Apr 25, 12:03 PM, dormando dorma...@rydia.net wrote:
What version
On Tue, Apr 10, 2012 at 11:01 PM, Aaron Stone sodab...@gmail.com wrote:
On Tue, Apr 10, 2012 at 10:45 PM, Dustin dsalli...@gmail.com wrote:
On Sunday, April 8, 2012 10:53:53 PM UTC-7, Aaron Stone wrote:
git cherry reported 443 changesets between master and merge-wip. The first
few
Memcached's architecture is a set of tradeoffs:
- Add a little network roundtrip latency
- Gain global atomic invalidation
- Gain linearly expanding cache size
Global invalidation means your cache can be more useful, and perhaps you
can use it in more places.
An expanding cache means (if you
No,
numactl --hardware will tell you how many nodes you have. googling will
tell you more about NUMA. Intel i3/i5's aren't NUMA chips, however, so
those are probably just one node already.
On Mon, 16 Apr 2012, yunfeng sun wrote:
Dear Dormando,
Regards binding one memcached instance per NUMA
that level of performance
and have good headroom. If you really tune things hard you could get that
down to 6. If you left me alone in a room for a few months with a giant
pile of money I could do it with two. three for redundancy.
-Dormando
The Java application needs one get() and one set() for each changed pair;
that will be 50M*40%*2 = 4M qps (queries per second).
We tested memcached - which shows very limited qps.
Our benchmarking is very similar to results showed
So, I would like to ask whether it would be possible to add an (internal)
compression mechanism to the memcached daemon. I was thinking about a
switch which would turn compression on/off (and/or set the
level) at memcached start. The data would be compressed
transparently, so no
Thank you for your advice. I will have a look at the 1.6 branch.
May I ask you, however, to share more details about new storage engine
implementation? Where to look at, what (interface) must be
implemented? May the new engine be built on the top of an already
existing (a default) one?
If
Hey,
I do, for the most part. If you'd like to help keep the clients list up to
date I'd love to have the assistance.
E-mail me privately with your google account and I'll add your access.
I had to disable wiki comments since many were spams or people asking for
help in places where nobody
(24 * 60 * 60) == 86400 (a day)
* 5 == 432000
20736000 / 86400 == 240 (days)
expiration times over 30 days are treated as an absolute unix timestamp.
240 days == sometime late in 1970, which is in the past, so the item
immediately expires.
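The arithmetic above can be put into a small model of how memcached treats exptime (a simplified sketch, not the server's actual code; the 30-day cutoff matches the documented behavior):

```python
REALTIME_MAXDELTA = 60 * 60 * 24 * 30  # 30 days, the documented cutoff

def effective_expiry(exptime, now):
    """Simplified model: values up to 30 days are relative to 'now';
    anything larger is taken as an absolute unix timestamp."""
    if exptime == 0:
        return None  # never expires
    if exptime <= REALTIME_MAXDELTA:
        return now + exptime
    return exptime  # absolute; if in the past, the item expires immediately

now = 1_333_000_000  # a moment in 2012
# 20736000 seconds is 240 days -- over the cutoff, so it is read as an
# absolute timestamp in late 1970, which is already in the past:
assert effective_expiry(20736000, now) < now
# one day (86400) is under the cutoff, so it stays relative:
assert effective_expiry(86400, now) == now + 86400
```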
On Wed, 28 Mar 2012, StormByte wrote:
I've manually
Hi,
The memcached website links to a couple of top users of memcached.
Is there a more comprehensive list of sites using memcached?
It's probably still safe to say that most popular websites use or have
used a form of memcached. It's rare to hit a website which doesn't use it
at some point.
I am looking at using memcached for caching blob objects and flushing them
after a while. My use case is that users may save many times, and instead
of persisting every save I would save the data to the database only at a
logical point, or if there was no activity for n seconds then read from
You might want to send this over to the mysql cluster folks. It seems to
be a problem with the ndb_engine, which we don't support directly.
On Wed, 21 Mar 2012, deepak m wrote:
hi
I am getting an error while starting the memcached 1.6 beta service, like:
[root@cent2 ~]#
Hello, recently I read the memcached code of version 1.4.13. When I read
the slab_automove_decision function, I felt confused about it. Can you
explain the main idea for me and tell me something about the local
variables like slab_winner and so on? Thank you ~ my English is poor, if
you can
I'm thinking about implementing a PEEK command. It turns out that on
large payloads (>= 1MB) it would be advantageous to know if the data
already exists before doing all the work to create the data. I would
like to solicit some feedback before I dive into it.
- Is there already a way to
Also keep in mind that we're presently trying to reconcile the 1.6 tree,
so it's difficult to tell where changes should go (1.4 or 1.6).
-Dormando
You can submit patches to 1.6 or 1.4 or both.
There isn't really a tutorial. You can look through the commit history to
see how we do things.
On Sat, 10 Mar 2012, 陈 宗志 wrote:
As I have read all the memcached code, I want to contribute some code to
memcached. Which version should I start with?
Hi memcached,
I'm upgrading libevent-1.3a to libevent-2.0.17-stable in our library
distribution. Memcached-1.4.0 is one of the packages that relies on libevent
and I'm trying to catch regressions, if any.
The basic tests (make test) complete successfully. Before I upgrade our
libraries:
I'm currently working to port memcached to MPSoCs, related to a
project which uses memcached as a benchmark. I have some questions; it
would be great if you could give your views on this:
a) Do you think it is worth porting memcached to MPSoCs which have low
memory and cache? Do you have any ideas
... Too bad that I
can't recall this post. Sorry for the interruption!
You might want to consider mc-crusher as well.
https://github.com/dormando/mc-crusher
it's still rough, but from what it can do it's very flexible.
Thanks for the response, I believe it won't work. First, I have many
items of different sizes, so I can't get them into one slab; having 2
clients would be a performance hit because I'd need to make
connections twice every request (unfortunately the part that handles
SQL write staging is
LOCKs) but not suitable for all use patterns, because it can miss keys
then you have uncontrolled slab growth and then you need to restart
btw, 1.4.11+ can rebalance the slabs if you enable the feature. so you
don't need to restart it in that case.
that would
be running in parallel to block for that 1ms. For really low request rates
that's fine, but we must support much more than that.
@Dormando... why do you call it bizarre? :) Rebalancing slabs shouldn't
be much different.
Because it's a corner case, and your solution is to do a *ton* of work. So
the client, write a
storage engine with an internal LRU split. I'm sure I could come up with
more but I have other things to think about right now :P
On 26 Feb, 21:31, dormando dorma...@rydia.net wrote:
For running multiple copies... I'm using persistent connections
but are you sure the amount
Btw: how large instances you're running (and how many req/s)? You said
you'll keep 3 or more LRUs in new version, any other improvements?
Also uh, I didn't mean to ignore that question, but I'm not at liberty to
divulge hard numbers. They're large enough and hit hard enough to continue
to
@dormando, those are default CFLAGS for different RHEL platforms:
4 - i386: -O2 -g -march=i386 -mcpu=i686
4 - x86_64: -O2 -g
5 - i386: -O2 -g -march=i386 -mtune=i686
5 - x86_64: -O2 -g -m64 -mtune=generic
6 - i386: -O2 -g -march=i386 -mtune=i686
6 - x86_64
* RHEL-4 (GCC 3.4.6), on both 32 and 64 bits:
thread.c:98: warning: implicit declaration of function
`__sync_sub_and_fetch'
* RHEL-5 (GCC 4.1.2) fails only on 32 bits:
thread.c:83: undefined reference to `__sync_add_and_fetch_2'
@dormando, building RPM package for memcached 1.4.13
web
servers, the code works fine on the other 2 so it must be related to memcached
on this particular server, however, the same version is in use as on all the
other servers, and this was working and suddenly broke :(
On 07/02/12 23:37, dormando wrote:
hi
we are using memcached
Here's the repro on the largest unsigned 64-bit integer
18446744073709551615 with telnet on memcached 1.4.5
set aad2ac07-2fd5-42bb-88b9-e7bae3b55f5b 0 200 20
18446744073709551615
STORED
get aad2ac07-2fd5-42bb-88b9-e7bae3b55f5b
VALUE aad2ac07-2fd5-42bb-88b9-e7bae3b55f5b 0 20
likes on your platform.
On Tue, 7 Feb 2012, Sean wrote:
See my second post. Even 9223372036854775809 won't work.
9223372036854775807 works.
On Feb 7, 1:38 pm, dormando dorma...@rydia.net wrote:
Here's the repro on the largest unsigned 64-bit integer
18446744073709551615 with telnet
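The boundary reported in this thread lines up with the signed/unsigned 64-bit split. A quick sketch of that suspicion (an assumption about that particular Windows build's number parsing, not a confirmed diagnosis):

```python
# memcached counters are 64-bit unsigned, but a parser using a signed
# 64-bit type caps out at 2**63 - 1, matching the reported symptom.
SIGNED_MAX = 2**63 - 1    # 9223372036854775807  -> reported as working
UNSIGNED_MAX = 2**64 - 1  # 18446744073709551615 -> full unsigned range

assert SIGNED_MAX == 9223372036854775807
assert SIGNED_MAX + 2 == 9223372036854775809  # the first failing value
assert UNSIGNED_MAX == 18446744073709551615
```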
, Sean sean.y@gmail.com wrote:
version 1.4.5_4_gaa7839e on windows 2008 R2 64-bit
On Feb 7, 1:46 pm, dormando dorma...@rydia.net wrote:
What's your platform?
Escape character is '^]'.
version
VERSION 1.4.5
set foo 0 0 20
18446744073709551614
STORED
hi
we are using memcached 1.4.5_2 on 3 servers; each server has the same
version of all software installed. On two servers memcached is working
perfectly, however on the 3rd it doesn't work: it appears to not return
any data. The SQL connection is working fine and the code we are using
is OK
Hey dormando,
I'm a developer at Chatham Financial, and we're big fans of memcached.
I was wondering if my company can use the logo for a poster at
recruiting events.
The idea is that we want to showcase at a glance some of the software/
programs that we work with every day - hence
Hey,
1.4.11 had some trouble building on a few platforms, and some bugs were
reported in in the meantime. 1.4.12 fixes almost all of these reported
issues.
No new features for now, just bugfixes. Thanks and sorry for the trouble.
have fun,
-Dormando
hi all, the problem is like this: there are 10 million key/values in
memcached, and all the values need to be updated at 00:00 every day.
For example, it is a daily task; the value data cannot be accumulated, so it
should clear up the old data values at a fixed time every day.
Is
We're having some issues with retrieving cached keys from our
memcached server.
Tested different data with expiry times of 30s, 120s, and 600s - the
test was: try to get the data three times in quick succession.
Obviously, we're expecting the first one from db but then the next two
should
of culling
groups-specific spammers. I have disabled the moderation queue for new
users, but still require users to register before sending messages.
Let's see how well this works.
G'awd!
-Dormando
On 18/01/12 19:04, dormando wrote:
Closer... Can you try the attached patch against 1.4.11? could've sworn
I'd tested the other path, but I guess not :(
I added some prints at the bottom that should print something to STDERR
when started in the foreground:
$ ./memcached
Today Jan 18, 2012 at 09:51 dormando wrote:
b) i386 = manual compilation works, but I'm getting an error building
the RPM package; it might be related to gcc flags. I'm working on it..
rpmbuild error output:
rpmbuild error output:
https://gist.github.com/1632139
Closer... Can you try
Is there any guarantee that search is not NULL on Line 133? I think if
Line 107 is true and takes the branch on Line 108, there is nothing
between there and Line 133 that sets the value for search. So, if
slabs_alloc fails to allocate memory in all the instances and it
remains NULL, we can
@dormando, results after applying your last patch:
Thank you!
- build works on RHEL-{4,5,6} both i386 and x86_64 :)
- RHEL 4 and 5 return emulated atomics and RHEL-6 gcc atomics
Even RHEL5 64bit says emulated atomics ? That shouldn't be :/ I did a
build test on a centos5 host and it seems
Yo all,
I'll be speaking at SCALE (http://socallinuxexpo.org/) on saturday on the
topic of modern memcached - underused existing features, and talking
about current and future work on the project.
On 17/01/12 21:27, dormando wrote:
could you please try 1.4.11 with the attached patch on all platforms? I'm
taking a bit of a guess since google isn't very familiar with this macro.
for bonus points; add an fprintf(stderr, etc) to make sure it's using the
right path for each
On 17/01/12 21:27, dormando wrote:
could you please try 1.4.11 with the attached patch on all platforms?
I'm
taking a bit of a guess since google isn't very familiar with this
macro.
for bonus points; add an fprintf(stderr, etc) to make sure it's using
On 17/01/12 06:36, dormando wrote:
http://code.google.com/p/memcached/wiki/ReleaseNotes1411
We're having problems building this release with old GCC versions, for
example:
* RHEL-4 (GCC 3.4.6), on both 32 and 64 bits:
thread.c:98: warning: implicit declaration of function
and some complications with getting
slab reassign to work.
have fun,
-Dormando
http://code.google.com/p/memcached/wiki/ReleaseNotes1411rc1
Tossed in a few more fixes from the issue tracker, punting on the rest for
1.4.12 since this is so late.
I'm going to leave this up for another day or two while I work on the wiki
a bit and try to come up with other tests. Slab
On 09/01/12 06:12, dormando wrote:
Hey, could you please try to reproduce the issue with 1.4.11-beta1:
http://code.google.com/p/memcached/wiki/ReleaseNotes1411beta1
I've closed the logic issues and fixed a few other things besides. Would
be very good to know if you're still able
There are archived discussions floating around about this subject,
particularly with SSD. Are there open source works (implementation,
src code, interface description, etc) available for people to take a
look ?
There is a working and well used engine implementation in the engine-pu
branch:
points.
Thanks, and sorry,
-Dormando
dormando, with a new script setting a random exptime I can reproduce the
problem in a fresh memcached 1.4.10 (it doesn't happen with earlier versions):
https://gist.github.com/1564556
With the first evictions memcached starts reporting SERVER_ERROR out of
memory storing object. Those
/ca5016c54111e062c771d20fcc4662400713c634
- using exptime=0 it doesn't happen.
I probably fixed this a few weeks ago in my branch, but I'm still wringing
the bugs from it. If you can hold on for a few days for the beta and test
that when I post it, it should be better.
Thanks!
-Dormando
We're using memcached to cache content generated by another
application on the same server. Items that are cached are set to never
expire, as the application currently decides itself when it will refresh
items already in the cache.
We run memcache with verbose output, and I occasionally see
Hello,
After 3 weeks with memcached 1.4.10 in production, today we have started
randomly getting this error with memcached:
SERVER_ERROR out of memory storing object
I can reproduce it with a simple set+get loop; this is the Python
script that I have used (running the script from 6
Dormando,
What exactly do you mean by stack? Do you mean buffering?
I've tried a few options from libmemcached: buffering and no_reply.
Both of them seem to make things much much faster.
With buffering it does 150k/s with no_reply ~750k/s. So it seems
memcached itself is lightning fast
@dormando, great response, this is almost exactly what I had in mind, i.e.
grouping all of your memcached servers into logical pools so as to
avoid hitting all of them for every request. In fact, a reasonable design
for a very large server installation base would be to aim for say 10-20
Hi,
I'm developing some applications using memcached and I saw that the
incr operation does not update the key's expiration, requiring the
usage of get + set if I want to update the expiration.
Is there a special reason for this behavior?
It's more flexible if that's the default. you can
I'm surprised how little I can find on this topic... references to the
stampeding problem (which is resolved by using get_locked) are about
all the info I can find. Sorry in advance if i'm missing some earlier
discussion - i've searched quite a bit and can't find anything about
it.
Are
Keys can have any characters except newlines or spaces when using the
ASCII protocol. If using the binary protocol exclusively there's no limit
on what characters you can use, just on the length (250 bytes).
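A minimal validator for those ASCII-protocol rules might look like this (a sketch under the rules described above; real clients may enforce more, e.g. rejecting all control characters):

```python
MAX_KEY_LEN = 250  # the length limit applies to both protocols

def valid_ascii_key(key: bytes) -> bool:
    """Reject empty keys, overlong keys, and keys containing the
    separators the ASCII protocol cannot represent."""
    if not key or len(key) > MAX_KEY_LEN:
        return False
    return not any(b in (0x20, 0x0D, 0x0A) for b in key)  # space, \r, \n

assert valid_ascii_key(b"user:42:session")
assert not valid_ascii_key(b"has space")
assert not valid_ascii_key(b"x" * 251)   # one byte over the limit
assert not valid_ascii_key(b"")
```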
On Wed, 30 Nov 2011, Siddharth Jagtiani wrote:
Can it be -613748077 ?
On Wed, Nov 30, 2011 at 1:57 PM,
But which client? Usually if you need memcache to scale you will be
running many clients in parallel - and if they are doing single-key
operations in many cases adding more servers will make them completely
separate. It is only multi-gets with many small keys that don't
scale forever.
If
this into a wiki page I guess...
-Dormando
Just specify 0 and it won't expire.
On Wed, 23 Nov 2011, Siddharth Jagtiani wrote:
I noticed exp is int, not uint. So I wonder, if I give -1, will it
consider that no expiry :).
Siddharth
On Wed, Nov 23, 2011 at 4:25 PM, Siddharth Jagtiani
jagtiani.siddha...@gmail.com wrote:
Hi,
I am
Well, if I need to put another object in the collection, I need to first
get the existing object from the cache, and then insert this new object
within that collection, reducing performance by that much. But I understand
that perf will not drop considerably since a get is much faster, and
Dormando, quick question.
So if I were to
put (key, array_of_size_3)
and then
append (key, new_item)
value = get (key)
size of value will be 4 ?
if array_of_size_3 is 3 bytes, and new_item is 1 byte, then yes.
remember that if you're appending complex structures, you still need
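The exchange above can be modeled with a tiny in-memory sketch of append semantics (illustrative only; a real client talks to the server over the protocol): append concatenates raw bytes, and the server knows nothing about your serialization format.

```python
cache = {}

def set_(key, value: bytes):
    cache[key] = value

def append(key, more: bytes):
    """Concatenate raw bytes onto an existing value; NOT_STORED otherwise."""
    if key not in cache:
        return False
    cache[key] += more
    return True

set_("key", b"abc")   # an "array" of 3 bytes
assert append("key", b"d")
assert cache["key"] == b"abcd"      # 3 bytes + 1 byte = 4 bytes, as above
assert not append("missing", b"x")  # append never creates a key
```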
), or try the new -o
maxconns_fast option in 1.4.9+, which immediately rejects conns over the
limit.
-Dormando
Hi All,
My scenario needs me to retrieve multiple objects that have the same key.
Infact my scenario needs me to identify objects using multiple keys too,
but I can solve the multiple keys problem by adding one more entry to
memcached. So thats not my question. Is it possible to store
Thanks, as always!
On Wed, 9 Nov 2011, Paul Lindner wrote:
Fedora RPMs up for testing. This release looks nice!
On Wed, Nov 9, 2011 at 5:00 PM, dormando dorma...@rydia.net wrote:
Hey,
http://code.google.com/p/memcached/wiki/ReleaseNotes1410
See release notes or code
Hey,
http://code.google.com/p/memcached/wiki/ReleaseNotes1410
See release notes or code for more information. I would be surprised if
any decent host would top out before overloading its network card. This
one is a beast.
-Dormando
It still has a weakness of not being able to reassign memory if you put it
all into one slab class pool.
I have 16 fixed sizes (ranging from 1K-6K, all slightly irregular size
[e.g. 1028 bytes]), so I will use the slab allocator every time I need
one of these fixed sizes. So I will have 16
There's nothing like that currently. Last discussion I remember is that
we decided against allowing binary keys at the client because we don't
know what other clients may expect when trying to get that item.
We can certainly reconsider that, but it's not been needed thus far.
What the hell?
Calm down. It clearly wasn't 50% of the use cases given that it's just
now come up. :)
It was part of the whole damn point of implementing the protocol. We
wanted three things: binary keys, proper quiet commands, and CAS on
everything. The rest is exactly the fucking same in ASCII. I reserve
I'd prefer a flag that I have to _enable_ to have the library verify my
damn keys. Let the user do what he wants to do and don't expect every
client user to be a moron. (just like the stupid ubuntu installations that
adds all sorts of stupid aliases for rm etc).
Most users won't ever know
I'd prefer a flag that I have to _enable_ to have the library verify my
damn keys. Let the user do what he wants to do and don't expect every
client user to be a moron. (just like the stupid ubuntu installations that
adds all sorts of stupid aliases for rm etc).
(yes, I'm saying users tend to
I'd prefer a flag that I have to _enable_ to have the library verify my
damn keys. Let the user do what he wants to do and don't expect every
client user to be a moron. (just like the stupid ubuntu installations
that
adds all sorts of stupid aliases for rm etc).
Most users won't ever
Nobody fucking does that. Get over it, yo. People read the minimum amount
of crap they have to read until it works. Everyone else doesn't have a
hard time finding work.
Also; because when you don't, people switch to other systems because
they believe it's easier, or they complain in IRC
There were strong arguments for keeping keys compatible with both ASCII and
binary clients, to the point where it was decided to keep
parity between the two.
We can certainly revisit that now, but I don't remember anyone advocating
for arbitrary bytes in keys other than asking if we
On Monday, October 24, 2011 10:50:39 PM UTC-4, Dormando wrote:
Just... add a flag so it can be turned off? It's a sane default, but
hurtful if you ever need blob keys. None of the original clients checked
keys, and that sucked.
Makes sense. Disable key checking: can lead
be an option you have to switch off in the
client, or the client may not actually be using binprot (run memcached
with -vvv to see what it thinks of your connecting clients).
-Dormando
hello,
I am trying to install memcached-1.4.9 on CentOS 5.7. I get these
errors (the root one is easy) but am not sure what I need to change
for the others:
t/item_size_max.t 1/7 Item max size cannot be less than 1024
bytes.
t/item_size_max.t 2/7 Cannot set item size limit higher
is cutting it a bit close.
2) Use -C startup command, which disables CAS and saves 8 bytes per item
3) Compare `stats sizes` with the slab class sizes after storing some test
items, and adjust -f and/or the minimum slab size to get the slabs closer
to ideal.
Should get you a lot closer.
-Dormando
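The -f tuning mentioned above works against a geometric series of slab class chunk sizes. A sketch with illustrative numbers (the real server also aligns chunk sizes and derives the minimum from its configured chunk size and item header):

```python
def slab_classes(min_size=96, factor=1.25, max_size=1024 * 1024):
    """Each slab class chunk size is the previous one times the growth
    factor, up to the maximum item size."""
    sizes = []
    size = min_size
    while size < max_size:
        sizes.append(size)
        size = int(size * factor)
    return sizes

classes = slab_classes()
assert classes[0] == 96
assert classes[1] == 120  # 96 * 1.25
# a smaller factor packs classes closer together, wasting less per item:
assert len(slab_classes(factor=1.08)) > len(classes)
```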
[root@events memcached-1.2.8]# rpm -q libevent
just an aside; 1.2.8 is very very old. you should be on 1.4.9 if you're
building yourself.
I don't have the notes from that discussion, but there was the dream of the
five byte get
A byte of magic, a byte of flags describing the parts, a byte of opcode,
and a byte of key length, then a key, a
The flags would include things like quiet, CAS?, keylenlen, etc...
It's
On Wed, 19 Oct 2011, dormando wrote:
I don't have the notes from that discussion, but there was the dream of
the five byte get
A byte of magic, a byte of flags describing the parts, a byte of opcode,
and a byte of key length, then a key, a
The flags would include things
to alignment issues, at minimum one byte per item.
-Dormando
On Wednesday, October 19, 2011 2:49:17 PM UTC-7, Dormando wrote:
That startup option should flog you a dozen times first, and force you
to
agree that you're doing something very very wrong, but it should work.
The
default will forever stay at 250 bytes because it's
I've pushed over 10 billion requests through what's in the 14perf tree at
rates of over 1.2 million keys/sec sustained. I still have a few important
things to stress test before I mark it final, but that could happen within
the next week.
have fun,
-Dormando
should be trivial to implement in any dynamic language, as
it already is with the ascii protocol.
We may not even do this, who knows. I'd rather have this so I can use it.
Sick of not being able to use binprot features wherever I want.
-Dormando
. So much hate.
# Quiet as a Flag
Again, happy to take it as a consistent bit, but it's not a different
command, just different handling of the command both in the server and the
client.
I wrote my justification above. Hit me back with a stronger response, or
is it fine?
Thanks!
-Dormando
We had previously talked about an even tighter binary protocol, but perhaps
harder to generalize a parser around. This doesn't seem different
enough from the existing binary protocol to warrant introducing an
incompatibility.
I honestly can't remember what else was removed in our old
The Dev server has this issue:
- items that are not yet expired (I had a sample with 35 minutes
lifetime left) get removed from the cache
- the memcache has plenty of free memory from the default 64MB
- the server has 12GB of free memory
Any good reason why a valid item is removed with
be the signed -9223372036854775808 or whatever it is.
-Dormando
Hello,
I see from the API that the memcached counters are unsigned long
integers.
My question is: why unsigned? Is there any possibility of using
signed integer counters with memcached?
Regards,
Vlad
The counters are documented in the server's doc/protocol.txt (most of
them...). Some
of
memcached. I'm guessing that it doesn't support what you're trying to do,
but it's unlikely people here know how they set things up.
-Dormando