> I apologize for not understanding more. So, all pages have been allocated
> (i.e. Memcached cannot just grab a new page) and there's nothing in the tail
> of the queue for something to evict. And that's because all recent items
> are being written at the same moment?
>
> That is, there's […]
On Wed, Sep 25, 2013 at 9:58 AM, dormando wrote:
If you have a memory limit of 2MB, and start uploading 3 1MB objects, the
third one will cause an out of memory error.
During upload a free object is pulled to be written into. If you are
actively writing to, or actively reading from + writing to, more objects
than are available for it to reserve, […]
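The arithmetic above can be sketched with a simplified allocator model (an illustration only, not memcached's actual slab code): each in-flight upload reserves a whole object's worth of memory, so with a 2MB limit the third 1MB reservation fails.

```python
# Simplified model of the reservation behavior described above.
# A reservation that would exceed the limit fails, which is what
# surfaces as "SERVER_ERROR out of memory storing object".

MB = 1024 * 1024

class TinyAllocator:
    def __init__(self, limit_bytes):
        self.limit = limit_bytes
        self.reserved = 0

    def reserve(self, nbytes):
        """Return True if the object fits, False => out of memory."""
        if self.reserved + nbytes > self.limit:
            return False
        self.reserved += nbytes
        return True

alloc = TinyAllocator(limit_bytes=2 * MB)
results = [alloc.reserve(1 * MB) for _ in range(3)]
print(results)  # the third 1MB reservation fails: [True, True, False]
```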
On 09/01/12 06:12, dormando wrote:
Hey, could you please try to reproduce the issue with 1.4.11-beta1:
http://code.google.com/p/memcached/wiki/ReleaseNotes1411beta1
I've closed the logic issues and fixed a few other things besides. It would
be very good to know if you're still able to bug it out.
> dormando, with a new script setting a random exptime I can reproduce the
> problem in a fresh memcached 1.4.10 (it doesn't happen with earlier versions):
>
> https://gist.github.com/1564556
>
> With the first evictions memcached starts reporting "SERVER_ERROR out of
> memory storing object". […]
On 05/01/12 12:00, Santi Saez wrote:
Making a diff against 1.4.9, it seems to be something related to
do_item_alloc().
More information:
- reverting "remove the depth search from item_alloc" commit solves the
problem:
https://github.com/memcached/memcached/commit/ca5016c54111e062c771d20f
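Applying that workaround locally is a single `git revert`; here is a generic sketch in a throwaway repository (assumed setup, not the actual memcached tree — in the real case you would revert the ca5016c5… commit inside a memcached checkout and rebuild):

```shell
# Sketch: revert a single commit with `git revert` in a throwaway repo.
# For the real fix, run `git revert <commit>` on the ca5016c5... commit
# in a memcached checkout, then rebuild.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "base"
echo "depth search removed" > item_alloc.txt
git add item_alloc.txt
git -c user.email=a@b -c user.name=demo commit -q -m "remove the depth search"
bad=$(git rev-parse HEAD)
git -c user.email=a@b -c user.name=demo revert --no-edit "$bad" >/dev/null
git log --oneline -1   # newest commit is the revert
```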
On 30/12/11 17:51, dormando wrote:
Now that you've left it out for a while, can you try storing a few things
again and snapshot the items/slabs stats? I'm curious to see if the
tailrepairs counter goes up at all.
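Those stats can be pulled with `stats items` over memcached's text protocol (e.g. `echo "stats items" | nc 127.0.0.1 11211`). A small parser sketch for the tailrepairs counter, run here against an illustrative response rather than a live server:

```python
# Sum the tailrepairs counters across slab classes from the output of
# the 'stats items' text-protocol command. The sample response below
# is illustrative; a live server would be queried over TCP.

def tailrepairs_total(stats_text):
    total = 0
    for line in stats_text.splitlines():
        # lines look like: STAT items:<slab_class>:<name> <value>
        parts = line.split()
        if len(parts) == 3 and parts[0] == "STAT":
            _, _, name = parts[1].split(":")
            if name == "tailrepairs":
                total += int(parts[2])
    return total

sample = """\
STAT items:1:number 4
STAT items:1:tailrepairs 0
STAT items:5:number 120
STAT items:5:tailrepairs 2
END"""
print(tailrepairs_total(sample))  # -> 2
```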
On 30/12/11 17:51, dormando wrote:
What client is this script written for, exactly?
By 6 different servers you mean you're running 6 copies of that script
from 6 places, or even more?
It's a Python script I wrote to try to reproduce the error, but we're
getting "out of memory" errors from […]
> Hello,
>
> After 3 weeks with memcached 1.4.10 in production, today we started
> randomly getting this error:
>
> SERVER_ERROR out of memory storing object with memcached
>
> I can reproduce it with a simple set+get loop; this is the Python
> script that I used (running the script from 6 different servers). […]
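The script itself isn't preserved in the archive, but a set+get loop of the kind described (with the random exptime mentioned later in the thread) might look like the sketch below. `FakeClient` is a dict-backed stand-in so the sketch runs without a server; a real reproduction would swap in an actual memcached client pointed at the servers.

```python
import random

# Dict-backed stand-in for a memcached client, so this illustration
# runs without a server. The original reproduction script was a real
# Python client run from 6 hosts against memcached 1.4.10.
class FakeClient:
    def __init__(self):
        self.store = {}
    def set(self, key, value, exptime=0):
        self.store[key] = value   # exptime ignored in the stand-in
        return True
    def get(self, key):
        return self.store.get(key)

client = FakeClient()
errors = 0
for i in range(1000):
    key = "k%d" % i
    value = "x" * 100
    # random exptime, as in the reproduction script described above
    if not client.set(key, value, exptime=random.randint(1, 3600)):
        errors += 1  # a real 1.4.10 server intermittently returned
                     # "SERVER_ERROR out of memory storing object" here
    assert client.get(key) == value
print("set errors:", errors)
```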