Re: SERVER_ERROR out of memory storing object

2013-09-29 Thread dormando
>
> I apologize for not understanding more. So, all pages have been allocated
> (i.e. Memcached cannot just grab a new page) and there's nothing in the tail
> of the queue for something to evict. And that's because all recent items
> are being written at the same moment?
>
> That is, there's no more memory and Memcached is too busy with active items 
> to find something to throw out?
>
> If this is rate-related, would one solution be to add more servers into the
> pool to spread out the load?
>
> Our Memcached isn't that busy -- if I can trust our Zabbix graphs: 800 gets/s,
> 250 sets/s, and 1 eviction/s. About 40M objects and 2500 connections on a
> single Memcached server.
>
> We do have large objects (which is a problem we need to fix). Why is an OOM
> error more likely with large objects?

Look through 'stats slabs' and 'stats items' - your 1MB slab class probably
only has a handful of chunks in it (maybe even just one). That means it has
very little memory to work with.
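To see why the largest class ends up so starved, here is a rough sketch of how memcached derives its slab classes at startup. The numbers are assumptions (default growth factor 1.25, 96-byte first chunk, 1 MB pages, 8-byte alignment), not the exact source algorithm, but the shape is right: the final class holds items near 1 MB, so each 1 MB page contributes exactly one chunk.

```python
# Toy model of memcached slab sizing (assumptions: default -f 1.25
# growth factor, 96-byte first chunk, 1 MB pages; the real code also
# accounts for item header overhead).
PAGE_SIZE = 1024 * 1024          # memcached hands out memory in 1 MB pages
GROWTH_FACTOR = 1.25             # the default -f setting

def slab_classes(first_size=96):
    """Approximate the chunk sizes memcached computes at startup."""
    sizes, size = [], first_size
    while size < PAGE_SIZE // 2:
        sizes.append(size)
        size = int(size * GROWTH_FACTOR)
        size += (8 - size % 8) % 8   # round up to 8-byte alignment
    sizes.append(PAGE_SIZE)          # final class: one chunk fills a page
    return sizes

classes = slab_classes()
for s in classes[-3:]:
    print(f"chunk {s:>8} bytes -> {PAGE_SIZE // s} chunk(s) per 1 MB page")
```

So every page assigned to the ~1 MB class buys you exactly one storable object, which is why that class has so few chunks to evict from.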

>  
>   Were there issues with the latest version?
>
>
> No, I could not get the latest version to issue the OOM error, but that was
> in a dev environment. I don't think I could get our old production version
> 1.4.4 to issue the OOM error in dev either. But I have a lot more testing
> to do.
>
> The timeouts on production are a much bigger concern at this time.

I haven't looked at your timeouts mail yet, sorry.

The latest version should make those OOM errors less likely. You can also
use slab rebalance to give more memory to the larger slab classes.
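As a sketch of what enabling slab rebalance looks like (assuming memcached 1.4.11 or later, where slab reassignment was introduced; flags and class ids below are illustrative):

```shell
# Start memcached with slab rebalancing enabled:
#   slab_reassign - allow pages to be moved between slab classes
#   slab_automove - let memcached decide when to move them
memcached -m 2048 -o slab_reassign,slab_automove

# Or move a single page by hand over the text protocol, e.g. from
# class 5 to class 42 (look up class ids in 'stats slabs'):
echo "slabs reassign 5 42" | nc localhost 11211
```

With automove enabled, pages should drift toward the classes seeing evictions, which is the usual fix when the large-object classes are starved.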

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"memcached" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to memcached+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


Re: SERVER_ERROR out of memory storing object

2013-09-29 Thread Bill Moseley
On Wed, Sep 25, 2013 at 9:58 AM, dormando wrote:

> If you have a memory limit of 2MB, and start uploading 3 1MB objects, the
> third one will cause an out of memory error.
>
> During upload a free object is pulled to be written into. If you are
> actively writing to, or actively reading from + writing to, more objects
> than are available for it to reserve, it'll bail with an OOM error. It's
> only able to look at the tail for this, so it's more common with large
> objects.
>
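The mechanism dormando describes above can be sketched as a toy model (this is an illustration, not memcached's actual source; the search depth and item shapes are assumptions): with no free chunks, memcached checks only a few items at the LRU tail, and if all of them are still in use it returns the error rather than scanning the whole LRU.

```python
# Toy model of memcached's allocation path (assumption, not real code):
# on a set with no free chunks, only a handful of tail items are tried.
from collections import deque

TAIL_SEARCH_DEPTH = 5  # assumed small fixed number of tail probes

def try_store(lru: deque, free_chunks: int) -> str:
    if free_chunks > 0:
        return "STORED"
    for item in list(lru)[:TAIL_SEARCH_DEPTH]:   # peek at the LRU tail
        if item["refcount"] == 0:                # idle item: evict it
            lru.remove(item)
            return "STORED (evicted)"
    # every candidate is busy being read or written, so give up
    return "SERVER_ERROR out of memory storing object"

# A ~1 MB slab class may hold just one chunk, so the "tail" is tiny
# and likely busy while large uploads are in flight:
busy_lru = deque([{"refcount": 1}])
print(try_store(busy_lru, free_chunks=0))  # SERVER_ERROR out of memory...
```

This is why large objects hit the error more often: their slab class has so few chunks that the tail is short, and a few concurrent uploads can pin all of them at once.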

I apologize for not understanding more. So, all pages have been allocated
(i.e. Memcached cannot just grab a new page) and there's nothing in the
tail of the queue for something to evict. And that's because all recent
items are being written at the same moment?

That is, there's no more memory and Memcached is too busy with active items
to find something to throw out?

If this is rate-related, would one solution be to add more servers into the
pool to spread out the load?

Our Memcached isn't that busy -- if I can trust our Zabbix graphs: 800
gets/s, 250 sets/s, and 1 eviction/s. About 40M objects and 2500
connections on a single Memcached server.

We do have large objects (which is a problem we need to fix). Why is an OOM
error more likely with large objects?



> Were there issues with the latest version?
>

No, I could not get the latest version to issue the OOM error, but that was
in a dev environment. I don't think I could get our old production version
1.4.4 to issue the OOM error in dev either. But I have a lot more testing
to do.

The timeouts on production are a much bigger concern at this time.

Thanks,


-- 
Bill Moseley
mose...@hank.org
