How did you notice the swapping? I struggled with a similar issue; we
finally disabled one of our monitoring tools (Hyperic HQ), which solved
the problem. But we never found out why or how the monitoring tool was
causing it. Anything you find out would be helpful.

Peter
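For anyone checking for the same thing: on Linux you can watch the
kernel's swap-in/swap-out counters in /proc/vmstat, and (on newer
kernels) a single process's swapped-out footprint via the VmSwap line
in /proc/<pid>/status. A minimal sketch in Python; the PID is a
placeholder:

    import time

    PID = 12345  # placeholder: memcached's process id

    def vmswap_kb(pid):
        # VmSwap appears in /proc/<pid>/status on newer kernels
        with open("/proc/%d/status" % pid) as f:
            for line in f:
                if line.startswith("VmSwap:"):
                    return int(line.split()[1])  # reported in kB
        return 0

    def swap_io():
        # pswpin/pswpout are cumulative swapped-page counters
        counters = {}
        with open("/proc/vmstat") as f:
            for line in f:
                key, value = line.split()
                if key in ("pswpin", "pswpout"):
                    counters[key] = int(value)
        return counters

    while True:
        print(vmswap_kb(PID), swap_io())
        time.sleep(5)

Rising pswpin/pswpout between samples means the box is actively
swapping; a nonzero, growing VmSwap means it is memcached itself being
paged out.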
On Wed, Apr 7, 2010 at 3:02 AM, Ryan Tomayko rtoma...@gmail.com wrote:

On Wed, Apr 7, 2010 at 2:14 AM, dormando dorma...@rydia.net wrote:
> Just about all responses should happen sub-ms (except for network
> jitter).

Thanks for the quick response.

> Some stuff you can check for offhand:
> [...]

We have a few memcached machines doing ~1000 ops/s (9:1 get-to-set)
each. These are fairly beefy, non-virtualized, 8-CPU servers with ~14G
of RAM (12G to memcached). They're actually our hot fileserver spares,
which is why the hardware is so severely overprovisioned. CPU is
essentially idle, load ...
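Numbers like the ops/s and get:set ratio above can be read off
memcached's own counters: sample cmd_get and cmd_set from the
plain-text "stats" command twice and take the difference. A rough
sketch; host, port, and the sampling interval are placeholders:

    import socket, time

    def stats(host="127.0.0.1", port=11211):
        # "stats" returns "STAT <name> <value>" lines terminated by END
        s = socket.create_connection((host, port))
        s.sendall(b"stats\r\n")
        data = b""
        while not data.endswith(b"END\r\n"):
            data += s.recv(4096)
        s.close()
        return dict(line.split()[1:3]
                    for line in data.decode().splitlines()
                    if line.startswith("STAT "))

    a = stats()
    time.sleep(10)
    b = stats()
    gets = int(b["cmd_get"]) - int(a["cmd_get"])
    sets = int(b["cmd_set"]) - int(a["cmd_set"])
    print("gets/s: %.1f  sets/s: %.1f" % (gets / 10.0, sets / 10.0))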
On Wed, Apr 7, 2010 at 2:14 AM, dormando dorma...@rydia.net wrote:

Just about all responses should happen sub-ms (except for network
jitter).

Some stuff you can check for offhand (a sketch for pulling the first
two off a live server follows below):

- List the versions of all related software you're running (memcached
  proper, libmemcached, the Ruby client)
- Your full startup arguments to memcached
- Narrow down if these timeouts ...
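The first two items can be read from a running server: the plain-text
"version" command returns the server version, and "stats settings"
(available in the 1.4.x series) echoes back the effective startup
options. A minimal sketch; host and port are placeholders:

    import socket

    def ask(cmd, host="127.0.0.1", port=11211):
        # send one plain-text command, read until the reply terminator
        s = socket.create_connection((host, port))
        s.sendall(cmd + b"\r\n")
        data = b""
        while b"END\r\n" not in data and b"VERSION" not in data:
            data += s.recv(4096)
        s.close()
        return data.decode()

    print(ask(b"version"))         # e.g. "VERSION 1.4.4"
    print(ask(b"stats settings"))  # maxbytes, num_threads, etc.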
On Wed, Apr 7, 2010 at 4:36 AM, Simon Riggs si...@2ndquadrant.com wrote:

On Wed, 2010-04-07 at 01:23 -0700, Ryan Tomayko wrote:
> But we also get occasional 200ms response times in those runs. Here's
> the max response times for the same memslap runs graphed above:

It wouldn't be uncommon to see the occasional spike, though that graph
shows the very worst response time ...
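For spikes like this it helps to record a whole distribution of
response times rather than just the max. A rough sketch that times a
batch of gets against one server and prints the median, p99, and worst
sample; host, port, and the key are placeholders:

    import socket, time

    N = 10000
    s = socket.create_connection(("127.0.0.1", 11211))  # placeholder
    s.sendall(b"set bench 0 0 1\r\nx\r\n")
    s.recv(4096)  # expect "STORED"

    samples = []
    for _ in range(N):
        t0 = time.time()
        s.sendall(b"get bench\r\n")
        data = b""
        while not data.endswith(b"END\r\n"):
            data += s.recv(4096)
        samples.append((time.time() - t0) * 1000.0)  # milliseconds

    samples.sort()
    print("median %.3f ms  p99 %.3f ms  max %.3f ms" % (
        samples[len(samples) // 2],
        samples[int(len(samples) * 0.99)],
        samples[-1]))

If the max sits far above the p99, the spikes are rare outliers
(scheduling, swap, GC pauses in the client) rather than steady
protocol latency.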