On Monday, August 4, 2014 at 2:34:10 PM UTC+9, Dormando wrote:
Hello Dormando,
Thanks for the answer.
The LRU fiddling only happens once a minute per item, so hot items don't
affect the lock as much. The more you lean toward hot
items the better it scales as-is.
I have this Memcached cluster where 3 instances of Memcached run on a
single server. These servers have 24 cores, and each instance is configured
with 8 threads. Each individual instance serves about 5000G gets/sets a
day and has about 3k current connections.
What would be better?
On Mon, Aug 4, 2014 at 6:22 PM, dormando dorma...@rydia.net wrote:
Dormando, thanks for the quick response. Sorry for the confusion, I don't
have exact metrics per second, but per minute it's 1.12 million sets and
1.8 million gets, which translates to roughly 18,666 sets per second and
30,000 gets per second.
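For anyone double-checking the arithmetic, converting the per-minute counter deltas (e.g. from successive `stats` samples) to per-second rates is just a division by 60; the figures below are the ones quoted above:

```python
# Convert per-minute memcached counter deltas to per-second rates.
sets_per_minute = 1_120_000
gets_per_minute = 1_800_000

sets_per_second = sets_per_minute / 60   # ~18,666 sets/s
gets_per_second = gets_per_minute / 60   # 30,000 gets/s

print(f"{sets_per_second:.0f} sets/s, {gets_per_second:.0f} gets/s")
```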
These stats are per Memcached instance, of which I currently run 3 on
each server.
You could run one instance with one thread and serve all of that just
fine. Have you actually looked at graphs of the CPU usage of the host?
memcached should be practically idle with load that low.
One with -t 6 or -t 8 would do it just fine.
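As a sketch of what that consolidation might look like (the flags are standard memcached options; the memory size and connection limit below are assumptions to be tuned for your servers):

```shell
# One memcached instance with 8 worker threads, replacing 3 x 8-thread
# instances on the same 24-core host.
#   -t  number of worker threads
#   -m  cache memory in MB (assumed value here)
#   -c  max simultaneous connections, sized to cover the ~9k total
#       connections the three instances were handling
#   -p  TCP port
memcached -d -t 8 -m 4096 -c 10240 -p 11211
```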
Can someone in this group point me to a comparison of the failover support
provided in the two cases below:
1. Running a memcached server on a local machine with the spymemcached client.
2. Running an AWS memcached server and accessing it from an EC2 instance
with the spymemcached client.
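For reference, the spymemcached client code is essentially identical in both cases; only the server address changes, so any failover behavior comes from whatever sits behind that address rather than from the client itself. A minimal sketch (the hostnames are placeholders; for the AWS case you would substitute your ElastiCache cluster endpoint, and a live memcached server must be reachable for this to run):

```java
import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

public class CacheExample {
    public static void main(String[] args) throws Exception {
        // Case 1: memcached on the local machine.
        // Case 2: swap in your ElastiCache endpoint instead of
        //   "localhost" (placeholder hostname, not a real endpoint).
        MemcachedClient client = new MemcachedClient(
                new InetSocketAddress("localhost", 11211));

        // set() is asynchronous; .get() on the returned future waits
        // for the operation to complete.
        client.set("greeting", 3600, "hello").get();

        Object value = client.get("greeting"); // "hello", if the set succeeded
        System.out.println(value);

        client.shutdown();
    }
}
```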
Also please let