Re: Memcached makes many SI (software interrupts)
We've run into this exact same issue and narrowed it down to the NIC, but don't really know where to go from there. I'm going to look up dormando's suggestions, but if anyone else has experience with this and can point us in the right direction, it would be greatly appreciated.

Thanks,
Jay

On Sep 27, 2:34 pm, dormando <dorma...@rydia.net> wrote:
> > We have a 2 x quad-core server with 32 GB of RAM. If many clients connect to this server (only memcached runs on it), the first core runs to nearly 100% usage from si (software interrupts), and so some clients can't reach the server. Memcached currently runs with 4 threads, version 1.4.2. All the other cores are 70% idle, so I wonder: is there a possibility to improve the performance?
>
> This is an issue with how your network interrupts are being routed, not with how memcached is being threaded. Wish I had some good links offhand for this, because it's a little obscure to deal with. In short: you'll want to balance your network interrupts across cores. Google for blog posts about smp_affinity for network cards, and about irqbalance (which poorly tries to do this automatically). Depending on how many NICs you have, and whether they are multiqueue or not, you'll have to tune it differently. Linux 2.6.35 has some features for extending the speed of single-queue NICs (find the pages discussing it on kernelnewbies.org).
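To make dormando's suggestion concrete: on Linux, each IRQ has a /proc/irq/<n>/smp_affinity file holding a hex CPU bitmask, and spreading NIC queue interrupts means writing a different mask per queue IRQ. A minimal sketch in Python; the IRQ numbers here are hypothetical (the real ones for your NIC come from /proc/interrupts), and the actual write is left commented out since it needs root:

```python
# Sketch: spread NIC queue IRQs across CPU cores via smp_affinity masks.
# A mask is a hex bitmask: bit N set => the IRQ may be serviced by CPU N.

def cpu_mask(core: int) -> str:
    """Hex smp_affinity mask that pins an IRQ to a single core."""
    return format(1 << core, "x")

def plan_affinity(irqs, num_cores):
    """Round-robin a list of IRQ numbers across the available cores."""
    return {irq: cpu_mask(i % num_cores) for i, irq in enumerate(irqs)}

def apply_affinity(plan):
    # Requires root; printed instead of written so the sketch is safe to run.
    for irq, mask in plan.items():
        path = f"/proc/irq/{irq}/smp_affinity"
        print(f"echo {mask} > {path}")
        # with open(path, "w") as f:
        #     f.write(mask)

if __name__ == "__main__":
    # Hypothetical IRQ numbers for an 8-queue NIC on an 8-core box.
    plan = plan_affinity(irqs=[64, 65, 66, 67, 68, 69, 70, 71], num_cores=8)
    apply_affinity(plan)
```

Note that irqbalance may fight manual settings, so it is usually stopped before pinning by hand.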
Re: Suggestions for deferring DB write using memcached
One more shop here that uses gearmand for this!
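For anyone finding this thread via search: the pattern being endorsed is write-behind caching with a job queue: acknowledge the update immediately, enqueue the DB write (gearmand in this shop's case), and let a background worker drain the queue. A stand-in sketch in Python, where queue.Queue plays the role of gearmand and a dict plays the role of the database (both are illustrative assumptions, not gearmand's API):

```python
import queue

# Stand-ins: `jobs` plays the role of a gearmand queue, `db` the database.
jobs = queue.Queue()
db: dict[str, int] = {}

def deferred_write(key: str, value: int) -> None:
    """Frontend path: enqueue the write and return immediately."""
    jobs.put((key, value))

def worker_drain() -> int:
    """Background worker: apply queued writes to the database."""
    applied = 0
    while not jobs.empty():
        key, value = jobs.get()
        db[key] = value          # a real worker would issue an UPDATE here
        applied += 1
    return applied

deferred_write("play_count:42", 1001)
deferred_write("play_count:42", 1002)   # later write supersedes the first
worker_drain()
```

The trade-off is durability: writes queued but not yet drained are lost if the queue host dies, which is why this fits counters and denormalized data better than money.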
Re: memcached limitation
STAT items:26:outofmemory 0
STAT items:26:tailrepairs 0
STAT items:27:number 570
STAT items:27:age 20065964
STAT items:27:evicted 262975
STAT items:27:evicted_time 14353
STAT items:27:outofmemory 0
STAT items:27:tailrepairs 0
STAT items:28:number 335
STAT items:28:age 20076269
STAT items:28:evicted 1015853
STAT items:28:evicted_time 4046
STAT items:28:outofmemory 0
STAT items:28:tailrepairs 0
STAT items:29:number 361
STAT items:29:age 20077722
STAT items:29:evicted 1832926
STAT items:29:evicted_time 2578
STAT items:29:outofmemory 0
STAT items:29:tailrepairs 0
STAT items:30:number 2370
STAT items:30:age 20041194
STAT items:30:evicted 48987
STAT items:30:evicted_time 39102
STAT items:30:outofmemory 0
STAT items:30:tailrepairs 0
STAT items:31:number 488
STAT items:31:age 1797653
STAT items:31:evicted 5130
STAT items:31:evicted_time 40988
STAT items:31:outofmemory 0
STAT items:31:tailrepairs 0
STAT items:32:number 30
STAT items:32:age 20058375
STAT items:32:evicted 5093
STAT items:32:evicted_time 21217
STAT items:32:outofmemory 0
STAT items:32:tailrepairs 0
STAT items:33:number 8
STAT items:33:age 20068323
STAT items:33:evicted 3485
STAT items:33:evicted_time 11274
STAT items:33:outofmemory 0
STAT items:33:tailrepairs 0
STAT items:34:number 6
STAT items:34:age 20060544
STAT items:34:evicted 2058
STAT items:34:evicted_time 19924
STAT items:34:outofmemory 0
STAT items:34:tailrepairs 0
STAT items:35:number 5
STAT items:35:age 20068643
STAT items:35:evicted 1255
STAT items:35:evicted_time 11575
STAT items:35:outofmemory 0
STAT items:35:tailrepairs 0
STAT items:36:number 4
STAT items:36:age 20068275
STAT items:36:evicted 739
STAT items:36:evicted_time 14545
STAT items:36:outofmemory 0
STAT items:36:tailrepairs 0
STAT items:37:number 3
STAT items:37:age 20060785
STAT items:37:evicted 294
STAT items:37:evicted_time 54136
STAT items:37:outofmemory 0
STAT items:37:tailrepairs 0
STAT items:38:number 2
STAT items:38:age 19930122
STAT items:38:evicted 7
STAT items:38:evicted_time 59143
STAT items:38:outofmemory 0
STAT items:38:tailrepairs 0
STAT items:39:number 1
STAT items:39:age 20079660
STAT items:39:evicted 0
STAT items:39:evicted_time 0
STAT items:39:outofmemory 0
STAT items:39:tailrepairs 0
STAT items:40:number 1
STAT items:40:age 10017795
STAT items:40:evicted 0
STAT items:40:evicted_time 0
STAT items:40:outofmemory 0
STAT items:40:tailrepairs 0
END

On Apr 20, 12:40 am, Dustin <dsalli...@gmail.com> wrote:
> On Apr 19, 9:04 pm, Jay Paroline <boxmon...@gmail.com> wrote:
> > Interestingly, I also recently noticed that we are using around 75% of the allocated space on each of our buckets even though they have been up for 7.5 months. For us that's still more than enough space, so I'm not worried about it. If it helps at all, we're using an older version of memcached, 1.4.1; each bucket is 1 GB and has about 740 MB of that used. A lot of stuff we set gets deleted or expired, but there are plenty of things that get set with no expiration. You'd expect the buckets to fill up eventually...
>
> Hard to say without looking at things over time / looking at individual slabs. You do have a lot of evictions, so it's possible that you could benefit from using more of the memory you have allocated. List stats slabs and stats items to get more info (preferably more than once, with a bit of a delay). 1.4.3 improved slab sizing, but you may have a somewhat common problem: a memcached that has learned about how you use your data, and the things it learned are now wrong.

-- Subscription settings: http://groups.google.com/group/memcached/subscribe?hl=en
Re: memcached limitation
Interestingly, I also recently noticed that we are using around 75% of the allocated space on each of our buckets even though they have been up for 7.5 months. For us that's still more than enough space, so I'm not worried about it. If it helps at all, we're using an older version of memcached, 1.4.1; each bucket is 1 GB and has about 740 MB of that used. A lot of stuff we set gets deleted or expired, but there are plenty of things that get set with no expiration. You'd expect the buckets to fill up eventually...

Here are stats from one of the buckets:

STAT pid 3746
STAT uptime 19995681
STAT time 1271735741
STAT version 1.4.1
STAT pointer_size 64
STAT rusage_user 20656.508733
STAT rusage_system 33432.810440
STAT curr_connections 849
STAT total_connections 40910273
STAT connection_structures 4494
STAT cmd_get 1134194114
STAT cmd_set 415640237
STAT cmd_flush 0
STAT get_hits 428039413
STAT get_misses 706154701
STAT delete_misses 530392232
STAT delete_hits 136872073
STAT incr_misses 23286903
STAT incr_hits 101619485
STAT decr_misses 502
STAT decr_hits 4139
STAT cas_misses 0
STAT cas_hits 0
STAT cas_badval 0
STAT bytes_read 539287620767
STAT bytes_written 581614893372
STAT limit_maxbytes 1073741824
STAT accepting_conns 1
STAT listen_disabled_num 0
STAT threads 5
STAT conn_yields 0
STAT bytes 774883798
STAT curr_items 1132112
STAT total_items 350437440
STAT evictions 52738486

(note: our hit rate isn't so good, but it's a bit misleading; the way we use memcache, sometimes a miss tells us all we need to know and doesn't result in a set)
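As a quick sanity check on the numbers in this post: the hit rate falls straight out of the stats as get_hits / cmd_get (and get_hits + get_misses should equal cmd_get), while the fill level is bytes / limit_maxbytes. A small Python check using the figures quoted above:

```python
# Figures copied from the `stats` output quoted in this post.
cmd_get = 1_134_194_114
get_hits = 428_039_413
get_misses = 706_154_701
limit_maxbytes = 1_073_741_824   # the 1 GB cap
bytes_used = 774_883_798         # the ~740 MB in use

assert get_hits + get_misses == cmd_get   # every get is either a hit or a miss

hit_rate = get_hits / cmd_get
fill_rate = bytes_used / limit_maxbytes
print(f"hit rate:  {hit_rate:.1%}")   # ~37.7%, low but expected per the note
print(f"fill rate: {fill_rate:.1%}")  # ~72%, matching the "around 75%" estimate
```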
Re: PHP and persistent connections
Ok, I think I wasn't clear enough in my question, so I wrote a simple example:

<?php
set_time_limit(0);
$m = new Memcached('pool');
$m->addServer('172.16.0.64', 11211);
$m->add('monkeys', '5');
while (1) {
    $m->get('monkeys');
    sleep(5);
}
?>

If 5 of these are running at the same time, I would *like* to have them all sharing the same connection. But that's not what happens:

[r...@lifebook ~]# php testMemcachePool.php &
[1] 3291
[r...@lifebook ~]# netstat -tnap | grep 11211
tcp   0   0 172.16.1.84:47401  172.16.0.64:11211  ESTABLISHED  3291/php
[r...@lifebook ~]# php testMemcachePool.php &
[2] 3294
[r...@lifebook ~]# netstat -tnap | grep 11211
tcp   0   0 172.16.1.84:47401  172.16.0.64:11211  ESTABLISHED  3291/php
tcp   0   0 172.16.1.84:47402  172.16.0.64:11211  ESTABLISHED  3294/php
[r...@lifebook ~]# php testMemcachePool.php &
[3] 3297
[r...@lifebook ~]# netstat -tnap | grep 11211
tcp   0   0 172.16.1.84:47401  172.16.0.64:11211  ESTABLISHED  3291/php
tcp   0   0 172.16.1.84:47402  172.16.0.64:11211  ESTABLISHED  3294/php
tcp   0   0 172.16.1.84:47403  172.16.0.64:11211  ESTABLISHED  3297/php
[r...@lifebook ~]# php testMemcachePool.php &
[4] 3300
[r...@lifebook ~]# netstat -tnap | grep 11211
tcp   0   0 172.16.1.84:47404  172.16.0.64:11211  ESTABLISHED  3300/php
tcp   0   0 172.16.1.84:47401  172.16.0.64:11211  ESTABLISHED  3291/php
tcp   0   0 172.16.1.84:47402  172.16.0.64:11211  ESTABLISHED  3294/php
tcp   0   0 172.16.1.84:47403  172.16.0.64:11211  ESTABLISHED  3297/php
[r...@lifebook ~]# php testMemcachePool.php &
[5] 3303
[r...@lifebook ~]# netstat -tnap | grep 11211
tcp   0   0 172.16.1.84:47404  172.16.0.64:11211  ESTABLISHED  3300/php
tcp   0   0 172.16.1.84:47405  172.16.0.64:11211  ESTABLISHED  3303/php
tcp   0   0 172.16.1.84:47401  172.16.0.64:11211  ESTABLISHED  3291/php
tcp   0   0 172.16.1.84:47402  172.16.0.64:11211  ESTABLISHED  3294/php
tcp   0   0 172.16.1.84:47403  172.16.0.64:11211  ESTABLISHED  3297/php

On our production servers, we might have 200 apache processes running at the same time (each running PHP).
We have 21 memcached buckets, so our worst-case scenario is 4200 active connections to memcached from just one front-end node. In reality I'm counting 2583 with a state of ESTABLISHED right now, with 137 httpd processes.

-- To unsubscribe, reply using "remove me" as the subject.
PHP and persistent connections
Currently we are (still) stuck using the PECL memcache extension, but hopefully soon we will be moving to the libmemcached-based PECL memcached extension. I am trying to figure out if there is any way to set it up to use persistent/shared connections. Currently every apache/PHP process ends up with its own connection to every memcached bucket. We had to up the ephemeral port range because we were actually running out. During peak times, we still come up short. Rather than trying to squeeze out more ephemeral ports or mess with TIME_WAIT settings, it would be nice if the connections could just be pooled. Unfortunately I haven't been able to find any info about this in the memcached extension. Can anyone shed some light on whether this is possible and if so, how?

Thanks,
Jay
Re: Switching from pecl/memcache to pecl/memcached
Fortunately, way back when we started using memcache I wrote a wrapper for it to handle some of the more tedious stuff for me automatically. All our code uses that wrapper class, so I just had to change the one class to swap out extensions. :)

Unfortunately, in our testing we are seeing some weirdness. If we set a key from pecl/memcache we can get it from pecl/memcached, but if we set it from pecl/memcached, we can't retrieve it from pecl/memcache. I was hoping that I had just missed something in the long list of options that would make it fully backwards compatible.

Jay

On Feb 2, 10:34 am, Brian Moon <br...@moonspot.net> wrote:
> The syntax differences don't lend themselves to being swapped out that easily. Unless you are going to do a full code rollout to one set of servers and not another, I don't think this is too easy. They have different compression settings, and I am not sure if they set the same flag when sending compressed data. I have not had a reason to look into it. A simple test would tell you for sure. Just try it out on your test systems and see how it works. Report back.
>
> Brian.
> http://brian.moonspot.net/
>
> On 2/1/10 11:51 PM, Jay Paroline wrote:
> > Hi all, We're in the process of switching from the pecl memcache extension to the pecl memcached extension, but we would like to start out by doing a very limited rollout of the memcached extension so we can compare performance and make sure everything is working as expected. The documentation about the hashing/failover strategies used by the memcache extension is extremely limited. Does anyone know which settings I should use so that pecl/memcache and pecl/memcached choose the same servers for each key? Should it be considered generally safe to have both clients working on the same keys? Thanks, Jay
Switching from pecl/memcache to pecl/memcached
Hi all,

We're in the process of switching from the pecl memcache extension to the pecl memcached extension, but we would like to start out by doing a very limited rollout of the memcached extension so we can compare performance and make sure everything is working as expected. The documentation about the hashing/failover strategies used by the memcache extension is extremely limited. Does anyone know which settings I should use so that pecl/memcache and pecl/memcached choose the same servers for each key? Should it be considered generally safe to have both clients working on the same keys?

Thanks,
Jay
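The underlying risk here: two clients only agree on key placement if they use the same key-hash function and the same distribution scheme, and pecl/memcache and pecl/memcached do not share defaults (the exact option names to align them are in each extension's docs). The toy Python below is not either extension's actual hash; it just demonstrates how two different hash functions can route the same key to different servers in the same pool:

```python
import hashlib
import zlib

SERVERS = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]  # hypothetical pool

def pick_crc32(key: str) -> str:
    """Modulo distribution over a CRC32 key hash."""
    return SERVERS[zlib.crc32(key.encode()) % len(SERVERS)]

def pick_md5(key: str) -> str:
    """Modulo distribution over the top 32 bits of an MD5 key hash."""
    h = int.from_bytes(hashlib.md5(key.encode()).digest()[:4], "big")
    return SERVERS[h % len(SERVERS)]

key = "user:1234:session"
print(pick_crc32(key), pick_md5(key))  # often two different servers
```

If the two clients disagree, a set through one extension lands on a server the other extension never consults, which looks exactly like the "set from memcached, can't get from memcache" symptom later in this thread.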
Problems with binary protocol and memcached PECL extension
Hi guys, I posted this to the libmemcached mailing list a while ago and didn't get a response, but this list is a lot more active, so I'm hoping someone here will have answers for me. :)

I've taken some time to work on porting our code from using the PHP PECL memcache extension to using the PECL memcached extension so we can take advantage of all the advanced functionality that libmemcached has to offer, but I'm running into some issues using the binary protocol. Here is my code:

<?php
$servers = array(array('localhost', '11211'));
$m = new Memcached();
$m->addServers($servers);
$m->setOption(Memcached::OPT_DISTRIBUTION, Memcached::DISTRIBUTION_CONSISTENT);
$m->setOption(Memcached::OPT_CONNECT_TIMEOUT, 500);
$m->setOption(Memcached::OPT_SEND_TIMEOUT, 500);
$m->setOption(Memcached::OPT_RECV_TIMEOUT, 500);
$m->setOption(Memcached::OPT_BINARY_PROTOCOL, true);
$m->setOption(Memcached::OPT_SERVER_FAILURE_LIMIT, 1);
$m->set('foo', '100');
var_dump($m->get('foo'));
?>

If I run this, the script never finishes executing. If I change OPT_BINARY_PROTOCOL to false, it instantly returns with the results. So the two major issues are that it doesn't seem to be obeying my timeout settings, and of course the binary protocol doesn't seem to be working. Is there something I need to change on the server end to support the binary protocol? I'm running version 1.4.4 of memcached and have the latest libmemcached and PECL memcached extensions installed.

Thanks!
Jay
Re: Problems with binary protocol and memcached PECL extension
It looks like both/either. I added print statements in front of each, and it doesn't get to the get. If I comment out the set, then it hangs on the get.

Thanks,
Jay

On Jan 6, 4:43 pm, Brian Moon <br...@moonspot.net> wrote:
> does the get or the set hold it up?
>
> Brian.
> http://brian.moonspot.net/
>
> On 1/6/10 3:38 PM, Jay Paroline wrote:
> > [...]
Re: Problems with binary protocol and memcached PECL extension
1.4.4

On Jan 6, 5:07 pm, Trond Norbye <trond.nor...@gmail.com> wrote:
> What server version are you using?
>
> Trond
>
> On Wednesday, January 6, 2010, Brian Moon <br...@moonspot.net> wrote:
> > and what versions of libmemcached and pecl/memcached are you using? php -i can tell you that.
> >
> > Brian.
> > http://brian.moonspot.net/
> >
> > On 1/6/10 3:45 PM, Jay Paroline wrote:
> > > [...]
>
> --
> Trond Norbye
Re: Problems with binary protocol and memcached PECL extension
This is very odd. If I run it from the command line (with or without -vv) it works as expected. If it starts from init.d, it doesn't work.

[r...@rhd011 test]# /etc/init.d/memcached start
Starting memcached: [ OK ]
[r...@rhd011 test]# ps aux | grep memcached
101  29441  0.0  0.0  52448  1008 ?  Ssl  20:07  0:00 memcached -d -p 11211 -u memcached -m 64 -c 1024 -P /var/run/memcached/memcached.pid

^^ the above does not work

[r...@rhd011 test]# /etc/init.d/memcached stop
Stopping memcached: [ OK ]
[r...@rhd011 test]# memcached -d -p 11211 -u memcached -m 64 -c 1024 -P /var/run/memcached/memcached.pid
[r...@rhd011 test]# ps aux | grep memcached
101  29473  0.0  0.0  128064  996 ?  Ssl  20:09  0:00 memcached -d -p 11211 -u memcached -m 64 -c 1024 -P /var/run/memcached/memcached.pid

^^ the above works

What the heck is the difference?

Jay

On Jan 6, 5:15 pm, Trond Norbye <trond.nor...@gmail.com> wrote:
> Try running your server from a console and add -vvv to the command line. Does it print out any progress?
>
> On Wednesday, January 6, 2010, Jay Paroline <boxmon...@gmail.com> wrote:
> > [...]
Re: Problems with binary protocol and memcached PECL extension
Ok, I'm officially semi-retarded. Apparently when I did a make install of the latest version of memcached on our dev server, it installed into /usr/local/bin, but the old version was still in /usr/bin -- when I ran the daemon by hand it ran from /usr/local/bin, but the init.d script was running it from /usr/bin. I made a symlink from /usr/bin to /usr/local/bin and restarted, and it works like magic.

Jay

On Jan 6, 8:11 pm, Jay Paroline <boxmon...@gmail.com> wrote:
> This is very odd. If I run it from the command line (with or without -vv) it works as expected. If it starts from init.d, it doesn't work.
>
> [...]
>
> What the heck is the difference?
>
> Jay
Using memcached as a distributed file cache
I'm running this by you guys to make sure we're not trying something completely insane. ;)

We already rely on memcached quite heavily to minimize load on our DB with stunning success, but as a music streaming service, we also serve up lots and lots of 5-6 MB files, and right now we don't have a distributed cache of any kind, just lots and lots of really fast disks. Due to the nature of our content, we have some files that are insanely popular, and a lot of long-tail content that gets played infrequently. I don't remember the exact numbers, but I'd guesstimate that the top 50 GB of our many TB of files accounts for 40-60% of our streams on any given day.

What I'd love to do is get those popular files served from memory, which should alleviate load on the disks considerably. Obviously the file system cache does some of this already, but since it's not distributed it uses the space a lot less efficiently than a distributed cache would (say one popular file lives on 3 stream nodes: it's going to be cached in memory 3 separate times instead of just once). We have multiple stream servers, obviously, and between them we could probably scrounge up 50 GB or more for memcached, theoretically removing the disk load for all of the most popular content.

My favorite memory cache is of course memcache, so I'm wondering if this would be an appropriate use (with the slab size turned way up, obviously). We're going to start doing some experiments with it, but I'm wondering what the community thinks.

Thanks,
Jay
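One practical wrinkle for this plan, related to the "slab size turned way up" aside: stock memcached caps item size at 1 MB (newer 1.4.x builds can raise it with -I), so 5-6 MB files either need a raised limit or chunking across keys. A sketch of the chunking approach in Python, with a plain dict standing in for the memcached client and a made-up key scheme:

```python
CHUNK = 1024 * 1024           # 1 MB, memcached's default max item size
cache: dict[str, bytes] = {}  # stand-in for a real memcached client

def store_file(key: str, data: bytes) -> None:
    """Split a large blob into <=1 MB chunks under derived keys."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    cache[f"{key}:n"] = str(len(chunks)).encode()   # manifest: chunk count
    for i, chunk in enumerate(chunks):
        cache[f"{key}:{i}"] = chunk

def fetch_file(key: str):
    """Reassemble; treat any missing chunk as a full miss (fall back to disk)."""
    manifest = cache.get(f"{key}:n")
    if manifest is None:
        return None
    parts = [cache.get(f"{key}:{i}") for i in range(int(manifest))]
    if any(p is None for p in parts):
        return None
    return b"".join(parts)

song = bytes(5 * 1024 * 1024 + 123)   # a ~5 MB dummy "file"
store_file("track:9876", song)
assert fetch_file("track:9876") == song
```

The miss-on-any-missing-chunk rule matters because memcached evicts per item: losing one chunk to eviction must send the whole request back to disk rather than serving a corrupted file.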
Re: Using memcached as a distributed file cache
I'm not sure how well a reverse proxy would fit our needs, having never used one before. The way we do streaming is: a client sends a one-time-use key to the stream server. The key is used to determine which file should be streamed, and then the file is returned. The effect is that no two requests are identical, and that code must be run for every single request to verify the request and look up the appropriate file. Is it possible or practical to use a reverse proxy in that way?

Jay

Adam Lee wrote:
> I'm guessing you might get better mileage out of using something written more for this purpose, e.g. squid set up as a reverse proxy.
>
> On Mon, Nov 2, 2009 at 4:35 PM, Jay Paroline <boxmon...@gmail.com> wrote:
> > [...]
>
> --
> awl
Re: dormando's awesome memcached top v0.1
Just got this running on my box pointing at all our servers; so far it's looking good! The only hiccup for me was that it expected the yaml file to be in /etc and I was just editing it in place. Paying attention to the error message made it pretty obvious what I did wrong, though.

Jay

dormando wrote:
> Yo,
>
> I couldn't sleep, so: http://github.com/dormando/damemtop (or: http://consoleninja.net/code/memcached/damemtop-0.1.tar.gz)
>
> Early release of a utility I've been working on in the last few days. Yes, sorry, I'm aware this makes /four/ memcached top programs. So, I had to make mine awesome. In order to be truly awesome, I need to spend another day working on it to add a few things, but it's in a state now where it can be useful to people. So, up it goes, and I'll take feedback/ideas/patches.
>
> In short, it's a top utility which lets you take any stat memcached spits out from 'stats', 'stats items', or 'stats slabs', and display it in a 'top'-like interface. With totals, averages, etc. It also supports computed columns (hit_rate, fill_rate, soon to be many more). Finally, you can choose an arbitrary column to sort the output. I have more memcached's than will fit on a stretched-out terminal, so it's nice to be able to sort :)
>
> In order to change the display around you'll need to edit the damemtop.yaml file (example included). Also, in order to run it at all you'll need to install the AnyEvent and YAML CPAN modules. I'm brutally aware of the adversity of installing simple modules, but these are in very common use, and AnyEvent allows the utility to scale to hundreds of instances. It takes 0.2 seconds to poll every single stat and display against TypePad's entire cluster.
>
> Upcoming ideas/features:
> - a '--helpme' mode that makes a big YAML dump folks can share with the mailing list to expedite assistance.
> - many more computed columns.
> - a drill-down mode for exploring a single or custom set of instances.
> - a slabs mode for easy analysis and aggregation of the individual slab stats.
> - online config editing.
> - more formatting. shorteners for large numbers: bytes -> K -> M -> G/etc.
> - better docs, more fleshed-out config loader.
> - scrolling output modes.
> - multi-cluster support (switch views between groups of servers)
> - rolling averages for some views.
> - latency monitor (testing a bunch of commands)
> - YAML output/input modes for logging, output into monitoring/graphing systems, input into multiple 'damemtop' listeners.
> - pretty colors.
> - reorganize code a little. It got messier than I like :/
>
> Dunno... stuff? Maybe a quickie mode that can give you warnings or notes about your configuration based on current stats?
>
> I'll work on this for a few hours each week and kick out a new version for a month or so. I don't expect (nor want) it to reach the complexity of something like innotop. The intent is for this module to replace the 'scripts/memcached-tool' program, and be distributed with memcached itself.
>
> have fun,
> -Dormando
Re: Copy complete cache from one server to an other via script
Actually, if anyone can provide hints about existing memcached replication solutions, especially for PHP, that would make me very happy. We have some keys that are very expensive to regenerate when they fall out of memcached, so losing a bucket due to server failure, maintenance, etc. is very painful right now. We have way more memory than we'll need any time soon for memcache, so it would be ideal if each bucket could be replicated on another server; then, if one has to go down in the middle of the day, we don't lose all of the keys stored there.

Jay

Jim Spath wrote:
> Maybe you should look into memcached replication.

Josef Finsel wrote:
> Karsten,
>
> It's not really possible, nor should you worry about it. If this is really an issue, then the way you're using memcached is probably not correct. If the memcached client is set up correctly, it will distribute the keys across both servers.
>
> -Josef
>
> On Wed, Aug 19, 2009 at 11:05 AM, Karsten <karsten.landw...@bertelsmann.de> wrote:
> > Hi guys,
> >
> > I'm a newbie to memcached and need your help. I'm currently working on a script (or trying to start working on one ;)) that should copy all keys (all cached objects) to another memcached server. The situation is as follows: I have two memcached servers. Some users may log in on a website, and their session will be cached on one of the servers, not on both. But both servers should have the same cached content in case one server goes down. Can you tell me if this is possible and how to do it (a link to an example or something that may help me)? I have to write this script in Perl, but the Perl API doesn't look like what I need.
> >
> > Best regards,
> > Karsten
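Absent a server-side replication layer, the usual client-side workaround is exactly what this post asks for: write every expensive key to two buckets and fall back to the replica on a miss. A stand-in sketch in Python, where two dicts play the role of the two memcached buckets (a real version would hash the key to pick the primary/replica pair):

```python
# Two stand-in "buckets"; a real client would pick these by hashing the key.
primary: dict[str, str] = {}
replica: dict[str, str] = {}

def set_replicated(key: str, value: str) -> None:
    """Write to both buckets so either can serve the key alone."""
    primary[key] = value
    replica[key] = value

def get_replicated(key: str):
    """Read the primary; on a miss (e.g. that server went down), try the
    replica and repopulate the primary so later reads are cheap again."""
    value = primary.get(key)
    if value is None:
        value = replica.get(key)
        if value is not None:
            primary[key] = value   # heal the primary after a failure
    return value

set_replicated("expensive:report", "cached result")
primary.clear()                      # simulate losing the primary bucket
assert get_replicated("expensive:report") == "cached result"
```

The cost is doubled write traffic and the usual caveat that the two copies can briefly diverge if one write fails, which is acceptable for a cache but not for a source of truth.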
1.2.6 to 1.2.8 or 1.4.0?
Hello, We've been having some intermittent memcached connection issues and I noticed that we are a couple of releases behind. Our current version is 1.2.6. Before I nag our admin about upgrading, is there any reason why it might be wiser to go to 1.2.8 rather than make the leap to 1.4.0? FWIW, the clients we use are primarily PHP plus a couple of lower-traffic Java apps. Should 1.4.0 work with any clients that were working with 1.2.6? Thanks! Jay
Re: memcache VS mysql query cache
Another major disadvantage of the MySQL query cache is that any time data in a table is modified, all queries in the cache selecting from or joining across that table have to be invalidated. If you have a lot of writes happening, your query cache will be virtually useless. At Grooveshark we have disabled the query cache with no detrimental effects, and it seems to be helping with some mysterious locking issues we were having. Jay

Joseph Engo wrote: Generally, the MySQL query cache will perform worse than memcache due to a global lock that is required for it to work. Under the right workload, the lock contention can get very serious. There is also overhead added during reads and writes. The query cache also can't handle queries that contain non-deterministic data, for example now(). When you have multiple slaves, the query cache becomes extremely inefficient because it's not a distributed cache. Most DBAs completely disable the query cache in favor of memcache.

On Jun 22, 2009, at 7:47 AM, PHPMysql wrote: How is the mysql query cache more advantageous than memcache?
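The invalidation difference above can be made concrete with a cache-aside sketch: the application caches specific results and removes only the keys it knows a write affects, instead of MySQL dropping every cached query that touches the written table. This is an illustrative toy, assuming made-up names; the dict stands in for memcached and `fetch_user_from_db` for a real SELECT:

```python
cache = {}

def fetch_user_from_db(user_id):
    # placeholder for a real SELECT against the database
    return {"id": user_id, "name": "user%d" % user_id}

def get_user(user_id):
    """Cache-aside read: try the cache, fall back to the DB and populate."""
    key = "user:%d" % user_id
    if key not in cache:
        cache[key] = fetch_user_from_db(user_id)
    return cache[key]

def update_user(user_id, name):
    """The UPDATE would run here; then invalidate only this user's key.

    Every other cached row survives the write -- unlike the MySQL query
    cache, which would drop all cached queries touching the users table.
    """
    cache.pop("user:%d" % user_id, None)
```

With memcached the dict would be replaced by client calls (`get`/`set`/`delete`), but the invalidation granularity is the point.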
Re: Problem with memcached + hibernate: Timeout waiting for value
This might be an obvious question, but have you verified that memcached is not going into swap?

thengil wrote: I've turned on GC logging and unfortunately it seems that the problem is not ONLY due to full GCs. We've observed a full GC that took 1.2 seconds and that led to timeouts, but we've also observed timeouts that occurred when no full GC was ongoing. We will try to see what we can find via tcpdump and Wireshark next...

On 11 June, 10:07, Dustin dsalli...@gmail.com wrote: On Jun 11, 1:05 am, thengil staffan.martins...@gmail.com wrote: Yes, the timeout is 1000ms since we haven't touched that value. We will set up Wireshark to look at possible network problems... Could it somehow be related to garbage collection on the app server? Oh yes, a GC pause could certainly do it. Perhaps running with verbose GC would show this correlation.
Re: Memcached Use In Low Latency, High Write environments
We are doing slightly more writes than reads, and performance is still quite acceptable. Unfortunately I don't have any recent benchmarks, however.
Re: typical expiration times?
For anything that is remotely likely to fail to be invalidated, we look at the expiration time as the answer to the question: how stale can your data be and still be OK? For us, 24 hours is what we use for general system-state data and cached search results. Nobody really cares or can tell if that stuff is less than 24 hours old, but if it got really stale it would become obvious. Now for stuff that can't ever really afford to be stale, such as things that change because of user interaction, we generally fly by the seat of our pants and don't have it expire at all, because we have to be really careful about always invalidating anyway: being off by a minute would be just as bad as being off by a day.
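The policy described above can be summed up as a tiny TTL lookup. This is a sketch with invented category names, not anyone's actual code; in memcached terms, an expiration of 0 means "never expires, rely on explicit invalidation":

```python
DAY = 24 * 60 * 60  # 86400 seconds

def ttl_for(category):
    """Pick an expiration (in seconds) based on how staleness-tolerant the data is."""
    if category in ("system_state", "search_results"):
        return DAY   # mild staleness is invisible; cap it at a day
    if category in ("session", "user_settings"):
        return 0     # user-driven data: never expires, invalidate explicitly
    return DAY       # conservative default for anything unclassified

print(ttl_for("search_results"))
```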
Re: creating a key
Using a prefix can be handy. If you need to flush all queries from memcache at the same time while leaving other data in place, for example, a simple way to do that is to change the prefix you are using for queries. The keys will be different, so all the old entries are automatically no longer accessible, and will eventually be discarded by memcache.
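The prefix trick can be automated by storing a generation number in the cache itself and baking it into every query key; bumping the number makes the whole namespace unreachable in one operation. A sketch of the idea (the dict stands in for memcached, and the key layout is made up for illustration):

```python
cache = {}

def query_key(sql):
    """Build a key that embeds the current 'query' namespace generation."""
    gen = cache.get("query_generation", 1)
    return "query:%d:%s" % (gen, sql)

def cache_query(sql, rows):
    cache[query_key(sql)] = rows

def get_query(sql):
    return cache.get(query_key(sql))

def flush_queries():
    """Bump the generation: old query:* entries still exist but are
    unreachable, and memcached's LRU would evict them eventually.
    Keys in other namespaces are untouched."""
    cache["query_generation"] = cache.get("query_generation", 1) + 1
```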
Re: Webconsole debugging tool built into memcached
Hi Clint, This looks like a potentially incredibly useful tool for debugging. I would be hesitant to commit to having it on our production servers, but I would certainly run it on our dev servers. Anyway, being able to see all keys by name is actually not all that helpful for me, because we md5 most of our keys; otherwise they end up being too long. If it were possible to see all keys in order of when they were added/modified, that would be the most useful memcache tool I've ever used. Actually, it would be the only memcache tool. :)
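For readers wondering why keys get md5'd at all: memcached limits keys to 250 bytes and forbids spaces and control characters, so long or messy keys are commonly reduced to a hex digest. A sketch of that pattern (the `safe_key` helper and its thresholds are illustrative, not Grooveshark's actual code):

```python
import hashlib

def safe_key(raw_key, prefix="app"):
    """Return a memcached-safe key: pass short clean keys through,
    hash anything long or containing spaces down to a 32-char md5 digest."""
    if len(raw_key) <= 200 and " " not in raw_key:
        return "%s:%s" % (prefix, raw_key)
    digest = hashlib.md5(raw_key.encode("utf-8")).hexdigest()
    return "%s:%s" % (prefix, digest)

print(safe_key("short"))          # app:short
print(len(safe_key("x" * 1000)))  # prefix + colon + 32-char digest
```

The downside is exactly the one raised above: once keys are hashed, a key-browsing debug view shows opaque digests instead of meaningful names.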