We are experimenting with UDP vs. TCP for gets to see what kind of speedup 
we can achieve. I wrote a very simple benchmark that uses a single thread to 
set a key once and then issue gets for that same key over and over. We 
didn't see any speedup with UDP; if anything, we saw a slight slowdown, 
which seemed strange. 
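For reference, the benchmark loop is essentially the following. This is a minimal Python sketch over the memcached text protocol using raw sockets (not what we actually ran, which uses a client library); the function names, key name, and counts are mine:

```python
import socket
import time

def build_set(key: bytes, value: bytes, flags: int = 0, exptime: int = 0) -> bytes:
    # memcached text protocol: set <key> <flags> <exptime> <bytes>\r\n<data>\r\n
    header = b"set %s %d %d %d\r\n" % (key, flags, exptime, len(value))
    return header + value + b"\r\n"

def build_get(key: bytes) -> bytes:
    # memcached text protocol: get <key>\r\n
    return b"get %s\r\n" % key

def run_benchmark(host: str = "127.0.0.1", port: int = 11211,
                  gets: int = 100_000) -> None:
    # Assumes a local memcached listening on the default TCP port.
    with socket.create_connection((host, port)) as s:
        # Set the key once...
        s.sendall(build_set(b"bench:key", b"hello"))
        assert s.recv(64).startswith(b"STORED")
        # ...then hammer it with gets from a single thread.
        start = time.monotonic()
        for _ in range(gets):
            s.sendall(build_get(b"bench:key"))
            # A get response is terminated by END\r\n.
            buf = b""
            while not buf.endswith(b"END\r\n"):
                buf += s.recv(4096)
        elapsed = time.monotonic() - start
        print(f"{gets / elapsed:.0f} gets/sec")

if __name__ == "__main__":
    run_benchmark()
```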

When checking the stats delta, I noticed a very high value for 
lrutail_reflocked: for a test doing 100K gets, it increased by 76K. In our 
production systems, memcached processes that have been running for weeks 
show a very low value for this stat, under 100. The latency measured by the 
benchmark also seems to correlate with the rate at which that counter 
increases. 

I tried to reproduce this with the spymemcached Java client and see the 
same behavior, so I think the problem is in my benchmark design rather than 
the protocol. We are running memcached 1.6.9. Here is a list of all the 
stats values that changed during a recent run over TCP:

stats diff:
  * bytes_read: 10,706,007
  * bytes_written: 426,323,216
  * cmd_get: 101,000
  * get_hits: 101,000
  * lru_maintainer_juggles: 8,826
  * lrutail_reflocked: 76,685
  * moves_to_cold: 76,877
  * moves_to_warm: 76,917
  * moves_within_lru: 450
  * rusage_system: 0.95
  * rusage_user: 0.37
  * time: 6
  * total_connections: 2
  * uptime: 6
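Normalizing the counters by the number of gets (just arithmetic on the numbers above) shows that roughly three out of four gets in this run coincide with an LRU move and a reflocked tail item:

```python
# Counters copied from the stats diff above.
cmd_get = 101_000
lrutail_reflocked = 76_685
moves_to_cold = 76_877

# Per-get ratios: both come out to ~0.76.
print(f"reflocked per get:  {lrutail_reflocked / cmd_get:.2f}")
print(f"cold moves per get: {moves_to_cold / cmd_get:.2f}")
```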
