Hey,

Usually it's good to include the benchmark code, but I think I can answer
this off the top of my head:

1) Set at least 1,000 keys and fetch them randomly; I typically test with a
million or more. All of memcached's internal scale-up assumes you're not
just fetching a single key. There are internal threads which poke at the
LRU, and since you're always accessing that one key, it's always in use,
and those internal threads report that as lrutail_reflocked (see the
sketch after this list).

2) UDP mode has not had any love in a long time. It's not very popular and
has caused some strife on the internet as it doesn't have any
authentication. The UDP protocol wrapper is also not scalable. :( I wish
it were done like DNS with a redirect for too-large values.

3) Since UDP mode isn't using SO_REUSEPORT, recvmmsg, sendmmsg, or any
other modern Linux API, it's going to be a lot slower than TCP mode.

4) TCP mode actually scales pretty well: linearly for reads vs. the number
of worker threads, up to tens of millions of requests per second on large
machines. What problems are you running into?
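
To illustrate point 1, here's a rough sketch using the spy (spymemcached)
client you mentioned. The key count, value size, key prefix, class name,
and server address are just placeholders, not a recommendation:

    import java.net.InetSocketAddress;
    import java.util.Random;

    import net.spy.memcached.MemcachedClient;

    public class RandomGetBench {
        public static void main(String[] args) throws Exception {
            // Placeholder address/port; point this at your test server.
            MemcachedClient client =
                new MemcachedClient(new InetSocketAddress("127.0.0.1", 11211));

            int keyCount = 1_000_000; // at least 1,000; a million+ is better
            int gets = 100_000;
            String value = "some small test value";

            // Spread the data across many keys so the gets below don't all
            // land on a single item stuck at the LRU tail.
            for (int i = 0; i < keyCount; i++) {
                client.set("bench:" + i, 0, value).get(); // block until set lands
            }

            // Fetch random keys and time the loop.
            Random rnd = new Random();
            long start = System.nanoTime();
            for (int i = 0; i < gets; i++) {
                client.get("bench:" + rnd.nextInt(keyCount));
            }
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.println(gets + " random gets in " + ms + " ms");

            client.shutdown();
        }
    }

The exact numbers don't matter much; the point is that the gets land on
many different items, so the LRU juggler threads aren't constantly
tripping over the one item your client has in flight, which is what
drives lrutail_reflocked up.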

-Dormando

On Fri, 26 Mar 2021, kmr wrote:

> We are trying to experiment with using UDP vs TCP for gets to see what kind 
> of speedup we can achieve. I wrote a
> very simple benchmark that just uses a single thread to set a key once and do 
> gets to retrieve the key over and
> over. We didn't notice any speedup using UDP. If anything we saw a slight 
> slowdown which seemed strange. 
> When checking the stats delta, I noticed a really high value for 
> lrutail_reflocked. For a test doing 100K gets,
> this value increased by 76K. In our production system, memcached processes 
> that have been running for weeks have
> a very low value for this stat, less than 100. Also the latency measured by 
> the benchmark seems to correlate to
> the rate at which that value increases. 
>
> I tried to reproduce using the spy java client and I see the same behavior, 
> so I think it must be something wrong
> with my benchmark design rather than a protocol issue. We are using 1.6.9. 
> Here is a list of all the stats values
> that changed during a recent run using TCP:
>
> stats diff:
>   * bytes_read: 10,706,007
>   * bytes_written: 426,323,216
>   * cmd_get: 101,000
>   * get_hits: 101,000
>   * lru_maintainer_juggles: 8,826
>   * lrutail_reflocked: 76,685
>   * moves_to_cold: 76,877
>   * moves_to_warm: 76,917
>   * moves_within_lru: 450
>   * rusage_system: 0.95
>   * rusage_user: 0.37
>   * time: 6
>   * total_connections: 2
>   * uptime: 6
