Hi, Dustin,

On Mon, Dec 3, 2012 at 8:33 PM, Dustin Sallings <dsalli...@gmail.com> wrote:

> Aliaksandr Valialkin <valy...@gmail.com> writes:
>
> > Try go-memcached - fast memcached server written in Go. It can cache
> > objects with up to 2Gb sizes. It also has no 250 byte limit on key
> > sizes.
>
>   Your description sounds like you've written something very much unlike
> memcached.
>

Yes - go-memcached is just a sample application written on top of YBC (
https://github.com/valyala/ybc ) - a library implementing a fast in-process
blob cache with persistence support. Initially I started working on a
caching HTTP proxy for big ISPs on top of YBC. Unlike squid (
http://www.squid-cache.org/ ), this proxy should deal well with multi-TB
caches containing big objects such as videos. But then I temporarily
switched to the go-memcached implementation, since it covers more of the
YBC API than the caching HTTP proxy does. So go-memcached automatically
inherited these YBC features:
* support for large objects;
* support for persistence;
* support for cache sizes bigger than available RAM.


> > According to my performance tests on Ubuntu 12.04 x64, go-memcached's
> > speed is comparable to the original memcached.
>
>   Can you publish anything in more detail?  Calling it "fast" with the
> feature list you have seems quite misleading.  It can't be both.
>

Here are more details on these perftests, with charts -
https://github.com/valyala/ybc/wiki/Creating-scalable-memcache-client-in-Go .
The charts show that go-memcached becomes faster than the original
memcached in all tests with more than 32 concurrent workers. I suspect the
reason is the 'smart' flushing of write buffers - go-memcached flushes them
only when there are no pending requests on the given TCP connection.
These perftests also compare two memcache client implementations:
 * https://github.com/bradfitz/gomemcache - the 'traditional' client,
which uses big connection pools and doesn't pipeline requests.
 * https://github.com/valyala/ybc/tree/master/libs/go/memcache - the 'new'
client, which uses small connection pools and pipelines requests.
The conclusion is that the 'new' client scales much better with a large
number of concurrent workers.

> There are really good reasons to avoid caching items over 1MB or so
> (depending on your network topology).  It stops becoming a cache at some
> point and becomes a file server with entirely different semantics.  You
> no longer get to measure object retrieval latency in microseconds, for
> example.
>

I agree - memcached isn't well suited for caching large objects; we
already discussed this on golang-nuts. But, as you already know from that
discussion, there are CachingClient (
http://godoc.org/github.com/valyala/ybc/libs/go/memcache#CachingClient ) and
a bare in-process out-of-GC cache (
http://godoc.org/github.com/valyala/ybc/bindings/go/ybc ) for such cases -
these beasts may reduce latency to nanoseconds if used properly.
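The nanosecond claim rests on the read-through pattern behind CachingClient: consult an in-process cache first and pay the network round trip only on a miss. A hypothetical sketch of that shape (a plain map stands in for YBC, and `remote` stands in for the memcache call; none of this is the real API):

```go
package main

import "fmt"

// cachingClient sketches the read-through pattern: a local in-process
// cache in front of a slower remote memcache server. Illustrative only.
type cachingClient struct {
	local  map[string][]byte               // in-process cache (YBC in reality)
	remote func(key string) ([]byte, bool) // stands in for the memcache round trip
	misses int
}

func (c *cachingClient) Get(key string) ([]byte, bool) {
	if v, ok := c.local[key]; ok {
		return v, true // served locally, no network hop
	}
	c.misses++
	v, ok := c.remote(key)
	if ok {
		c.local[key] = v // populate for subsequent reads
	}
	return v, ok
}

func main() {
	backend := map[string][]byte{"k": []byte("v")}
	c := &cachingClient{
		local: map[string][]byte{},
		remote: func(key string) ([]byte, bool) {
			v, ok := backend[key]
			return v, ok
		},
	}
	c.Get("k")
	c.Get("k") // second read hits the local cache
	fmt.Println(c.misses) // 1
}
```

Only the first Get pays the remote latency; every repeat read is a local lookup, which is where the microseconds-to-nanoseconds drop comes from.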

-- 
Best Regards,

Aliaksandr
