Aliaksandr Valialkin <valy...@gmail.com> writes:

>   Can you publish anything in more detail?  Calling it "fast" with
> the feature list you have seems quite misleading.  It can't be both.
>
> Here are more details on these perftests with charts
> - https://github.com/valyala/ybc/wiki/Creating-scalable-memcache-client-
> in-Go . These charts show that go-memcached becomes faster than the
> original memcached in all tests with more than 32 concurrent workers.

  This is interesting.  The numbers seem to be a little off from what
we'd expect given previous tests, but it looks like you've given enough
information here for people to understand it.  dormando's got tests
that drive the server quite a bit harder, but I'm not sure how it
scales across different hardware.

  A couple of things that might also be interesting to look at would be
the latency differences, as well as how it compares with my client.
The model of execution for high throughput is a bit different, though
it does a reasonable job of keeping latency low as well.

  Since my client is binary only, I fully pipeline the client and
separate my reads and writes entirely.  On a mostly
set/add/incr/decr/delete/etc. workload, I almost never have any
responses to read from the socket, which tends to make things pretty
quick.  That said, the last person who wanted to do a lot with my
client made some changes that I haven't quite reviewed yet.  You seem
to have some good ideas in there as well.

>  http://godoc.org/github.com/valyala/ybc/bindings/go/ybc ) for such
> cases - these beasts may reduce latency to nanoseconds if properly
> used.

  Yep, though I still think sending those requests over the network is
unnecessary even in those cases.  :)

-- 
dustin