Re: Nginx + Memcached and large JSON items

2012-12-14 Thread Aliaksandr Valialkin
go-memcached-bench now measures maximum response time in addition to the
response time distribution. Here are the maximum response times for
workersCount=512:

workerMode=GetSetRand:

clientType=new, memcached: 223ms
clientType=new, go-memcached: 15ms
clientType=original, memcached: 245ms
clientType=original, go-memcached: 278ms


workerMode=GetHit:

clientType=new, memcached: 215ms
clientType=new, go-memcached: 12ms
clientType=original, memcached: 227ms
clientType=original, go-memcached: 289ms

workerMode=Set:

clientType=new, memcached: 15ms
clientType=new, go-memcached: 15ms
clientType=original, memcached: 153ms
clientType=original, go-memcached: 184ms


workerMode=GetMiss:

clientType=new, memcached: 10ms
clientType=new, go-memcached: 10ms
clientType=original, memcached: 129ms
clientType=original, go-memcached: 150ms

As you can see, go-memcached demonstrates much smaller maximum response
times than memcached in the tests with the new client.


-- 
Best Regards,

Aliaksandr


Re: Nginx + Memcached and large JSON items

2012-12-14 Thread Aliaksandr Valialkin
>
> Older versions can do a few million fetches/sec, newest version was doing
> 11 million on some decent hardware and had much better thread scalability.
> See the list archives and mc-crusher on my github page. Your numbers are
> pretty good for a Go thing though? Maybe mc-crusher can push it harder,
> too.
>

I wanted to compare mc-crusher with go-memcached-bench, but couldn't build
mc-crusher on Ubuntu 12.04. The linker shows the following errors:

$ ./compile
/tmp/ccSoTLEv.o: In function `new_connection':
/home/valyala/work/mc-crusher/./mc-crusher.c:552: undefined reference to
`event_set'
/home/valyala/work/mc-crusher/./mc-crusher.c:553: undefined reference to
`event_base_set'
/home/valyala/work/mc-crusher/./mc-crusher.c:554: undefined reference to
`event_add'
/tmp/ccSoTLEv.o: In function `update_conn_event':
/home/valyala/work/mc-crusher/./mc-crusher.c:108: undefined reference to
`event_del'
/home/valyala/work/mc-crusher/./mc-crusher.c:111: undefined reference to
`event_set'
/home/valyala/work/mc-crusher/./mc-crusher.c:112: undefined reference to
`event_base_set'
/home/valyala/work/mc-crusher/./mc-crusher.c:114: undefined reference to
`event_add'
/tmp/ccSoTLEv.o: In function `main':
/home/valyala/work/mc-crusher/./mc-crusher.c:863: undefined reference to
`event_init'
/home/valyala/work/mc-crusher/./mc-crusher.c:875: undefined reference to
`event_base_loop'
collect2: ld returned 1 exit status

I tried building mc-crusher with both libevent-dev (which is based on
libevent 2.0-5) and libevent1-dev (based on libevent 1.4-2) packages
without success.
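For what it's worth, undefined references to `event_set`, `event_base_set` and
friends usually mean libevent isn't reaching the linker at all, or the `-levent`
flag appears before the object files (Ubuntu's ld passes --as-needed by default,
which drops libraries listed too early). A hedged guess at a working build line,
assuming the stock libevent-dev package; the exact flags mc-crusher's compile
script needs may differ:

```shell
# Put -levent AFTER the sources/objects so --as-needed doesn't discard it.
gcc -O2 -o mc-crusher mc-crusher.c -levent -lpthread
```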

-- 
Best Regards,

Aliaksandr


Re: Nginx + Memcached and large JSON items

2012-12-13 Thread dormando
> > Here are more details on these perftests with charts
> > - https://github.com/valyala/ybc/wiki/Creating-scalable-memcache-client-
> > in-Go . These charts show that go-memcached becomes faster than the
> > original memcached in all tests with more than 32 concurrent workers.
>
>   This is interesting.  The numbers seem to be a little off from what we'd
> expect given previous tests, but it looks like you've given enough
> information here for people to understand it.  dormando's got tests
> that drive the server quite a bit harder, but I'm not sure how it scales
> across different hardware.

Older versions can do a few million fetches/sec, newest version was doing
11 million on some decent hardware and had much better thread scalability.
See the list archives and mc-crusher on my github page. Your numbers are
pretty good for a Go thing though? Maybe mc-crusher can push it harder,
too.


Re: Nginx + Memcached and large JSON items

2012-12-13 Thread Dustin Sallings
Aliaksandr Valialkin  writes:

>   Can you publish anything in more detail?  Calling it "fast" with
> the
> feature list you have seems quite misleading.  It can't be both.
>
> Here are more details on these perftests with charts
> - https://github.com/valyala/ybc/wiki/Creating-scalable-memcache-client-
> in-Go . These charts show that go-memcached becomes faster than the
> original memcached in all tests with more than 32 concurrent workers.

  This is interesting.  The numbers seem to be a little off from what we'd
expect given previous tests, but it looks like you've given enough
information here for people to understand it.  dormando's got tests
that drive the server quite a bit harder, but I'm not sure how it scales
across different hardware.

  A couple things that might be interesting to also look at would be the
latency differences as well as how the difference is with my client.
The model of execution for high throughput is a bit different, though.
It does a reasonable job of keeping latency low as well.

  Since my client is binary only, I fully pipeline the client and
separate my reads and writes entirely.  On a mostly
set/add/incr/decr/delete/etc. workload, I almost never have any
responses to read from the socket, which tends to make stuff pretty
quick.  That said, the last person who wanted to do a lot with my client
made some changes to it that I haven't quite reviewed yet.  You seem to
have some good ideas in there as well.

>  http://godoc.org/github.com/valyala/ybc/bindings/go/ybc ) for such
> cases - these beasts may reduce latency to nanoseconds if properly
> used.

  Yep, though I still think sending those requests over the network is
unnecessary even in those cases.  :)

-- 
dustin



Re: Nginx + Memcached and large JSON items

2012-12-12 Thread Aliaksandr Valialkin
Hi, Dustin,

On Mon, Dec 3, 2012 at 8:33 PM, Dustin Sallings  wrote:

> Aliaksandr Valialkin  writes:
>
> > Try go-memcached - fast memcached server written in Go. It can cache
> > objects with up to 2Gb sizes. It also has no 250 byte limit on key
> > sizes.
>
>   Your description sounds like you've written something very much unlike
> memcached.
>

Yes - go-memcached is just a sample application written on top of YBC (
https://github.com/valyala/ybc ) - a library implementing a fast in-process
blob cache with persistence support. Initially I started working on a
caching http proxy for big ISPs on top of YBC. Unlike squid (
http://www.squid-cache.org/ ), this proxy should deal well with
multi-TB caches containing big objects such as videos. But then I
temporarily switched to the go-memcached implementation, since it covers
more of the YBC API than the caching http proxy does. So go-memcached
automatically inherited these YBC features:
* support for large objects;
* support for persistence;
* support for cache sizes bigger than available RAM.


> > According to my performance tests on Ubuntu 12.04 x64, go-memcached's
> > speed is comparable to the original memcached.
>
>   Can you publish anything in more detail?  Calling it "fast" with the
> feature list you have seems quite misleading.  It can't be both.
>

Here are more details on these perftests with charts -
https://github.com/valyala/ybc/wiki/Creating-scalable-memcache-client-in-Go .
These charts show that go-memcached becomes faster than the original
memcached in all tests with more than 32 concurrent workers. I suspect the
reason is the 'smart' flushing of write buffers - go-memcached flushes them
only when there are no incoming requests on the given TCP connection.
These perftests also compare memcache client implementations:
 * https://github.com/bradfitz/gomemcache - This is the 'traditional'
client, which uses big connection pools and doesn't pipeline requests.
 * https://github.com/valyala/ybc/tree/master/libs/go/memcache - This is
the 'new' client, which uses small connection pools and request pipelining.
The conclusion is that the 'new' client scales much better with a large
number of concurrent workers.

>   There are really good reasons to avoid caching items over 1MB or so
> (depending on your network topology).  It stops becoming a cache at some
> point and becomes a file server with entirely different semantics.  You
> no longer get to measure object retrieval latency in microseconds, for
> example.
>

I agree - memcached isn't well suited for caching large objects - we
already discussed this on golang-nuts. But, as you already know from this
discussion, there is CachingClient (
http://godoc.org/github.com/valyala/ybc/libs/go/memcache#CachingClient ) and
bare in-process out-of-gc cache (
http://godoc.org/github.com/valyala/ybc/bindings/go/ybc ) for such cases -
these beasts may reduce latency to nanoseconds if properly used.

-- 
Best Regards,

Aliaksandr


Re: Nginx + Memcached and large JSON items

2012-12-03 Thread Dustin Sallings
Aliaksandr Valialkin  writes:

> Try go-memcached - fast memcached server written in Go. It can cache
> objects with up to 2Gb sizes. It also has no 250 byte limit on key
> sizes.

  Your description sounds like you've written something very much unlike
memcached.

> According to my performance tests on Ubuntu 12.04 x64, go-memcached's
> speed is comparable to the original memcached.

  Can you publish anything in more detail?  Calling it "fast" with the
feature list you have seems quite misleading.  It can't be both.

  There are really good reasons to avoid caching items over 1MB or so
(depending on your network topology).  It stops becoming a cache at some
point and becomes a file server with entirely different semantics.  You
no longer get to measure object retrieval latency in microseconds, for
example.

-- 
dustin



Re: Nginx + Memcached and large JSON items

2012-12-03 Thread Aliaksandr Valialkin
Hello, Nadav,

Try go-memcached - a fast memcached server written in Go. It can cache
objects with sizes up to 2GB. It also has no 250 byte limit on key sizes.

Currently it supports the following memcache commands: get, gets, set, add, 
cas, delete, flush_all. It also has the following features missing in the 
original memcached:
  * Cache size may exceed available RAM size by multiple orders of 
magnitude.
  * Cached objects may survive server crashes and restarts if cache is 
backed by files.
  * It can shard objects into multiple backing files located on multiple 
distinct physical storage devices (HDDs or SSDs). Such sharding may 
linearly increase qps for I/O-bound workloads, when hot objects don't fit 
RAM.
  * It supports two useful commands (extensions to memcache protocol):
  * dogpile effect-aware get (getde). Clients with getde support may 
effectively combat the negative consequences of the dogpile effect, such as 
periodic spikes in resource usage.
  * conditional get (cget). Clients with cget support may save network 
bandwidth and decrease latency between memcache servers and clients by 
caching objects in local in-process cache. This may be especially useful 
when dealing with large objects.
 Currently only a single memcache client takes advantage of these 
commands - CachingClient for Go.

According to my performance tests on Ubuntu 12.04 x64, go-memcached's speed 
is comparable to the original memcached.

go-memcached can be built from source code (see the 'how to build and run 
it' section for details), or it can be downloaded from 
https://github.com/downloads/valyala/ybc/go-memcached-1.tar.bz2 . The 
archive contains two programs - a memcache server (go-memcached) and a 
benchmark tool for memcache servers (go-memcached-bench). These programs 
can be configured with command line flags. Run them with --help in order to 
see available configuration options:

$ ./go-memcached --help
Usage of ./go-memcached:
  -cacheFilesPath="": Path to cache file. Leave empty for anonymous 
non-persistent cache.
Enumerate multiple files delimited by comma for creating a cluster of 
caches.
This can increase performance only if frequently accessed items don't fit 
RAM
and each cache file is located on a distinct physical storage.
  -cacheSize=100: Total cache capacity in Megabytes
  -deHashtableSize=16: Dogpile effect hashtable size
  -goMaxProcs=4: Maximum number of simultaneous Go threads
  -hotDataSize=0: Hot data size in bytes. 0 disables hot data optimization
  -hotItemsCount=0: The number of hot items. 0 disables hot items 
optimization
  -listenAddr=":11211": TCP address the server will listen to
  -maxItemsCount=100: Maximum number of items the server can cache
  -osReadBufferSize=229376: Buffer size in bytes for incoming requests in OS
  -osWriteBufferSize=229376: Buffer size in bytes for outgoing responses in 
OS
  -readBufferSize=4096: Buffer size in bytes for incoming requests
  -syncInterval=10s: Interval for data syncing. 0 disables data syncing
  -writeBufferSize=4096: Buffer size in bytes for outgoing responses


$ ./go-memcached-bench --help
Usage of ./go-memcached-bench:
  -connectionsCount=4: The number of TCP connections to memcache server
  -goMaxProcs=4: The maximum number of simultaneous worker threads in go
  -key="key": The key to query in memcache
  -maxPendingRequestsCount=1024: Maximum number of pending requests
  -osReadBufferSize=229376: The size of read buffer in bytes in OS
  -osWriteBufferSize=229376: The size of write buffer in bytes in OS
  -readBufferSize=4096: The size of read buffer in bytes
  -requestsCount=100: The number of requests to send to memcache
  -serverAddrs=":11211": Comma-delimited addresses of memcache servers to 
test
  -value="value": Value to store in memcache
  -workerMode="GetMiss": Worker mode. May be 'GetMiss', 'GetHit', 'Set', 
'GetSetRand'
  -workersCount=512: The number of workers to send requests to memcache
  -writeBufferSize=4096: The size of write buffer in bytes
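As a concrete example of tying the two help listings together - every flag
below appears in the --help output above, and the sizes and paths are
arbitrary example values - one might start a persistent cache sharded over
two disks and then point the benchmark at it:

```shell
# ~4GB persistent cache split across two storage devices
./go-memcached -cacheFilesPath=/ssd1/cache.bin,/ssd2/cache.bin \
    -cacheSize=4000 -listenAddr=:11211 &

# Mixed get/set workload from 512 workers
./go-memcached-bench -serverAddrs=:11211 -workerMode=GetSetRand \
    -workersCount=512
```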




On Monday, December 3, 2012 11:43:39 AM UTC+2, Nadav Har Tzvi wrote:
>
> Hello there,
>
> Let me just start this topic by stating that I do know of the 1 MB item 
> size limitation in memcached and the reasons for why it is so.
>
> However I am faced here with a dilemma. As part of a web service, I have to 
> return a bit large JSON object that includes base64 encoded images in it 
> (thus the large size).
> The average JSON object size should be somewhere between 1.2 MB to 2MB max.
> In order to boost the whole deal, I decided to cache those items (Server 
> has more than enough memory) and grant access from Nginx to reduce the load 
> on the service and provide quicker responses.
>
> So my question is this, should I go for increasing memcached item size or 
> is there any other solution to bypass this problem? Searching google didn't 
> provide any good results, maybe you have any idea of how to deal with this?
>
> Thanks.

Re: Nginx + Memcached and large JSON items

2012-12-03 Thread smallfish
Great! Just found that the default item size is 64MB.
--
smallfish http://chenxiaoyu.org



On Mon, Dec 3, 2012 at 8:53 PM, Nadav Har Tzvi  wrote:

> Oh! That's great, also found a repo that has a package of 1.4.15 available
> for Ubuntu 12.04 (since the regular 12.04 repos don't have that version
> yet).
> If that is of any use:
> http://www.ubuntuupdates.org/ppa/nathan-renniewaldock_ppa?dist=precise
>
> If you are around here Nathan, thanks :)
>
> I am going to try bombing it with data and see how it rolls.
> Thanks Yiftach! Saved my day.
>
> On Monday, December 3, 2012 1:51:18 PM UTC+2, Yiftach wrote:
>
>> AFAIK, since version 1.4.14, the max size of a Memcached object is 500MB.
>>
>> See more details here:
>>
>> https://groups.google.com/forum/?fromgroups=#!topic/memcached/MOfjAseECrU
>>
>>
>>
>> On Mon, Dec 3, 2012 at 11:43 AM, Nadav Har Tzvi wrote:
>>
>>> Hello there,
>>>
>>> Let me just start this topic by stating that I do know of the 1 MB item
>>> size limitation in memcached and the reasons for why it is so.
>>>
>>> However I am faced here with a dilemma. As part of a web service, I have
>>> to return a bit large JSON object that includes base64 encoded images in it
>>> (thus the large size).
>>> The average JSON object size should be somewhere between 1.2 MB to 2MB
>>> max.
>>> In order to boost the whole deal, I decided to cache those items (Server
>>> has more than enough memory) and grant access from Nginx to reduce the load
>>> on the service and provide quicker responses.
>>>
>>> So my question is this, should I go for increasing memcached item size
>>> or is there any other solution to bypass this problem? Searching google
>>> didn't provide any good results, maybe you have any idea of how to deal
>>> with this?
>>>
>>> Thanks.
>>>
>>
>>
>>
>> --
>>
>> Yiftach Shoolman
>> +972-54-7634621
>>
>>


Re: Nginx + Memcached and large JSON items

2012-12-03 Thread Nadav Har Tzvi
Oh! That's great, also found a repo that has a package of 1.4.15 available 
for Ubuntu 12.04 (since the regular 12.04 repos don't have that version 
yet).
If that is of any use:
http://www.ubuntuupdates.org/ppa/nathan-renniewaldock_ppa?dist=precise

If you are around here Nathan, thanks :)

I am going to try bombing it with data and see how it rolls.
Thanks Yiftach! Saved my day.

On Monday, December 3, 2012 1:51:18 PM UTC+2, Yiftach wrote:
>
> AFAIK, since version 1.4.14, the max size of a Memcached object is 500MB.
>
> See more details here:
>
> https://groups.google.com/forum/?fromgroups=#!topic/memcached/MOfjAseECrU
>
>
>
> On Mon, Dec 3, 2012 at 11:43 AM, Nadav Har Tzvi wrote:
>
>> Hello there,
>>
>> Let me just start this topic by stating that I do know of the 1 MB item 
>> size limitation in memcached and the reasons for why it is so.
>>
>> However I am faced here with a dilemma. As part of a web service, I have 
>> to return a bit large JSON object that includes base64 encoded images in it 
>> (thus the large size).
>> The average JSON object size should be somewhere between 1.2 MB to 2MB 
>> max.
>> In order to boost the whole deal, I decided to cache those items (Server 
>> has more than enough memory) and grant access from Nginx to reduce the load 
>> on the service and provide quicker responses.
>>
>> So my question is this, should I go for increasing memcached item size or 
>> is there any other solution to bypass this problem? Searching google didn't 
>> provide any good results, maybe you have any idea of how to deal with this?
>>
>> Thanks.
>>
>
>
>
> -- 
>
> Yiftach Shoolman
> +972-54-7634621
>

Re: Nginx + Memcached and large JSON items

2012-12-03 Thread Yiftach Shoolman
AFAIK, since version 1.4.14, the max size of a Memcached object is 500MB.

See more details here:

https://groups.google.com/forum/?fromgroups=#!topic/memcached/MOfjAseECrU



On Mon, Dec 3, 2012 at 11:43 AM, Nadav Har Tzvi  wrote:

> Hello there,
>
> Let me just start this topic by stating that I do know of the 1 MB item
> size limitation in memcached and the reasons for why it is so.
>
> However I am faced here with a dilemma. As part of a web service, I have to
> return a bit large JSON object that includes base64 encoded images in it
> (thus the large size).
> The average JSON object size should be somewhere between 1.2 MB to 2MB max.
> In order to boost the whole deal, I decided to cache those items (Server
> has more than enough memory) and grant access from Nginx to reduce the load
> on the service and provide quicker responses.
>
> So my question is this, should I go for increasing memcached item size or
> is there any other solution to bypass this problem? Searching google didn't
> provide any good results, maybe you have any idea of how to deal with this?
>
> Thanks.
>



-- 

Yiftach Shoolman
+972-54-7634621