Hello, Nadav,

Try go-memcached <https://github.com/valyala/ybc/tree/master/apps/go/memcached> 
- a fast memcached server written in Go <http://golang.org/>. It can cache 
objects up to 2 GB in size, and it has no 250-byte limit on key sizes.

Currently it supports the following memcache commands: get, gets, set, add, 
cas, delete and flush_all. It also offers the following features missing 
from the original memcached:
  * Cache size may exceed available RAM size by multiple orders of 
magnitude.
  * Cached objects may survive server crashes and restarts if the cache is 
backed by files.
  * It can shard objects across multiple backing files located on distinct 
physical storage devices (HDDs or SSDs). Such sharding may linearly 
increase QPS for I/O-bound workloads where hot objects don't fit in RAM.
  * It supports two useful commands (extensions to the memcache protocol):
      * Dogpile effect-aware get (getde). Clients with getde support can 
effectively combat negative consequences of the dogpile effect, such as 
periodic spikes in resource usage.
      * Conditional get (cget). Clients with cget support can save network 
bandwidth and reduce latency between memcache servers and clients by 
caching objects in a local in-process cache. This is especially useful 
when dealing with large objects.
    Currently only a single memcache client takes advantage of these 
commands - CachingClient 
<https://github.com/valyala/ybc/blob/master/libs/go/memcache/caching_client.go> 
for Go.
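
To illustrate the idea behind cget, here is a minimal, self-contained Go 
sketch of conditional-get caching. It models the server as a plain map and 
uses a simple version number as the validation token; the type and function 
names are hypothetical, and the real wire protocol lives in the 
go-memcached and CachingClient sources linked above.

```go
package main

import "fmt"

// localEntry pairs a cached value with a validation token (a version
// number here). A cget-style client keeps such entries in-process and
// only re-fetches the body when the token is stale.
type localEntry struct {
	value   []byte
	version uint64
}

type condGetCache struct {
	local map[string]localEntry
}

// fetch asks the "server" (modeled as a map) for a key. If the locally
// cached version is still current, the cached body is reused and no
// value bytes cross the "network".
func (c *condGetCache) fetch(key string, server map[string]localEntry) ([]byte, bool) {
	srv, ok := server[key]
	if !ok {
		return nil, false // miss on the server
	}
	if loc, ok := c.local[key]; ok && loc.version == srv.version {
		return loc.value, true // validated locally; body not transferred
	}
	c.local[key] = srv // refresh the local copy
	return srv.value, true
}

func main() {
	server := map[string]localEntry{
		"k": {value: []byte("big-json-blob"), version: 1},
	}
	cache := &condGetCache{local: map[string]localEntry{}}
	v, _ := cache.fetch("k", server) // first fetch transfers the body
	fmt.Println(string(v))
	v, _ = cache.fetch("k", server) // second fetch is validated locally
	fmt.Println(string(v))
}
```

The savings grow with object size: for multi-megabyte values, a cheap 
version check replaces a full body transfer on every repeat access.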

According to my performance tests on Ubuntu 12.04 x64, go-memcached's 
performance is comparable to that of the original memcached.

go-memcached can be built from source (see the "how to build and run it" 
section of the README 
<https://github.com/valyala/ybc/blob/master/apps/go/memcached/README> for 
details) or downloaded from 
https://github.com/downloads/valyala/ybc/go-memcached-1.tar.bz2 . The 
archive contains two programs - a memcache server (go-memcached) and a 
benchmark tool for memcache servers (go-memcached-bench). Both programs 
are configured with command-line flags; run them with --help to see the 
available options:

$ ./go-memcached --help
Usage of ./go-memcached:
  -cacheFilesPath="": Path to cache file. Leave empty for anonymous 
non-persistent cache.
Enumerate multiple files delimited by comma for creating a cluster of 
caches.
This can increase performance only if frequently accessed items don't fit 
RAM
and each cache file is located on a distinct physical storage.
  -cacheSize=100: Total cache capacity in Megabytes
  -deHashtableSize=16: Dogpile effect hashtable size
  -goMaxProcs=4: Maximum number of simultaneous Go threads
  -hotDataSize=0: Hot data size in bytes. 0 disables hot data optimization
  -hotItemsCount=0: The number of hot items. 0 disables hot items 
optimization
  -listenAddr=":11211": TCP address the server will listen to
  -maxItemsCount=1000000: Maximum number of items the server can cache
  -osReadBufferSize=229376: Buffer size in bytes for incoming requests in OS
  -osWriteBufferSize=229376: Buffer size in bytes for outgoing responses in 
OS
  -readBufferSize=4096: Buffer size in bytes for incoming requests
  -syncInterval=10s: Interval for data syncing. 0 disables data syncing
  -writeBufferSize=4096: Buffer size in bytes for outgoing responses
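
For example, the cache-file sharding described above maps directly onto 
these flags. The invocation below is only an illustration - the paths and 
sizes are hypothetical, but the flags are the documented ones:

```shell
# Persistent 20 GB cache sharded across two physical devices,
# listening on the default memcache port. -cacheSize is in megabytes;
# the file paths are examples only.
./go-memcached -listenAddr=":11211" \
    -cacheFilesPath="/mnt/ssd1/cache.bin,/mnt/ssd2/cache.bin" \
    -cacheSize=20000 \
    -syncInterval=10s
```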


$ ./go-memcached-bench --help
Usage of ./go-memcached-bench:
  -connectionsCount=4: The number of TCP connections to memcache server
  -goMaxProcs=4: The maximum number of simultaneous worker threads in go
  -key="key": The key to query in memcache
  -maxPendingRequestsCount=1024: Maximum number of pending requests
  -osReadBufferSize=229376: The size of read buffer in bytes in OS
  -osWriteBufferSize=229376: The size of write buffer in bytes in OS
  -readBufferSize=4096: The size of read buffer in bytes
  -requestsCount=1000000: The number of requests to send to memcache
  -serverAddrs=":11211": Comma-delimited addresses of memcache servers to 
test
  -value="value": Value to store in memcache
  -workerMode="GetMiss": Worker mode. May be 'GetMiss', 'GetHit', 'Set', 
'GetSetRand'
  -workersCount=512: The number of workers to send requests to memcache
  -writeBufferSize=4096: The size of write buffer in bytes
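
A typical benchmark run against an already-running go-memcached instance 
might look like this (illustrative values; all flags are from the help 
output above):

```shell
# Send 1M Set requests over 4 connections to the local server and
# measure throughput. Switch -workerMode to 'GetHit' or 'GetMiss'
# to benchmark the read paths.
./go-memcached-bench -serverAddrs=":11211" \
    -workerMode="Set" \
    -requestsCount=1000000 \
    -connectionsCount=4
```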




On Monday, December 3, 2012 11:43:39 AM UTC+2, Nadav Har Tzvi wrote:
>
> Hello there,
>
> Let me just start this topic by stating that I do know of the 1 MB item 
> size limitation in memcached and the reasons for why it is so.
>
> However, I am faced here with a dilemma. As part of a web service, I have to 
> return a rather large JSON object that includes base64-encoded images 
> (hence the large size).
> The average JSON object size should be somewhere between 1.2 MB and 2 MB.
> To speed the whole thing up, I decided to cache those items (the server 
> has more than enough memory) and serve them via Nginx to reduce the load 
> on the service and provide quicker responses.
>
> So my question is this: should I go for increasing the memcached item size, 
> or is there another solution to bypass this problem? Searching Google didn't 
> provide any good results; maybe you have an idea of how to deal with this?
>
> Thanks.
>
