Status: New
Owner: ----
Labels: Type-Defect Priority-Medium

New issue 317 by 133794...@gmail.com: {Feature Request}: Allow a different type of compression than deflate for large values.
http://code.google.com/p/memcached/issues/detail?id=317

By default memcached uses zlib's deflate for its compression when values are above a certain size. While this is "OK", I would like to see the server get moved into the 21st century. As far as algorithms to be added to the server go, I have three in mind that I'd like to see; any one of them is perfectly fine as far as I'm concerned.
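For context, here is a minimal sketch of the deflate path being described: compress a value with zlib's compress2() once it crosses some size threshold, then store the result with a "compressed" flag. The 4 KB test buffer, the 1 KB cutoff, and every name below are illustrative only, not memcached's actual settings.

/* Hedged sketch: compress a value with zlib before storing it.
 * The 4 KB test buffer and 1 KB threshold are illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

#define COMPRESS_THRESHOLD 1024   /* illustrative cutoff, not memcached's */

int main(void)
{
    char value[4096];
    memset(value, 'x', sizeof(value));         /* stand-in for a large, compressible value */
    uLong value_len = (uLong)sizeof(value);

    if (value_len < COMPRESS_THRESHOLD) {
        puts("below threshold, store uncompressed");
        return 0;
    }

    uLongf out_len = compressBound(value_len); /* worst-case compressed size */
    Bytef *out = malloc(out_len);
    if (!out) return 1;

    if (compress2(out, &out_len, (const Bytef *)value, value_len,
                  Z_DEFAULT_COMPRESSION) != Z_OK) return 1;

    printf("deflate: %lu -> %lu bytes\n", value_len, (uLong)out_len);
    /* a client would now store `out` and set its "compressed" flag */
    free(out);
    return 0;
}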

Why would you need to change the compression? Because zlib is very, very slow and outdated. Sure, it will save some space in RAM, but the time the CPU spends compressing is wasteful. So I'd like to see one of the following algorithms added (not as the default, so that people have time to adjust).

The algorithms in question are lzo, lz4, and quicklz. I'll state below why I think each is good, along with its pros and cons. All three compress to roughly the same ratio, but the speed between them varies greatly.

First off is quicklz. It is by far the closest to zlib's compression ratio, but it is also the slowest of the bunch; it's roughly 1.5x slower than the fastest (lz4). It is also an old algorithm that has been around a while and likely has all of the main bugs ironed out. It is used in TokuDB as a compression option, and they deal with a ton of data and seem to think it's stable enough for their high-dollar, very serious use cases.

Next up is lzo. It is directly in the middle of the road in both compression ratio and speed: slightly slower than lz4 at compression, and a good deal slower at decompression. It still isn't considered stable enough for the mainline kernel, but I think the algorithm is very advanced and worthwhile for use inside memcached. It is being used by "zram", "compcache", and "zswap" in the kernel.
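To give a feel for how small the lzo API is, here is a rough sketch of the same compress step using lzo1x_1 from liblzo2. Buffer sizing follows the liblzo2 example code; all of the names are mine, nothing memcached-specific.

/* Hedged sketch: compressing a buffer with LZO's lzo1x_1. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <lzo/lzo1x.h>

int main(void)
{
    if (lzo_init() != LZO_E_OK) return 1;   /* required before any other LZO call */

    unsigned char in[4096];
    memset(in, 'x', sizeof(in));            /* stand-in for a large cache value */
    lzo_uint in_len = sizeof(in);

    /* worst-case output size for lzo1x, per the LZO docs */
    unsigned char *out = malloc(in_len + in_len / 16 + 64 + 3);
    unsigned char *wrkmem = malloc(LZO1X_1_MEM_COMPRESS);
    if (!out || !wrkmem) return 1;

    lzo_uint out_len = 0;
    if (lzo1x_1_compress(in, in_len, out, &out_len, wrkmem) != LZO_E_OK) return 1;

    printf("lzo1x_1: %lu -> %lu bytes\n",
           (unsigned long)in_len, (unsigned long)out_len);
    free(out);
    free(wrkmem);
    return 0;
}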

The final one is lz4. The algorithm is very new (less than 3 years old), but its speed is by far the best. Its compression ratio isn't as good as the other two, but the speed will surely make up for it. This algorithm hasn't yet gained any sort of widespread use, and I'm not aware of any big-name applications using it.
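And for comparison, the same step with liblz4's one-shot API (LZ4_compress_default / LZ4_decompress_safe, assuming a reasonably current liblz4). Again just a sketch with made-up buffer sizes.

/* Hedged sketch: compress and decompress a buffer with LZ4's one-shot API. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <lz4.h>

int main(void)
{
    char in[4096];
    memset(in, 'x', sizeof(in));            /* stand-in for a large cache value */
    int in_len = (int)sizeof(in);

    int cap = LZ4_compressBound(in_len);    /* worst-case compressed size */
    char *out = malloc((size_t)cap);
    if (!out) return 1;

    int out_len = LZ4_compress_default(in, out, in_len, cap);
    if (out_len <= 0) return 1;
    printf("lz4: %d -> %d bytes\n", in_len, out_len);

    /* round-trip to show decompression; the original length must be known */
    char *back = malloc(sizeof(in));
    if (!back) return 1;
    int back_len = LZ4_decompress_safe(out, back, out_len, (int)sizeof(in));
    printf("lz4: decompressed back to %d bytes\n", back_len);

    free(out);
    free(back);
    return 0;
}

One design note from that last step: LZ4_decompress_safe needs the original length up front, so whatever flag or value format selects the algorithm would also have to carry the uncompressed size, the way clients already do for deflate.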

To summarize, I'd love to see one of the three algorithms added to memcached as an option for clients to use. Deflate works, but it's slow. Deflate could still be the default from now until the end of the world, but I think it's time for memcached to get another compression algorithm, so that there's an option for people who don't want to use deflate, or can't use it due to performance issues.

