If you are using Java, you could store Riak keys as binaries using the
Jackson Smile format; supposedly it will compress faster and better than
default Java serialization. We use it for very large keys (say, a key
with a large collection of entries). The drawback is that you won't be
able to easily
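The space-saving idea above can be sketched roughly as follows. Smile itself is a binary JSON format for Jackson (Java); since this stack is PHP/Node.js per the thread, the Python stand-in below uses stdlib zlib compression on the JSON bytes purely to illustrate the size win for a large value, not the Smile format itself:

```python
# Rough sketch of storing a large value as a compact binary instead of
# plain JSON text. NOTE: this is NOT Smile; zlib compression is used
# here only to illustrate the size reduction for a large value
# (e.g. a key holding a big collection of entries).
import json
import zlib

# A hypothetical large value: one key with many entries.
value = {"entries": [{"id": i, "name": f"item-{i}"} for i in range(1000)]}

plain = json.dumps(value).encode("utf-8")
packed = zlib.compress(plain)

print(len(plain), len(packed))  # the packed form is much smaller
```

The trade-off mentioned above applies here too: once the value is stored as an opaque binary, it is no longer directly readable by tools that expect plain JSON.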
Hi Damien,
We have ~11 keys and we are using ~2TB of disk space.
(The average object length will be ~2000 bytes).
This is a lot to fit in memory (we have had bad past experiences with
CouchDB...).
Thanks for the rest of the tips!
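A back-of-envelope check of the figures above (the key count in the archive looks truncated, so this only derives an order of magnitude from the disk usage and average object size; it ignores per-object overhead and assumes the 2 TB already includes Riak's replication, default n_val of 3):

```python
# Rough order-of-magnitude estimate from ~2 TB of disk at an average
# object size of ~2000 bytes. Not a capacity plan: per-object overhead
# is ignored, and the 2 TB is assumed to include replicated copies.
disk_bytes = 2 * 10**12      # ~2 TB
avg_object = 2000            # ~2000 bytes per object
n_val = 3                    # Riak's default replication factor

stored_copies = disk_bytes // avg_object
unique_objects = stored_copies // n_val

print(f"{stored_copies:,} stored copies, ~{unique_objects:,} unique objects")
```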
On 10 July 2013 10:13, damien krotkine dkrotk...@gmail.com wrote:
Guido, we're not using Java, so that won't be an option.
The technology stack is PHP and/or Node.js.
Thanks anyway :)
Best regards
On 10 July 2013 10:35, Edgar Veiga edgarmve...@gmail.com wrote:
On 10 July 2013 11:03, Edgar Veiga edgarmve...@gmail.com wrote:
Hi Guido.
Thanks for your answer!
Bitcask isn't an option due to the amount of RAM needed... We would need
a lot more physical nodes, so more money spent...
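The RAM concern comes from Bitcask keeping every key in an in-memory keydir on each node. A minimal sketch of that math, where the key count, key length, node count, and per-key overhead are all assumed placeholders (Basho's Bitcask capacity calculator gives real per-version numbers):

```python
# Why Bitcask RAM grows with key count: every key lives in an
# in-memory keydir. All figures below are ASSUMED placeholders for
# illustration; use the Bitcask capacity calculator for real planning.
num_keys = 1_000_000_000     # hypothetical unique key count
avg_key_len = 50             # hypothetical average key length, bytes
keydir_overhead = 40         # ASSUMED per-key keydir overhead, bytes
n_val = 3                    # Riak's default replication factor
nodes = 6                    # hypothetical cluster size

total_ram = num_keys * n_val * (avg_key_len + keydir_overhead)
per_node_gb = total_ram / nodes / 2**30
print(f"~{per_node_gb:.0f} GiB of keydir RAM per node")
```

Under these assumptions each node needs tens of GiB of RAM just for the keydir, which is why a key-heavy dataset pushes you toward LevelDB (keys on disk) or toward many more nodes.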
Why is it not an option?
If you use Bitcask, then each node
For the sake of using the right capacity planner, use the latest GA Riak
version, which is 1.3.2, and probably come back after 1.4 is fully
released, which should happen really soon. Also check the release notes
between 1.3.2 and 1.4; they might give you ideas/good news.
Hi Damien,
Well let's dive into this a little bit.
I told you guys that Bitcask was not an option due to a bad past
experience with Couchbase (sorry, in the previous post I wrote CouchDB),
which uses the same architecture as Bitcask: keys in memory and values
on disk.
We started the migration
Hi,
Indeed, you're using very big keys. If you can't change the keys, then yes,
you'll have to use LevelDB. However, I wonder why you need keys that long :)
On 10 July 2013 13:04, Edgar Veiga edgarmve...@gmail.com wrote:
On 10 July 2013 10:49, Edgar Veiga edgarmve...@gmail.com wrote:
Hello all!
I have a couple of questions that I would like to put to all of
you guys, in order to start this migration as well as possible.
Context:
- I'm responsible for the
On Wed, Jul 10, 2013 at 08:19:23AM -0700, Howard Chu wrote:
If you only need a pure key/value store, you should consider
MemcacheDB using LMDB as its backing store. It's far faster than
MemcacheDB using BerkeleyDB.
http://symas.com/mdb/memcache/
I doubt LevelDB accessed through any