Hello all!
I have a couple of questions that I would like to put to all of you,
in order to start this migration as smoothly as possible.
Context:
- I'm responsible for the migration of a pure key/value store that is
currently stored in memcacheDB.
- We're serializing PHP objects and storing
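The serialization point above can be sketched; a minimal node.js illustration (not from the thread — it assumes the values end up as JSON, which both PHP and node.js can read, unlike PHP's native serialize() output; the object shape is hypothetical):

```javascript
// Sketch: a language-neutral value format. The object shape here is
// hypothetical; the point is that JSON round-trips between PHP and node.js,
// while PHP's serialize() output does not.
const obj = { user: 'edgar', counters: [1, 2, 3] };

const stored = JSON.stringify(obj);   // bytes written as the value
const loaded = JSON.parse(stored);    // what a node.js reader gets back

console.log(loaded.counters.length);    // → 3
console.log(Buffer.byteLength(stored)); // value size in bytes
```

On the PHP side, json_encode()/json_decode() would produce and consume the same bytes.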
Then you are better off with Bitcask, which will be the fastest in your
case (no 2i, no searches, no M/R).
HTH,
Guido.
On 10/07/13 09:49, Edgar Veiga wrote:
Well, I rushed my answer before. If you want performance, you probably
want Bitcask; if you want compression, then LevelDB. The following links
should help you decide better:
http://docs.basho.com/riak/1.2.0/tutorials/choosing-a-backend/Bitcask/
http://docs.basho.com/riak/1.2.0/tutorials/choosin
Hi Guido.
Thanks for your answer!
Bitcask isn't an option due to the amount of RAM needed; we would need
a lot more physical nodes, so more money spent...
Instead, we're using fewer machines with SSD disks to improve LevelDB
performance.
Best regards
On 10 July 2013 09:58, Guido Medina wrote:
Hi Edgar,
You don't need to compress your objects; LevelDB will do that for you.
And if you are using Protocol Buffers, it will compress the network
traffic for you too, without compromising performance or any CPU-bound
process. There isn't anything special about the LevelDB config; I would
suggest
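The "nothing special about the config" remark can be made concrete; a hypothetical app.config fragment for a Riak 1.x node selecting the eLevelDB backend (the data_root path is an assumption — check the choosing-a-backend docs linked earlier for your version):

```erlang
%% Sketch, not a verified config: choose eLevelDB as the storage backend.
{riak_kv, [
    {storage_backend, riak_kv_eleveldb_backend}
]},
{eleveldb, [
    %% data_root path is an assumption; point it at the SSDs.
    {data_root, "/var/lib/riak/leveldb"}
]}
```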
( first post here, hi everybody... )
If you don't need MR, 2i, etc., then BitCask will be faster. You just need
to make sure all your keys fit in memory, which should not be a problem.
How many keys do you have, and what's their average length?
About the values, you can save a lot of space by choos
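Damien's "keys must fit in memory" constraint can be turned into a back-of-the-envelope estimate; a node.js sketch (the ~45-byte per-key overhead, key count, key length, n_val, and node count are all assumed numbers — use Basho's Bitcask capacity planner for real figures):

```javascript
// Rough Bitcask keydir estimate (assumptions marked below): every key is
// held in RAM, so per-node RAM ~= keys * (overhead + key bytes) * n_val / nodes.
// The ~45-byte per-key overhead is an assumed figure, not a measured one.
function bitcaskRamBytes({ keys, avgKeyBytes, perKeyOverhead = 45, nVal = 3, nodes = 5 }) {
  return (keys * (perKeyOverhead + avgKeyBytes) * nVal) / nodes;
}

// Hypothetical numbers: 1 billion keys, 50-byte keys, n_val = 3, 5 nodes.
const perNode = bitcaskRamBytes({ keys: 1e9, avgKeyBytes: 50, nVal: 3, nodes: 5 });
console.log((perNode / 2 ** 30).toFixed(1) + ' GiB per node'); // → "53.1 GiB per node"
```

Note that the per-node figure scales inversely with the node count, which is why the RAM objection in this thread is really a node-count (and money) objection.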
If you are using Java, you could store Riak keys as binaries using the
Jackson Smile format; supposedly it will compress faster and better than
default Java serialization. We use it for very large keys (say, a key
with a large collection of entries). The drawback is that you won't be
able to easily r
Hi Damien,
We have ~11 keys and we are using ~2TB of disk space.
(The average object length will be ~2000 bytes.)
This is a lot to fit in memory (we have bad past experiences with
couchDB...).
Thanks for the rest of the tips!
On 10 July 2013 10:13, damien krotkine wrote:
Guido, we're not using Java, so that won't be an option.
The technology stack is php and/or node.js
Thanks anyway :)
Best regards
On 10 July 2013 10:35, Edgar Veiga wrote:
On 10 July 2013 11:03, Edgar Veiga wrote:
> Bitcask it's not an option due to the amount of ram needed.. We would need
> a lot more of physical nodes so more money spent...
Why is it not an option?
If you use Bitcask, then each node needs to store its
For the sake of using the right capacity planner, use the latest GA Riak
version, which is 1.3.2, and probably come back after 1.4 is fully
released, which should happen really soon. Also check the release notes
between 1.3.2 and 1.4; they might give you ideas/good news.
http://docs.basho.com/riak/
Hi Damien,
Well, let's dive into this a little bit.
I told you guys that Bitcask was not an option due to a bad past
experience with Couchbase (sorry, in the previous post I wrote couchDB),
which uses the same architecture as Bitcask: keys in memory and values on
disk.
We started the migration to
Hi,
Indeed, you're using very big keys. If you can't change the keys, then yes,
you'll have to use LevelDB. However, I wonder why you need keys that long :)
On 10 July 2013 13:04, Edgar Veiga wrote:
On 10 July 2013 10:49, Edgar Veiga <edgarmve...@gmail.com> wrote:
On Wed, Jul 10, 2013 at 08:19:23AM -0700, Howard Chu wrote:
> If you only need a pure key/value store, you should consider
> memcacheDB using LMDB as its backing store. It's far faster than
> memcacheDB using BerkeleyDB.
> http://symas.com/mdb/memcache/
>
> I doubt LevelDB accessed through a