If you are using Java, you could store Riak keys as binaries using the Jackson Smile format; supposedly it compresses faster and better than default Java serialization. We use it for very large keys (say, a key with a large collection of entries). The drawback is that you won't be
able to easily
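The size argument generalizes beyond Java: a compact or compressed binary encoding of a large collection is usually much smaller than its plain-text form. A rough Python sketch of that comparison (illustrative only; the thread's actual suggestion is Jackson Smile, a Java library):

```python
import json
import zlib

# A key holding a large collection of entries, as described in the thread.
entries = [{"id": i, "status": "active", "score": i * 0.5} for i in range(10_000)]

raw = json.dumps(entries).encode("utf-8")  # plain text encoding
packed = zlib.compress(raw, level=6)       # compressed binary encoding

print(len(raw), len(packed))  # the compressed form is far smaller
```

Smile gets a similar win without a decompression pass, since it is a binary token stream rather than generic compression.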
Hi Damien,
We have ~11 keys and we are using ~2TB of disk space.
(The average object length will be ~2000 bytes).
This is a lot to fit in memory (we have bad past experiences with CouchDB...).
Thanks for the rest of the tips!
On 10 July 2013 10:13, damien krotkine dkrotk...@gmail.com
Guido, we're not using Java, so that won't be an option.
The technology stack is php and/or node.js
Thanks anyway :)
Best regards
On 10 July 2013 10:35, Edgar Veiga edgarmve...@gmail.com wrote:
Hi Damien,
We have ~11 keys and we are using ~2TB of disk space.
(The average object
On 10 July 2013 11:03, Edgar Veiga edgarmve...@gmail.com wrote:
Hi Guido.
Thanks for your answer!
Bitcask is not an option due to the amount of RAM needed. We would need a lot more physical nodes, so more money spent...
Why is it not an option?
If you use Bitcask, then each node
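The RAM concern can be made concrete with back-of-the-envelope arithmetic: Bitcask keeps every key, plus a fixed per-entry overhead, in an in-memory keydir, so RAM scales with key count. A rough Python estimate from the numbers in this thread (2 TB of data, ~2000-byte objects); the 40-byte per-entry overhead and 36-byte average key length are assumptions for illustration, not measured figures:

```python
disk_used = 2 * 1024**4   # ~2 TB of object data (from the thread)
avg_object = 2000         # average object size in bytes (from the thread)
key_len = 36              # assumed average key length
keydir_overhead = 40      # assumed fixed keydir bytes per entry

num_objects = disk_used // avg_object
ram_needed = num_objects * (key_len + keydir_overhead)

print(f"{num_objects:,} objects -> ~{ram_needed / 1024**3:.0f} GiB of keydir RAM cluster-wide")
```

On the order of a billion objects, that is tens of GiB of keydir RAM spread across the cluster, which is why LevelDB (keys on disk) is attractive here.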
For the sake of using the right capacity planner, use the latest GA Riak version, which is 1.3.2, and probably come back after 1.4 is fully released, which should happen really soon. Also check the release notes between 1.3.2 and 1.4; they might give you ideas/good news.
Hi,
some of our nodes upgraded to Riak 1.4.0, and are now refusing to start and
join the cluster.
Is there documentation on the upgrade path from 1.3.2 to 1.4.0? It appears we
have accidentally begun this journey, and I don't know if it's easier to go
back or forwards now..
PS. It would have
Hi Damien,
Well let's dive into this a little bit.
I told you guys that Bitcask was not an option due to a bad past experience with Couchbase (sorry, in the previous post I wrote CouchDB), which uses the same architecture as Bitcask: keys in memory and values on disk.
We started the migration
On 09/07/13 22:24, Mark Wagner wrote:
Hey all,
I'm new to riak and I'm working on an ETL script that needs to pull data
from a riak cluster.
My client has sent me a backup from one of their cluster nodes: bitcask data, rings, and config.
*snip*
At this point I believe I should be able to
Release notes: https://github.com/basho/riak/blob/1.4/RELEASE-NOTES.md
Maybe related to this?
Known Issues
https://github.com/basho/riak/blob/1.4/RELEASE-NOTES.md#leveldb-13-to-14-conversion
leveldb 1.3 to 1.4 conversion
The first execution of 1.4.0 leveldb using a 1.3.x or
Hi,
Indeed you're using very big keys. If you can't change the keys, then yes
you'll have to use leveldb. However I wonder why you need keys that long :)
On 10 July 2013 13:04, Edgar Veiga edgarmve...@gmail.com wrote:
Hi Damien,
Well let's dive into this a little bit.
I told you guys that
Thanks Guido.
Looks like we've upgraded to 1.4.0 completely now and the cluster is back up.
I'm not sure of the exact root cause, but what we were seeing was that too many
nodes went down for the ring to be healthy, and then when nodes were restarted
they waited for the ring to appear for a
Hi Toby,
I'm sure someone from Basho will answer soon; I just pointed you in the release notes' direction. I have only skimmed the release notes until we decide to migrate to 1.4.0 when it is final (right now on rc1).
HTH,
Guido.
On 10/07/13 12:29, Toby Corkindale wrote:
Thanks Guido.
Yeah, I didn't think 1.4.0 was into final release yet either -- yet it came
through on the Debian and Ubuntu apt repositories automatically this evening.
- Original Message -
From: Guido Medina guido.med...@temetra.com
To: riak-users riak-users@lists.basho.com
Sent: Wednesday, 10 July,
Hi.
Sorry about the fake sender in the subject of the original message. Our mail
security system is funny like that...
Anyhow, we discovered that distcp puts a temp file name in place and then tries to do a PUT (copy) that copies the file to the permanent name. From the
documentation that
Hi, Mark.
You've already received a little advice generally so I won't pile on that part,
but one thing stood out to me:
My client has sent me a backup from one of their cluster nodes: bitcask data, rings, and config.
Unless I'm misunderstanding what you're doing, what you're working on
We try to post the packages to all the right places before we make
the announcement so I'd highly recommend you don't just auto-update Riak
packages when they hit apt/yum.
I'm glad you eventually got all your nodes upgraded.
-Jared
On Wed, Jul 10, 2013 at 5:42 AM, Toby Corkindale
Thanks for the info!!! I appreciate the help and I am glad to know that it
should just work!
I don't really need the data from the whole cluster while developing the
script. I just need to get my queries working etc...
One issue I am facing is I don't know the structure of the data. So I am
On 10 July 2013 10:49, Edgar Veiga edgarmve...@gmail.com wrote:
Hello all!
I have a couple of questions that I would like to put to all of
you guys, in order to start this migration as well as possible.
Context:
- I'm responsible for the
Riak Users,
As was somewhat hinted at by the apt/yum repo update, we are happy to
announce the release of Riak 1.4.0.
A blog post giving a high level overview of the release can be found here:
http://basho.com/basho-announces-availability-of-riak-1-4/
Does that mention counters? I believe it
Hi Dan. I do not know much about distcp, but if it is the case that it uses
a PUT (copy) operation to transfer data then distcp will not currently work
with RiakCS. Support for that operation is on our roadmap, but it is not
done yet unfortunately.
Kelly
On Wed, Jul 10, 2013 at 6:20 AM, Sajner,
On Wed, Jul 10, 2013 at 08:19:23AM -0700, Howard Chu wrote:
If you only need a pure key/value store, you should consider
memcacheDB using LMDB as its backing store. It's far faster than
memcacheDB using BerkeleyDB.
http://symas.com/mdb/memcache/
I doubt LevelDB accessed through any
In a multi node cluster with a bucket in memory_backend, will inserting an
object in this bucket be replicated
to other nodes?
Hi,
I just upgraded to 1.4 and have updated my client to the Java 1.1.1 client.
According to the release notes, it says all bucket properties are now
configurable through the PB API.
I tried setting my backend through the Java client; however, I get an exception:
Backend not supported for PB.
Correct. Unless you've specified an n value of 1 for the bucket.
---
Jeremiah Peschka - Founder, Brent Ozar Unlimited
MCITP: SQL Server 2008, MVP
Cloudera Certified Developer for Apache Hadoop
On Wed, Jul 10, 2013 at 12:57 PM, kpandey kumar.pan...@gmail.com wrote:
In a multi node cluster with
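To spell out why the answer is yes: Riak hashes the bucket/key onto the ring and writes the object to the next n_val partitions regardless of backend, so only n_val=1 yields a single copy. A toy Python sketch of that preference-list idea (heavily simplified; partition claim, fallbacks, and vnode ownership are ignored):

```python
import hashlib

RING_SIZE = 64  # number of partitions (Q) in this toy ring

def ring_position(bucket: str, key: str) -> int:
    """Hash bucket/key onto the ring (Riak uses SHA-1 for this)."""
    digest = hashlib.sha1(f"{bucket}/{key}".encode()).digest()
    return int.from_bytes(digest, "big") % RING_SIZE

def preflist(bucket: str, key: str, n_val: int = 3) -> list[int]:
    """The n_val consecutive partitions that receive the write."""
    start = ring_position(bucket, key)
    return [(start + i) % RING_SIZE for i in range(n_val)]

# With the default n_val=3 the object lands on three partitions;
# with n_val=1 it lands on exactly one.
print(preflist("cache", "session:42"))
print(preflist("cache", "session:42", n_val=1))
```

The backend (memory, bitcask, leveldb) only changes how each vnode stores its copy, not how many copies are made.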
Riak Users,
Yesterday we had to rebuild 1.4.0 packages and at the same time had to
refresh the Apt/Yum repositories with those new packages. Something in
this process must have broken down. We have had someone report a problem
with the Ubuntu precise repo filed here:
Quick update: it looks like this issue is resolved in 1.4.0.1; the same program is running on 1.4.0.1 (Red Hat 5) without a problem so far.
Thanks to everybody,
Lei
Looking at http://docs.basho.com/riak/latest/references/Client-Libraries/ I see that the Erlang Riak client should support cluster connections/pools. But looking at the Erlang Riak client source code, I would say that it doesn't support cluster connections/pools out of the box, and I have to develop my own
This should now be fixed, sorry for any troubles it might have caused.
-Jared
On Wed, Jul 10, 2013 at 3:25 PM, Jared Morrow ja...@basho.com wrote:
Riak Users,
Yesterday we had to rebuild 1.4.0 packages and at the same time had to
refresh the Apt/Yum repositories with those new packages.
The X there indicates that it does not support connection pooling out of
the box, in contrast to the check mark. I'd look at poolboy (to use in
conjunction with riakc_pb_socket) and riakpool (which pulls in
riak-erlang-client as a dependency).
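The pattern poolboy implements — a fixed set of workers that callers check out and check back in — is easy to sketch. A minimal Python version of the idea (illustrative only; poolboy itself is Erlang and adds supervision, overflow workers, and timeouts):

```python
import queue

class Pool:
    """A fixed-size checkout/checkin pool, the core of what poolboy does."""

    def __init__(self, factory, size):
        self._workers = queue.Queue()
        for _ in range(size):
            self._workers.put(factory())

    def checkout(self, timeout=None):
        # Blocks until a worker is free, bounding concurrent connections.
        return self._workers.get(timeout=timeout)

    def checkin(self, worker):
        self._workers.put(worker)

# In real use, each worker would wrap a riakc_pb_socket-style connection.
pool = Pool(factory=lambda: object(), size=2)
conn = pool.checkout()
try:
    pass  # use the connection here
finally:
    pool.checkin(conn)
```

The checkin in a finally block mirrors what poolboy's transaction wrapper does for you: the worker always returns to the pool, even on error.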
On Wed, Jul 10, 2013 at 3:33 PM, Konstantin Kalin
Oops… was looking at the wrong column. Sorry, and thanks for the advice.
Thank you,
Konstantin.
On Jul 10, 2013, at 3:41 PM, Jeremy Ong wrote:
The X there indicates that it does not support connection pooling out of the
box, in contrast to the check mark. I'd look at poolboy (to use in conjunction
We intend to overhaul the Erlang client soon. In the meantime, Riak CS has
done exactly what Jeremy has suggested. Look here:
https://github.com/basho/riak_cs/blob/develop/src/riak_cs_riakc_pool_worker.erl
On Wed, Jul 10, 2013 at 5:46 PM, Konstantin Kalin
konstantin.ka...@gmail.com wrote:
On 10/07/13 23:14, Jared Morrow wrote:
We try to post the packages to all the right places before we make
the announcement so I'd highly recommend you don't just auto-update
Riak packages when they hit apt/yum.
Ah, actually we don't automatically update them, but someone was
performing a
Hi,
The counters stuff looks awesome; can't wait to use it.
Is this already supported via the currently available clients (specifically,
the Java 1.1.1 client)?
Also, when can we expect some tutorial / documentation around using counters? I
looked at the GitHub link, however, some use
If a counter doesn't exist, it's created when you increment/decrement the
counter; otherwise, a read returns your API's equivalent of nothing to see.
So, in CorrugatedIron, you'd call RiakClient.IncrementCounter(bucket,
counter, 1);
After that command completes, the counter has a value of 1.
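The convergent behaviour behind this (Riak 1.4's counters are PN-counters) can be modelled in a few lines. A rough Python sketch of the semantics, not Basho's implementation; note how the first increment simply creates state, matching the created-on-first-increment behaviour described above:

```python
from collections import Counter

class PNCounter:
    """Per-actor increment/decrement tallies; value = sum(P) - sum(N)."""

    def __init__(self):
        self.p = Counter()  # increments recorded per actor
        self.n = Counter()  # decrements recorded per actor

    def update(self, actor, amount):
        if amount >= 0:
            self.p[actor] += amount
        else:
            self.n[actor] += -amount

    def value(self):
        return sum(self.p.values()) - sum(self.n.values())

    def merge(self, other):
        """Join two replicas: take the per-actor maximum of each tally."""
        for actor, c in other.p.items():
            self.p[actor] = max(self.p[actor], c)
        for actor, c in other.n.items():
            self.n[actor] = max(self.n[actor], c)

a, b = PNCounter(), PNCounter()
a.update("node_a", 1)  # first increment creates the counter state
b.update("node_b", 1)  # concurrent increment on another replica
a.merge(b)
print(a.value())  # prints 2: both replicas' increments survive the merge
```

Because merge is a per-actor max, concurrent increments on different nodes never clobber each other, which is what makes counters safe under partitions.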