Hi, Mike.
Assuming that the cluster is using the default storage engine
(Bitcask), then the backup story is straightforward. Bitcask only ever
appends to files, and never re-opens a file for writing after it is
closed. This means that your favorite existing server filesystem
backup mechanism will
In the exciting event that your application or Riak goes rogue and
deletes everything, Bitcask will allow you to recover amazing,
life-saving amounts of data from its log-structured format.
ASK ME HOW I KNOW. :-P
Uh, more typically, I've heard that FS-level snapshots of /var/lib/riak
or simpl
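To make the recovery point concrete, here's a toy sketch (plain Python with an invented tab-separated record format, not Bitcask's actual on-disk layout) of why an append-only log lets you dig "deleted" data back out:

```python
# Toy sketch of a log-structured, Bitcask-like store. This is NOT
# Bitcask's real format; it only illustrates the key property: every
# write is an append, so old values are never overwritten in place.
import os

LOG = "toy_bitcask.log"
TOMBSTONE = "__deleted__"

def put(key, value):
    # Every write is an append; nothing is modified in place.
    with open(LOG, "a") as f:
        f.write(f"{key}\t{value}\n")

def delete(key):
    # A delete is just another appended record (a tombstone).
    put(key, TOMBSTONE)

def keydir():
    # Rebuild the in-memory index by scanning the log front to back;
    # the last entry for each key wins.
    index = {}
    with open(LOG) as f:
        for line in f:
            key, value = line.rstrip("\n").split("\t", 1)
            if value == TOMBSTONE:
                index.pop(key, None)
            else:
                index[key] = value
    return index

def recover_all_versions(key):
    # Because deletes are appended tombstones, every prior value of a
    # key is still sitting in the log and can be scanned back out.
    with open(LOG) as f:
        return [v for k, v in (line.rstrip("\n").split("\t", 1)
                               for line in f)
                if k == key and v != TOMBSTONE]

if os.path.exists(LOG):
    os.remove(LOG)
put("user:1", "alice")
put("user:1", "alice-v2")
delete("user:1")
print(keydir())                        # {} -- the key looks gone...
print(recover_all_versions("user:1"))  # ...but old values survive
```

The same reasoning is why a plain filesystem copy of closed Bitcask files is a safe backup: closed files never change again.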
Hey All,
I'm sizing up some database options for a fairly ambitious app I'm building
out for a client of mine. I've read a good amount of the available docs and
have toyed around with Riak enough to know that it's one of my finalists
(one of two, to be precise).
Before I set off building this app
Afternoon, Evening, Morning to All -
Massive Recap for today: new code, blog posts, jobs, sample apps, and more.
Enjoy and, as usual, have a great weekend.
Mark
Community Manager
Basho Technologies
wiki.basho.com
twitter.com/pharkmillups
---
Riak Recap For May 1
I kept overlooking that you were using ordsets rather than a plain
list, which is why I was puzzled that the merged result in the
statebox README wasn't [b, a, b].
It may be helpful to reiterate that this only works for operations that are
idempotent.
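As an illustration of why the ordsets detail matters, here's a Python stand-in (sorted, duplicate-free lists; not statebox's actual Erlang implementation) showing how an idempotent add makes divergent siblings converge to [a, b] instead of [b, a, b]:

```python
# Python stand-in for Erlang ordsets (sorted lists without duplicates),
# used here only to illustrate idempotent merging -- this is not the
# statebox API.

def ordset_add(element, ordset):
    # Like ordsets:add_element/2: adding an element that is already
    # present leaves the set unchanged, so the operation is idempotent.
    return sorted(set(ordset) | {element})

# Two divergent siblings, each produced by one add:
sibling1 = ordset_add("b", ["a"])   # ['a', 'b']
sibling2 = ordset_add("a", ["b"])   # ['a', 'b']

# Replaying both operations against either sibling converges, because
# re-adding an existing element is a no-op:
merged = ordset_add("a", ordset_add("b", sibling1))
print(merged)  # ['a', 'b'] -- not ['b', 'a', 'b']
```

With a plain list and a non-idempotent append, replaying both operations would duplicate elements, which is exactly the [b, a, b] confusion.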
Eric.
On May 12, 2011 7:29 PM, "Mike Oxford" wrote:
Sean,
Thanks to you and Ben for clarifying how that works. Since that was
so helpful, I'll ask a follow-up question, and also a question on
a mostly unrelated topic...
1) When I've removed a couple of nodes and the remaining nodes pick up
the slack, is there any way for me to look under the h
Our team has a fork of the Riak Search source that we believe adds a very simple
but highly desirable improvement. We've reached out to Basho directly with the
idea but, as they say, code talks. The fork can be found at
https://github.com/gpascale/riak_search and we will be issuing a pull reque
Peter,
You've hit on a major feature of Riak: remaining available in the face
of network and hardware failure.
When a node is down, other nodes (ones that do not "own" the replicas for a
given key) will pick up the slack and serve read and write requests on behalf
of the downed node. This means
Here is my understanding. Corrections welcome.
You're missing that Riak is happy to be "eventually consistent". Drop
out 2 of your nodes, and it rebalances who is responsible for what,
then under the hood migrates and replicates its data more leisurely.
Data is still being written to 4 differen
I'm a Riak newbie, trying to get some familiarity with the system by
running some tests on Amazon EC2. I'm seeing some behavior that I don't
understand...
I've set up a test where I create a 4-node cluster using 4 EC2 machines.
I've created a bucket with n_val=4, r=quorum, and w=quorum. For
n
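For reference, with n_val=4 the "quorum" shorthand for r and w resolves to a majority of the replicas. A sketch of the arithmetic (plain Python, not Riak code):

```python
# How Riak's "quorum" shorthand resolves for a bucket -- this is just
# the arithmetic, not Riak code.

def quorum(n_val):
    # quorum = floor(n/2) + 1: a strict majority of the n_val replicas.
    return n_val // 2 + 1

n_val = 4
r = w = quorum(n_val)
print(r, w)  # 3 3

# Because R + W > N, every successful read overlaps at least one
# replica that accepted the latest successful write.
assert r + w > n_val
```

So with the settings above, both reads and writes must be acknowledged by 3 of the 4 replicas before the request succeeds.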