John,

As you already seem to understand, Riak doesn't provide a way to make multiple ops atomic. Part of the reason is that Riak's main focus thus far has been availability; distributed transactions would work, but at the cost of availability. I think a flaw with the redo log approach is that you need to serialize all operations to A & B through _one_ client to keep from reading an inconsistent state.
A much simpler option, if you can bend your data, is to combine A and B into one object (rough sketches of both approaches follow below the quoted message).

-Ryan

On Mon, Jan 9, 2012 at 12:33 AM, John DeTreville <[email protected]> wrote:

> (An earlier post seems not to have gone through. My apologies in the
> eventual case of a duplicate.)
>
> I'm thinking of using Riak to replace a large Oracle system, and I'm
> trying to understand its guarantees. I have a few introductory questions;
> this is the third of three.
>
> I would like to do two updates atomically, but of course I cannot. I
> imagine I could construct my own redo log, and perform a sequence of
> operations something like:
>
> write redo log entry (timestamp, A's update, B's update) to redo log
> update A
> update B
> delete redo log entry from redo log
>
> Asynchronously, I could read dangling entries from the redo log and repeat
> them, deleting them upon success. (Let's imagine for simplicity that the
> updates are idempotent and commutative.) This seems doable, but it's not
> pretty. Is this the best I can do? Or should I think about the problem
> differently?
>
> (BTW, I believe that secondary indexes won't help me.)
>
> Cheers,
> John
> _______________________________________________
> riak-users mailing list
> [email protected]
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
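For reference, here is a rough sketch of the redo-log sequence John describes. It assumes the 2012-era Python riak client (method names such as get_data/set_data and get_keys vary across client versions), and the bucket and key names are made up for illustration. Note that listing a bucket's keys is expensive in Riak, so a real replay job would want something better than get_keys():

    import time
    import riak

    # Assumed: a local Riak node reachable over HTTP on the default port.
    client = riak.RiakClient(host='127.0.0.1', port=8098)
    a_bucket = client.bucket('a_bucket')    # hypothetical bucket for A
    b_bucket = client.bucket('b_bucket')    # hypothetical bucket for B
    redo_log = client.bucket('redo_log')    # hypothetical redo-log bucket

    def update_both(a_key, new_a, b_key, new_b):
        # 1. Record the intent first, so a crash between the two updates
        #    can be repaired by the replay job below.
        log_key = '%s:%s:%d' % (a_key, b_key, int(time.time() * 1000))
        redo_log.new(log_key, data={'a_key': a_key, 'a': new_a,
                                    'b_key': b_key, 'b': new_b}).store()
        # 2. Apply both updates (assumed idempotent and commutative,
        #    as John stipulates).
        a_bucket.new(a_key, data=new_a).store()
        b_bucket.new(b_key, data=new_b).store()
        # 3. Only then discard the log entry.
        redo_log.get(log_key).delete()

    def replay_dangling_entries():
        # Run asynchronously: re-apply any entry whose updates may not
        # have landed, then delete it. (Key listing walks the bucket.)
        for log_key in redo_log.get_keys():
            entry = redo_log.get(log_key)
            if not entry.exists():
                continue
            data = entry.get_data()
            a_bucket.new(data['a_key'], data=data['a']).store()
            b_bucket.new(data['b_key'], data=data['b']).store()
            entry.delete()

As Ryan points out, this only reads consistently if every client routes its A and B operations through the one process running this code.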
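And a sketch of the single-object alternative Ryan suggests, under the same assumptions (hypothetical bucket and field names, 2012-era Python client):

    import riak

    client = riak.RiakClient(host='127.0.0.1', port=8098)
    bucket = client.bucket('combined')   # hypothetical bucket holding {A, B} pairs

    def update_a_and_b(key, new_a, new_b):
        # A and B live in one Riak object, so both values go out in a
        # single store operation: a reader fetches either the old {A, B}
        # pair or the new one, never A's update without B's.
        obj = bucket.get(key)
        if obj.exists():
            data = obj.get_data()
            data['a'] = new_a
            data['b'] = new_b
            obj.set_data(data)
            obj.store()
        else:
            bucket.new(key, data={'a': new_a, 'b': new_b}).store()

This doesn't make the read-modify-write itself atomic (concurrent writers can still conflict, and with allow_mult Riak will keep siblings to resolve), but it does keep A and B consistent with each other, which is the point of combining them.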
