Right, I certainly don't want distributed transactions, which I agree would 
destroy availability. (I should add that my system is geographically 
distributed, making everything much worse.)

Still, that leaves open the question of doing what my application needs without 
transactions. Let's consider two situations involving updates.

The first situation is when I can reduce an update to a single write, such as 
by using Riak's secondary indexes. Unfortunately, I don't have a great 
understanding of either the performance of secondary indexes or of their 
failure modes. Can you offer any guidance?
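
For concreteness, here is roughly the kind of single-write update I have in 
mind, sketched in Python against the HTTP interface on a local node. The 
bucket, key, and "owner_bin" index name are made up, and I'm assuming I've 
read the 2i docs correctly (including that they need the eleveldb backend), 
so please correct anything I've gotten wrong:

  import json
  import requests

  RIAK = "http://localhost:8098"  # assumed local node; adjust for a real cluster

  def put_with_index(bucket, key, value, owner_id):
      # The object and its index entries travel in one request, so there
      # is only a single write to succeed or fail.
      headers = {
          "Content-Type": "application/json",
          "x-riak-index-owner_bin": owner_id,  # made-up index name
      }
      url = "%s/buckets/%s/keys/%s" % (RIAK, bucket, key)
      r = requests.put(url, data=json.dumps(value), headers=headers)
      r.raise_for_status()

  def keys_by_owner(bucket, owner_id):
      # Later, find objects by the index value instead of by key.
      url = "%s/buckets/%s/index/owner_bin/%s" % (RIAK, bucket, owner_id)
      r = requests.get(url)
      r.raise_for_status()
      return json.loads(r.text).get("keys", [])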

The second situation is when I really need to do multiple writes, in which case 
I must model (some subset of) transactional semantics at the application level. 
One example is implementing my own redo log, as mentioned earlier. Have other 
users ever had such problems? What are the good ways to solve them? Heck, what 
are the bad ways (just so I'll know what to avoid)?
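
To make this concrete, here is roughly the redo-log shape I have in mind, 
again sketched in Python against the HTTP interface. The bucket names are 
made up, an "update" here is just a key plus a whole new value (so replaying 
one is idempotent), and I know that listing keys is expensive; this is only 
meant to show the sequence of writes, not a real implementation:

  import json
  import time
  import uuid
  import requests

  RIAK = "http://localhost:8098"  # assumed local node, as above

  def riak_put(bucket, key, value):
      r = requests.put("%s/buckets/%s/keys/%s" % (RIAK, bucket, key),
                       data=json.dumps(value),
                       headers={"Content-Type": "application/json"})
      r.raise_for_status()

  def riak_delete(bucket, key):
      # A 404 just means the entry is already gone, which is fine here.
      requests.delete("%s/buckets/%s/keys/%s" % (RIAK, bucket, key))

  def apply_update(bucket, update):
      # An "update" is a (key, new value) pair, so applying it is one
      # overwriting PUT and repeating it is harmless.
      riak_put(bucket, update["key"], update["value"])

  def update_a_and_b(update_a, update_b):
      # 1. Record intent under a unique key, so a crash between the two
      #    updates leaves enough information to finish the job later.
      log_key = str(uuid.uuid4())
      riak_put("redo_log", log_key,
               {"ts": time.time(), "a": update_a, "b": update_b})

      # 2. Apply the two updates; each is an ordinary single Riak write.
      apply_update("bucket_a", update_a)
      apply_update("bucket_b", update_b)

      # 3. Only then discard the intent record.
      riak_delete("redo_log", log_key)

  def replay_dangling_entries():
      # Run asynchronously: re-apply any entry that survived a crash,
      # then remove it. (Key listing is shown only for clarity.)
      url = "%s/buckets/redo_log/keys?keys=true" % RIAK
      for log_key in json.loads(requests.get(url).text).get("keys", []):
          r = requests.get("%s/buckets/redo_log/keys/%s" % (RIAK, log_key))
          if r.status_code != 200:
              continue
          entry = json.loads(r.text)
          apply_update("bucket_a", entry["a"])
          apply_update("bucket_b", entry["b"])
          riak_delete("redo_log", log_key)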

Cheers,
John

On Jan 9, 2012, at 2:54 PM, Ryan Zezeski wrote:

> John,
> 
> As you already seem to understand, Riak doesn't provide a way to make 
> multiple ops atomic.  Part of the reason is that Riak's main focus thus 
> far has been availability.  Distributed transactions would work, but at the 
> cost of availability.  I think a flaw with the redo log approach is that you 
> need to serialize all operations to A & B through _one_ client to keep from 
> reading an inconsistent state.
> 
> A much simpler option, if you can bend your data, is to combine A and B into 
> one object.
> 
> -Ryan
> 
> On Mon, Jan 9, 2012 at 12:33 AM, John DeTreville <[email protected]> wrote:
> (An earlier post seems not to have gone through. My apologies if this turns 
> out to be a duplicate.)
> 
> I'm thinking of using Riak to replace a large Oracle system, and I'm trying 
> to understand its guarantees. I have a few introductory questions; this is 
> the third of three.
> 
> I would like to do two updates atomically, but of course I cannot. I imagine 
> I could construct my own redo log, and perform a sequence of operations 
> something like:
> 
>   write redo log entry (timestamp, A's update, B's update) to redo log
>   update A
>   update B
>   delete redo log entry from redo log
> 
> Asynchronously, I could read dangling entries from the redo log and repeat 
> them, deleting them upon success. (Let's imagine for simplicity that the 
> updates are idempotent and commutative.) This seems doable, but it's not 
> pretty. Is this the best I can do? Or should I think about the problem 
> differently?
> 
> (BTW, I believe that secondary indexes won't help me.)
> 
> Cheers,
> John


_______________________________________________
riak-users mailing list
[email protected]
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
