Hi,

On Feb 29, 2012, at 2:52 PM, Marcel Reutegger wrote:

Hi,

On Feb 28, 2012, at 3:54 PM, Marcel Reutegger wrote:

I'd solve this differently. Saves are always performed on one partition,
even if some of the change set actually goes beyond a given partition.
This is, however, assuming that our implementation supports dynamic
partitioning and redistribution (e.g. when a new cluster node is added to
the federation). In that case the excess part of the change set would
eventually be migrated to the correct cluster node.
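
If I read this correctly, the flow would be roughly the following. This is
only a sketch of how I picture it, with every name below invented on my
side, not taken from our code:

// Sketch of the described flow: the complete change set is first committed
// on the partition that owns the save root, and a later redistribution step
// moves the part of the tree that actually belongs to another cluster node.
// All types and method names here are invented for illustration.
import java.util.List;

interface Partition {
    String commit(String jsonDiff, String message);            // returns a new revision id
    List<String> pathsOutsideThisPartition(String revisionId); // e.g. paths under /q stored on P
    String extractDiff(String revisionId, String path);
}

final class DeferredRedistribution {
    /** Run asynchronously, e.g. after a new cluster node joins the federation. */
    static void migrateExcess(Partition source, Partition target, String revisionId) {
        for (String path : source.pathsOutsideThisPartition(revisionId)) {
            // re-apply the foreign part of the change set on its home partition;
            // a conflicting concurrent save there is the case discussed further down
            target.commit(source.extractDiff(revisionId, path),
                          "migrated from foreign partition, revision " + revisionId);
        }
    }
}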

I'd like to better understand your approach: if we have, say, partitions
P and Q, containing the subtrees /p and /q, respectively, then a save that
spans elements in both /p and /q might be saved in P first, and later
migrated to Q? What happens if this later migration leads to a conflict?

I guess this would be the result of a concurrent save, i.e. when there's
an additional conflicting save under /q at the same time. Good
question... CouchDB solves this with a deterministic algorithm that
simply picks one revision as the latest one and flags the conflict.
Maybe we could use something similar?
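
If I picture that pick as code, I end up with something like the sketch
below. The Revision type and the comparison rule are my own assumptions,
not taken from CouchDB or from the MK; the only property that matters is
that the choice is a pure function of the two revisions, so every cluster
node arrives at the same winner without coordination:

// Sketch of a CouchDB-style deterministic pick (all names invented here).
// Each cluster node runs the same comparison on data it already has, so all
// nodes agree on the winner without any extra round trip; the losing
// revision is not dropped, it is only flagged for later resolution.
public final class ConflictResolver {

    /** Minimal stand-in for a revision; not the real MK revision id. */
    public static final class Revision {
        final long seq;   // length of the revision history
        final String id;  // opaque revision identifier
        Revision(long seq, String id) { this.seq = seq; this.id = id; }
    }

    /** Returns the deterministic winner; the other revision gets flagged as a conflict. */
    public static Revision pickWinner(Revision a, Revision b) {
        int cmp = Long.compare(a.seq, b.seq);  // prefer the longer history
        if (cmp == 0) {
            cmp = a.id.compareTo(b.id);        // tie-break lexicographically
        }
        return cmp >= 0 ? a : b;
    }
}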

So, this could result in a save on P that initially succeeds but ultimately
fails, because the concurrent one on Q wins? I'm wondering how this could be
reflected back to an MK client: if a save corresponds to an MK commit call
that immediately returns a new revision ID, would you suggest that the
mentioned algorithm adds a "shadow" commit on P (leading to a new head
revision ID) that effectively reverts the conflicting save on P?
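
Something along these lines, perhaps, against a deliberately simplified
commit interface? This is not the actual MicroKernel API, and I'm assuming
the inverse diff of the losing change set can be computed:

// Sketch only: once the deterministic pick declares the save on P the loser,
// the repository itself appends a "shadow" commit on P that reverts the
// losing change set and yields a new head revision id. The interface below
// is a simplified stand-in for illustration, not the real MK API.
interface PartitionStore {
    /** Applies a JSON diff on top of baseRevisionId and returns the new head revision id. */
    String commit(String jsonDiff, String baseRevisionId, String message);
}

final class ShadowCommit {
    /**
     * Reverts the losing save by applying its inverse diff as an ordinary
     * commit, so an MK client following the journal of P sees a regular new
     * head revision instead of a silently rewritten history.
     */
    static String revertConflictingSave(PartitionStore p,
                                        String losingRevisionId,
                                        String inverseDiff) {
        return p.commit(inverseDiff,
                        losingRevisionId,
                        "shadow commit: revert conflicting save " + losingRevisionId);
    }
}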

Dominique


regards
marcel


