On Mon, 16 Jan 2006, Jules Gosnell wrote:

> REMOVE_NODE is when a node leaves cleanly, FAILED_NODE when a node dies ...

I figured. I imagine that if I had to add this distinction to totem I
would add a message where the node in question announces that it is
leaving, and then stops forwarding the token. On the other hand, it does
not need to announce anything; the other nodes will detect that it left.
In fact totem does not judge a node either way: you can leave because
you want to or under duress, and the consequences as far as distributed
algorithms are concerned are probably minimal. I think the only place
where this might matter is for logging purposes (but that could be
handled at the application level) or to speed up the membership
protocol, although it's already pretty fast.

So I would not draw a distinction there.

> By also treating nodes joining, leaving and dying as split and merge
> operations I can reduce the number of cases that I have to deal with.

I would even add that the difference is known only to the application.

> and ensure that what might be very uncommonly run code (run on network
> partition/healing) is the same code that is commonly run on e.g. node
> join/leave - so it is likely to be more robust.

Sounds good.
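
To make that concrete, something along these lines is what I have in
mind -- a rough Java sketch with made-up names, nothing here is WADI or
totem API. Every view change, whatever its cause, is reduced to a split
and/or a merge:

    import java.util.HashSet;
    import java.util.Set;

    // Sketch only: every view change (join, clean leave, failure,
    // partition, healing) is reduced to a split and/or a merge, so the
    // rarely exercised partition path shares its code with the everyday
    // join/leave path.
    public final class SplitMergeMembership {

        public void onViewChange(Set<String> oldView, Set<String> newView) {
            Set<String> departed = new HashSet<String>(oldView);
            departed.removeAll(newView);   // left cleanly or died - we don't care
            Set<String> arrived = new HashSet<String>(newView);
            arrived.removeAll(oldView);    // joined, or came back after a partition

            if (!departed.isEmpty()) {
                handleSplit(departed);
            }
            if (!arrived.isEmpty()) {
                handleMerge(arrived);
            }
        }

        private void handleSplit(Set<String> departed) {
            // repair internal structure, re-replicate state that lost copies, ...
        }

        private void handleMerge(Set<String> arrived) {
            // reconcile overlapping / diverged state with the arriving nodes
        }
    }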

> In the case of a binary split, I envisage two sets of nodes losing
> contact with each other. Each cluster fragment will repair its internal
> structure. I expect that after this repair, neither fragment will carry
> a complete copy of the cluster's original state (unless we are
> replicating 1->all, which WADI will usually not do), rather, the two
> datasets will intersect and their union will be the original dataset.
> Replicated state will carry a version number.

I think a version number should work very well.
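
As a trivial illustration (hypothetical names, not WADI code), each
replicated entry would simply carry a counter that is bumped on every
write and travels with every replica:

    // Sketch only: a replicated session entry carrying a version number
    // so that copies which diverged in different fragments can be
    // compared later.
    public final class VersionedSession {

        private final String id;
        private long version;
        private byte[] state;       // serialized session attributes

        public VersionedSession(String id, long version, byte[] state) {
            this.id = id;
            this.version = version;
            this.state = state;
        }

        public void update(byte[] newState) {
            this.state = newState;
            this.version++;         // every write bumps the version
        }

        public String id()    { return id; }
        public long version() { return version; }
        public byte[] state() { return state; }
    }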

> If client affinity survives the split (i.e. clients continue to talk to
> the same nodes), then we should find ourselves in a working state, with
> two smaller clusters carrying overlapping and diverging state. Each
> piece of state should be static in one subcluster and divergent in the
> other (it has only one client). The version carried by each piece of
> state may be used to decide which is the most recent version.
>
> (If client affinity is not maintained, then, without a backchannel of
> some sort, we are in trouble).
>
> When a merge occurs, WADI will be able to merge the internal
> representations of the participants, delegating awkward decisions about
> divergent state to deploy-time pluggable algorithms. Hopefully, each
> piece of state will only have diverged in one cluster fragment, so
> choosing which copy to go forward with will be trivial.

> A node death can just be thought of as a 'split' which never 'merges'.

Definitely :)
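
And the merge itself could then be little more than a per-session
comparison. A rough sketch, reusing the hypothetical VersionedSession
from above; the deploy-time pluggable part is the resolver, which is
consulted only when the version number alone cannot decide:

    import java.util.HashMap;
    import java.util.Map;

    // Sketch only: merge the session maps of two fragments. Per session
    // id the higher version wins; the pluggable resolver handles the
    // genuinely ambiguous cases.
    public final class FragmentMerger {

        public interface ConflictResolver {
            VersionedSession resolve(VersionedSession a, VersionedSession b);
        }

        public Map<String, VersionedSession> merge(Map<String, VersionedSession> left,
                                                   Map<String, VersionedSession> right,
                                                   ConflictResolver resolver) {
            Map<String, VersionedSession> merged =
                    new HashMap<String, VersionedSession>(left);
            for (Map.Entry<String, VersionedSession> e : right.entrySet()) {
                VersionedSession mine = merged.get(e.getKey());
                VersionedSession theirs = e.getValue();
                if (mine == null) {
                    merged.put(e.getKey(), theirs);   // only the other fragment had it
                } else if (theirs.version() > mine.version()) {
                    merged.put(e.getKey(), theirs);   // trivial case: newer copy wins
                } else if (theirs.version() == mine.version()) {
                    // ambiguous: both fragments may have written it the same
                    // number of times, so delegate the decision
                    merged.put(e.getKey(), resolver.resolve(mine, theirs));
                }
                // else: my copy is newer and is already in the map
            }
            return merged;
        }
    }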

> Of course, multiple splits could occur concurrently and merging them is
> a little more complicated than I may have implied, but I am getting
> there....

Although I consider the problem of session replication less than
glamorous, since it is at hand, I would approach it this way:

1. The user should configure a minimum-degree-of-replication R. This is
the number of replicas of a specific session which need to be available in
order for an HTTP request to be serviced.

2. When an HTTP request arrives, if the cluster fragment which received it
does not have R copies of the session then it blocks (it waits until there
are). This should work in data centers because partitions are likely to be
very short-lived (aka virtual partitions, which are due to congestion, not
to any hardware issue).

3. If at any time an HTTP request reaches a server which does not itself
have a replica of the session, it sends a client redirect to a node which does.

4. When a new cluster is formed (with nodes coming or going), it takes an
inventory of all the sessions and their version numbers. Sessions which do
not have the necessary degree of replication need to be fixed, which will
require some state transfer, and possibly migration of some sessions for
proper load balancing.
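
To pull points 1-4 together, here is roughly what I have in mind for the
request path. A sketch only: R, the replica registry and the blocking
loop are all assumptions on my part, not an existing API.

    import java.util.Set;
    import java.util.concurrent.TimeUnit;

    // Sketch only: R is configured by the user (point 1), a request blocks
    // until R replicas of its session are visible (point 2), a node without
    // a replica names a redirect target (point 3), and a new view triggers
    // an inventory/repair pass (point 4).
    public final class ReplicationGuard {

        // Assumed interface: something that knows which nodes hold which sessions.
        public interface ReplicaRegistry {
            Set<String> holders(String sessionId);
            Set<String> allSessions();
            void copyToAdditionalNodes(String sessionId, int count);
        }

        private final int minReplicas;          // the configured R
        private final ReplicaRegistry registry;

        public ReplicationGuard(int minReplicas, ReplicaRegistry registry) {
            this.minReplicas = minReplicas;
            this.registry = registry;
        }

        // Point 2: block (bounded) until at least R copies exist again.
        public void awaitReplicas(String sessionId, long timeout, TimeUnit unit)
                throws InterruptedException {
            long deadline = System.nanoTime() + unit.toNanos(timeout);
            while (registry.holders(sessionId).size() < minReplicas) {
                if (System.nanoTime() > deadline) {
                    throw new IllegalStateException("partition outlasted the timeout");
                }
                Thread.sleep(50);   // virtual partitions should heal quickly
            }
        }

        // Point 3: null means "serve locally", otherwise redirect the client there.
        public String redirectTargetOrNull(String sessionId, String localNode) {
            Set<String> holders = registry.holders(sessionId);
            if (holders.contains(localNode) || holders.isEmpty()) {
                return null;
            }
            return holders.iterator().next();
        }

        // Point 4: after a view change, fix sessions that fell below R copies.
        public void repairAfterViewChange() {
            for (String sessionId : registry.allSessions()) {
                int have = registry.holders(sessionId).size();
                if (have < minReplicas) {
                    registry.copyToAdditionalNodes(sessionId, minReplicas - have);
                }
            }
        }
    }

Load balancing after the repair pass is a separate concern;
copyToAdditionalNodes only restores the degree of replication.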

Guglielmo
