The problem you are describing is called split-brain.  Ceph clusters run an odd 
number of monitors, and a quorum (a strict majority of them) is required before 
objects can be served.  The partition holding the minority of monitors will wait 
harmlessly until connectivity is reestablished, so the two sides cannot diverge 
and there is nothing to reconcile when the link comes back.
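
As a rough illustration (hostnames and addresses below are made up, not from 
any real deployment), a three-monitor layout split across two sites might look 
like this in ceph.conf:

# Hypothetical ceph.conf excerpt: three monitors across two sites.
# mon-a and mon-b are in site 1 (10.0.1.x), mon-c is in site 2 (10.0.2.x).
[global]
mon_initial_members = mon-a, mon-b, mon-c
mon_host = 10.0.1.10,10.0.1.11,10.0.2.10
# If the inter-site link drops, the side that can still reach mon-a and mon-b
# (2 of 3, a majority) keeps quorum and keeps serving; mon-c alone cannot form
# quorum, so clients on its side block until the link is restored.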

Adam Nielsen <a.niel...@shikadi.net> wrote:

>Thanks both for your answers - very informative.  I think I will set up a test 
>Ceph system on my home servers to try it out.
>
>I have one more question:
>
>Ceph seems to handle failed nodes well enough, but what about failed network 
>links?  Say you have a few systems in two locations, connected by a single 
>link.  If the link fails, you will have two isolated networks, each of which 
>will think the other has failed and presumably will try to go on as best it 
>can.  What happens when the link comes back up again?  What if the same file 
>was modified by both isolated clusters when the link was down?  What version 
>will end up back in the cluster?
>
>Thanks again,
>Adam.
>
