> Rest LGTM, thanks

Based on your comments, I propose the following interdiff.


commit 57bc7a30b21e4fca3c6fb27d121da0a3f5886500
Author: Klaus Aehlig <[email protected]>
Date:   Tue Oct 13 12:53:54 2015 +0200

    Interdiff

diff --git a/doc/design-n-m-redundancy.rst b/doc/design-n-m-redundancy.rst
index 696bd5e..4536f4c 100644
--- a/doc/design-n-m-redundancy.rst
+++ b/doc/design-n-m-redundancy.rst
@@ -12,7 +12,10 @@ Current state and shortcomings
 ==============================
 
 Ganeti keeps the cluster N+1 redundant, also taking into account
-:doc:`design-shared-storage-redundancy`. However, e.g., for planning
+:doc:`design-shared-storage-redundancy`. In other words, Ganeti
+tries to keep the cluster in a state where, after the failure of a
+single node, no matter which one, all instances can be started immediately.
+However, e.g., for planning
 maintenance, it is sometimes desirable to know from how many node
 losses the cluster can recover from. This is also useful information,
 when operating big clusters and expecting long times for hardware repair.
@@ -28,8 +31,11 @@ The intuitive meaning of an N+M redundant cluster is that M nodes can
 fail without instances being lost. However, when DRBD is used, already
 failure of 2 nodes can cause complete loss of an instance. Therefore, the
 best we can hope for, is to be able to recover from M sequential failures.
+This intuition, that a cluster is N+M redundant if M nodes can fail one by
+one, leaving enough time for a rebalance in between, without losing
+instances, is formalized in the next definition.
 
-Definition of M+M redundancy
+Definition of N+M redundancy
 ----------------------------
 
 We keep the definition of :doc:`design-shared-storage-redundancy`. Moreover,

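Just to illustrate the "fail one node, rebalance, repeat" reading informally:
the following is only a toy sketch, not Ganeti code and not the definition the
document goes on to give; the helpers n_plus_1_ok and n_plus_m_ok, the node
model and the numbers are made up. It models a node only by the total memory
its instances use, treats memory as freely divisible, and assumes a perfect,
evenly spread rebalance after each failure.

    # Toy sketch only, not Ganeti code.
    def n_plus_1_ok(loads, capacity):
        # Every node's instances must fit into the spare capacity of
        # the remaining nodes.
        return all(
            load <= sum(capacity - other
                        for i, other in enumerate(loads) if i != failed)
            for failed, load in enumerate(loads)
        )

    def n_plus_m_ok(loads, capacity, m):
        # N+0 holds trivially; N+(M+1) holds if the cluster is N+1
        # redundant and, after losing a node and rebalancing, the
        # remaining cluster is still N+M redundant.  Because the toy
        # rebalance spreads the (unchanged) total load evenly, it does
        # not matter which node fails, so we recurse only once instead
        # of over every node.
        if m == 0:
            return True
        if not n_plus_1_ok(loads, capacity):
            return False
        rest = len(loads) - 1
        rebalanced = [sum(loads) / rest] * rest
        return n_plus_m_ok(rebalanced, capacity, m - 1)

    # Four 64 GB nodes each carrying 30 GB of instances: N+2 but not
    # N+3 in this toy model.
    print(n_plus_m_ok([30, 30, 30, 30], 64, 2))  # True
    print(n_plus_m_ok([30, 30, 30, 30], 64, 3))  # False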

-- 
Klaus Aehlig
Google Germany GmbH, Dienerstr. 12, 80331 Muenchen
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg
Geschaeftsfuehrer: Matthew Scott Sucherman, Paul Terence Manicle
