* Update the instance pinning documentation in the location design document.

* Add instance pinning documentation to the hbal man page.

Signed-off-by: Oleg Ponomarev <[email protected]>
---
 doc/design-location.rst | 10 ++++++++--
 man/hbal.rst            | 27 +++++++++++++++++++++------
 2 files changed, 29 insertions(+), 8 deletions(-)

diff --git a/doc/design-location.rst b/doc/design-location.rst
index 55aa228..c718cc6 100644
--- a/doc/design-location.rst
+++ b/doc/design-location.rst
@@ -121,8 +121,14 @@ failure tag. Those tags indicate the the instance wants to be placed on a
 node tagged *x*. To make ``htools`` honor those desires, the metric is extended,
 appropriately weighted, by the following component.
 
-- The number of instances tagged *htools:desiredlocation:x* where their
-  primary node is not tagged with *x*.
+- The sum of dissatisfied desired locations over all cluster instances. A
+  desired location of an instance is dissatisfied if the instance is tagged,
+  e.g., *htools:desiredlocation:x* but its primary node is not tagged with *x*.
+
+This extension of the metric allows multiple desired locations to be
+specified for each instance. The desired locations may even be contradictory;
+contradictory desired locations mean that we do not care which of them is
+satisfied.
 
 Again, instance pinning is just heuristics, not a hard enforced requirement;
 it will only be achieved by the cluster metrics favouring such placements.
diff --git a/man/hbal.rst b/man/hbal.rst
index 8cc4a72..49022e5 100644
--- a/man/hbal.rst
+++ b/man/hbal.rst
@@ -139,6 +139,8 @@ following components:
 - standard deviation of the CPU load provided by MonD
 - the count of instances with primary and secondary in the same failure
   domain
+- the overall sum of dissatisfied desired locations among all cluster 
+  instances
 
 The free memory and free disk values help ensure that all nodes are
 somewhat balanced in their resource usage. The reserved memory helps
@@ -147,8 +149,8 @@ instances, and that no node keeps too much memory reserved for
 N+1. And finally, the N+1 percentage helps guide the algorithm towards
 eliminating N+1 failures, if possible.
 
-Except for the N+1 failures, offline instances counts, and failure
-domain violation counts, we use the
+Except for the N+1 failures, offline instances counts, failure
+domain violation counts, and desired location counts, we use the
 standard deviation since when used with values within a fixed range
 (we use percents expressed as values between zero and one) it gives
 consistent results across all metrics (there are some small issues
@@ -186,10 +188,10 @@ heuristic, instances from nodes with high CPU load will tend to move to
 nodes with less CPU load.
 
 On a perfectly balanced cluster (all nodes the same size, all
-instances the same size and spread across the nodes equally), the
-values for all metrics would be zero, with the exception of the total
-percentage of reserved memory. This doesn't happen too often in
-practice :)
+instances the same size and spread across the nodes equally, and
+all desired locations satisfied), the values for all metrics
+would be zero, with the exception of the total percentage of
+reserved memory. This doesn't happen too often in practice :)
 
 OFFLINE INSTANCES
 ~~~~~~~~~~~~~~~~~
@@ -264,6 +266,19 @@ Instances with primary and secondary node having a common cause of failure are
 considered badly placed. While such placements are always allowed, they count
 heavily towards the cluster score.
 
+DESIRED LOCATION TAGS
+~~~~~~~~~~~~~~~~~~~~~
+
+Sometimes, administrators want specific instances located in a particular,
+typically geographic, location. To support this kind of request, instances
+can be assigned tags of the form *htools:desiredlocation:x* where *x* is a
+failure tag. Those tags indicate that the instance wants to be placed on a
+node tagged *x*.
+
+Instance pinning is just a heuristic, not a hard enforced requirement;
+it will only be achieved by the cluster metrics favouring such placements.
+
+
 OPTIONS
 -------
 
-- 
1.9.1
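
For illustration only, the metric component this patch documents can be
sketched as follows. This is a hypothetical Python sketch, not Ganeti's
actual htools code (which is Haskell); the function name and data layout
are invented for the example, but the tag prefix *htools:desiredlocation:*
is the one the patch describes.

```python
# Sketch of the "dissatisfied desired locations" metric component:
# for every instance, count each htools:desiredlocation:x tag whose
# location x is not among the tags of the instance's primary node.
DESIRED_LOCATION_PREFIX = "htools:desiredlocation:"

def dissatisfied_desired_locations(instances, node_tags):
    """Sum dissatisfied desired locations over all instances.

    instances: list of (instance_tag_set, primary_node_name) pairs
    node_tags: dict mapping node name -> set of its failure tags
    """
    count = 0
    for tags, pnode in instances:
        for tag in tags:
            if tag.startswith(DESIRED_LOCATION_PREFIX):
                location = tag[len(DESIRED_LOCATION_PREFIX):]
                if location not in node_tags.get(pnode, set()):
                    count += 1
    return count

# Example: inst1 desires locations "a" (satisfied by node1) and "b"
# (not satisfied), so it contributes 1; inst2 has no desired location.
nodes = {"node1": {"a"}, "node2": set()}
insts = [
    ({"htools:desiredlocation:a", "htools:desiredlocation:b"}, "node1"),
    (set(), "node2"),
]
print(dissatisfied_desired_locations(insts, nodes))  # prints 1
```

Note how the contradictory tags on inst1 contribute only for the
unsatisfied location, matching the patch's point that with contradictory
desired locations any one of them being satisfied is acceptable.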
