This is my attempt at understanding the situation, based on
descriptions provided on the list in the context of toolstack patches
which were attempting to work around the anomaly.

The multiple `xxx' entries reflect (1) my lack of complete
understanding and (2) API defects which I think I have identified.

Signed-off-by: Ian Jackson <ian.jack...@eu.citrix.com>
Cc: Wei Liu <wei.l...@citrix.com>
CC: Dario Faggioli <dario.faggi...@citrix.com>
CC: Juergen Gross <jgr...@suse.com>
CC: George Dunlap <george.dun...@eu.citrix.com>
CC: Jan Beulich <jbeul...@suse.com>
CC: Konrad Rzeszutek Wilk <konrad.w...@oracle.com>
---
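(Not part of the patch.)  For concreteness, below is a rough sketch of
how a toolstack caller might cope with the EBUSY cases described by the
comment added here.  It assumes the libxc wrappers
xc_cpupool_removecpu()/xc_cpupool_addcpu() with their usual
(xch, poolid, cpu) signatures; the exact error reporting convention
(return value vs errno) is an assumption on my part, and the helper
name is made up.

/* Illustrative sketch only, not intended for commit. */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <xenctrl.h>

/*
 * Try to move `cpu' out of cpupool `poolid'.  On EBUSY we cannot tell
 * (from the errno alone) whether nothing happened or whether the
 * removal was carried out only partially, so play safe and ask for
 * the pcpu to be put back, as the comment suggests.
 */
static int remove_cpu_carefully(xc_interface *xch, uint32_t poolid, int cpu)
{
    if ( xc_cpupool_removecpu(xch, poolid, cpu) == 0 )
        return 0;

    if ( errno == EBUSY )
    {
        /*
         * Recovery step from the comment: add the pcpu back to the
         * cpupool it came from.  If the removal in fact had no effect,
         * this may itself fail harmlessly.
         */
        if ( xc_cpupool_addcpu(xch, poolid, cpu) )
            fprintf(stderr, "could not restore cpu %d to pool %u: %s\n",
                    cpu, poolid, strerror(errno));
        return -1;
    }

    fprintf(stderr, "removing cpu %d from pool %u: %s\n",
            cpu, poolid, strerror(errno));
    return -1;
}

Obviously, if RMCPU reported the pinned-vcpu case with a distinct errno
value, as the comment suggests it ought to, the caller could avoid the
blind add-back above.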
 xen/include/public/sysctl.h |   28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 0849908..cfccf38 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -560,6 +560,34 @@ struct xen_sysctl_cpupool_op {
 typedef struct xen_sysctl_cpupool_op xen_sysctl_cpupool_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_sysctl_cpupool_op_t);
 
+/*
+ * cpupool operations may return EBUSY if the operation cannot be
+ * executed right now because of another cpupool operation which is
+ * still in progress.  In this case, EBUSY means that the failed
+ * operation had no effect.
+ *
+ * Some operations including at least RMCPU (xxx which others?) may
+ * also return EBUSY because a guest has temporarily pinned one of its
+ * vcpus to the pcpu in question.  It is the pious hope (xxx) of the
+ * author of this comment that this can only occur for domains which
+ * have been granted some kind of hardware privilege (eg passthrough).
+ *
+ * In this case the operation may have been partially carried out and
+ * the pcpu is left in an anomalous state.  In this state the pcpu may
+ * be used by some not readily predictable subset of the vcpus
+ * (domains) whose vcpus are in the old cpupool.  (xxx is this true?)
+ *
+ * This can be detected by seeing whether the pcpu can be added to a
+ * different cpupool.  (xxx this is a silly interface; the situation
+ * should be reported by a different errno value, at least.)  If the
+ * pcpu can't be added to a different cpupool for this reason,
+ * attempts to do so will return (xxx what errno value?).
+ *
+ * The anomalous situation can be recovered by adding the pcpu back to
+ * the cpupool it came from (xxx this permits a buggy or malicious
+ * guest to prevent the cpu ever being removed from its cpupool).
+ */
+
 #define ARINC653_MAX_DOMAINS_PER_SCHEDULE   64
 /*
  * This structure is used to pass a new ARINC653 schedule from a
-- 
1.7.10.4

