Re: [Xen-devel] [PATCH 17/19] xen: credit2: the private scheduler lock can be an rwlock.

2016-07-07 Thread George Dunlap
On Sat, Jun 18, 2016 at 12:13 AM, Dario Faggioli
 wrote:
> In fact, the data it protects only change either at init-time,
> during cpupools manipulation, or when changing domains' weights.
> In all other cases (namely, load balancing, reading weights
> and status dumping), information is only read.
>
> Therefore, let the lock be a read/write one. This means there
> is no full serialization point for the whole scheduler and
> for all the pCPUs of the host any longer.
>
> This is particularly good for scalability (especially when doing
> load balancing).
>
> Also, update the high level description of the locking discipline,
> and take the chance for rewording it a little bit (as well as
> for adding a couple of locking related ASSERT()-s).
>
> Signed-off-by: Dario Faggioli 

Looks good:

Reviewed-by: George Dunlap 

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH 17/19] xen: credit2: the private scheduler lock can be an rwlock.

2016-06-17 Thread Dario Faggioli
In fact, the data it protects only change either at init-time,
during cpupools manipulation, or when changing domains' weights.
In all other cases (namely, load balancing, reading weights
and status dumping), information is only read.

Therefore, let the lock be a read/write one. This means there
is no full serialization point for the whole scheduler and
for all the pCPUs of the host any longer.

This is particularly good for scalability (especially when doing
load balancing).

Also, update the high level description of the locking discipline,
and take the chance for rewording it a little bit (as well as
for adding a couple of locking related ASSERT()-s).

Signed-off-by: Dario Faggioli 
---
Cc: George Dunlap 
Cc: Anshul Makkar 
Cc: David Vrabel 
---
 xen/common/sched_credit2.c |  133 ++--
 1 file changed, 80 insertions(+), 53 deletions(-)

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 3fdc91c..93943fa 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -85,17 +85,37 @@
 
 /*
  * Locking:
- * - Schedule-lock is per-runqueue
- *  + Protects runqueue data, runqueue insertion, &c
- *  + Also protects updates to private sched vcpu structure
- *  + Must be grabbed using vcpu_schedule_lock_irq() to make sure
- *    vcpu->processor doesn't change under our feet.
- * - Private data lock
- *  + Protects access to global domain list
- *  + All other private data is written at init and only read afterwards.
+ *
+ * - runqueue lock
+ *  + it is per-runqueue, so:
+ *   * cpus in a runqueue take the runqueue lock, when using
+ * pcpu_schedule_lock() / vcpu_schedule_lock() (and friends),
+ *   * a cpu may (try to) take a "remote" runqueue lock, e.g., for
+ * load balancing;
+ *  + serializes runqueue operations (removing and inserting vcpus);
+ *  + protects runqueue-wide data in csched2_runqueue_data;
+ *  + protects vcpu parameters in csched2_vcpu for the vcpu in the
+ *runqueue.
+ *
+ * - Private scheduler lock
+ *  + protects scheduler-wide data in csched2_private, such as:
+ *   * the list of domains active in this scheduler,
+ *   * what cpus and what runqueues are active and in what
+ * runqueue each cpu is;
+ *  + serializes the operation of changing the weights of domains;
+ *
+ * - Type:
+ *  + runqueue locks are 'regular' spinlocks;
+ *  + the private scheduler lock can be an rwlock. In fact, data
+ *it protects is modified only during initialization, cpupool
+ *manipulation and when changing weights, and read in all
+ *other cases (e.g., during load balancing).
+ *
  * Ordering:
- * - We grab private->schedule when updating domain weight; so we
- *  must never grab private if a schedule lock is held.
+ *  + trylock must be used when wanting to take a runqueue lock,
+ *    if we already hold another one;
+ *  + if taking both a runqueue lock and the private scheduler
+ *    lock is necessary, the latter must always be taken first.
  */
 
 /*
@@ -342,7 +362,7 @@ struct csched2_runqueue_data {
  * System-wide private data
  */
 struct csched2_private {
-spinlock_t lock;
+rwlock_t lock;
 cpumask_t initialized; /* CPU is initialized for this pool */
 
 struct list_head sdom; /* Used mostly for dump keyhandler. */
@@ -1300,13 +1320,14 @@ static void
 csched2_vcpu_wake(const struct scheduler *ops, struct vcpu *vc)
 {
 struct csched2_vcpu * const svc = CSCHED2_VCPU(vc);
+unsigned int cpu = vc->processor;
 s_time_t now;
 
-/* Schedule lock should be held at this point. */
+ASSERT(spin_is_locked(per_cpu(schedule_data, cpu).schedule_lock));
 
 ASSERT(!is_idle_vcpu(vc));
 
-if ( unlikely(curr_on_cpu(vc->processor) == vc) )
+if ( unlikely(curr_on_cpu(cpu) == vc) )
 {
 SCHED_STAT_CRANK(vcpu_wake_running);
 goto out;
csched2_cpu_pick(const struct scheduler *ops, struct vcpu *vc)
 ASSERT(!cpumask_empty(&prv->active_queues));
 
 /* Locking:
- * - vc->processor is already locked
+ * - Runqueue lock of vc->processor is already locked
  * - Need to grab prv lock to make sure active runqueues don't
  *   change
  * - Need to grab locks for other runqueues while checking
@@ -1410,8 +1431,9 @@ csched2_cpu_pick(const struct scheduler *ops, struct vcpu *vc)
  * just grab the prv lock.  Instead, we'll have to trylock, and
  * do something else reasonable if we fail.
  */
+ASSERT(spin_is_locked(per_cpu(schedule_data, vc->processor).schedule_lock));
 
-if ( !spin_trylock(&prv->lock) )
+if ( !read_trylock(&prv->lock) )
 {
 /* We may be here because someone requested us to migrate */
 __clear_bit(__CSFLAG_runq_migrate_request, &svc->flags);
@@ -1493,7 +1515,7 @@ csched2_cpu_pick(const struct scheduler *ops, struct vcpu *vc)
 }
 
 out_up:
-spin_unlock(&prv->lock);
+read_unlock(&prv->lock);
 
 if ( unlikely(tb_init_done) )
 {
@@ -1647,15 +1669