Re: [Xen-devel] [PATCH 06/60] xen/sched: move per-vcpu scheduler private data pointer to sched_unit

2019-07-19 Thread Dario Faggioli
On Fri, 2019-07-19 at 07:03 +0200, Juergen Gross wrote:
> On 19.07.19 00:52, Dario Faggioli wrote:
> > On Tue, 2019-05-28 at 12:32 +0200, Juergen Gross wrote:
> > > This prepares making the different schedulers vcpu agnostic.
>
> > But shouldn't then the struct be called csched2_unit, and cited
> > functions be called csched2_alloc_udata() and csched2_unit()?
> > Now, I know that these transformations happen later in the series.
> > And, this time, I'm not asking to consolidate the patches.
> > 
> > However:
> > - can the changelog of this patch be a little more explicit, not
> >   only informing that this is a preparatory patch, but also
> >   explaining briefly the temporary inconsistency
> 
> Sure.
> 
Ok, thanks.

> > - could the patches that deal with this be grouped together, so
> >   that they are close to each other in the series [...]
> 
> There are some patches which could be reordered, but I'm not sure the
> needed work is worth it. By moving the patches you named closer to
> each other there will be other closely related patches ripped apart
> from each other.
>
Yes, I appreciate that.
> 
> In the end it will make no sense to apply only some patches of the
> main series. The huge amount of patches is meant only to make review
> easier.
>
And yet, this happens sometimes, especially for big series, and it is
exactly what I was thinking of when making this comment.

Anyway, let's go with an updated changelog for now... And I'll build a
better and more solid opinion after having looked at the rest of the
series.

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)



___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH 06/60] xen/sched: move per-vcpu scheduler private data pointer to sched_unit

2019-07-18 Thread Juergen Gross

On 19.07.19 00:52, Dario Faggioli wrote:
> On Tue, 2019-05-28 at 12:32 +0200, Juergen Gross wrote:
>> This prepares making the different schedulers vcpu agnostic.
>>
> Ok, but the scheduler private data is, actually, for all the
> schedulers, per-vcpu scheduler data such as, for instance,
> struct csched2_vcpu.
>
> After this patch we have (sticking to Credit2 as an example)
> csched2_vcpu allocated by a function called csched2_alloc_vdata() but
> stored on a per-sched_unit basis.
>
> Similarly, we have an accessor method called csched2_vcpu() which
> returns the struct csched2_vcpu which was stored in the per-unit
> private space.
>
> But shouldn't then the struct be called csched2_unit, and cited
> functions be called csched2_alloc_udata() and csched2_unit()?
> Now, I know that these transformations happen later in the series.
> And, this time, I'm not asking to consolidate the patches.
>
> However:
> - can the changelog of this patch be a little more explicit, not only
>   informing that this is a preparatory patch, but also explaining
>   briefly the temporary inconsistency

Sure.

> - could the patches that deal with this be grouped together, so that
>   they are close to each other in the series (e.g., this patch, the
>   renaming hunks of patch 10 and patches that are currently 20 to 24)?
>   Or are there dependencies that I am not considering?

There are some patches which could be reordered, but I'm not sure the
needed work is worth it. By moving the patches you named closer to each
other there will be other closely related patches ripped apart from
each other.

In the end it will make no sense to apply only some patches of the main
series. The huge amount of patches is meant only to make review easier.


Juergen


Re: [Xen-devel] [PATCH 06/60] xen/sched: move per-vcpu scheduler private data pointer to sched_unit

2019-07-18 Thread Dario Faggioli
On Tue, 2019-05-28 at 12:32 +0200, Juergen Gross wrote:
> This prepares making the different schedulers vcpu agnostic.
> 
Ok, but the scheduler private data is, actually, for all the
schedulers, per-vcpu scheduler data such as, for instance,
struct csched2_vcpu.

After this patch we have (sticking to Credit2 as an example)
csched2_vcpu allocated by a function called csched2_alloc_vdata() but
stored on a per-sched_unit basis.

Similarly, we have an accessor method called csched2_vcpu() which
returns the struct csched2_vcpu which was stored in the per-unit
private space.

But shouldn't then the struct be called csched2_unit, and cited
functions be called csched2_alloc_udata() and csched2_unit()?
Now, I know that these transformations happen later in the series.
And, this time, I'm not asking to consolidate the patches.

However:
- can the changelog of this patch be a little more explicit, not only
  informing that this is a preparatory patch, but also explaining
  briefly the temporary inconsistency
- could the patches that deal with this be grouped together, so that 
  they are close to each other in the series (e.g., this patch, the 
  renaming hunks of patch 10 and patches that are currently 20 to 24)?
  Or are there dependencies that I am not considering?

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)




[Xen-devel] [PATCH 06/60] xen/sched: move per-vcpu scheduler private data pointer to sched_unit

2019-05-28 Thread Juergen Gross
This prepares making the different schedulers vcpu agnostic.

Signed-off-by: Juergen Gross 
---
 xen/common/sched_arinc653.c |  4 ++--
 xen/common/sched_credit.c   |  6 +++---
 xen/common/sched_credit2.c  | 10 +++++-----
 xen/common/sched_null.c     |  4 ++--
 xen/common/sched_rt.c       |  4 ++--
 xen/common/schedule.c       | 24 ++++++++++++------------
 xen/include/xen/sched.h     |  2 +-
 7 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/xen/common/sched_arinc653.c b/xen/common/sched_arinc653.c
index 283730b3f8..840891318e 100644
--- a/xen/common/sched_arinc653.c
+++ b/xen/common/sched_arinc653.c
@@ -53,7 +53,7 @@
  * Return a pointer to the ARINC 653-specific scheduler data information
  * associated with the given VCPU (vc)
  */
-#define AVCPU(vc) ((arinc653_vcpu_t *)(vc)->sched_priv)
+#define AVCPU(vc) ((arinc653_vcpu_t *)(vc)->sched_unit->priv)
 
 /**
  * Return the global scheduler private data given the scheduler ops pointer
@@ -647,7 +647,7 @@ a653_switch_sched(struct scheduler *new_ops, unsigned int cpu,
 
 ASSERT(!pdata && svc && is_idle_vcpu(svc->vc));
 
-idle_vcpu[cpu]->sched_priv = vdata;
+idle_vcpu[cpu]->sched_unit->priv = vdata;
 
    return &sd->_lock;
 }
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index eba4db38bb..4cfef189aa 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -83,7 +83,7 @@
 ((struct csched_private *)((_ops)->sched_data))
 #define CSCHED_PCPU(_c) \
 ((struct csched_pcpu *)per_cpu(schedule_data, _c).sched_priv)
-#define CSCHED_VCPU(_vcpu)  ((struct csched_vcpu *) (_vcpu)->sched_priv)
+#define CSCHED_VCPU(_vcpu)  ((struct csched_vcpu *) (_vcpu)->sched_unit->priv)
 #define CSCHED_DOM(_dom)((struct csched_dom *) (_dom)->sched_priv)
 #define RUNQ(_cpu)  (&(CSCHED_PCPU(_cpu)->runq))
 
@@ -641,7 +641,7 @@ csched_switch_sched(struct scheduler *new_ops, unsigned int cpu,
 
 ASSERT(svc && is_idle_vcpu(svc->vcpu));
 
-idle_vcpu[cpu]->sched_priv = vdata;
+idle_vcpu[cpu]->sched_unit->priv = vdata;
 
 /*
  * We are holding the runqueue lock already (it's been taken in
@@ -1022,7 +1022,7 @@ static void
 csched_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
 {
 struct vcpu *vc = unit->vcpu;
-struct csched_vcpu *svc = vc->sched_priv;
+struct csched_vcpu *svc = unit->priv;
 spinlock_t *lock;
 
 BUG_ON( is_idle_vcpu(vc) );
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index def8d87484..86b44dc6cf 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -572,7 +572,7 @@ static inline struct csched2_pcpu *csched2_pcpu(unsigned int cpu)
 
 static inline struct csched2_vcpu *csched2_vcpu(const struct vcpu *v)
 {
-return v->sched_priv;
+return v->sched_unit->priv;
 }
 
 static inline struct csched2_dom *csched2_dom(const struct domain *d)
@@ -970,7 +970,7 @@ _runq_assign(struct csched2_vcpu *svc, struct csched2_runqueue_data *rqd)
 static void
 runq_assign(const struct scheduler *ops, struct vcpu *vc)
 {
-struct csched2_vcpu *svc = vc->sched_priv;
+struct csched2_vcpu *svc = vc->sched_unit->priv;
 
 ASSERT(svc->rqd == NULL);
 
@@ -997,7 +997,7 @@ _runq_deassign(struct csched2_vcpu *svc)
 static void
 runq_deassign(const struct scheduler *ops, struct vcpu *vc)
 {
-struct csched2_vcpu *svc = vc->sched_priv;
+struct csched2_vcpu *svc = vc->sched_unit->priv;
 
 ASSERT(svc->rqd == c2rqd(ops, vc->processor));
 
@@ -3108,7 +3108,7 @@ static void
 csched2_unit_insert(const struct scheduler *ops, struct sched_unit *unit)
 {
 struct vcpu *vc = unit->vcpu;
-struct csched2_vcpu *svc = vc->sched_priv;
+struct csched2_vcpu *svc = unit->priv;
 struct csched2_dom * const sdom = svc->sdom;
 spinlock_t *lock;
 
@@ -3887,7 +3887,7 @@ csched2_switch_sched(struct scheduler *new_ops, unsigned int cpu,
 ASSERT(!local_irq_is_enabled());
    write_lock(&prv->lock);
 
-idle_vcpu[cpu]->sched_priv = vdata;
+idle_vcpu[cpu]->sched_unit->priv = vdata;
 
 rqi = init_pdata(prv, pdata, cpu);
 
diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
index bf52c84b0f..b622c4d7dc 100644
--- a/xen/common/sched_null.c
+++ b/xen/common/sched_null.c
@@ -117,7 +117,7 @@ static inline struct null_private *null_priv(const struct scheduler *ops)
 
 static inline struct null_vcpu *null_vcpu(const struct vcpu *v)
 {
-return v->sched_priv;
+return v->sched_unit->priv;
 }
 
 static inline bool vcpu_check_affinity(struct vcpu *v, unsigned int cpu,
@@ -392,7 +392,7 @@ static spinlock_t *null_switch_sched(struct scheduler *new_ops,
 
 ASSERT(nvc && is_idle_vcpu(nvc->vcpu));
 
-idle_vcpu[cpu]->sched_priv = vdata;
+idle_vcpu[cpu]->sched_unit->priv = vdata;
 
 /*
  * We are holding the runqueue lock already (it's been taken in
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 63ec732157..700ee9353e 100644
--- a/xen/common/sched_rt.c
+++