Re: [Xen-devel] [PATCH 2/9] xen/sched: make sched-if.h really scheduler private

2019-12-18 Thread Dario Faggioli
On Wed, 2019-12-18 at 10:15 +0100, Jürgen Groß wrote:
> On 18.12.19 10:10, Andrew Cooper wrote:
> > On 18/12/2019 09:08, Dario Faggioli wrote:
> > > > Signed-off-by: Juergen Gross 
> > > > 
> > > Reviewed-by: Dario Faggioli 
> > 
> > Thoughts on simply naming it private.h?
> 
> Fine with me. Dario?
> 
Ah, yes, indeed.

In fact, keeping the sched-if name inside common/sched would have been
another instance of the 'sched' repetition that I myself suggested we
limit or get rid of... yet I did not notice it when looking at the
patch. :-)

I am indeed ok with private.h.

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)




Re: [Xen-devel] [PATCH 2/9] xen/sched: make sched-if.h really scheduler private

2019-12-18 Thread Jürgen Groß

On 18.12.19 10:10, Andrew Cooper wrote:
> On 18/12/2019 09:08, Dario Faggioli wrote:
>> On Wed, 2019-12-18 at 08:48 +0100, Juergen Gross wrote:
>>> include/xen/sched-if.h should be private to scheduler code, so move
>>> it
>>> to common/sched/sched-if.h and move the remaining use cases to
>>> cpupool.c and schedule.c.
>>>
>> Very, very nice cleanup.
>
> Yup - very nice to see.
>
>>
>>> Signed-off-by: Juergen Gross 
>>>
>> Reviewed-by: Dario Faggioli 
>
> Thoughts on simply naming it private.h?

Fine with me. Dario?


Juergen



Re: [Xen-devel] [PATCH 2/9] xen/sched: make sched-if.h really scheduler private

2019-12-18 Thread Andrew Cooper
On 18/12/2019 09:08, Dario Faggioli wrote:
> On Wed, 2019-12-18 at 08:48 +0100, Juergen Gross wrote:
>> include/xen/sched-if.h should be private to scheduler code, so move
>> it
>> to common/sched/sched-if.h and move the remaining use cases to
>> cpupool.c and schedule.c.
>>
> Very, very nice cleanup.

Yup - very nice to see.

>
>> Signed-off-by: Juergen Gross 
>>
> Reviewed-by: Dario Faggioli 

Thoughts on simply naming it private.h?

~Andrew




Re: [Xen-devel] [PATCH 2/9] xen/sched: make sched-if.h really scheduler private

2019-12-18 Thread Dario Faggioli
On Wed, 2019-12-18 at 08:48 +0100, Juergen Gross wrote:
> include/xen/sched-if.h should be private to scheduler code, so move
> it
> to common/sched/sched-if.h and move the remaining use cases to
> cpupool.c and schedule.c.
> 
Very, very nice cleanup.

> Signed-off-by: Juergen Gross 
>
Reviewed-by: Dario Faggioli 

Thanks and Regards 
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)




[Xen-devel] [PATCH 2/9] xen/sched: make sched-if.h really scheduler private

2019-12-17 Thread Juergen Gross
include/xen/sched-if.h should be private to scheduler code, so move it
to common/sched/sched-if.h and move the remaining use cases to
cpupool.c and schedule.c.

Signed-off-by: Juergen Gross 
---
 xen/arch/x86/dom0_build.c                    |   5 +-
 xen/common/domain.c                          |  70 --
 xen/common/domctl.c                          | 135 +--
 xen/common/sched/cpupool.c                   |  13 +-
 xen/{include/xen => common/sched}/sched-if.h |   3 -
 xen/common/sched/sched_arinc653.c            |   3 +-
 xen/common/sched/sched_credit.c              |   2 +-
 xen/common/sched/sched_credit2.c             |   3 +-
 xen/common/sched/sched_null.c                |   3 +-
 xen/common/sched/sched_rt.c                  |   3 +-
 xen/common/sched/schedule.c                  | 191 ++-
 xen/include/xen/domain.h                     |   3 +
 xen/include/xen/sched.h                      |   7 +
 13 files changed, 228 insertions(+), 213 deletions(-)
 rename xen/{include/xen => common/sched}/sched-if.h (99%)
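
The API consequence visible in the hunks below is that code outside the
scheduler (dom0_build.c, domctl.c) stops dereferencing cpupool0->cpu_valid
directly and calls cpupool_valid_cpus(cpupool0) instead, since struct
cpupool is no longer visible once sched-if.h lives under common/sched/.
The standalone C sketch below only illustrates that opaque-type-plus-
accessor pattern; the names (struct pool, pool_valid_cpus) and the plain
unsigned long mask are invented for the example and are not Xen code.

/*
 * Sketch of the pattern the patch relies on: once the struct definition
 * is private to the scheduler, other code only holds an opaque pointer
 * and must go through an accessor.  All names are illustrative.
 */
#include <stdio.h>

/* "Public header" view: opaque declaration plus an accessor prototype. */
struct pool;                                   /* definition hidden */
const unsigned long *pool_valid_cpus(const struct pool *p);

/* "Private scheduler" view: full definition and the accessor body. */
struct pool {
    unsigned long cpu_valid;                   /* stand-in for a cpumask */
};

const unsigned long *pool_valid_cpus(const struct pool *p)
{
    return &p->cpu_valid;
}

int main(void)
{
    struct pool pool0 = { .cpu_valid = 0xf };

    /* Callers no longer touch pool0.cpu_valid directly. */
    printf("valid cpus mask: %#lx\n", *pool_valid_cpus(&pool0));
    return 0;
}

In the real patch such an accessor presumably sits among the 7 lines added
to xen/include/xen/sched.h, but its exact form is not shown in the hunks
quoted here.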

diff --git a/xen/arch/x86/dom0_build.c b/xen/arch/x86/dom0_build.c
index 28b964e018..56c2dee0fc 100644
--- a/xen/arch/x86/dom0_build.c
+++ b/xen/arch/x86/dom0_build.c
@@ -9,7 +9,6 @@
 #include 
 #include 
 #include 
-#include <xen/sched-if.h>
 #include 
 
 #include 
@@ -227,9 +226,9 @@ unsigned int __init dom0_max_vcpus(void)
 dom0_nodes = node_online_map;
 for_each_node_mask ( node, dom0_nodes )
 cpumask_or(&dom0_cpus, &dom0_cpus, &node_to_cpumask(node));
-cpumask_and(&dom0_cpus, &dom0_cpus, cpupool0->cpu_valid);
+cpumask_and(&dom0_cpus, &dom0_cpus, cpupool_valid_cpus(cpupool0));
 if ( cpumask_empty(&dom0_cpus) )
-cpumask_copy(&dom0_cpus, cpupool0->cpu_valid);
+cpumask_copy(&dom0_cpus, cpupool_valid_cpus(cpupool0));
 
 max_vcpus = cpumask_weight(&dom0_cpus);
 if ( opt_dom0_max_vcpus_min > max_vcpus )
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 66c7fc..f4f0a66262 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -10,7 +10,6 @@
 #include 
 #include 
 #include 
-#include <xen/sched-if.h>
 #include 
 #include 
 #include 
@@ -565,75 +564,6 @@ void __init setup_system_domains(void)
 #endif
 }
 
-void domain_update_node_affinity(struct domain *d)
-{
-cpumask_var_t dom_cpumask, dom_cpumask_soft;
-cpumask_t *dom_affinity;
-const cpumask_t *online;
-struct sched_unit *unit;
-unsigned int cpu;
-
-/* Do we have vcpus already? If not, no need to update node-affinity. */
-if ( !d->vcpu || !d->vcpu[0] )
-return;
-
-if ( !zalloc_cpumask_var(&dom_cpumask) )
-return;
-if ( !zalloc_cpumask_var(&dom_cpumask_soft) )
-{
-free_cpumask_var(dom_cpumask);
-return;
-}
-
-online = cpupool_domain_master_cpumask(d);
-
-spin_lock(&d->node_affinity_lock);
-
-/*
- * If d->auto_node_affinity is true, let's compute the domain's
- * node-affinity and update d->node_affinity accordingly. if false,
- * just leave d->auto_node_affinity alone.
- */
-if ( d->auto_node_affinity )
-{
-/*
- * We want the narrowest possible set of pcpus (to get the narowest
- * possible set of nodes). What we need is the cpumask of where the
- * domain can run (the union of the hard affinity of all its vcpus),
- * and the full mask of where it would prefer to run (the union of
- * the soft affinity of all its various vcpus). Let's build them.
- */
-for_each_sched_unit ( d, unit )
-{
-cpumask_or(dom_cpumask, dom_cpumask, unit->cpu_hard_affinity);
-cpumask_or(dom_cpumask_soft, dom_cpumask_soft,
-   unit->cpu_soft_affinity);
-}
-/* Filter out non-online cpus */
-cpumask_and(dom_cpumask, dom_cpumask, online);
-ASSERT(!cpumask_empty(dom_cpumask));
-/* And compute the intersection between hard, online and soft */
-cpumask_and(dom_cpumask_soft, dom_cpumask_soft, dom_cpumask);
-
-/*
- * If not empty, the intersection of hard, soft and online is the
- * narrowest set we want. If empty, we fall back to hard
- */
-dom_affinity = cpumask_empty(dom_cpumask_soft) ?
-   dom_cpumask : dom_cpumask_soft;
-
-nodes_clear(d->node_affinity);
-for_each_cpu ( cpu, dom_affinity )
-node_set(cpu_to_node(cpu), d->node_affinity);
-}
-
-spin_unlock(&d->node_affinity_lock);
-
-free_cpumask_var(dom_cpumask_soft);
-free_cpumask_var(dom_cpumask);
-}
-
-
 int domain_set_node_affinity(struct domain *d, const nodemask_t *affinity)
 {
 /* Being disjoint with the system is just wrong. */
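
The function removed above is not deleted but moved into scheduler-private
code (presumably part of the ~191-line growth of common/sched/schedule.c in
the diffstat), since it walks sched_units and uses
cpupool_domain_master_cpumask().  Its comments describe a simple narrowing
rule for the domain's node affinity; the standalone sketch below restates
that rule with plain bitmasks standing in for cpumask_t.  All names are
invented for illustration and none of this is Xen API.

/*
 * Narrowing rule described by the removed comments: take the union of the
 * hard affinities, keep only online CPUs, intersect with the union of the
 * soft affinities, and fall back to the hard set if that is empty.
 */
#include <stdio.h>

static unsigned long narrow_affinity(unsigned long hard_union,
                                     unsigned long soft_union,
                                     unsigned long online)
{
    unsigned long hard = hard_union & online;   /* where it can run */
    unsigned long soft = soft_union & hard;     /* where it prefers to run */

    return soft ? soft : hard;                  /* prefer soft, else hard */
}

int main(void)
{
    /* CPUs 0-3 online; hard affinity on CPUs 0-3, soft preference on 2-5. */
    unsigned long result = narrow_affinity(0x0f, 0x3c, 0x0f);

    printf("narrowed affinity mask: %#lx\n", result);  /* prints 0xc: CPUs 2 and 3 */
    return 0;
}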
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 03d0226039..3407db44fd 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -11,7 +11,6 @@
 #include 
 #include 
 #include 
-#include <xen/sched-if.h>
 #include 
 #include 
 #include 
@@ -65,9 +64,9 @@ static int bitmap_to_xenctl_bitmap(struct