On Tue, 2015-05-12 at 12:01 -0400, Meng Xu wrote:
> Hi Dario,
> 
Hi,

> 2015-05-12 10:06 GMT-04:00 Dario Faggioli <dario.faggi...@citrix.com>:

> > --- a/xen/common/sched_rt.c
> > +++ b/xen/common/sched_rt.c
> > @@ -124,6 +124,24 @@
> >  #define TRC_RTDS_BUDGET_REPLENISH TRC_SCHED_CLASS_EVT(RTDS, 4)
> >  #define TRC_RTDS_SCHED_TASKLET    TRC_SCHED_CLASS_EVT(RTDS, 5)
> >
> > + /*
> > +  * Useful to avoid too many cpumask_var_t on the stack.
> > +  */
> > +static cpumask_t **_cpumask_scratch;
> > +#define cpumask_scratch _cpumask_scratch[smp_processor_id()]
> 
> The cpumask_scratch seems never used in this patch.. Did I miss anything?
>
No, it's never used: as it happens, the use case this patch deals with
needs to reference the _cpumask_scratch array explicitly. That should
be the exception rather than the rule, and the reason why it is
necessary this time is explained in the code comments.
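
Just to illustrate what I mean (a rough sketch typed here in the email,
not a hunk from the patch; the function and its body are made up, only
the cpumask helpers are the real ones), the dump path wants the scratch
mask of a specific pCPU, so it has to index the array directly:

/* Illustrative only: print a vcpu's effective affinity using the
 * scratch mask of the pCPU the vcpu sits on, rather than the one
 * of the pCPU running this code. */
static void dump_vcpu_example(const struct vcpu *v)
{
    char cpustr[1024];

    /* We want _cpumask_scratch[v->processor]; cpumask_scratch would
     * expand to _cpumask_scratch[smp_processor_id()] instead. */
    cpumask_and(_cpumask_scratch[v->processor],
                v->cpu_hard_affinity, &cpu_online_map);
    cpumask_scnprintf(cpustr, sizeof(cpustr),
                      _cpumask_scratch[v->processor]);
    printk("affinity=%s\n", cpustr);
}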

> If it's not used in any other places, do we really need this #define?
> 
We do, IMO. The changelog says that, in addition to improving the dump
output, this change puts in place a "scratch cpumask machinery", useful
for reducing the use of on-stack or dynamically allocated cpumask-s.
That #define is part of the machinery, as it is what people should use,
wherever possible, to reference the scratch bitmap.
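
To make it clear how I expect the machinery to be used from now on
(again, just a sketch with made up names, not code from any existing
patch), a hot path that currently puts a cpumask_t on the stack could
do something like this instead:

/* Illustrative only: pick a cpu from the intersection of a vcpu's
 * hard affinity and the idlers, with no cpumask on the stack. */
static int pick_idle_cpu_example(const cpumask_t *hard_affinity,
                                 const cpumask_t *idlers)
{
    /* cpumask_scratch is _cpumask_scratch[smp_processor_id()] */
    cpumask_and(cpumask_scratch, hard_affinity, idlers);

    if ( cpumask_empty(cpumask_scratch) )
        return -1;

    return cpumask_first(cpumask_scratch);
}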

So, let me ask a question of my own: when can we expect a patch that
takes advantage of the scratch cpumask machinery introduced here, in
order to get rid of some (most, hopefully) of the cpumask_t and
cpumask_var_t all around sched_rt.c? :-D

When that happens, you'll need that #define, and if I kill it from
here, you'll have to introduce it yourself. As said, I'd rather have it
introduced here, but I can live with you adding it in your patch(es),
if that's what everyone else prefers.

Regards,
Dario
