Re: [libvirt] [PATCH 29/34] conf: Don't store vcpusched orthogonally to other vcpu info

2016-01-29 Thread Peter Krempa
On Mon, Jan 18, 2016 at 18:06:22 -0500, John Ferlan wrote:
> On 01/14/2016 11:27 AM, Peter Krempa wrote:

[...]

> > +virDomainDefGetVcpuSched(virDomainDefPtr def,
> > + unsigned int vcpu)
> > +{
> > +virDomainVcpuInfoPtr vcpuinfo;
> > +
> > +if (!(vcpuinfo = virDomainDefGetVcpu(def, vcpu)))
> > +return NULL;
> 
> Does there need to be a vcpuinfo->online check?  (aka OnlineVcpuMap)

No. If you look carefully you'll notice that the scheduler can also be
specified for offline vCPUs; it is applied when the cpus are brought online.

I'm planning to do the same for the pinning bitmaps in part 3.
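To illustrate the layout being described (a minimal sketch with made-up type
names, not the actual libvirt definitions): the scheduler settings live inside
the per-vcpu struct, so they can be parsed for an offline vcpu and simply
picked up when that vcpu is brought online.

    /* Hypothetical sketch, not libvirt code: per-vcpu storage keeps the
     * scheduler settings even while the vcpu is offline. */
    #include <stdbool.h>

    typedef struct {
        int policy;    /* e.g. none/batch/idle/fifo/rr */
        int priority;  /* only meaningful for fifo/rr */
    } ThreadSchedParam;

    typedef struct {
        bool online;             /* currently plugged in? */
        ThreadSchedParam sched;  /* kept regardless of online state */
    } VcpuInfo;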

> 
> Will the caller need to differentiate?  I know this is the parsing
> code... just thinking while typing especially for non-static function.
> Later thoughts made me think this should be static for parse...

Yes it's not used anywhere else. I don't remember why I didn't make it
static.

> 


> > @@ -14592,6 +14601,55 @@ virDomainSchedulerParse(xmlNodePtr node,
> > 
> > 
> >  static int
> > +virDomainThreadSchedParseHelper(xmlNodePtr node,
> > +const char *name,
> > +virDomainThreadSchedParamPtr (*func)(virDomainDefPtr, unsigned int),
> > +virDomainDefPtr def)
> > +{
> > +ssize_t next = -1;
> > +virBitmapPtr map = NULL;
> > +virDomainThreadSchedParamPtr sched;
> > +virProcessSchedPolicy policy;
> > +int priority;
> > +int ret = -1;
> > +
> > +if (!(map = virDomainSchedulerParse(node, name, &policy, &priority)))
> > +goto cleanup;
> > +
> 
> Replacing the virBitmapOverlaps...
> 
> > +while ((next = virBitmapNextSetBit(map, next)) > -1) {
> > +if (!(sched = func(def, next)))
> > +goto cleanup;
> 
> Could this be 'continue;' ?  That is, is the data required?  I'm
> thinking primarily of the vcpu->online == false case...

No, it's invalid to specify it for a non-existent object, but valid for an
offline one.
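A toy illustration of that rule (hypothetical names; libvirt does this via
virDomainDefGetVcpu): only an index beyond the maximum yields NULL and thus an
error, while an offline vcpu still has a backing object to fill in.

    #include <stddef.h>
    #include <stdbool.h>

    struct vcpu { bool online; int sched_policy; };

    /* Returns NULL only when the vcpu does not exist at all;
     * an offline vcpu is still a valid target. */
    struct vcpu *
    get_vcpu(struct vcpu *vcpus, size_t maxvcpus, size_t idx)
    {
        return idx < maxvcpus ? &vcpus[idx] : NULL;
    }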

> 
> > +
> > +if (sched->policy != VIR_PROC_POLICY_NONE) {
> > +virReportError(VIR_ERR_XML_DETAIL,
> > +   _("%ssched attributes 'vcpus' must not overlap"),
> 
> Since the code will be shared in patch 30, change message to:
> 
> "%sched attribute '%s' must not overlap",

Indeed, I forgot to fix this.

> 
> using 'name' for both %s params, similar to how virDomainFormatSchedDef
> does things.
> 
> > +   name);
> > +goto cleanup;

[...]


> > @@ -21504,6 +21543,128 @@ virDomainDefHasCapabilitiesFeatures(virDomainDefPtr def)
> > 
> > 
> 
> Probably a good place to note the arguments, but specifically that
> "name" is used to generate the XML element and attribute names.
> >  static int
> > +virDomainFormatSchedDef(virDomainDefPtr def,
> > +virBufferPtr buf,
> > +const char *name,
> > +virDomainThreadSchedParamPtr (*func)(virDomainDefPtr, unsigned int),
> > +virBitmapPtr resourceMap)
> > +

[...]

> > +virBufferAddLit(buf, "/>\n");
> > +
> > +/* subtract all vCPUs that were already found */
> > +virBitmapSubtract(schedMap, currentMap);
> > +}
> > +}
> 
> This is one heckuva loop - I hope others can look as well because my
> eyes and brain decided to run in the other direction ;-)

Well, the loop is complex because the original design is orthogonal. If it
had been designed properly, it would be quite a lot simpler.
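For readers who'd rather not untangle the bitmap version, here is a standalone
sketch (made-up types and policy values, not libvirt code) of what the
formatting loop effectively computes: group the resources by policy, and for
policies that carry a priority, split each group further by priority, emitting
one element per group.

    #include <stdio.h>
    #include <stdbool.h>
    #include <stddef.h>

    enum { POLICY_NONE, POLICY_BATCH, POLICY_FIFO, POLICY_RR, POLICY_LAST };
    static const char *policy_names[] = { "none", "batch", "fifo", "rr" };

    struct sched { int policy; int priority; };

    int main(void)
    {
        /* hypothetical per-vcpu scheduler settings */
        struct sched vcpus[] = {
            { POLICY_FIFO, 10 }, { POLICY_FIFO, 10 },
            { POLICY_FIFO, 20 }, { POLICY_BATCH, 0 },
        };
        const size_t nvcpus = sizeof(vcpus) / sizeof(vcpus[0]);
        bool done[sizeof(vcpus) / sizeof(vcpus[0])] = { false };

        for (int policy = POLICY_NONE + 1; policy < POLICY_LAST; policy++) {
            for (size_t i = 0; i < nvcpus; i++) {
                if (done[i] || vcpus[i].policy != policy)
                    continue;

                bool has_prio = policy == POLICY_FIFO || policy == POLICY_RR;

                /* start a group: same policy and, for fifo/rr, same priority */
                printf("<vcpusched vcpus='%zu", i);
                done[i] = true;
                for (size_t j = i + 1; j < nvcpus; j++) {
                    if (done[j] || vcpus[j].policy != policy)
                        continue;
                    if (has_prio && vcpus[j].priority != vcpus[i].priority)
                        continue;
                    printf(",%zu", j);
                    done[j] = true;
                }
                printf("' scheduler='%s'", policy_names[policy]);
                if (has_prio)
                    printf(" priority='%d'", vcpus[i].priority);
                printf("/>\n");
            }
        }
        return 0;
    }

The real code prints ranges via virBitmapFormat and walks the groups with
virBitmapSubtract, but the grouping it produces is the same.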

> 
> > +
> > +ret = 0;
> > +
> > + cleanup:
> > +virBitmapFree(schedMap);
> > +virBitmapFree(prioMap);
> > +return ret;
> > +}
> > +
> > +
> > +static int
> > +virDomainFormatVcpuSchedDef(virDomainDefPtr def,
> > +virBufferPtr buf)
> > +{
> > +virBitmapPtr allcpumap;
> > +int ret;
> > +
> > +if (virDomainDefGetVcpusMax(def) == 0)
> > +return 0;
> 
> Hmm... a zero for maxvcpus...  In patch 2 we disallow machines with 0
> vcpus online... Just a strange check.

Just in qemu. Some other hypervisors think 0 is the right way to describe
maximum host vcpus, which would make this crash.

> 
> > +
> > +if (!(allcpumap = virBitmapNew(virDomainDefGetVcpusMax(def))))
> 
> use of same function - although I assume the compiler will optimize that
> for you anyway...
> 
> > +return -1;
> > +
> > +virBitmapSetAll(allcpumap);
> > +
> > +ret = virDomainFormatSchedDef(def, buf, "vcpu", virDomainDefGetVcpuSched,
> > +  allcpumap);
> 
> This differs slightly from Parse which uses "vcpus" - I'm wondering if

The parser historically used "vcpus" or "iothreads". I kept it that way
there, but I don't like it, so here I made it the way it should be.

> it should be consistent.  At the very least documented...

I'll probably change the parser code to drop the extra s which can be
constructed internally.
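For what it's worth, a tiny standalone demo (not libvirt code) of how the
shared formatter derives both the element and the attribute name from a single
base name, which is why passing "vcpu" here is enough:

    #include <stdio.h>

    /* Mirrors the virBufferAsprintf format string used in the formatter,
     * "<%ssched %ss='%s' ...": "vcpu" yields <vcpusched vcpus=.../> and
     * "iothread" would yield <iothreadsched iothreads=.../>. */
    static void emit(const char *name, const char *ids)
    {
        printf("<%ssched %ss='%s' scheduler='fifo' priority='1'/>\n",
               name, name, ids);
    }

    int main(void)
    {
        emit("vcpu", "0-3");
        emit("iothread", "1");
        return 0;
    }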

Peter




Re: [libvirt] [PATCH 29/34] conf: Don't store vcpusched orthogonally to other vcpu info

2016-01-29 Thread Peter Krempa
On Sat, Jan 16, 2016 at 10:23:13 -0500, John Ferlan wrote:
> 
> 
> On 01/14/2016 11:27 AM, Peter Krempa wrote:
> > Due to bad design, the vcpu sched element is stored orthogonally to the
> > objects the data belongs to. Now that vcpus are a struct that can store
> > other info too, let's convert the data to the saner structure.
> > 
> > The helpers for the conversion are made universal so that they can be
> > reused for iothreads too.
> > 
> > This patch also resolves https://bugzilla.redhat.com/show_bug.cgi?id=1235180
> > since with the correct storage approach you can't have dangling data.
> > ---
> >  src/conf/domain_conf.c  | 231 +++-
> >  src/conf/domain_conf.h  |   8 +-
> >  src/qemu/qemu_driver.c  |   6 +-
> >  src/qemu/qemu_process.c |   8 +-
> >  4 files changed, 202 insertions(+), 51 deletions(-)
> > 

[...]

> > +for (i = VIR_PROC_POLICY_NONE + 1; i < VIR_PROC_POLICY_LAST; i++) {
> > +virBitmapClearAll(schedMap);
> > +
> > +/* find vcpus using a particular scheduler */
> > +next = -1;
> > +while ((next = virBitmapNextSetBit(resourceMap, next)) > -1) {
> > +sched = func(def, next);
> > +
> > +if (sched->policy == i)
> > +ignore_value(virBitmapSetBit(schedMap, next));
> > +}
> > +
> > +/* it's necessary to discriminate priority levels for schedulers that
> > + * have them */
> > +while (!virBitmapIsAllClear(schedMap)) {

So the start of this loop guarantees that there's at least one element in
the bitmap ...

> > +virBitmapPtr currentMap = NULL;
> > +ssize_t nextprio;
> > +bool hasPriority = false;
> > +int priority;
> > +
> > +switch ((virProcessSchedPolicy) i) {
> > +case VIR_PROC_POLICY_NONE:
> > +case VIR_PROC_POLICY_BATCH:
> > +case VIR_PROC_POLICY_IDLE:
> > +case VIR_PROC_POLICY_LAST:
> > +currentMap = schedMap;
> > +break;
> > +
> > +case VIR_PROC_POLICY_FIFO:
> > +case VIR_PROC_POLICY_RR:
> > +virBitmapClearAll(prioMap);
> > +hasPriority = true;
> > +
> > +/* we need to find a subset of vCPUs with the given scheduler
> > + * that share the priority */
> > +nextprio = virBitmapNextSetBit(schedMap, -1);
> 
> Coverity notes that virBitmapNextSetBit can return -1; however, [1]

... thus this won't return -1 in any case here. Coverity is obviously
wrong as usual since it's terrible at introspecting the bitmap code.
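A self-contained toy (plain unsigned int instead of virBitmap) of the
invariant in question: the loop condition guarantees at least one set bit, so
the immediately following search for the first set bit cannot return -1.

    #include <assert.h>
    #include <stdbool.h>

    static bool all_clear(unsigned map) { return map == 0; }

    static int next_set_bit(unsigned map, int pos)
    {
        for (int i = pos + 1; i < 32; i++)
            if (map & (1u << i))
                return i;
        return -1;
    }

    int main(void)
    {
        unsigned schedMap = 0x14;                    /* bits 2 and 4 set */

        while (!all_clear(schedMap)) {
            int nextprio = next_set_bit(schedMap, -1);
            assert(nextprio >= 0);                   /* cannot be -1 here */
            schedMap &= ~(1u << (unsigned) nextprio);  /* consume the bit */
        }
        return 0;
    }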

> 
> > +sched = func(def, nextprio);
> > +priority = sched->priority;
> > +
> > +ignore_value(virBitmapSetBit(prioMap, nextprio));
> 
> [1] passing a -1 'nextprio' to virBitmapSetBit as 'size_t b' cannot happen.

So this doesn't make sense.

Peter



Re: [libvirt] [PATCH 29/34] conf: Don't store vcpusched orthogonally to other vcpu info

2016-01-18 Thread John Ferlan


On 01/14/2016 11:27 AM, Peter Krempa wrote:
> Due to bad design, the vcpu sched element is stored orthogonally to the
> objects the data belongs to. Now that vcpus are a struct that can store
> other info too, let's convert the data to the saner structure.
> 
> The helpers for the conversion are made universal so that they can be
> reused for iothreads too.
> 
> This patch also resolves https://bugzilla.redhat.com/show_bug.cgi?id=1235180
> since with the correct storage approach you can't have dangling data.
> ---
>  src/conf/domain_conf.c  | 231 +++-
>  src/conf/domain_conf.h  |   8 +-
>  src/qemu/qemu_driver.c  |   6 +-
>  src/qemu/qemu_process.c |   8 +-
>  4 files changed, 202 insertions(+), 51 deletions(-)
> 

This is related to patch 26 at least w/r/t virBitmapSubtract usage.
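Since virBitmapSubtract (introduced earlier in the series) is central to the
formatter below, here is a quick standalone illustration of the operation on a
plain unsigned int: clear in the first map every bit that is set in the second.

    #include <stdio.h>

    static void bitmap_subtract(unsigned *a, unsigned b)
    {
        *a &= ~b;   /* drop from *a everything present in b */
    }

    int main(void)
    {
        unsigned schedMap   = 0x0f;  /* vcpus 0-3 use this scheduler  */
        unsigned currentMap = 0x05;  /* vcpus 0 and 2 already emitted */

        bitmap_subtract(&schedMap, currentMap);
        printf("remaining: 0x%02x\n", schedMap);   /* 0x0a -> vcpus 1 and 3 */
        return 0;
    }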

There are also multiple things going on here, between code motion and code
addition. In particular, virDomainFormatSchedDef does quite a lot...
Hopefully someone else (perhaps Martin) who's worked in the sched code
before can take a look!

> diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
> index b4f6fe9..e2dda9a 100644
> --- a/src/conf/domain_conf.c
> +++ b/src/conf/domain_conf.c
> @@ -1415,6 +1415,19 @@ virDomainDefGetVcpu(virDomainDefPtr def,
>  }
> 
> 
> +virDomainThreadSchedParamPtr

s/^/^static /  ??

> +virDomainDefGetVcpuSched(virDomainDefPtr def,
> + unsigned int vcpu)
> +{
> +virDomainVcpuInfoPtr vcpuinfo;
> +
> +if (!(vcpuinfo = virDomainDefGetVcpu(def, vcpu)))
> +return NULL;

Does there need to be a vcpuinfo->online check?  (aka OnlineVcpuMap)

Will the caller need to differentiate?  I know this is the parsing
code... just thinking while typing especially for non-static function.
Later thoughts made me think this should be static for parse...

> +
> +return &vcpuinfo->sched;
> +}
> +
> +
>  /**
>   * virDomainDefHasVcpusPin:
>   * @def: domain definition
> @@ -2546,10 +2559,6 @@ void virDomainDefFree(virDomainDefPtr def)
> 
>  virBitmapFree(def->cputune.emulatorpin);
> 
> -for (i = 0; i < def->cputune.nvcpusched; i++)
> -virBitmapFree(def->cputune.vcpusched[i].ids);
> -VIR_FREE(def->cputune.vcpusched);
> -
>  for (i = 0; i < def->cputune.niothreadsched; i++)
>  virBitmapFree(def->cputune.iothreadsched[i].ids);
>  VIR_FREE(def->cputune.iothreadsched);
> @@ -14592,6 +14601,55 @@ virDomainSchedulerParse(xmlNodePtr node,
> 
> 
>  static int
> +virDomainThreadSchedParseHelper(xmlNodePtr node,
> +const char *name,
> +virDomainThreadSchedParamPtr (*func)(virDomainDefPtr, unsigned int),
> +virDomainDefPtr def)
> +{
> +ssize_t next = -1;
> +virBitmapPtr map = NULL;
> +virDomainThreadSchedParamPtr sched;
> +virProcessSchedPolicy policy;
> +int priority;
> +int ret = -1;
> +
> +if (!(map = virDomainSchedulerParse(node, name, &policy, &priority)))
> +goto cleanup;
> +

Replacing the virBitmapOverlaps...

> +while ((next = virBitmapNextSetBit(map, next)) > -1) {
> +if (!(sched = func(def, next)))
> +goto cleanup;

Could this be 'continue;' ?  That is, is the data required?  I'm
thinking primarily of the vcpu->online == false case...

> +
> +if (sched->policy != VIR_PROC_POLICY_NONE) {
> +virReportError(VIR_ERR_XML_DETAIL,
> +   _("%ssched attributes 'vcpus' must not overlap"),

Since the code will be shared in patch 30, change message to:

"%sched attribute '%s' must not overlap",

using 'name' for both %s params, similar to how virDomainFormatSchedDef does
things.

> +   name);
> +goto cleanup;
> +}
> +
> +sched->policy = policy;
> +sched->priority = priority;
> +}
> +
> +ret = 0;
> +
> + cleanup:
> +virBitmapFree(map);
> +return ret;
> +}
> +
> +
> +static int
> +virDomainVcpuThreadSchedParse(xmlNodePtr node,
> +  virDomainDefPtr def)
> +{
> +return virDomainThreadSchedParseHelper(node, "vcpus",
> +   virDomainDefGetVcpuSched,
> +   def);
> +}
> +
> +
> +static int
>  virDomainThreadSchedParse(xmlNodePtr node,
>unsigned int minid,
>unsigned int maxid,
> @@ -15145,29 +15203,10 @@ virDomainDefParseXML(xmlDocPtr xml,
> _("cannot extract vcpusched nodes"));
>  goto error;
>  }
> -if (n) {
> -if (VIR_ALLOC_N(def->cputune.vcpusched, n) < 0)
> -goto error;
> -def->cputune.nvcpusched = n;
> 
> -for (i = 0; i < def->cputune.nvcpusched; i++) {
> -if (virDomainThreadSchedParse(nodes[i],
> -  0,
> -  

Re: [libvirt] [PATCH 29/34] conf: Don't store vcpusched orthogonally to other vcpu info

2016-01-16 Thread John Ferlan


On 01/14/2016 11:27 AM, Peter Krempa wrote:
> Due to bad design, the vcpu sched element is stored orthogonally to the
> objects the data belongs to. Now that vcpus are a struct that can store
> other info too, let's convert the data to the saner structure.
> 
> The helpers for the conversion are made universal so that they can be
> reused for iothreads too.
> 
> This patch also resolves https://bugzilla.redhat.com/show_bug.cgi?id=1235180
> since with the correct storage approach you can't have dangling data.
> ---
>  src/conf/domain_conf.c  | 231 +++-
>  src/conf/domain_conf.h  |   8 +-
>  src/qemu/qemu_driver.c  |   6 +-
>  src/qemu/qemu_process.c |   8 +-
>  4 files changed, 202 insertions(+), 51 deletions(-)
> 

As noted in my review of 10/34, I'm just noting something Coverity found -
will review more as I get to this later.

> diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
> index b4f6fe9..e2dda9a 100644

[...]

> 
>  static int
> +virDomainFormatSchedDef(virDomainDefPtr def,
> +virBufferPtr buf,
> +const char *name,
> +virDomainThreadSchedParamPtr (*func)(virDomainDefPtr, unsigned int),
> +virBitmapPtr resourceMap)
> +{
> +virBitmapPtr schedMap = NULL;
> +virBitmapPtr prioMap = NULL;
> +virDomainThreadSchedParamPtr sched;
> +char *tmp = NULL;
> +ssize_t next;
> +size_t i;
> +int ret = -1;
> +
> +if (!(schedMap = virBitmapNew(VIR_DOMAIN_CPUMASK_LEN)) ||
> +!(prioMap = virBitmapNew(VIR_DOMAIN_CPUMASK_LEN)))
> +goto cleanup;
> +
> +for (i = VIR_PROC_POLICY_NONE + 1; i < VIR_PROC_POLICY_LAST; i++) {
> +virBitmapClearAll(schedMap);
> +
> +/* find vcpus using a particular scheduler */
> +next = -1;
> +while ((next = virBitmapNextSetBit(resourceMap, next)) > -1) {
> +sched = func(def, next);
> +
> +if (sched->policy == i)
> +ignore_value(virBitmapSetBit(schedMap, next));
> +}
> +
> +/* it's necessary to discriminate priority levels for schedulers that
> + * have them */
> +while (!virBitmapIsAllClear(schedMap)) {
> +virBitmapPtr currentMap = NULL;
> +ssize_t nextprio;
> +bool hasPriority = false;
> +int priority;
> +
> +switch ((virProcessSchedPolicy) i) {
> +case VIR_PROC_POLICY_NONE:
> +case VIR_PROC_POLICY_BATCH:
> +case VIR_PROC_POLICY_IDLE:
> +case VIR_PROC_POLICY_LAST:
> +currentMap = schedMap;
> +break;
> +
> +case VIR_PROC_POLICY_FIFO:
> +case VIR_PROC_POLICY_RR:
> +virBitmapClearAll(prioMap);
> +hasPriority = true;
> +
> +/* we need to find a subset of vCPUs with the given scheduler
> + * that share the priority */
> +nextprio = virBitmapNextSetBit(schedMap, -1);

Coverity notes that virBitmapNextSetBit can return -1; however, [1]

> +sched = func(def, nextprio);
> +priority = sched->priority;
> +
> +ignore_value(virBitmapSetBit(prioMap, nextprio));

[1] passing a -1 'nextprio' to virBitmapSetBit as 'size_t b' cannot happen.
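A tiny aside on why Coverity worries here (standalone demo, not libvirt code):
a negative ssize_t passed where size_t is expected wraps to a huge value, so
the tool flags any path where the -1 could flow through; the loop invariant
discussed above is what rules that path out.

    #include <stdio.h>
    #include <sys/types.h>

    /* stand-in for an API taking a bit index as size_t */
    static void set_bit(size_t b)
    {
        printf("asked to set bit %zu\n", b);
    }

    int main(void)
    {
        ssize_t nextprio = -1;

        /* if -1 ever reached the call, it would wrap to SIZE_MAX: */
        printf("(size_t) -1 == %zu\n", (size_t) nextprio);

        /* guarded path: only call with a valid index */
        if (nextprio >= 0)
            set_bit((size_t) nextprio);
        return 0;
    }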

> +
> +while ((nextprio = virBitmapNextSetBit(schedMap, nextprio)) > -1) {
> +sched = func(def, nextprio);
> +if (sched->priority == priority)
> +ignore_value(virBitmapSetBit(prioMap, nextprio));
> +}
> +
> +currentMap = prioMap;
> +break;
> +}
> +
> +/* now we have the complete group */
> +if (!(tmp = virBitmapFormat(currentMap)))
> +goto cleanup;
> +
> +virBufferAsprintf(buf,
> +  "<%ssched %ss='%s' scheduler='%s'",
> +  name, name, tmp,
> +  virProcessSchedPolicyTypeToString(i));
> +VIR_FREE(tmp);
> +
> +if (hasPriority)
> +virBufferAsprintf(buf, " priority='%d'", priority);
> +
> +virBufferAddLit(buf, "/>\n");
> +
> +/* subtract all vCPUs that were already found */
> +virBitmapSubtract(schedMap, currentMap);
> +}
> +}
> +
> +ret = 0;
> +
> + cleanup:
> +virBitmapFree(schedMap);
> +virBitmapFree(prioMap);
> +return ret;
> +}
> +
> +

[...]
