On Sun, Nov 15, 2020 at 10:08 PM Andi Kleen wrote:
>
> Actually thinking about it more you should probably pass around ctx/cgroup
> in a single abstract argument. Otherwise have to change all the metrics
> functions for the next filter too.
Ok, will do.
Thanks,
Namhyung
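(A minimal standalone sketch of what a "single abstract argument" could look like, using hypothetical names rather than the actual perf code: the idea is that a future filter key only adds a field to the struct instead of changing every metric helper's signature.)

/*
 * Sketch only: hypothetical struct and function names, not the perf tree.
 * All per-measurement keys travel in one struct, so adding another filter
 * later means adding a field here rather than touching every call site.
 */
#include <stdio.h>

struct cgroup {                         /* stand-in for perf's struct cgroup */
        const char *name;
};

struct runtime_stat_data {              /* the single abstract argument */
        int ctx;                        /* event context bits */
        struct cgroup *cgrp;            /* NULL when no cgroup filter is used */
        /* a future filter key would be added here */
};

static void update_runtime_stat(double count, const struct runtime_stat_data *rsd)
{
        printf("save %.1f for ctx=%d cgroup=%s\n", count, rsd->ctx,
               rsd->cgrp ? rsd->cgrp->name : "<none>");
}

int main(void)
{
        struct cgroup a = { .name = "A" };
        struct runtime_stat_data rsd = { .ctx = 0, .cgrp = &a };

        update_runtime_stat(1000000.0, &rsd);
        return 0;
}

Passing the struct (or a pointer to it) keeps the helper signatures unchanged when another key is added, which is the point of the suggestion quoted above.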
Hi Andi,
On Sun, Nov 15, 2020 at 10:05 PM Andi Kleen wrote:
>
> > @@ -57,6 +59,9 @@ static int saved_value_cmp(struct rb_node *rb_node, const void *entry)
> >          if (a->ctx != b->ctx)
> >                  return a->ctx - b->ctx;
> >
> > +        if (a->cgrp != b->cgrp)
> > +                return (char *)a->cgrp < (char *)b->cgrp ? -1 : +1;
> @@ -57,6 +59,9 @@ static int saved_value_cmp(struct rb_node *rb_node, const void *entry)
>          if (a->ctx != b->ctx)
>                  return a->ctx - b->ctx;
>
> +        if (a->cgrp != b->cgrp)
> +                return (char *)a->cgrp < (char *)b->cgrp ? -1 : +1;
This means the sort order will
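(For context, a standalone sketch, with hypothetical types rather than the perf tree, of the two ways the comparison could be keyed: the quoted hunk orders entries by the raw cgroup pointer, which is enough for a lookup key but depends on allocation addresses and so is only consistent within one run, while keying on a stable property such as the cgroup name, assuming the struct carries one, would also be reproducible across runs.)

/* Sketch only: hypothetical types, not the perf tree. */
#include <stdio.h>
#include <string.h>

struct cgroup {
        char *name;
};

struct saved_value {
        int ctx;
        struct cgroup *cgrp;
};

/* As in the quoted hunk: orders by the cgroup pointer itself. */
static int cmp_by_ptr(const struct saved_value *a, const struct saved_value *b)
{
        if (a->ctx != b->ctx)
                return a->ctx - b->ctx;
        if (a->cgrp != b->cgrp)
                return (char *)a->cgrp < (char *)b->cgrp ? -1 : +1;
        return 0;
}

/* Alternative: key on the cgroup name, giving the same order every run. */
static int cmp_by_name(const struct saved_value *a, const struct saved_value *b)
{
        if (a->ctx != b->ctx)
                return a->ctx - b->ctx;
        if (a->cgrp == b->cgrp)
                return 0;
        if (!a->cgrp || !b->cgrp)
                return a->cgrp ? 1 : -1;
        return strcmp(a->cgrp->name, b->cgrp->name);
}

int main(void)
{
        struct cgroup x = { .name = "A" }, y = { .name = "B" };
        struct saved_value u = { .ctx = 0, .cgrp = &x };
        struct saved_value v = { .ctx = 0, .cgrp = &y };

        printf("by ptr: %d, by name: %d\n",
               cmp_by_ptr(&u, &v), cmp_by_name(&u, &v));
        return 0;
}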
As of now it doesn't consider cgroups when collecting shadow stats and
metrics, so counter values from different cgroups will be saved in the
same slot. This results in incorrect numbers when those cgroups
have different workloads.
For example, let's look at the below - the cgroup A and C runs s
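(A toy illustration with made-up numbers of how a shared slot skews a derived metric such as IPC; the real saved_value bookkeeping is more involved, this only shows the effect of mixing counters from two cgroups.)

/* Toy numbers only; hypothetical counters, not real perf output. */
#include <stdio.h>

int main(void)
{
        double cycles_a = 1e9, insns_a = 3e9;   /* cpu-bound cgroup, IPC ~3.0  */
        double cycles_b = 1e8, insns_b = 5e7;   /* light-load cgroup, IPC ~0.5 */

        /* with per-cgroup slots, each metric uses its own counters */
        printf("A IPC = %.2f, B IPC = %.2f\n",
               insns_a / cycles_a, insns_b / cycles_b);

        /* with one shared slot, B's instructions end up divided by
         * whatever cycles value the slot happens to hold (here A's) */
        printf("B IPC from shared slot = %.4f (wrong)\n",
               insns_b / cycles_a);
        return 0;
}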