Hi, Peter
Thanks for the reply :)
On 06/23/2014 05:42 PM, Peter Zijlstra wrote:
[snip]
>>
>> cpu 0       cpu 1
>>
>> dbench      task_sys
>> dbench      task_sys
>> dbench
>> dbench
>> dbench
>>
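For reference, a snapshot like the table above can be captured directly
from procfs (a sketch; field 39 of /proc/<pid>/stat is the cpu the task
last ran on, and the task_sys threads would need their own pgrep
pattern):

    # print the current cpu of every dbench instance
    for p in $(pgrep dbench); do
        echo "pid $p -> cpu $(awk '{print $39}' /proc/$p/stat)"
    done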
On Wed, Jun 11, 2014 at 05:18:29PM +0800, Michael wang wrote:
> On 06/11/2014 04:24 PM, Peter Zijlstra wrote:
> [snip]
> >>
> >> IMHO, when we put tasks one group deeper, in other words the total
> >> weight of these tasks is 1024 (previously 3072), the load becomes
> >> more balanced at root, which makes the bl-routine consider the
> >> system balanced, which makes us migrate
On 06/11/2014 04:24 PM, Peter Zijlstra wrote:
[snip]
>>
>> IMHO, when we put tasks one group deeper, in other words the total
>> weight of these tasks is 1024 (previously 3072), the load becomes
>> more balanced at root, which makes the bl-routine consider the
>> system balanced, which makes us migrate
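To make the weight arithmetic concrete, here is a minimal sketch of the
two layouts being compared (mount point and group names are
assumptions; 1024 is the default cpu.shares value):

    # flat: three sibling groups, each of weight 1024, so the root
    # level sees a combined task-group weight of 3 * 1024 = 3072
    mkdir /sys/fs/cgroup/cpu/g1 /sys/fs/cgroup/cpu/g2 /sys/fs/cgroup/cpu/g3

    # one level deeper: the same groups under a single parent; the
    # whole subtree now competes at root with only the parent's weight
    # of 1024, so the root level looks far more balanced
    mkdir -p /sys/fs/cgroup/cpu/l1/g1 /sys/fs/cgroup/cpu/l1/g2 \
             /sys/fs/cgroup/cpu/l1/g3
    cat /sys/fs/cgroup/cpu/l1/cpu.shares    # 1024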
On Wed, Jun 11, 2014 at 02:13:42PM +0800, Michael wang wrote:
> Hi, Peter
>
> Thanks for the reply :)
>
> On 06/10/2014 08:12 PM, Peter Zijlstra wrote:
> [snip]
> >> Wake-affine certainly pulls tasks together for a workload like
> >> dbench; what makes the difference when we put dbench into a group
> >> one level deeper is the load-balance, which happens less often.
Hi, Peter
Thanks for the reply :)
On 06/10/2014 08:12 PM, Peter Zijlstra wrote:
[snip]
>> Wake-affine certainly pulls tasks together for a workload like dbench;
>> what makes the difference when we put dbench into a group one level
>> deeper is the load-balance, which happens less often.
>
> We load-balance l
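One way to check the "load-balance happens less often" claim directly
is to count migrations and wakeups while the benchmark runs, once for
the flat setup and once for the deeper group (a sketch using the
standard scheduler tracepoints):

    perf stat -a -e sched:sched_migrate_task,sched:sched_wakeup sleep 10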
On Tue, Jun 10, 2014 at 04:56:12PM +0800, Michael wang wrote:
> On 05/16/2014 03:54 PM, Peter Zijlstra wrote:
> [snip]
> >
> > Hmm, that _should_ more or less work and does indeed suggest there's
> > something iffy.
> >
>
> I think we have located the reason why cpu-cgroup doesn't work well on
> dbench now... finally.
On 05/16/2014 03:54 PM, Peter Zijlstra wrote:
[snip]
>
> Hmm, that _should_ more or less work and does indeed suggest there's
> something iffy.
>
I think we have located the reason why cpu-cgroup doesn't work well on
dbench now... finally.
I'd like to link the way to reproduce the issue here since l
On 05/16/2014 03:54 PM, Peter Zijlstra wrote:
[snip]
>>> Right. I played a little (sane groups), saw load balancing as well.
>>
>> Yeah, we have now found that even l2 groups face the same issue; allow
>> me to re-list the details here:
>
> Hmm, that _should_ more or less work and does indeed suggest there's
> something iffy.
On Fri, May 16, 2014 at 12:24:35PM +0800, Michael wang wrote:
> Hey, Mike :)
>
> On 05/16/2014 10:51 AM, Mike Galbraith wrote:
> > On Fri, 2014-05-16 at 10:23 +0800, Michael wang wrote:
> >
> >> But we found that one difference as the group gets deeper is that
> >> the tasks of that group become gathered on one CPU more often;
> >> sometimes all the dbench instances were running on the same CPU.
> >> This won't happen for an l1 group, which may explain why
On Fri, May 16, 2014 at 10:23:11AM +0800, Michael wang wrote:
> On 05/15/2014 07:57 PM, Peter Zijlstra wrote:
> [snip]
> >>
> >> It's like:
> >>
> >> /cgroup/cpu/l1/l2/l3/l4/l5/l6/A
> >>
> >> at about level 7, the issue can no longer be solved.
> >
> > That's pretty retarded and yeah, that's way past the point where
> > things make sense. You might be lucky and have l1-5 as
Hey, Mike :)
On 05/16/2014 10:51 AM, Mike Galbraith wrote:
> On Fri, 2014-05-16 at 10:23 +0800, Michael wang wrote:
>
>> But we found that one difference as the group gets deeper is that the
>> tasks of that group become gathered on one CPU more often; sometimes
>> all the dbench instances were running on the same CPU. This won't
>> happen for an l1 group, which may explain why
On Fri, 2014-05-16 at 10:23 +0800, Michael wang wrote:
> But we found that one difference as the group gets deeper is that the
> tasks of that group become gathered on one CPU more often; sometimes
> all the dbench instances were running on the same CPU. This won't
> happen for an l1 group, which may explain why
On 05/15/2014 07:57 PM, Peter Zijlstra wrote:
[snip]
>>
>> It's like:
>>
>> /cgroup/cpu/l1/l2/l3/l4/l5/l6/A
>>
>> at about level 7, the issue can no longer be solved.
>
> That's pretty retarded and yeah, that's way past the point where things
> make sense. You might be lucky and have l1-5 as
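For reproduction, a sketch of how such a deep hierarchy can be built
(mount point assumed to be /sys/fs/cgroup/cpu as elsewhere in the
thread; "dbench 6" matches the workload from the original report):

    p=/sys/fs/cgroup/cpu
    for d in l1 l2 l3 l4 l5 l6 A; do
        p=$p/$d
        mkdir $p
    done
    echo $$ > $p/tasks    # move this shell into the leaf group
    dbench 6              # children inherit the group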
On Thu, May 15, 2014 at 05:35:25PM +0800, Michael wang wrote:
> On 05/15/2014 05:06 PM, Peter Zijlstra wrote:
> [snip]
> >> However, when the group level is too deep, that doesn't work any
> >> more...
> >>
> >> I'm not sure, but it seems like 'deep group level' and 'vruntime
> >> bonus for sleeper' are the key points here; I will try to list the
> >> root cause after more investigation.
On 05/15/2014 05:06 PM, Peter Zijlstra wrote:
[snip]
>> However, when the group level is too deep, that doesn't work any more...
>>
>> I'm not sure, but it seems like 'deep group level' and 'vruntime bonus
>> for sleeper' are the key points here; I will try to list the root cause
>> after more investigation.
On Thu, May 15, 2014 at 04:46:28PM +0800, Michael wang wrote:
> On 05/15/2014 04:35 PM, Peter Zijlstra wrote:
> > On Thu, May 15, 2014 at 11:46:06AM +0800, Michael wang wrote:
> >> But for the dbench + stress combination, that's not spin-wasted;
> >> dbench throughput does drop. How could we explain that one?
On 05/15/2014 04:35 PM, Peter Zijlstra wrote:
> On Thu, May 15, 2014 at 11:46:06AM +0800, Michael wang wrote:
>> But for the dbench + stress combination, that's not spin-wasted; dbench
>> throughput does drop. How could we explain that one?
>
> I've no clue what dbench does... At this point you'll have to
> expose/trace the per-task runtime accounting f
On Thu, May 15, 2014 at 11:46:06AM +0800, Michael wang wrote:
> But for the dbench + stress combination, that's not spin-wasted; dbench
> throughput does drop. How could we explain that one?
I've no clue what dbench does... At this point you'll have to
expose/trace the per-task runtime accounting f
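The kernel already exposes a piece of this: with CONFIG_SCHEDSTATS
enabled, /proc/<pid>/schedstat holds time on cpu (ns), runqueue wait
time (ns) and the number of timeslices. A sketch for sampling the
dbench tasks:

    for p in $(pgrep dbench); do
        echo "pid $p: $(cat /proc/$p/schedstat)"
    done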
On 05/14/2014 05:44 PM, Peter Zijlstra wrote:
[snip]
>> and then:
>> echo $$ > /sys/fs/cgroup/cpu/A/tasks ; ./my_tool -l
>> echo $$ > /sys/fs/cgroup/cpu/B/tasks ; ./my_tool -l
>> echo $$ > /sys/fs/cgroup/cpu/C/tasks ; ./my_tool 50
>>
>> the results in top are around:
>>
>>
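For completeness, the two halves of the reproducer combined into one
script (a sketch; "my_tool" is the poster's test program and its
arguments are taken verbatim from the quote above):

    mkdir /sys/fs/cgroup/cpu/A /sys/fs/cgroup/cpu/B /sys/fs/cgroup/cpu/C

    # move the shell before each launch; a child inherits the group
    # its parent is in at fork time
    echo $$ > /sys/fs/cgroup/cpu/A/tasks ; ./my_tool -l &
    echo $$ > /sys/fs/cgroup/cpu/B/tasks ; ./my_tool -l &
    echo $$ > /sys/fs/cgroup/cpu/C/tasks ; ./my_tool 50 &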
On Wed, May 14, 2014 at 03:36:50PM +0800, Michael wang wrote:
> The distro mounts the cpu subsystem under '/sys/fs/cgroup/cpu'; we
> create groups like:
> mkdir /sys/fs/cgroup/cpu/A
> mkdir /sys/fs/cgroup/cpu/B
> mkdir /sys/fs/cgroup/cpu/C
Yeah, distro is on crack, nobody sane mounts anything there
Hi, Peter
On 05/13/2014 10:23 PM, Peter Zijlstra wrote:
[snip]
>
> If you want to investigate !spinners, replace the ABC with slightly more
> complex loads like: https://lkml.org/lkml/2012/6/18/212
I've done a little rework: enabled multiple threads and added a mutex;
please check the code below for details.
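The reworked tool itself is cut off in this archive view, but the shape
of the load can be approximated in shell for a quick sanity check (a
crude sketch, not the poster's tool: each worker burns a little cpu and
then serializes on a shared lock, so tasks sleep and wake like dbench
instead of spinning flat out):

    for i in $(seq 6); do
        ( while :; do
              n=0
              while [ $n -lt 10000 ]; do n=$((n+1)); done  # burn cpu
              flock /tmp/grouptest.lock -c true            # contend
          done ) &
    done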
On 05/13/2014 10:23 PM, Peter Zijlstra wrote:
[snip]
>
> The point remains though, don't use massive and awkward software stacks
> that are impossible to operate.
>
> If you want to investigate !spinners, replace the ABC with slightly more
> complex loads like: https://lkml.org/lkml/2012/6/18/212
On 05/13/2014 09:36 PM, Rik van Riel wrote:
[snip]
>>
>> echo 2048 > /cgroup/c/cpu.shares
>>
>> Where [ABC].sh are spinners:
>
> I suspect the "are spinners" is key.
>
> Infinite loops can run all the time, while dbench spends a lot of
> its time waiting for locks. That waiting may interfere with
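Per the quote above, the [ABC].sh loads are plain spinners, i.e. each
script is essentially just (a minimal sketch):

    #!/bin/sh
    # pure spinner: always runnable, never sleeps
    while :; do :; done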
On 05/13/2014 05:47 PM, Peter Zijlstra wrote:
> On Tue, May 13, 2014 at 11:34:43AM +0800, Michael wang wrote:
>> During our testing we found that cpu.shares doesn't work as expected;
>> the test is:
>>
>
> /me zaps all the kvm nonsense as that's non-reproducible and only
> serves to annoy.
On Tue, May 13, 2014 at 09:36:20AM -0400, Rik van Riel wrote:
> On 05/13/2014 05:47 AM, Peter Zijlstra wrote:
> > On Tue, May 13, 2014 at 11:34:43AM +0800, Michael wang wrote:
> >> During our testing we found that cpu.shares doesn't work as expected;
> >> the test is:
> >>
> >
> > /me zaps all the kvm nonsense as that's non-reproducible and only
> > serves to annoy.
On 05/13/2014 05:47 AM, Peter Zijlstra wrote:
> On Tue, May 13, 2014 at 11:34:43AM +0800, Michael wang wrote:
>> During our testing we found that cpu.shares doesn't work as expected;
>> the test is:
>>
>
> /me zaps all the kvm nonsense as that's non-reproducible and only
> serves to annoy.
On Tue, May 13, 2014 at 11:34:43AM +0800, Michael wang wrote:
> During our testing we found that cpu.shares doesn't work as expected;
> the test is:
>
/me zaps all the kvm nonsense as that's non-reproducible and only serves
to annoy.
Pro-tip: never use kvm to report cpu-cgroup issues.
>
During our testing we found that cpu.shares doesn't work as expected;
the test is:
X86 HOST:
12 CPU
GUEST(KVM):
6 VCPU
We create 3 guests, each with 1024 shares; the workload inside them is:
GUEST_1:
dbench 6
GUEST_2:
stress -c 6
GUEST_3:
stress -c
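A sketch of how the per-guest shares described above are typically
arranged on the host (group names are illustrative; whether libvirt or
the distro sets this up automatically is not stated in the report):

    for g in GUEST_1 GUEST_2 GUEST_3; do
        mkdir /sys/fs/cgroup/cpu/$g
        echo 1024 > /sys/fs/cgroup/cpu/$g/cpu.shares   # the default
        # echo <qemu pid of this guest> > /sys/fs/cgroup/cpu/$g/tasks
    done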