On 2014/5/28 19:55, Mike Galbraith wrote:
> On Wed, 2014-05-28 at 19:43 +0800, Libo Chen wrote:
>> On 2014/5/28 17:08, Thomas Gleixner wrote:
>>> On Wed, 28 May 2014, Libo Chen wrote:
>>>
On Wed, May 28, 2014 at 12:30:19PM +0200, Peter Zijlstra wrote:
> On Wed, May 28, 2014 at 11:08:40AM +0200, Thomas Gleixner wrote:
> > On Wed, 28 May 2014, Libo Chen wrote:
> >
> > > On 2014/5/28 9:53, Mike Galbraith wrote:
> > > > On Wed, 2014-05-28 at 09:04 +0800, Libo Chen wrote:
> > > >
> >
On 2014/5/28 9:53, Mike Galbraith wrote:
> On Wed, 2014-05-28 at 09:04 +0800, Libo Chen wrote:
>
>> oh yes, no tsc, only hpet in my box.
>
> Making poor E5-2658 box a crippled wreck.
Yes, it is. But cpu usage goes down from 15% to 5% when binding CPUs, so maybe
read_hpet is not the root cause.
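The two observations above can be checked from userspace; a minimal sketch, assuming the usual Linux clocksource sysfs layout, and using `true` as a stand-in for the actual test binary:

```shell
# Show which clocksource the kernel currently uses (this box would
# report "hpet") and which ones the hardware offers.
cs=/sys/devices/system/clocksource/clocksource0
cat "$cs/current_clocksource"
cat "$cs/available_clocksource"

# Pin one copy of the test to a single CPU, as in the 15% -> 5%
# comparison above (substitute the real test binary for `true`).
taskset -c 0 true
```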
On Wed, 2014-05-28 at 09:04 +0800, Libo Chen wrote:
> oh yes, no tsc, only hpet in my box.
Making poor E5-2658 box a crippled wreck.
-Mike
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at
On 2014/5/28 4:53, Thomas Gleixner wrote:
> On Tue, 27 May 2014, Libo Chen wrote:
>> On 2014/5/27 17:55, Mike Galbraith wrote:
>>> On Tue, 2014-05-27 at 15:56 +0800, Libo Chen wrote:
> On 2014/5/26 22:19, Mike Galbraith wrote:
>>> On Mon, 2014-05-26 at 20:16 +0800, Libo Chen wrote:
On Tue, May 27, 2014 at 08:55:20PM +0800, Libo Chen wrote:
> On 2014/5/27 17:48, Peter Zijlstra wrote:
> > In any case, I'm not sure what the 'regression' report is against, as
> > there's only a single kernel version mentioned: 3.4, and that's almost a
> upstream has the same problem, I have
On Tue, 2014-05-27 at 11:48 +0200, Peter Zijlstra wrote:
> So I suppose this is due to the select_idle_sibling() nonsense again,
> where we assume L3 is a fair compromise between cheap enough and
> effective enough.
Nodz.
> Of course, Intel keeps growing the cpu count covered by L3 to
On 2014/5/27 17:48, Peter Zijlstra wrote:
> So:
>
> 1) what kind of weird ass workload is that? Why are you waking up so
> often to do no work?

It's just a testcase; I agree it doesn't exist in the real world.

> 2) turning on/off share_pkg_resource is a horrid hack whichever way
> around you turn it.
On Tue, 2014-05-27 at 20:50 +0800, Libo Chen wrote:
> in my box:
>
> perf top -g --sort=symbol
>
> Events: 3K cycles
> 73.27%  [k] read_hpet
>  4.30%  [k] _raw_spin_lock_irqsave
>  1.88%  [k] __schedule
>  1.00%  [k] idle_cpu
>  0.91%  [k] native_write_msr_safe
>  0.68%  [k] select_task_rq_fair
On Mon, 2014-05-26 at 20:16 +0800, Libo Chen wrote:
> On 2014/5/26 13:11, Mike Galbraith wrote:
> > Your synthetic test is the absolute worst case scenario. There has to
> > be work between wakeups for select_idle_sibling() to have any chance
> > whatsoever of turning in a win. At 0 work, it
On Mon, 2014-05-26 at 19:49 +0800, Libo Chen wrote:
> how to turn off SD_SHARE_PKG_RESOURCES in userspace ?
I use a script Ingo gave me years and years ago to
twiddle /proc/sys/kernel/sched_domain/cpuN/domainN/flags domain wise.
Doing that won't do you any good without a handler to build/tear
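A sketch of that sort of flag twiddling, assuming a 3.4-era kernel built with CONFIG_SCHED_DEBUG, where SD_SHARE_PKG_RESOURCES is 0x0200 in include/linux/sched.h (and noting, per the above, that clearing the bit alone is not enough without rebuilding the domains):

```shell
# Clear SD_SHARE_PKG_RESOURCES (0x0200 on 3.4-era kernels) in every
# scheduler domain's flags file. Needs CONFIG_SCHED_DEBUG and root;
# the files are simply absent otherwise.
SD_SHARE_PKG_RESOURCES=$((0x0200))
for f in /proc/sys/kernel/sched_domain/cpu*/domain*/flags; do
    [ -w "$f" ] || continue        # skip if absent or not writable
    cur=$(cat "$f")
    echo $(( cur & ~SD_SHARE_PKG_RESOURCES )) > "$f"
done
```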
hi,
my box has 16 cpus (E5-2658, 8 cores, 2 threads per core). I did a test on
3.4.24-stable: start up 50 identical processes, where every process is this sample:

#include <unistd.h>

int main()
{
	for (;;)
	{
		unsigned int i = 0;