This patch is adapted from a very similar patch
by Frederic Weisbecker. The commit message text is also mostly
from Frederic's patch.
When some call site uses get_cpu_*_time_us() to read a sleeptime
stat, it deduces the total sleeptime by adding the pending time
to the last sleeptime snapshot if
On Tue, Oct 01, 2013 at 06:47:10PM +0200, Frederic Weisbecker wrote:
> Yeah thinking more about it, the preempt disable was probably not
> necessary. Now that's trading 2 atomics + 1 Lock/Unlock with 2 Lock/Unlock.
It trades the current 2 atomics for 2 LOCK/UNLOCK. And on x86_64 that's
2 atomics.
On Tue, Oct 01, 2013 at 05:56:33PM +0200, Peter Zijlstra wrote:
>
> So what's wrong with something like:
>
> struct cpu_idletime {
> 	seqlock_t	seqlock;
> 	unsigned long	nr_iowait;
> 	u64		start;
> 	u64		idle_time;
> 	u64		iowait_time;
> } __cacheline_aligned_in_smp;
>
> DEFINE_PER_CPU(struct cpu_idletime, cpu_idletime);
>
> void io_schedule(void)
> {
On Tue, Oct 01, 2013 at 04:05:27PM +0200, Frederic Weisbecker wrote:
> struct cpu_idletime {
> 	nr_iowait,
> 	seqlock,
> 	idle_start,
> 	idle_time,
> 	iowait_time,
> } __cacheline_aligned_in_smp;
>
> DEFINE_PER_CPU(struct cpu_idletime, cpu_idletime);
2013/10/1 Frederic Weisbecker :
> I forgot...
>
>     cpu_idletime->idle_start;
>
> should have been:
>
>     cpu_idletime->idle_start = NOW();
>
> grrr.
On 08/21, Peter Zijlstra wrote:
>
> OK, so do not remove the MAX_SCHEDULE_TIMEOUT check from
> __schedule_timeout() and change the above to:
>
> static __always_inline long schedule_timeout(long timeout)
> {
> if (__builtin_constant_p(timeout) && timeout == MAX_SCHEDULE_TIMEOUT) {
>
On Wed, Aug 21, 2013 at 07:09:27PM +0200, Oleg Nesterov wrote:
> So, unlike me, you like -O2 more than -Os ;)

I haven't checked the actual flags they enable in a while, but I think I
prefer something in the middle.

Esp. -freorder-blocks and the various -falign flags are something you
really want
On 08/21, Peter Zijlstra wrote:
>
> On Wed, Aug 21, 2013 at 01:35:51PM +0200, Oleg Nesterov wrote:
> >
> > Well, the only overhead is "if (to == MAX_SCHEDULE_TIMEOUT)" at the start.
> > I don't think it makes sense to copy-and-paste the identical code to
> > avoid it. But please ignore, this is really minor.
On 08/21, Peter Zijlstra wrote:
>
> The other consideration is that this adds two branches to the normal
> schedule path. I really don't know what the regular ratio between
> schedule() and io_schedule() is -- and I suspect it can very much depend
> on workload -- but it might be a net loss due to
On Wed, Aug 21, 2013 at 01:35:51PM +0200, Oleg Nesterov wrote:
> On 08/21, Peter Zijlstra wrote:
> > Yes its the right rq, but the wrong time.
>
> Hmm. Just in case, it is not that I think this patch really makes sense,
> but I'd like to understand why do you think it is wrong.
> But it is not
On 08/21, Frederic Weisbecker wrote:
>
> On Tue, Aug 20, 2013 at 06:33:12PM +0200, Oleg Nesterov wrote:
> >
> > --- x/kernel/sched/core.c
> > +++ x/kernel/sched/core.c
> > @@ -2435,6 +2435,9 @@ need_resched:
> > 	rq->curr = next;
> > 	++*switch_count;
> >
> > +	if (unlikely(prev->in_iowait)) {
On Tue, Aug 20, 2013 at 08:25:53PM +0200, Oleg Nesterov wrote:
> On 08/20, Peter Zijlstra wrote:
> >
> > On Tue, Aug 20, 2013 at 06:33:12PM +0200, Oleg Nesterov wrote:
> > > +	if (unlikely(prev->in_iowait)) {
> > > +		raw_spin_lock_irq(&rq->lock);
> > > +		rq->nr_iowait--;
On Wed, Aug 21, 2013 at 01:35:51PM +0200, Oleg Nesterov wrote:
> Btw. Whatever we do, can't we unify io_schedule/io_schedule_timeout?

I suppose we could, a timeout of MAX_SCHEDULE_TIMEOUT will act like a
regular schedule, but it gets all the overhead of doing
schedule_timeout(). So I
On 08/20, Peter Zijlstra wrote:
>
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -453,7 +453,8 @@ struct rq {
> 	u64 clock;
> 	u64 clock_task;
>
> -	atomic_t nr_iowait;
> +	int nr_iowait_local;
> +	atomic_t nr_iowait_remote;

I am wondering how the extra lock(rq)/unlock(rq) in schedule() is bad
compared to atomic_dec.

IOW, what if we simply make rq->nr_iowait "int" and change schedule()
to update it? Something
On 8/20/2013 8:29 AM, Frederic Weisbecker wrote:
> > Of course, if we can get away with completely removing all of that
> > (which I think Arjan suggested was a real possibility) then that would
> > be ever so much better still :-)
>
> Would be lovely. But I don't know much about cpufreq, I hope somebody
On Tue, Aug 20, 2013 at 03:59:36PM +0900, Fernando Luis Vázquez Cao wrote:
> That said, if deemed acceptable, option A is the one I would
> choose.
Right, so I think we can do A without much extra cost mostly because we
already have 2 atomics in the io_schedule() path. If we replace those
two
(2013-08-17 01:46), Frederic Weisbecker wrote:
> On Fri, Aug 16, 2013 at 06:26:54PM +0200, Oleg Nesterov wrote:
> > On 08/16, Frederic Weisbecker wrote:
> > > On Fri, Aug 16, 2013 at 06:02:01PM +0200, Oleg Nesterov wrote:
> > > > +	do {
> > > > +		seq = read_seqcount_begin(&ts->sleeptime_seq);
> > > > +
On 8/19/2013 3:58 AM, Peter Zijlstra wrote:
> I'm also not entirely clear on the 'desired' semantics here. Do we count
> iowait time as idle or not?

cpufreq only looks at this because it does not want to count this as idle...
other than that it could not care less.
On Mon, Aug 19, 2013 at 01:10:26PM +0200, Peter Zijlstra wrote:
> So no, if we need per-cpu iowait time we have to do A.
>
> Since we already have atomics in the io_schedule*() paths, please
> replace those with (seq)locks. Also see if you can place the entire
> iowait accounting thing in a
On Fri, Aug 16, 2013 at 06:46:28PM +0200, Frederic Weisbecker wrote:
> Option A:
>
> Should we flush that iowait to the src CPU? But then it means we must handle
> concurrent updates to iowait_sleeptime, idle_sleeptime from the migration
> code and from idle enter / exit.
>
> So I fear we need a
On Fri, Aug 16, 2013 at 07:12:09PM +0200, Frederic Weisbecker wrote:
> Or may be Peter could tell us as well. Peter, do you have a preference?
Still trying to wrap my head around it, but conceptually
get_cpu_iowait_time_us() doesn't make any kind of sense. iowait isn't
per cpu since effectively
On 08/16, Frederic Weisbecker wrote:
>
> This fixes a reported bug where non-monotonic sleeptime stats are returned
> by /proc/stat when it is frequently read.

Plus it fixes the problems with 32bit machines reading u64 values.
But perhaps the changelog should mention other races we discussed.
On 08/16, Frederic Weisbecker wrote:
>
> On Fri, Aug 16, 2013 at 06:49:22PM +0200, Oleg Nesterov wrote:
> >
> > Personally I am fine either way.
>
> Me too.
>
> So my proposition is that we can keep the existing patches as they fix other
> distinct races
perhaps... but it would be nice to fix
On 08/16, Frederic Weisbecker wrote:
>
> tick_nohz_stop_idle() to iowait if we called tick_nohz_start_idle() with
> nr_iowait > 0.
> All we need is just a new field in ts-> that records on which state we entered
> idle.
Or we can turn ->idle_active into enum. And all other nr_iowait_cpu's
in
On Fri, Aug 16, 2013 at 06:33:55PM +0200, Oleg Nesterov wrote:
> On 08/16, Frederic Weisbecker wrote:
> >
> > On Fri, Aug 16, 2013 at 06:02:01PM +0200, Oleg Nesterov wrote:
> > >
> > > Unless I misread this patch, this is still racy a bit.
> > >
> > > Suppose it is called on CPU_0 and cpu == 1.