On Mon, Jul 09, 2018 at 07:29:32AM -0700, Paul E. McKenney wrote:
> OK, so here are our options:
>
> 1. Add the RCU conditional to need_resched(), as David suggests.
> Peter has concerns about overhead.
>
> 2. Create a new need_resched_rcu_qs() that is to be used when
>
On Mon, Jul 09, 2018 at 04:43:38PM +0200, Peter Zijlstra wrote:
> On Mon, Jul 09, 2018 at 07:29:32AM -0700, Paul E. McKenney wrote:
>
> > OK, so here are our options:
> >
> > 1. Add the RCU conditional to need_resched(), as David suggests.
> > Peter has concerns about overhead.
>
> Not
On Mon, Jul 09, 2018 at 07:29:32AM -0700, Paul E. McKenney wrote:
> OK, so here are our options:
>
> 1. Add the RCU conditional to need_resched(), as David suggests.
> Peter has concerns about overhead.
Not only overhead, it's plain broken, because:
1) we keep preemption state in
On Mon, Jul 09, 2018 at 01:47:14PM +0100, David Woodhouse wrote:
> On Mon, 2018-07-09 at 05:34 -0700, Paul E. McKenney wrote:
> > The reason that David's latencies went from 100ms to one second is
> > because I made this code less aggressive about invoking resched_cpu().
>
> Ten seconds. We saw
On Mon, Jul 09, 2018 at 03:02:27PM +0200, Peter Zijlstra wrote:
> On Mon, Jul 09, 2018 at 02:55:16PM +0200, Peter Zijlstra wrote:
> > On Mon, Jul 09, 2018 at 05:34:57AM -0700, Paul E. McKenney wrote:
> > > But KVM defeats this by checking need_resched() before invoking
> > > cond_resched().
> >
>
On Mon, Jul 09, 2018 at 02:55:16PM +0200, Peter Zijlstra wrote:
> On Mon, Jul 09, 2018 at 05:34:57AM -0700, Paul E. McKenney wrote:
> > But KVM defeats this by checking need_resched() before invoking
> > cond_resched().
>
> That's not wrong or even uncommon I think.
In fact, I think we recently
On Mon, 2018-07-09 at 14:55 +0200, Peter Zijlstra wrote:
> On Mon, Jul 09, 2018 at 05:34:57AM -0700, Paul E. McKenney wrote:
> > But KVM defeats this by checking need_resched() before invoking
> > cond_resched().
>
> That's not wrong or even uncommon I think.
Right. Which is precisely why I
On Mon, Jul 09, 2018 at 05:34:57AM -0700, Paul E. McKenney wrote:
> But KVM defeats this by checking need_resched() before invoking
> cond_resched().
That's not wrong or even uncommon I think.
On Mon, 2018-07-09 at 05:34 -0700, Paul E. McKenney wrote:
> The reason that David's latencies went from 100ms to one second is
> because I made this code less aggressive about invoking resched_cpu().
Ten seconds. We saw synchronize_sched() take ten seconds in 4.15. We
wouldn't have been happy
On Mon, Jul 09, 2018 at 01:06:57PM +0200, Peter Zijlstra wrote:
> On Mon, Jul 09, 2018 at 11:56:41AM +0100, David Woodhouse wrote:
>
> > > But either proposal is exactly the same in this respect. The whole
> > > rcu_urgent_qs thing won't be set any earlier either.
> >
> > Er Marius, our
On Mon, Jul 09, 2018 at 12:12:15PM +0100, David Woodhouse wrote:
> On Mon, 2018-07-09 at 13:06 +0200, Peter Zijlstra wrote:
> > On Mon, Jul 09, 2018 at 11:56:41AM +0100, David Woodhouse wrote:
> > > > But either proposal is exactly the same in this respect. The whole
> > > > rcu_urgent_qs thing
On Mon, 2018-07-09 at 13:06 +0200, Peter Zijlstra wrote:
> On Mon, Jul 09, 2018 at 11:56:41AM +0100, David Woodhouse wrote:
>
> >
> > >
> > > But either proposal is exactly the same in this respect. The whole
> > > rcu_urgent_qs thing won't be set any earlier either.
> > Er Marius, our
On Mon, Jul 09, 2018 at 11:56:41AM +0100, David Woodhouse wrote:
> > But either proposal is exactly the same in this respect. The whole
> > rcu_urgent_qs thing won't be set any earlier either.
>
> Er Marius, our latencies in expand_fdtable() definitely went from
> ~10s to well below one
On Mon, 2018-07-09 at 12:44 +0200, Peter Zijlstra wrote:
> On Mon, Jul 09, 2018 at 10:18:55AM +0100, David Woodhouse wrote:
> >
> > >
> > > Which seems like an entirely reasonable amount of time to kick a task.
> > > Not scheduling for a second is like an eternity.
> >
> > If that is our only
On Mon, Jul 09, 2018 at 10:18:55AM +0100, David Woodhouse wrote:
> > Which seems like an entirely reasonable amount of time to kick a task.
> > Not scheduling for a second is like an eternity.
>
> If that is our only "fix" for KVM, then wouldn't that mean that things
> like expand_fdtable() would
On Mon, 2018-07-09 at 10:53 +0200, Peter Zijlstra wrote:
> On Fri, Jul 06, 2018 at 10:11:50AM -0700, Paul E. McKenney wrote:
> > On Fri, Jul 06, 2018 at 06:29:05PM +0200, Peter Zijlstra wrote:
> > > On Fri, Jul 06, 2018 at 03:53:30PM +0100, David Woodhouse wrote:
> > > >
> > > > diff --git
On Fri, Jul 06, 2018 at 06:14:44PM +0100, David Woodhouse wrote:
> On Fri, 2018-07-06 at 10:11 -0700, Paul E. McKenney wrote:
> > > The preempt state is already a bit complicated and shadowed in the
> > > preempt_count (on some architectures); adding additional bits to it like
> > > this is just
On Fri, Jul 06, 2018 at 10:11:50AM -0700, Paul E. McKenney wrote:
> On Fri, Jul 06, 2018 at 06:29:05PM +0200, Peter Zijlstra wrote:
> > On Fri, Jul 06, 2018 at 03:53:30PM +0100, David Woodhouse wrote:
> > > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > > index e4d4e60..89f5814
On Fri, 2018-07-06 at 10:11 -0700, Paul E. McKenney wrote:
> > The preempt state is alread a bit complicated and shadowed in the
> > preempt_count (on some architectures) adding additional bits to it like
> > this is just asking for trouble.
>
> How about a separate need_resched_rcu() that
On Fri, Jul 06, 2018 at 06:29:05PM +0200, Peter Zijlstra wrote:
> On Fri, Jul 06, 2018 at 03:53:30PM +0100, David Woodhouse wrote:
> > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > index e4d4e60..89f5814 100644
> > --- a/include/linux/sched.h
> > +++ b/include/linux/sched.h
> > @@
On Fri, Jul 06, 2018 at 03:53:30PM +0100, David Woodhouse wrote:
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index e4d4e60..89f5814 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1616,7 +1616,8 @@ static inline int spin_needbreak(spinlock_t *lock)
>
>