Re: [PATCH v3] sched/cputime: Fix using smp_processor_id() in preemptible

2017-07-20 Thread Paul McKenney
On Wed, Jul 19, 2017 at 4:38 PM, Paul McKenney <paulmck...@gmail.com> wrote: > On Thu, Jul 13, 2017 at 11:49 PM, Wanpeng Li <kernel...@gmail.com> wrote: >> >> Ping for the merge window. :) >> 2017-07-09 15:40 GMT+08:00 Wanpeng Li <kernel...@gmail.com>:

Re: [PATCH 07/10] rcu: Separate the RCU synchronization types and APIs into

2017-02-11 Thread Paul McKenney
On Wed, Feb 8, 2017 at 10:34 AM, Ingo Molnar wrote: > So rcupdate.h is a pretty complex header, in particular it includes > which includes - creating a > dependency that includes in , > which prevents the isolation of from the derived > header. > > Solve part of the problem

Re: linux-next: build failure after merge of the rcu tree

2017-01-19 Thread Paul McKenney
On Wed, Jan 18, 2017 at 7:34 PM, Stephen Rothwell wrote: > Hi Paul, > > After merging the rcu tree, today's linux-next build (x86_64 allmodconfig) > failed like this: > > net/smc/af_smc.c:102:16: error: 'SLAB_DESTROY_BY_RCU' undeclared here (not in > a function) >
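The build failure is the new net/smc code referencing a slab flag that the rcu tree renames; the resolution that eventually landed renamed SLAB_DESTROY_BY_RCU to SLAB_TYPESAFE_BY_RCU, so the fixup in net/smc/af_smc.c would be a one-line change of roughly this shape (exact indentation in the tree may differ):

```diff
-	.slab_flags	= SLAB_DESTROY_BY_RCU,
+	.slab_flags	= SLAB_TYPESAFE_BY_RCU,
```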

Re: kvm: use-after-free in process_srcu

2017-01-19 Thread Paul McKenney
(Trouble with VPN, so replying from gmail.) On Thu, Jan 19, 2017 at 1:27 AM, Paolo Bonzini wrote: > > > On 18/01/2017 23:15, Paul E. McKenney wrote: >> On Wed, Jan 18, 2017 at 09:53:19AM +0100, Paolo Bonzini wrote: >>> >>> >>> On 17/01/2017 21:34, Paul E. McKenney wrote:

Re: [RFC PATCH] membarrier: handle nohz_full with expedited thread registration

2017-01-19 Thread Paul McKenney
On Wed, Jan 18, 2017 at 3:00 AM, Peter Zijlstra wrote: > On Tue, Jan 17, 2017 at 12:53:21PM -0800, Paul E. McKenney wrote: >> On Tue, Jan 17, 2017 at 04:55:22AM +0100, Frederic Weisbecker wrote: >> >> [ . . . ] >> >> > In fact due to the complexity involved, I have to ask

Re: [patch] Real-Time Preemption, -RT-2.6.12-rc1-V0.7.41-00

2005-03-21 Thread Paul McKenney
> got this early-bootup crash on an SMP box: > > BUG: Unable to handle kernel NULL pointer dereference at virtual address > > printing eip: > c0131aec > *pde = > Oops: 0002 [#1] > PREEMPT SMP > Modules linked in: > CPU:1 > EIP:0060:[]Not tainted VLI > EFLAGS:

Re: [Lse-tech] Re: [PATCH for 2.5] preemptible kernel

2001-04-22 Thread Paul McKenney
> > But if you are suppressing preemption in all read-side critical sections, > > then wouldn't any already-preempted tasks be guaranteed to -not- be in > > a read-side critical section, and therefore be guaranteed to be unaffected > > by the update (in other words, wouldn't such tasks not need

Re: [Lse-tech] Re: [PATCH for 2.5] preemptible kernel

2001-04-10 Thread Paul McKenney
> On Tue, 10 Apr 2001, Paul McKenney wrote: > > The algorithms we have been looking at need to have absolute guarantees > > that earlier activity has completed. The most straightforward way to > > guarantee this is to have the critical-section activity run with preemptio

Re: [Lse-tech] Re: [PATCH for 2.5] preemptible kernel

2001-04-10 Thread Paul McKenney
> > As you've observed, with the approach of waiting for all pre-empted tasks > > to synchronize, the possibility of a task staying pre-empted for a long > > time could affect the latency of an update/synchronize (though it's hard for > > me to judge how likely that is). > > It's very unlikely on

Re: [Lse-tech] Re: [PATCH for 2.5] preemptible kernel

2001-04-07 Thread Paul McKenney
> > > > 2. Isn't it possible to get in trouble even on a UP if a task > > > > is preempted in a critical region? For example, suppose the > > > > preempting task does a synchronize_kernel()? > > > > > > Ugly. I guess one way to solve it would be to readd the 2.2 scheduler > > >

Re: [Lse-tech] Re: [PATCH for 2.5] preemptible kernel

2001-04-07 Thread Paul McKenney
> > I see your point here, but need to think about it. One question: > > isn't it the case that the alternative to using synchronize_kernel() > > is to protect the read side with explicit locks, which will themselves > > suppress preemption? If so, why not just suppress preemption on the read

Re: [PATCH for 2.5] preemptible kernel

2001-04-07 Thread Paul McKenney
Andi, thank you for the background! More comments interspersed... > On Fri, Apr 06, 2001 at 04:52:25PM -0700, Paul McKenney wrote: > > 1. On a busy system, isn't it possible for a preempted task > > to stay preempted for a -long- time, especially if there are > >

Re: [PATCH for 2.5] preemptible kernel

2001-04-06 Thread Paul McKenney
Please accept my apologies if I am missing something basic, but... 1. On a busy system, isn't it possible for a preempted task to stay preempted for a -long- time, especially if there are lots of real-time tasks in the mix? 2. Isn't it possible to get in trouble even on a UP if a

Re: [Lse-tech] Re: a quest for a better scheduler

2001-04-04 Thread Paul McKenney
> Just a quick comment. Andrea, unless your machine has some hardware > that implies pernode runqueues will help (nodelevel caches etc), I fail > to understand how this is helping you ... here's a simple theory though. > If your system is lightly loaded, your pernode queues are actually >
