On Wed, Jul 19, 2017 at 4:38 PM, Paul McKenney wrote:
> On Thu, Jul 13, 2017 at 11:49 PM, Wanpeng Li wrote:
>>
>> Ping for the merge window. :)
>> 2017-07-09 15:40 GMT+08:00 Wanpeng Li :
>> > From: Wanpeng Li
>> >
>> > BUG: using smp_processor_id() in preemptible code
On Wed, Feb 8, 2017 at 10:34 AM, Ingo Molnar wrote:
> So rcupdate.h is a pretty complex header, in particular it includes
> <linux/completion.h> which includes <linux/wait.h> - creating a
> dependency that includes <linux/wait.h> in <linux/sched.h>,
> which prevents the isolation of <linux/rcupdate.h> from the derived
> <linux/sched.h> header.
>
> Solve part of the problem by decoupling
On Wed, Jan 18, 2017 at 7:34 PM, Stephen Rothwell wrote:
> Hi Paul,
>
> After merging the rcu tree, today's linux-next build (x86_64 allmodconfig)
> failed like this:
>
> net/smc/af_smc.c:102:16: error: 'SLAB_DESTROY_BY_RCU' undeclared here (not in
> a function)
> .slab_flags = SLAB_DESTROY_BY_RCU,
(Trouble with VPN, so replying from gmail.)
On Thu, Jan 19, 2017 at 1:27 AM, Paolo Bonzini wrote:
>
>
> On 18/01/2017 23:15, Paul E. McKenney wrote:
>> On Wed, Jan 18, 2017 at 09:53:19AM +0100, Paolo Bonzini wrote:
>>>
>>>
>>> On 17/01/2017 21:34, Paul E. McKenney wrote:
Do any of your
On Wed, Jan 18, 2017 at 3:00 AM, Peter Zijlstra wrote:
> On Tue, Jan 17, 2017 at 12:53:21PM -0800, Paul E. McKenney wrote:
>> On Tue, Jan 17, 2017 at 04:55:22AM +0100, Frederic Weisbecker wrote:
>>
>> [ . . . ]
>>
>> > In fact due to the complexity involved, I have to ask first if we really
got this early-bootup crash on an SMP box:
BUG: Unable to handle kernel NULL pointer dereference at virtual address
printing eip:
c0131aec
*pde =
Oops: 0002 [#1]
PREEMPT SMP
Modules linked in:
CPU:1
EIP:0060:[c0131aec]Not tainted VLI
EFLAGS: 00010293
But if you are suppressing preemption in all read-side critical sections,
then wouldn't any already-preempted tasks be guaranteed to -not- be in
a read-side critical section, and therefore be guaranteed to be unaffected
by the update (in other words, wouldn't such tasks not need to be
As you've observed, with the approach of waiting for all pre-empted tasks
to synchronize, the possibility of a task staying pre-empted for a long
time could affect the latency of an update/synchronize (though it's hard
for me to judge how likely that is).
It's very unlikely on a system
On Tue, 10 Apr 2001, Paul McKenney wrote:
The algorithms we have been looking at need to have absolute guarantees
that earlier activity has completed. The most straightforward way to
guarantee this is to have the critical-section activity run with
preemption disabled. Most
Andi, thank you for the background! More comments interspersed...
On Fri, Apr 06, 2001 at 04:52:25PM -0700, Paul McKenney wrote:
1. On a busy system, isn't it possible for a preempted task
to stay preempted for a -long- time, especially if there are
lots of real-time tasks
I see your point here, but need to think about it. One question:
isn't it the case that the alternative to using synchronize_kernel()
is to protect the read side with explicit locks, which will themselves
suppress preemption? If so, why not just suppress preemption on the
read side
2. Isn't it possible to get in trouble even on a UP if a task
is preempted in a critical region? For example, suppose the
preempting task does a synchronize_kernel()?
Ugly. I guess one way to solve it would be to readd the 2.2 scheduler
taskqueue, and just queue
Please accept my apologies if I am missing something basic, but...
1. On a busy system, isn't it possible for a preempted task
to stay preempted for a -long- time, especially if there are
lots of real-time tasks in the mix?
2. Isn't it possible to get in trouble even on a UP if a
Just a quick comment. Andrea, unless your machine has some hardware
that implies pernode runqueues will help (nodelevel caches etc), I fail
to understand how this is helping you ... here's a simple theory though.
If your system is lightly loaded, your pernode queues are actually
implementing