In message OF42269F5F.CDF56B0F-ON88256A27.0083566F@LocalDomain you write:
> Already preempted tasks.

But if you are suppressing preemption in all read-side critical sections,
then wouldn't any already-preempted tasks be guaranteed to -not- be in
a read-side critical section, and therefore be guaranteed to be
unaffected by the update (in other words, wouldn't such tasks not need to be
On Tue, 10 Apr 2001, Paul McKenney wrote:
Disabling preemption is a possible solution if the critical section is
short - less than 100us - otherwise preemption latencies become a problem.
Seems like a reasonable restriction. Of course, this same limit
applies to locks and interrupt
As you've observed, with the approach of waiting for all pre-empted tasks
to synchronize, the possibility of a task staying pre-empted for a long
time could affect the latency of an update/synchronize (though it's hard
for me to judge how likely that is).
It's very unlikely on a system
On Tue, 10 Apr 2001, Paul McKenney wrote:
The algorithms we have been looking at need to have absolute guarantees
that earlier activity has completed. The most straightforward way to
guarantee this is to have the critical-section activity run with preemption
disabled. Most of these code
Andi, thank you for the background! More comments interspersed...
On Fri, Apr 06, 2001 at 04:52:25PM -0700, Paul McKenney wrote:
1. On a busy system, isn't it possible for a preempted task
to stay preempted for a -long- time, especially if there are
lots of real-time tasks
In message OF37B0793C.6B15F182-ON88256A27.0007C3EF@LocalDomain you write:
Priority inversion is not handled in Linux kernel ATM BTW, there
are already situations where a realtime task can cause a deadlock
with some lower priority system thread (I believe there is at least
one case of this
I see your point here, but need to think about it. One question:
isn't it the case that the alternative to using synchronize_kernel()
is to protect the read side with explicit locks, which will themselves
suppress preemption? If so, why not just suppress preemption on the
read side
2. Isn't it possible to get in trouble even on a UP if a task
is preempted in a critical region? For example, suppose the
preempting task does a synchronize_kernel()?
Ugly. I guess one way to solve it would be to readd the 2.2 scheduler
taskqueue, and just queue
Please accept my apologies if I am missing something basic, but...
1. On a busy system, isn't it possible for a preempted task
to stay preempted for a -long- time, especially if there are
lots of real-time tasks in the mix?
2. Isn't it possible to get in trouble even on a UP if a
Hallo,
On Fri, Apr 06, 2001 at 04:52:25PM -0700, Paul McKenney wrote:
1. On a busy system, isn't it possible for a preempted task
to stay preempted for a -long- time, especially if there are
lots of real-time tasks in the mix?
The problem you're describing is probably considered
In message <[EMAIL PROTECTED]> you write:
> > Setting a running task's flags brings races, AFAICT, and checking
> > p->state is NOT sufficient, consider wait_event(): you need p->has_cpu
> > here I think.
>
> My thought here was that if p->state is anything other than TASK_RUNNING
> or
In message <[EMAIL PROTECTED]> you write:
> On a higher level, I think the scanning of the run list to set flags and
> counters is a bit off.
[snip standard refcnt scheme]
For most things, refcnts are great. I use them in connection
tracking. But when writes can be insanely slow (eg. once per
On Sat, 31 Mar 2001, Rusty Russell wrote:
> > if (p->state == TASK_RUNNING ||
> >     (p->state == (TASK_RUNNING|TASK_PREEMPTED))) {
> >         p->flags |= PF_SYNCING;
>
> Setting a running task's flags brings races, AFAICT, and checking
On Sat, 31 Mar 2001, george anzinger wrote:
> I think this should be:
> if (p->has_cpu || (p->state & TASK_PREEMPTED)) {
> to catch tasks that were preempted with other states.
But the other states are all part of the state change that happens at a
non-preemptive schedule() point,
In message [EMAIL PROTECTED] you write:
Here is an attempt at a possible version of synchronize_kernel() that
should work on a preemptible kernel. I haven't tested it yet.
It's close, but...
Those who suggest that we don't do preemption on SMP make this much
easier (synchronize_kernel() is a
On Wed, Mar 28, 2001 at 12:51:02PM -0800, george anzinger wrote:
Dipankar Sarma wrote:
Also, a task could be preempted and then rescheduled on the same cpu
making the depth counter 0 (right ?), but it could still be holding
references to data structures to be updated using
On Tue, 20 Mar 2001, Nigel Gamble wrote:
On Tue, 20 Mar 2001, Rusty Russell wrote:
Thoughts?
Perhaps synchronize_kernel() could take the run_queue lock, mark all the
tasks on it and count them. Any task marked when it calls schedule()
voluntarily (but not if it is preempted) is unmarked
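Nigel's run-queue marking scheme can be modelled in a few lines of user-space C. This is an illustrative sketch only, not code from any actual patch; all names (`synchronize_kernel_begin`, `voluntary_schedule`, and so on) are invented, and the run-queue lock, per-task state, and SMP details are abstracted away:

```c
/* Toy model: synchronize_kernel() marks and counts every task on the
 * run queue; each task drops its mark at a *voluntary* schedule().
 * A preempted task keeps its mark and so keeps the grace period open. */

#define NTASKS 4

struct task {
    int marked;   /* set at synchronize_kernel(), cleared at schedule() */
};

static struct task tasks[NTASKS];
static int nr_marked;

/* Phase 1: with the run-queue lock held, mark and count every task. */
static void synchronize_kernel_begin(void)
{
    nr_marked = 0;
    for (int i = 0; i < NTASKS; i++) {
        tasks[i].marked = 1;
        nr_marked++;
    }
}

/* Called only from a voluntary schedule(), never on preemption. */
static void voluntary_schedule(struct task *t)
{
    if (t->marked) {
        t->marked = 0;
        nr_marked--;
    }
}

/* Grace period over: every marked task has voluntarily scheduled,
 * i.e. left any read-side critical section it was in. */
static int grace_period_done(void)
{
    return nr_marked == 0;
}
```

The point of counting only voluntary schedules is exactly the one made above: a preemption proves nothing about whether the task was inside a read-side critical section.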
On Wed, 28 Mar 2001 12:51:02 -0800,
george anzinger [EMAIL PROTECTED] wrote:
Dipankar Sarma wrote:
1. Disable pre-emption during the time when references to data structures
updated using such Two-phase updates are held.
Doesn't this fly in the face of the whole Two-phase system? It seems to
Nigel Gamble wrote:
On Wed, 21 Mar 2001, Keith Owens wrote:
I misread the code, but the idea is still correct. Add a preemption
depth counter to each cpu, when you schedule and the depth is zero then
you know that the cpu is no longer holding any references to quiesced
structures.
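Keith's per-cpu preemption-depth idea, reduced to a user-space sketch (names invented for illustration; the thread itself goes on to point out that migration of a preempted task undermines this scheme, so treat it as a model of the proposal, not a correct design):

```c
/* Each preemption on a cpu bumps a depth counter; each return from
 * preemption drops it. A schedule() that observes depth == 0 means
 * that cpu holds no references to quiesced structures. */

#define NR_CPUS_SIM 2

static int preempt_depth[NR_CPUS_SIM];

static void preempt_enter(int cpu) { preempt_depth[cpu]++; }
static void preempt_exit(int cpu)  { preempt_depth[cpu]--; }

/* True when this cpu can be counted as quiescent at schedule(). */
static int cpu_quiescent(int cpu)
{
    return preempt_depth[cpu] == 0;
}
```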
Hi George,
george anzinger wrote:
Exactly so. The method does not depend on the sum of preemption being
zip, but on each potential reader (writers take locks) passing thru a
"sync point". Your notion of waiting for each task to arrive
"naturally" at schedule() would work. It is, in
On Thu, 22 Mar 2001, Rusty Russell wrote:
Nigel's "traverse the run queue and mark the preempted" solution is
actually pretty nice, and cheap. Since the runqueue lock is grabbed,
it doesn't require icky atomic ops, either.
You'd have to mark both the preempted tasks, and the tasks currently
In message [EMAIL PROTECTED] you write:
Keith Owens writes:
Or have I missed something?
Nope, it is a fundamental problem with such kernel pre-emption
schemes. As a result, it would also break our big-reader locks
(see include/linux/brlock.h).
Good point: holding a brlock has to
george anzinger writes:
> By the by, if a preemption lock is all that is needed the patch defines
> it and it is rather fast (an inc going in and a dec & test coming
> out). A lot faster than a spin lock with its "LOCK" access. A preempt
> lock does not need to be "LOCK"ed because the
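The "inc going in, dec & test coming out" preempt lock george describes can be sketched in user space like this (all names are invented; the real patch's details may differ, and the single-threaded model below is only meant to show why no LOCKed bus cycle is needed when the count is private to the running cpu):

```c
/* Preempt lock model: a plain counter, plus a pending-resched flag
 * that is only honoured when the outermost section is exited. */

static int preempt_count_sim;
static int resched_pending;
static int resched_taken;

static void preempt_disable_sim(void)
{
    preempt_count_sim++;           /* plain inc: no atomic op needed */
}

static void preempt_enable_sim(void)
{
    /* dec & test: if this is the outermost section and a preemption
     * was requested meanwhile, take it now. */
    if (--preempt_count_sim == 0 && resched_pending) {
        resched_pending = 0;
        resched_taken++;
    }
}
```

Nesting works naturally: an inner enable just decrements, and only the outermost enable can actually allow the deferred preemption.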
"David S. Miller" wrote:
>
> Keith Owens writes:
> > Or have I missed something?
>
> Nope, it is a fundamental problem with such kernel pre-emption
> schemes. As a result, it would also break our big-reader locks
> (see include/linux/brlock.h).
>
> Basically, anything which uses
Keith Owens writes:
> Or have I missed something?
Nope, it is a fundamental problem with such kernel pre-emption
schemes. As a result, it would also break our big-reader locks
(see include/linux/brlock.h).
Basically, anything which uses smp_processor_id() would need to
be holding some lock
Nigel Gamble wrote:
> A task that has been preempted is on the run queue and can be
> rescheduled on a different CPU, so I can't see how a per-CPU counter
> would work. It seems to me that you would need a per run queue
> counter, like the example I gave in a previous posting.
Ouch. What about
On Wed, 21 Mar 2001 00:04:56 -0800,
george anzinger <[EMAIL PROTECTED]> wrote:
>Exactly so. The method does not depend on the sum of preemption being
>zip, but on each potential reader (writers take locks) passing thru a
>"sync point". Your notion of waiting for each task to arrive
Nigel Gamble wrote:
>
> On Wed, 21 Mar 2001, Keith Owens wrote:
> > I misread the code, but the idea is still correct. Add a preemption
> > depth counter to each cpu, when you schedule and the depth is zero then
> > you know that the cpu is no longer holding any references to quiesced
> >
Nigel Gamble wrote:
On Wed, 21 Mar 2001, Keith Owens wrote:
I misread the code, but the idea is still correct. Add a preemption
depth counter to each cpu, when you schedule and the depth is zero then
you know that the cpu is no longer holding any references to quiesced
structures.
On Wed, 21 Mar 2001 00:04:56 -0800,
george anzinger [EMAIL PROTECTED] wrote:
Exactly so. The method does not depend on the sum of preemption being
zip, but on each potential reader (writers take locks) passing thru a
"sync point". Your notion of waiting for each task to arrive
"naturally" at
Nigel Gamble wrote:
A task that has been preempted is on the run queue and can be
rescheduled on a different CPU, so I can't see how a per-CPU counter
would work. It seems to me that you would need a per run queue
counter, like the example I gave in a previous posting.
Ouch. What about all
On Wed, Mar 21, 2001 at 08:19:54PM +1100, Keith Owens wrote:
Ouch. What about all the per cpu structures in the kernel, how do you
handle them if a preempted task can be rescheduled on another cpu?
int count[NR_CPUS], *p;
p = count+smp_processor_id(); /* start on cpu 0, count[0] */
if
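Keith's hazard can be demonstrated with a minimal user-space model: a pointer derived from the cpu id before a preemption point keeps addressing the old cpu's slot after the task migrates. Everything here is illustrative (`current_cpu` stands in for `smp_processor_id()`, and the "migration" is just an assignment):

```c
/* The per-cpu pointer goes stale across a simulated migration. */

#define NCPUS 2

static int count[NCPUS];
static int current_cpu;   /* stands in for smp_processor_id() */

static void buggy_increment(void)
{
    int *p = &count[current_cpu]; /* pointer computed on the old cpu */
    current_cpu = 1;              /* simulated preemption + migration */
    (*p)++;                       /* lands in the old cpu's slot */
}
```

With preemption suppressed across the computation and use of `p`, the migration could not happen between the two steps and the update would stay on the right cpu.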
On Wed, 21 Mar 2001, Andrew Morton wrote:
It's a problem for uniprocessors as well.
Example:
#define current_cpu_data boot_cpu_data
#define pgd_quicklist (current_cpu_data.pgd_quick)
extern __inline__ void free_pgd_fast(pgd_t *pgd)
{
*(unsigned long *)pgd = (unsigned long)
On Wed, 21 Mar 2001, David S. Miller wrote:
Basically, anything which uses smp_processor_id() would need to
be holding some lock so as to not get pre-empted.
Not necessarily. Another solution for the smp_processor_id() case is
to ensure that the task can only be scheduled on the current CPU
"David S. Miller" wrote:
Keith Owens writes:
Or have I missed something?
Nope, it is a fundamental problem with such kernel pre-emption
schemes. As a result, it would also break our big-reader locks
(see include/linux/brlock.h).
Basically, anything which uses smp_processor_id()
On Wed, 21 Mar 2001, Keith Owens wrote:
> I misread the code, but the idea is still correct. Add a preemption
> depth counter to each cpu, when you schedule and the depth is zero then
> you know that the cpu is no longer holding any references to quiesced
> structures.
A task that has been
On Tue, 20 Mar 2001 16:48:01 -0800 (PST),
Nigel Gamble <[EMAIL PROTECTED]> wrote:
>On Tue, 20 Mar 2001, Keith Owens wrote:
>> The preemption patch only allows preemption from interrupt and only for
>> a single level of preemption. That coexists quite happily with
>> synchronize_kernel() which
On Tue, 20 Mar 2001, Keith Owens wrote:
> The preemption patch only allows preemption from interrupt and only for
> a single level of preemption. That coexists quite happily with
> synchronize_kernel() which runs in user context. Just count user
> context schedules (preempt_count == 0), not
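Keith's suggestion of counting only user-context schedules can be modelled loosely as follows. This is a sketch under assumptions, not the actual patch: only schedules taken with `preempt_count == 0` advance a per-cpu quiescence counter, and a synchronize_kernel()-style waiter compares against a snapshot (names invented for illustration):

```c
/* Only voluntary, user-context schedules count as quiescent states. */

#define NCPU 2

static unsigned long quiesc[NCPU];

static void sched_event(int cpu, int preempt_count)
{
    if (preempt_count == 0)    /* voluntary, user-context schedule */
        quiesc[cpu]++;
    /* a preemption (preempt_count > 0) is not a quiescent state */
}

/* The waiter snapshots each counter, then polls until every cpu has
 * moved past its snapshot, i.e. has passed through a quiescent state. */
static int all_cpus_past(const unsigned long snap[NCPU])
{
    for (int i = 0; i < NCPU; i++)
        if (quiesc[i] == snap[i])
            return 0;
    return 1;
}
```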
Hi,
One little readability thing I found.
The prev->state TASK_ value is mostly used as a plain value
but the new TASK_PREEMPTED is or:ed together with whatever was there.
Later when we switch to check the state it is checked against TASK_PREEMPTED
only. Since TASK_RUNNING is 0 it works OK
In message [EMAIL PROTECTED] you write:
Kernel preemption is not allowed while spinlocks are held, which means
that this patch alone cannot guarantee low preemption latencies. But
as long held locks (in particular the BKL) are replaced by finer-grained
locks, this patch will enable lower
On Tue, 20 Mar 2001 19:43:50 +1100,
Rusty Russell [EMAIL PROTECTED] wrote:
The third is that preemptibility conflicts with the naive
quiescent-period approach proposed for module unloading in 2.5, and
useful for several other things (eg. hotplugging CPUs). This method
relies on knowing that when a
On Tue, 20 Mar 2001, Rusty Russell wrote:
I can see three problems with this approach, only one of which
is serious.
The first is code which is already SMP unsafe is now a problem for
everyone, not just the 0.1% of SMP machines. I consider this a good
thing for 2.5 though.
So do I.