> > But if you are suppressing preemption in all read-side critical sections,
> > then wouldn't any already-preempted tasks be guaranteed to -not- be in
> > a read-side critical section, and therefore be guaranteed to be unaffected
> > by the update (in other words, wouldn't such tasks not need t
In message you write:
> > Already preempted tasks.
>
> But if you are suppressing preemption in all read-side critical sections,
> then wouldn't any already-preempted tasks be guaranteed to -not- be in
> a read-side critical section, and there
On Tue, 10 Apr 2001 [EMAIL PROTECTED] wrote:
> On Tue, Apr 10, 2001 at 09:08:16PM -0700, Paul McKenney wrote:
> > > Disabling preemption is a possible solution if the critical section is short
> > > - less than 100us - otherwise preemption latencies become a problem.
> >
> > Seems like a reas
On Tue, 10 Apr 2001, Paul McKenney wrote:
> > Disabling preemption is a possible solution if the critical section is short
> > - less than 100us - otherwise preemption latencies become a problem.
>
> Seems like a reasonable restriction. Of course, this same limit
> applies to locks and int
On Tue, Apr 10, 2001 at 09:08:16PM -0700, Paul McKenney wrote:
> > Disabling preemption is a possible solution if the critical section is short
> > - less than 100us - otherwise preemption latencies become a problem.
>
> Seems like a reasonable restriction. Of course, this same limit applies
>
> On Tue, 10 Apr 2001, Paul McKenney wrote:
> > The algorithms we have been looking at need to have absolute guarantees
> > that earlier activity has completed. The most straightforward way to
> > guarantee this is to have the critical-section activity run with preemption
> > disabled. Most of
On Tue, 10 Apr 2001, Paul McKenney wrote:
> The algorithms we have been looking at need to have absolute guarantees
> that earlier activity has completed. The most straightforward way to
> guarantee this is to have the critical-section activity run with preemption
> disabled. Most of these code
> > As you've observed, with the approach of waiting for all pre-empted tasks
> > to synchronize, the possibility of a task staying pre-empted for a long
> > time could affect the latency of an update/synchronize (though it's hard
> > for me to judge how likely that is).
>
> It's very unlikely on a
On Mon, 9 Apr 2001 [EMAIL PROTECTED] wrote:
> As you've observed, with the approach of waiting for all pre-empted tasks
> to synchronize, the possibility of a task staying pre-empted for a long
> time could affect the latency of an update/synchronize (though it's hard for
> me to judge how likely th
>One question:
>isn't it the case that the alternative to using synchronize_kernel()
>is to protect the read side with explicit locks, which will themselves
>suppress preemption? If so, why not just suppress preemption on the read
>side in preemptible kernels, and thus gain the simpler implement
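To make the question concrete, here is a minimal sketch of read-side preemption suppression, assuming the per-task counter such a patch provides; preempt_disable()/preempt_enable() are the names that later became standard and are used here purely for illustration, not as the patch's actual API:

/*
 * Hypothetical read-side critical section with preemption suppressed.
 * While the counter is raised the task cannot be preempted, so any
 * task seen sleeping or voluntarily scheduled is known to be outside
 * all read-side critical sections.
 */
struct foo {
        struct foo *next;
        int data;
};

static struct foo *foo_list;            /* updated by a two-phase scheme */

int foo_present(int key)
{
        struct foo *p;
        int found = 0;

        preempt_disable();              /* read side: no lock taken */
        for (p = foo_list; p != NULL; p = p->next) {
                if (p->data == key) {
                        found = 1;
                        break;
                }
        }
        preempt_enable();               /* a quiescent state may follow */
        return found;
}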
> > > > 2. Isn't it possible to get in trouble even on a UP if a task
> > > > is preempted in a critical region? For example, suppose the
> > > > preempting task does a synchronize_kernel()?
> > >
> > > Ugly. I guess one way to solve it would be to readd the 2.2 scheduler
> > > taskq
> > I see your point here, but need to think about it. One question:
> > isn't it the case that the alternative to using synchronize_kernel()
> > is to protect the read side with explicit locks, which will themselves
> > suppress preemption? If so, why not just suppress preemption on the read
>
On Fri, Apr 06, 2001 at 06:25:36PM -0700, Paul McKenney wrote:
> I see your point here, but need to think about it. One question:
> isn't it the case that the alternative to using synchronize_kernel()
> is to protect the read side with explicit locks, which will themselves
> suppress preemption?
In message you write:
> > Priority inversion is not handled in Linux kernel ATM BTW, there
> > are already situations where a realtime task can cause a deadlock
> > with some lower priority system thread (I believe there is at least
> > one cas
Andi, thank you for the background! More comments interspersed...
> On Fri, Apr 06, 2001 at 04:52:25PM -0700, Paul McKenney wrote:
> > 1. On a busy system, isn't it possible for a preempted task
> > to stay preempted for a -long- time, especially if there are
> > lots of real-time t
Hello,
On Fri, Apr 06, 2001 at 04:52:25PM -0700, Paul McKenney wrote:
> 1. On a busy system, isn't it possible for a preempted task
> to stay preempted for a -long- time, especially if there are
> lots of real-time tasks in the mix?
The problem you're describing is probably consider
Please accept my apologies if I am missing something basic, but...
1. On a busy system, isn't it possible for a preempted task
to stay preempted for a -long- time, especially if there are
lots of real-time tasks in the mix?
2. Isn't it possible to get in trouble even on a UP if a t
In message <[EMAIL PROTECTED]> you write:
> > Setting a running task's flags brings races, AFAICT, and checking
> > p->state is NOT sufficient, consider wait_event(): you need p->has_cpu
> > here I think.
>
> My thought here was that if p->state is anything other than TASK_RUNNING
> or TASK_RUNN
In message <[EMAIL PROTECTED]> you write:
> On a higher level, I think the scanning of the run list to set flags and
> counters is a bit off.
[snip standard refcnt scheme]
For most things, refcnts are great. I use them in connection
tracking. But when writes can be insanely slow (eg. once per
Nigel Gamble wrote:
>
> On Sat, 31 Mar 2001, george anzinger wrote:
> > I think this should be:
> > if (p->has_cpu || (p->state & TASK_PREEMPTED)) {
> > to catch tasks that were preempted with other states.
>
> But the other states are all part of the state change that happens at
On Sat, 31 Mar 2001, Rusty Russell wrote:
> >         if (p->state == TASK_RUNNING ||
> >             (p->state == (TASK_RUNNING|TASK_PREEMPTED))) {
> >                 p->flags |= PF_SYNCING;
>
> Setting a running task's flags brings races, AFAICT, and checking
> p->state
On Sat, 31 Mar 2001, george anzinger wrote:
> I think this should be:
> if (p->has_cpu || (p->state & TASK_PREEMPTED)) {
> to catch tasks that were preempted with other states.
But the other states are all part of the state change that happens at a
non-preemptive schedule() point, a
Rusty Russell wrote:
>
> In message <[EMAIL PROTECTED]> you write:
> > Here is an attempt at a possible version of synchronize_kernel() that
> > should work on a preemptible kernel. I haven't tested it yet.
>
> It's close, but...
>
> Those who suggest that we don't do preemption on SMP make thi
In message <[EMAIL PROTECTED]> you write:
> Here is an attempt at a possible version of synchronize_kernel() that
> should work on a preemptible kernel. I haven't tested it yet.
It's close, but...
Those who suggest that we don't do preemption on SMP make this much
easier (synchronize_kernel() is
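For reference, the "much easier" non-preemptible case can be sketched as follows; this is a reconstruction under 2.4-era assumptions (cpus_allowed as a plain bitmask, smp_num_cpus), not the patch actually posted. Once the caller has run on a CPU, that CPU has context-switched, so it can hold no references from before the call:

/* Sketch: only valid when the kernel is NOT preemptible. */
void synchronize_kernel(void)
{
        unsigned long saved_allowed = current->cpus_allowed;
        int cpu;

        for (cpu = 0; cpu < smp_num_cpus; cpu++) {
                current->cpus_allowed = 1UL << cpu;   /* bind to one CPU */
                while (smp_processor_id() != cpu)
                        schedule();                   /* migrate there   */
        }
        current->cpus_allowed = saved_allowed;
}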
On Wed, 28 Mar 2001 12:51:02 -0800,
george anzinger <[EMAIL PROTECTED]> wrote:
>Dipankar Sarma wrote:
>> 1. Disable pre-emption during the time when references to data structures
>> updated using such Two-phase updates are held.
>
>Doesn't this fly in the face of the whole Two-phase system? I
On Tue, 20 Mar 2001, Nigel Gamble wrote:
> On Tue, 20 Mar 2001, Rusty Russell wrote:
> > Thoughts?
>
> Perhaps synchronize_kernel() could take the run_queue lock, mark all the
> tasks on it and count them. Any task marked when it calls schedule()
> voluntarily (but not if it is preempted) is unm
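A sketch of the scheme Nigel describes, with invented identifiers (PF_SYNCING and sync_count are illustrative names, not from any posted patch): mark and count the runnable tasks under the run-queue lock, have schedule() unmark a task only when it switches voluntarily, and wait for the count to drain.

static atomic_t sync_count = ATOMIC_INIT(0);

void synchronize_kernel(void)
{
        struct task_struct *p;

        spin_lock_irq(&runqueue_lock);
        for_each_task(p) {              /* 2.4 iterator; the TASK_RUNNING
                                           test approximates "on the run
                                           queue" */
                if (p != current && p->state == TASK_RUNNING) {
                        p->flags |= PF_SYNCING;
                        atomic_inc(&sync_count);
                }
        }
        spin_unlock_irq(&runqueue_lock);

        while (atomic_read(&sync_count) > 0)
                schedule();             /* wait for the marks to drain */
}

/* Called from schedule() only when the switch is voluntary, i.e. the
 * task blocked or yielded rather than being preempted: */
static inline void note_voluntary_switch(struct task_struct *prev)
{
        if (prev->flags & PF_SYNCING) {
                prev->flags &= ~PF_SYNCING;
                atomic_dec(&sync_count);
        }
}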
On Wed, Mar 28, 2001 at 12:51:02PM -0800, george anzinger wrote:
> Dipankar Sarma wrote:
> >
> > Also, a task could be preempted and then rescheduled on the same cpu making
> > the depth counter 0 (right ?), but it could still be holding references
> > to data structures to be updated usi
Dipankar Sarma wrote:
>
> Nigel Gamble wrote:
> >
> > On Wed, 21 Mar 2001, Keith Owens wrote:
> > > I misread the code, but the idea is still correct. Add a preemption
> > > depth counter to each cpu, when you schedule and the depth is zero then
> > > you know that the cpu is no longer holding a
Hi George,
george anzinger wrote:
>
> Exactly so. The method does not depend on the sum of preemption being
> zip, but on each potential reader (writers take locks) passing thru a
> "sync point". Your notion of waiting for each task to arrive
> "naturally" at schedule() would work. It is, in
Nigel Gamble wrote:
>
> On Wed, 21 Mar 2001, Keith Owens wrote:
> > I misread the code, but the idea is still correct. Add a preemption
> > depth counter to each cpu, when you schedule and the depth is zero then
> > you know that the cpu is no longer holding any references to quiesced
> > struct
On Thu, 22 Mar 2001, Rusty Russell wrote:
> Nigel's "traverse the run queue and mark the preempted" solution is
> actually pretty nice, and cheap. Since the runqueue lock is grabbed,
> it doesn't require icky atomic ops, either.
You'd have to mark both the preempted tasks, and the tasks currentl
In message <[EMAIL PROTECTED]> you write:
>
> Keith Owens writes:
> > Or have I missed something?
>
> Nope, it is a fundamental problem with such kernel pre-emption
> schemes. As a result, it would also break our big-reader locks
> (see include/linux/brlock.h).
Good point: holding a brlock ha
In message <[EMAIL PROTECTED]> you write:
> Nigel Gamble wrote:
> >
> > On Wed, 21 Mar 2001, Keith Owens wrote:
> > > I misread the code, but the idea is still correct. Add a preemption
> > > depth counter to each cpu, when you schedule and the depth is zero then
> > > you know that the cpu is n
On Wed, 21 Mar 2001, Andrew Morton wrote:
> It's a problem for uniprocessors as well.
>
> Example:
>
> #define current_cpu_data boot_cpu_data
> #define pgd_quicklist (current_cpu_data.pgd_quick)
>
> extern __inline__ void free_pgd_fast(pgd_t *pgd)
> {
> *(unsigned long *)pgd = (unsigned
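The quoted helper continues roughly as follows in the 2.4 tree (reconstructed here, so verify against the actual source); the point is that the quicklist is updated with no lock held, which is safe only while the updater cannot be preempted mid-update:

extern __inline__ void free_pgd_fast(pgd_t *pgd)
{
        *(unsigned long *)pgd = (unsigned long) pgd_quicklist;
        pgd_quicklist = (unsigned long *) pgd;  /* push onto the quicklist */
        pgtable_cache_size++;
}

/* With kernel preemption, a task preempted between the load of
 * pgd_quicklist and the store back lets another task run the same
 * lockless sequence and corrupt the list -- even on UP, where
 * current_cpu_data is just boot_cpu_data. */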
On Wed, 21 Mar 2001, David S. Miller wrote:
> Basically, anything which uses smp_processor_id() would need to
> be holding some lock so as to not get pre-empted.
Not necessarily. Another solution for the smp_processor_id() case is
to ensure that the task can only be scheduled on the current CPU
On Wed, Mar 21, 2001 at 08:19:54PM +1100, Keith Owens wrote:
> Ouch. What about all the per cpu structures in the kernel, how do you
> handle them if a preempted task can be rescheduled on another cpu?
>
> int count[NR_CPUS], *p;
> p = count+smp_processor_id(); /* start on cpu 0, &count[0] */
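A plausible continuation of the truncated example (hypothetical, but in the spirit of the message):

int count[NR_CPUS], *p;

p = count + smp_processor_id();  /* start on cpu 0: p == &count[0]      */
/* ... preempted here, rescheduled on cpu 1 ...                         */
(*p)++;                          /* still increments count[0]: cpu 1 is */
                                 /* now writing cpu 0's slot, unlocked  */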
george anzinger writes:
> By the by, if a preemption lock is all that is needed the patch defines
> it and it is rather fast (an inc going in and a dec & test coming
> out). A lot faster than a spin lock with its "LOCK" access. A preempt
> lock does not need to be "LOCK"ed because the only
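The fast path George describes -- an increment on the way in, a decrement and test on the way out, with no bus-locked instruction because the counter is private to the running task -- might look like this (field placement and names are illustrative, borrowed from the later mainline convention):

/* No LOCK prefix needed: only current ever touches its own counter. */
#define preempt_disable()                               \
        do {                                            \
                current->preempt_count++;   /* inc */   \
                barrier();                              \
        } while (0)

#define preempt_enable()                                \
        do {                                            \
                barrier();                              \
                if (--current->preempt_count == 0 &&    \
                    current->need_resched)              \
                        preempt_schedule(); /* dec & test */ \
        } while (0)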
"David S. Miller" wrote:
>
> Keith Owens writes:
> > Or have I missed something?
>
> Nope, it is a fundamental problem with such kernel pre-emption
> schemes. As a result, it would also break our big-reader locks
> (see include/linux/brlock.h).
>
> Basically, anything which uses smp_processor
Keith Owens writes:
> Or have I missed something?
Nope, it is a fundamental problem with such kernel pre-emption
schemes. As a result, it would also break our big-reader locks
(see include/linux/brlock.h).
Basically, anything which uses smp_processor_id() would need to
be holding some lock so
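Concretely, with the 2.4 brlock interface (sketch): br_read_lock() takes the per-CPU slot indexed by smp_processor_id(), so a reader that is preempted and migrated would unlock a different CPU's slot:

#include <linux/brlock.h>

br_read_lock(BR_NETPROTO_LOCK);   /* locks this CPU's slot            */
/* ... a preemption here, followed by rescheduling on another CPU,    */
/* would make the unlock below operate on the WRONG per-CPU slot ...  */
br_read_unlock(BR_NETPROTO_LOCK);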
Nigel Gamble wrote:
> A task that has been preempted is on the run queue and can be
> rescheduled on a different CPU, so I can't see how a per-CPU counter
> would work. It seems to me that you would need a per run queue
> counter, like the example I gave in a previous posting.
Ouch. What about
On Wed, 21 Mar 2001 00:04:56 -0800,
george anzinger <[EMAIL PROTECTED]> wrote:
>Exactly so. The method does not depend on the sum of preemption being
>zip, but on each potential reader (writers take locks) passing thru a
>"sync point". Your notion of waiting for each task to arrive
>"naturally"
Nigel Gamble wrote:
>
> On Wed, 21 Mar 2001, Keith Owens wrote:
> > I misread the code, but the idea is still correct. Add a preemption
> > depth counter to each cpu, when you schedule and the depth is zero then
> > you know that the cpu is no longer holding any references to quiesced
> > struct
On Wed, 21 Mar 2001, Keith Owens wrote:
> I misread the code, but the idea is still correct. Add a preemption
> depth counter to each cpu, when you schedule and the depth is zero then
> you know that the cpu is no longer holding any references to quiesced
> structures.
A task that has been preem
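Keith's idea, and the objection that follows, in sketch form (names invented for illustration): a per-CPU count of preemptions in flight, tested at schedule() time.

static int preempt_depth[NR_CPUS];

/* when a task is preempted on this CPU: */
static inline void preempt_noted(void)
{
        preempt_depth[smp_processor_id()]++;
}

/* when a preempted task resumes: */
static inline void preempt_done(void)
{
        /* The objection: if the task resumes on a DIFFERENT cpu, this
         * decrements the wrong counter, and a zero depth no longer
         * proves the cpu holds no references. */
        preempt_depth[smp_processor_id()]--;
}

/* at a voluntary schedule() on this CPU: */
static inline int cpu_quiescent(void)
{
        return preempt_depth[smp_processor_id()] == 0;
}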
On Tue, 20 Mar 2001 16:48:01 -0800 (PST),
Nigel Gamble <[EMAIL PROTECTED]> wrote:
>On Tue, 20 Mar 2001, Keith Owens wrote:
>> The preemption patch only allows preemption from interrupt and only for
>> a single level of preemption. That coexists quite happily with
>> synchronize_kernel() which ru
On Tue, 20 Mar 2001, Keith Owens wrote:
> The preemption patch only allows preemption from interrupt and only for
> a single level of preemption. That coexists quite happily with
> synchronize_kernel() which runs in user context. Just count user
> context schedules (preempt_count == 0), not pree
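Counting user-context schedules could be sketched like this (illustrative names; assumes schedule() increments the counter only when the switch is not a preemption): the updater snapshots every CPU's counter and waits for each to move past its snapshot.

/* Bumped from schedule() only when preempt_count == 0. */
static volatile unsigned long voluntary_switches[NR_CPUS];

void synchronize_kernel(void)
{
        unsigned long snap[NR_CPUS];
        int i;

        for (i = 0; i < smp_num_cpus; i++)
                snap[i] = voluntary_switches[i];

        for (i = 0; i < smp_num_cpus; i++)
                while (voluntary_switches[i] == snap[i])
                        schedule();     /* until that CPU switches once */
}

A CPU that sits idle never advances its counter, so a real implementation would also need to treat idle CPUs as quiescent.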
On Tue, 20 Mar 2001, Rusty Russell wrote:
> I can see three problems with this approach, only one of which
> is serious.
>
> The first is code which is already SMP unsafe is now a problem for
> everyone, not just the 0.1% of SMP machines. I consider this a good
> thing for 2.5 though.
So
Nigel Gamble wrote:
>
> On Tue, 20 Mar 2001, Roger Larsson wrote:
> > One little readability thing I found.
> > The prev->state TASK_ value is mostly used as a plain value
> > but the new TASK_PREEMPTED is OR-ed together with whatever was there.
> > Later when we switch to check the state it is c
On Tue, 20 Mar 2001, Roger Larsson wrote:
> One little readability thing I found.
> The prev->state TASK_ value is mostly used as a plain value
> but the new TASK_PREEMPTED is OR-ed together with whatever was there.
> Later when we switch to check the state it is checked against TASK_PREEMPTED
> o
Hi,
One little readability thing I found.
The prev->state TASK_ value is mostly used as a plain value
but the new TASK_PREEMPTED is OR-ed together with whatever was there.
Later when we switch to check the state it is checked against TASK_PREEMPTED
only. Since TASK_RUNNING is 0 it works OK but...
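A reconstruction of the pattern being discussed: the flag is OR-ed into prev->state, which happens to preserve TASK_RUNNING only because TASK_RUNNING is 0, and the later check is a mask test rather than an equality test.

/* At preemption time: OR the flag in, preserving the rest of the
 * state (a running task's state simply becomes TASK_PREEMPTED,
 * since TASK_RUNNING == 0): */
prev->state |= TASK_PREEMPTED;

/* Later, a mask test rather than an equality test: */
if (prev->state & TASK_PREEMPTED) {
        /* e.g. leave the task on the run queue (hypothetical action) */
}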
On Tue, 20 Mar 2001 19:43:50 +1100,
Rusty Russell <[EMAIL PROTECTED]> wrote:
>The third is that preemptivity conflicts with the naive
>quiescent-period approach proposed for module unloading in 2.5, and
>useful for several other things (eg. hotplugging CPUs). This method
>relies on knowing that w
In message <[EMAIL PROTECTED]> you write:
> Kernel preemption is not allowed while spinlocks are held, which means
> that this patch alone cannot guarantee low preemption latencies. But
> as long held locks (in particular the BKL) are replaced by finer-grained
> locks, this patch will enable lowe
Hi Pavel,
Thanks for your comments.
On Sat, 17 Mar 2001, Pavel Machek wrote:
> > diff -Nur 2.4.2/arch/i386/kernel/traps.c linux/arch/i386/kernel/traps.c
> > --- 2.4.2/arch/i386/kernel/traps.c Wed Mar 14 12:16:46 2001
> > +++ linux/arch/i386/kernel/traps.c Wed Mar 14 12:22:45 2001
> > @@ -973,7
Hi!
> Here is the latest preemptible kernel patch. It's much cleaner and
> smaller than previous versions, so I've appended it to this mail. This
> patch is against 2.4.2, although it's not intended for 2.4. I'd like
> comments from anyone interested in a low-latency Linux kernel solution
> fo