Alexander Viro wrote:
>
> On Fri, 8 Jun 2001, Bryan Henderson wrote:
>
> > >IMO preemptive kernel patches are an
> > >exercise in masturbation (bad algorithm that can be preempted at any
> > point
> > >is still a bad algorithm and should be fixed, not hidden)
> >
> > What does this mean? What is a preemptive kernel patch and what kind of
> > bad algorithm are you contemplating, and what does it mean to hide one?
> >
> > You're apparently referring back to some well known argument, but I'm not
> > familiar with it myself.
>
> Sigh... Long story. Basically, there was a bunch of patches floating
> around, starting with "low-latency" (aka. "let's stick schedule() in
> every place we see in profiles") and continued with "let's count
> spinlocks taken and allow to preempt whenever the counter is 0". All of
> them were advertised to solve the problems with high latency, but
> apparently people who were pushing that stuff completely missed a
> simple observation: when the kernel spends too much time in some loop,
> that's a symptom of a problem, not the problem itself...
Offtopic, but not really accurate. In the great majority
of cases, the kernel is doing real, useful work while it's
showing poor latency: flush_dirty_buffers(), page_launder(),
generic_file_read/write(), sync_old_buffers(), invalidate_list(),
ext2_free_data(), etc.
Large amounts of work to do -> batch it up -> best throughput -> bad latency
Without radical algorithmic changes all over the place,
there are only three ways of reducing these sources of
latency:
1: Preemption
2: Batch the work up into little bits
3: Keep the work batched, but break it up on demand, by
polling need_resched.
I don't see any way in this world that we'll restructure these
parts of the kernel so they always execute in less than 500
microseconds (100x faster), so approaches 1) and 3) remain
legitimate.
-
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to [EMAIL PROTECTED]