RE: Success! critical_enter()/critical_exit() revamp (was Re: m

2002-02-25 Thread John Baldwin


On 24-Feb-02 Matthew Dillon wrote:
 Apart from all the assembly coding, there were two serious issues.
 fork_exit() assumes that interrupts are hard-disabled on entry.  I
 readjusted the code such that the trampoline assembly initialized
 the td_critnest count so it could STI prior to calling fork_exit().

Err, this is a feature.  It isn't safe for us to take an interrupt until we
have safely cleaned up and released sched_lock.
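
Concretely, the ordering constraint looks something like this; fork_exit_sketch()
below is only an illustration of the idea (the real fork_exit() differs in
detail), with the bookkeeping compressed into comments:

    void
    fork_exit_sketch(void (*callout)(void *, struct trapframe *), void *arg,
        struct trapframe *frame)
    {
            struct thread *td = curthread;

            /*
             * The new thread starts out owning sched_lock, inherited from
             * the cpu_switch() that put it on the CPU, and with interrupts
             * hard-disabled.  If we took an interrupt here, the handler
             * could go after sched_lock while we still hold it.
             */
            sched_lock.mtx_lock = (uintptr_t)td;    /* claim the inherited lock */
            sched_lock.mtx_recurse = 0;

            mtx_unlock_spin(&sched_lock);           /* release it first ...      */
            enable_intr();                          /* ... only then allow STI   */

            callout(arg, frame);                    /* e.g. fork_return()        */
    }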

 The second issue is that cpu_switch() does not save/restore the 
 PSL_I (interrupt disablement mask).  I added a PSL word to the PCB
 structure to take care of the problem.  Without this, if a thread
 switches away while holding interrupts hard-disabled, the
 next thread will inherit the hard disablement.  I saw the sti's
 you added in various places but I figured the best solution was
 to simply save/restore the state.  The original code didn't have
 this problem because interrupts were always hard-disabled while
 holding the sched_lock and, of course, cpu_switch() is always
 called while holding sched_lock.  (Similarly, icu_lock needed 
 hard-disablements wrapped around it for the same reason, because
 a level interrupt still needs to be able to get icu_lock in order to
 defer itself).

That's because the state of PSL_I is saved in td_savecrit.  Note that the
only caller of cpu_switch() is mi_switch().  You need to look at the two
functions together.  Much of what used to be in asm is now in C to make the
code more portable, so that supporting other architectures is easier.
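
On i386 that saved state is simply a copy of %eflags, which is where PSL_I
lives.  Roughly (treat these bodies as a sketch; the committed versions may
differ in detail):

    static __inline critical_t
    cpu_critical_enter(void)
    {
            critical_t eflags;

            eflags = read_eflags();         /* capture PSL_I with the rest of eflags */
            disable_intr();                 /* cli */
            return (eflags);
    }

    static __inline void
    cpu_critical_exit(critical_t eflags)
    {
            write_eflags(eflags);           /* put PSL_I back the way it was */
    }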

While the state for pending interrupts should be per-CPU, you still need to
use some per-thread state, since we can call the switch code with interrupts
disabled (this happens when we preempt with interrupts disabled).

Speaking of preemption. :)  critical_enter/exit are specifically divided into
two portions: an MD portion and an MI portion.  The preemption patches grow
the MI functions a bit to handle deferred preemptions.  It would be best if,
instead of getting rid of the MI functions, you worked within the existing
framework and architecture and changed the implementation of
cpu_critical_enter/exit.  Note that you have a fully opaque type, critical_t,
that lets you store whatever per-thread state you wish.  You are also free to
use whatever per-CPU state you need.
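
Put another way, the MI half already gives you the hook; here is a rough
sketch using the names above (td_critnest, td_savecrit, critical_t) rather
than the exact committed code:

    void
    critical_enter(void)
    {
            struct thread *td = curthread;

            if (td->td_critnest == 0)
                    td->td_savecrit = cpu_critical_enter();  /* MD: returns a critical_t */
            td->td_critnest++;
    }

    void
    critical_exit(void)
    {
            struct thread *td = curthread;

            if (td->td_critnest == 1) {
                    td->td_critnest = 0;
                    cpu_critical_exit(td->td_savecrit);      /* MD: consumes the critical_t */
            } else
                    td->td_critnest--;
    }

The preemption patches only grow these MI functions; whatever the MD
cpu_critical_enter() stuffs into the critical_t is entirely its own business.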

 In any case, I have successfully built the world on a -current
 SMP + apic_vector system.  Tomorrow I will clean up the UP icu_vector
 code to match the apic_vector code and post the results.  I also
 have a sysctl that allows developers to toggle the mode for testing
 purposes (old way or new way).
 
 Finally, I took your suggestion about not trying to combine the
 masks.  I have a 'some sort of interrupt is pending' variable, then
 I have fpending (for fast interrupts), ipending (for normal interrupt
 threads), and spending (which I use for stat and hardclock
 forwarding).  They're all per-cpu entities, of course.  unpend()
 prioritizes them.
 
 In any case, I'll post it all tomorrow after I've got icu_vector cleaned
 up.  One of the best things about this patch set is that it is really
 flexible.  We should be able to really make interrupts fly.  In fact,
 it should even be possible for one cpu to steal another cpu's pending
 interrupt(s) fairly trivially, without having to resort to IPIs.

Eww, how does that work unless the other CPU uses an atomic op to clear the
bit?  But then the local CPU also has to use atomic ops to keep the data
consistent.
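
To spell out what that costs (the struct layout and steal_ipending() below
are made-up illustrations; fpending/ipending/spending are just the names from
your description):

    struct pcpu_pending {
            volatile u_int  fpending;       /* fast interrupts */
            volatile u_int  ipending;       /* normal interrupt threads */
            volatile u_int  spending;       /* stat/hardclock forwarding */
    };

    /*
     * If only the owning CPU ever touches its masks (with interrupts
     * disabled), plain loads and stores are enough.  The moment another
     * CPU may steal bits, both sides have to go through atomic ops.
     */
    static u_int
    steal_ipending(struct pcpu_pending *other)
    {
            /* Atomically grab and clear every pending bit at once. */
            return (atomic_readandclear_int(&other->ipending));
    }

As soon as any bit can be cleared remotely, even the purely local unpend()
path has to pay for atomic ops on every access.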


-- 

John Baldwin [EMAIL PROTECTED]  http://www.FreeBSD.org/~jhb/
Power Users Use the Power to Serve!  -  http://www.FreeBSD.org/




Re: RE: Success! critical_enter()/critical_exit() revamp (was Re: m

2002-02-25 Thread Matthew Dillon


:
:
:On 24-Feb-02 Matthew Dillon wrote:
: Apart from all the assembly coding, there were two serious issues.
: fork_exit() assumes that interrupts are hard-disabled on entry.  I
: readjusted the code such that the trampoline assembly initialized
: the td_critnest count so it could STI prior to calling fork_exit().
:
:Err, this is a feature.  It isn't safe for us to take an interrupt until we
:have safely cleaned up and released sched_lock.

This and your other comments only apply to systems where critical_enter()
has a hard interrupt-disablement side effect.  My critical_*() patch set
(partially based on BDE's work) removes this requirement and exposes two
flaws in the existing MI code (discussed on the lists already).  One is a
flaw in fork_exit(), which makes improper assumptions about the nature of
the trampoline code in order to close a td_critnest/sched_lock
initialization race; the other is a flaw in ast(), which uses the
cpu_critical_*() routines as a hack to tightly couple it to the MD
doreti().  In fact, the assembly that calls ast() should be responsible
for placing the machine in the proper critical state.
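
What I mean is that the contract ought to look more like this; the body below
is purely a sketch (only ast() and td_critnest are meant to match real names):

    void
    ast(struct trapframe *framep)
    {
            struct thread *td = curthread;

            /*
             * The MD code that calls us (doreti on i386) is responsible
             * for enabling interrupts and leaving any critical section
             * before the call, so nothing in here ever needs to reach
             * for cpu_critical_enter()/cpu_critical_exit().
             */
            KASSERT(td->td_critnest == 0,
                ("ast() called inside a critical section"));

            /* ... purely MI work: signals, rescheduling, profiling ... */
    }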

These are not 'bugs' per se, because the flaws remain consistent
within their domain.  But they are flaws.

I am rather disturbed that you keep explaining away the flaws and the
MD/MI cross-pollution as 'features'.  They most certainly are not
features.  I am also disturbed that these assumptions about special
interactions were not documented *anywhere* in the critical_*() or
cpu_critical_*() code.  And, finally, I am extremely disturbed by the
logic you use to justify both the two-level critical_*() API and the
hacks in fork_exit() and ast().

In any case, it's going to become moot very soon now as I clean it up.

-Matt

