Re: Patch for critical_enter()/critical_exit() interrupt assembly revamp, please review!

2002-02-26 Thread Terry Lambert

Matthew Dillon wrote:
 As it is, I have invested a great
 deal of time and effort on this patch and it is damn well going to go
 in so I can move on.

Too bad you don't have your own P4 branch... then this would
not hold you up.

(ducks)

;^)

-- Terry

To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-current in the body of the message



Re: Patch for critical_enter()/critical_exit() interrupt assembly revamp, please review!

2002-02-25 Thread Bruce Evans

On Sun, 24 Feb 2002, Matthew Dillon wrote:

 ...

 :If so, I'm wondering if this is even possible on alpha, where we don't
 :have direct access to the hardware.  However, according to the pseudo
 :code for rti in the Brown book, munging with the saved PSL in
 :trapframe to set the IPL should work.  Hmm..
 :
 :Drew

 Well, on IA32 there are effectively two types of interrupts (edge
 and level triggered), and three interrupt masks (the processor
 interrupt disable, a master bit mask, and then the per-device
 masks), and two ways of handling interrupts (as a scheduled
 interrupt thread or as a 'fast' interrupt).

"master bit mask" seems to be an unusual name for the i386's ICU or
APIC (PIC or APIC in full i386-speak).

 Note that the masking requirement is already part and parcel in
 -current because most interrupts are now scheduled.  The hardware
 interrupt routine must schedule the interrupt thread and then
 mask the interrupt since it is not dealing with it right then
 and there and must be able to return to whatever was running before.
 The interrupt thread will unmask the interrupt when it is through
 processing it.

Some more points related to this:
- a mask outside of the CPU is required for SMP.
- I believe interrupt masks and not levels are a fundamental requirement
  for proper scheduling as -current does it.  A set of levels like the
  alpha's IPL doesn't work right because it doesn't permit masking
  individual interrupts (that have occurred) so that the kernel can
  keep running a thread whose priority is higher than that of the
  thread that will handle the interrupt, all according to the kernel's
  idea of priorities (which might differ significantly from the IPL's
  idea).  There are also implementation problems.  The IPL must be
  treated like an ICU and not restored reentrantly like IPLs usually
  are.
- The worst case for priorities is when there is only a 2-state IPL,
  which is what an i386 without an ICU/APIC has.  Then there can be
  only 2 really different kernel interrupt priorities (high and low),
  and the kernel ithread scheduling is just a waste of time.  A 2-
  state IPL would obviously be bad for SMP, but for UP I often feel
  that it is the case that should be optimized -- this would force all
  critical regions and interrupt handlers to be short, which would be
  good for all cases.

Bruce





Re: Patch for critical_enter()/critical_exit() interrupt assembly revamp, please review!

2002-02-25 Thread Doug Rabson

On Sun, 24 Feb 2002, Andrew Gallatin wrote:


 I'm not fluent in x86 asm, so can you confirm for me what you're
 attempting to do here?

 I think you're making critical_enter()/critical_exit() cheaper by not
 actually messing with the interrupt hardware when they're called.
 Very much like what we do in 4-stable.

 Instead, you set/clear a per-cpu flag (or incr/decr a counter). If an
 interrupt comes in when you're inside a critical section, you make
 note of it and mask interrupts on that cpu.  In critical_exit(),
 you run the interrupt handler if there's a pending interrupt, and
 unmask interrupts.  If there's not a pending interrupt, you just clear
 the flag (or decrement a counter).

 Is that basically it?

 If so, I'm wondering if this is even possible on alpha, where we don't
 have direct access to the hardware.  However, according to the pseudo
 code for rti in the Brown book, munging with the saved PSL in
 trapframe to set the IPL should work.  Hmm..

I think we can tweak the saved PSL and it should be able to mask
interrupts in the same way as for x86. This should be quite easy for ia64
too.

-- 
Doug Rabson Mail:  [EMAIL PROTECTED]
Phone: +44 20 8348 6160






Re: Patch for critical_enter()/critical_exit() interrupt assembly revamp, please review!

2002-02-25 Thread Matthew Dillon

Unless an unforeseen problem arises, I am going to commit this tomorrow
and then start working on a cleanup patch.   I have decided to
keep the sysctl instrumentation intact for the moment so people who
experience panics or lockups can turn it off to see whether that was the
cause or not.

The cleanup patch will deal with moving the critical_*() functions
from MI to MD, inlining, and the removal of the (old) CRITICAL_FORK and
(new) MACHINE_CRITICAL_ENTER hacks.  I'll probably wind up adding
machine/critical.h (i.e. /usr/src/sys/arch/include/critical.h)
and for i386 sys/i386/i386/critical.c to hold unpend() and other
support routines.

I've looked at cleaning up cpu_critical_enter() and exit but its use
pollutes a number of MI source files: kern/kern_ktr.c, kern/subr_trap.c,
and kern/subr_witness.c are all misusing the functions.  If these
functions can safely be changed to use critical_enter() and critical_exit()
I will rename the i386's cpu_critical*() functions into intr_disable()
and intr_restore() and then use the new names for internal i386 use.

Is there any reason why the above uses of cpu_critical*() cannot be
changed to use standard critical*()?  I believe we *always* have a 
curthread structure to play with so it should just work.

-Matt





Re: Patch for critical_enter()/critical_exit() interrupt assembly revamp, please review!

2002-02-25 Thread Bruce Evans

On Mon, 25 Feb 2002, Matthew Dillon wrote:

 Unless an unforeseen problem arises, I am going to commit this tomorrow
 and then start working on a cleanup patch.   I have decided to

Please wait for jhb's opinion on it.  He seems to be offline again.
I think he has plans, and maybe even code, for putting more code into critical_enter().
I think we don't agree with these plans, but they are just as valid
as ours, and our versions undo many of his old changes.

 I've looked at cleaning up cpu_critical_enter() and exit but its use
 pollutes a number of MI source files: kern/kern_ktr.c, kern/subr_trap.c,
 and kern/subr_witness.c are all misusing the functions.  If these
 functions can safely be changed to use critical_enter() and critical_exit()
 I will rename the i386's cpu_critical*() functions into intr_disable()
 and intr_restore() and then use the new names for internal i386 use.

It is already one layer higher than the MD versions for i386's at least.
The MD versions are disable_intr() (should be cli() to inhibit use in MI
code), enable_intr() (should be sti() ...), read_eflags() and write_eflags().
cpu_critical*() is supposed to be the version with an MI interface and
semantics, but one level lower than critical*().  In my version,
critical*() essentially doesn't exist and cpu_critical*() is only used
by fast interrupt handlers and hacks.  The MI code in kern mainly depends
on the MD function (with an MI interface and semantics) unpend().

 Is there any reason why the above uses of cpu_critical*() cannot be
 changed to use standard critical*()?  I believe we *always* have a
 curthread structure to play with so it should just work.

I think this would mostly work on i386's.  It would more obviously work
if td_critnest were a per-cpu global and not in the thread struct.  It
essentially is a per-cpu global, but we optimize it by putting it in
the thread struct.  But ktr, especially, is very low level and should
work in the middle of a thread switch.

According to jake, it would only work on sparc64's if critical*() does
something hardware-related.  Lazy masking of interrupts is partly in
hardware, so the hardware must be talked to to turn it on and off.  On
i386's, the masking is in software, so it can use td_critnest as a
flag.

The use of cpu_critical*() in subr_trap.c is a bit trickier/kludgier.
We need to return from the interrupt handler atomically with clearing
the lazy-masking flag.  The hardware interrupt enable flag must be
used for this on i386's.  The code for this is logically:

In ast:
/* Prevent changes to flags while we are deciding if they are set. */
critical_enter();

while ((ke->ke_flags & (KEF_ASTPENDING | KEF_NEEDRESCHED)) != 0) {
critical_exit();
...
/* As above. */
critical_enter();
}

In doreti:
/*
 * Must use MD code to prevent race window after critical_exit().
 * We switch from lazy software masking using td_critnest (or
 * whatever critical_enter() uses) to hardware masking using cli.
 */
cli();
critical_exit();
...
iretd();

but this is optimized in -current by using cpu_critical*() instead of
critical_enter*() in ast() and not doing the cli() and critical_exit()
in doreti.  This depends on cpu_critical_enter() being precisely cli().

Bruce





Re: Patch for critical_enter()/critical_exit() interrupt assembly revamp, please review!

2002-02-25 Thread Matthew Dillon


:
:On Mon, 25 Feb 2002, Matthew Dillon wrote:
:
: Unless an unforeseen problem arises, I am going to commit this tomorrow
: and then start working on a cleanup patch.   I have decided to
:
:Please wait for jhb's opinion on it.  He seems to be offline again.
:I think he has plans and maybe even code for more code in critical_enter().
:I think we don't agree with these plans, but they are just as valid
:as ours, and our versions undo many of his old changes.

I am not going to predicate my every move on permission from JHB nor 
do I intend to repeat the last debacle which held-up (and is still holding
up) potential commits for a week and a half now.  JHB hasn't even 
committed *HIS* patches and I am beginning to wonder what the point is
when *NOTHING* goes in.  If he had code he damn well should have said
something on the lists two days ago.  As it is, I have invested a great
deal of time and effort on this patch and it is damn well going to go
in so I can move on.

-Matt




Re: Patch for critical_enter()/critical_exit() interrupt assembly revamp, please review!

2002-02-25 Thread Matthew Dillon

It kinda sounds like the interrupt/trap code that calls into
kern/subr_trap.c should be responsible for placing us in a 
critical section.  kern/subr_trap.c has no business making
assumptions in regards to cpu_critical_enter()/cpu_critical_exit()
(for the same reason that fork_exit() has no business making
those kinds of assumptions).

For the moment I'm not going to worry about it.  I'll just keep the
cpu_critical_*() API intact for i386 for now.

-Matt
Matthew Dillon 
[EMAIL PROTECTED]

:The use of cpu_critical*() in subr_trap.c is a bit trickier/kludgier.
:We need to return from the interrupt handler atomically with clearing
:the lazy-masking flag.  The hardware interrupt enable flag must be
:used for this on i386's.  The code for this is logically:
:
:In ast:
:   /* Prevent changes to flags while we are deciding if they are set. */
:   critical_enter();
:
:   while ((ke->ke_flags & (KEF_ASTPENDING | KEF_NEEDRESCHED)) != 0) {
:   critical_exit();
:   ...
:   /* As above. */
:   critical_enter();
:   }
:
:In doreti:
:   /*
:* Must use MD code to prevent race window after critical_exit().
:* We switch from lazy software masking using td_critnest (or
:* whatever critical_enter() uses) to hardware masking using cli.
:*/
:   cli();
:   critical_exit();
:   ...
:   iretd();
:
:but this is optimized in -current by using cpu_critical*() instead of
:critical_enter*() in ast() and not doing the cli() and critical_exit()
:in doreti.  This depends on cpu_critical_enter() being precisely cli().
:
:Bruce
:





Re: Patch for critical_enter()/critical_exit() interrupt assembly revamp, please review!

2002-02-24 Thread Bruce Evans

On Sun, 24 Feb 2002, Matthew Dillon wrote:

 NOTES:

 I would like to thank Bruce for supplying the sample code that allowed
 me to do this in a day instead of several days.

Thanks.

 * debug.critical_mode sysctl.  This will not be in the final commit,
   nor will any of the code that tests the variable.  The final commit
   will use code as if the critical_mode were '1'.

Won't it be needed for longer on non-i386 arches?

 * Additional cpu_critical_enter/exit() calls around icu_lock.  Since
   critical_enter() no longer disables interrupts, special care must
   be taken when dealing with the icu_lock spin mutex because it is
   the one thing the interrupt code needs to be able to defer the
   interrupt.

clock_lock needs these too, at least in my version where the soft
critical_enter() doesn't mask fast interrupts.

 * MACHINE_CRITICAL_ENTER define.   This exists to maintain compatibility
   with other architectures.  i386 defines this to cause fork_exit to
   use the new API and to allow the i386 MD code to supply the
   critical_enter() and critical_exit() procedures rather than
   kern_switch.c

   I would much prefer it if the other architectures were brought around
   to use this new mechanism.  The old mechanism makes assumptions
   in regards to hard disablement that are no longer correct for i386.

The old mechanism basically forces mtx_lock_spin() to do the hard disablement.
I never liked this.

 * Trampoline 'sti'.  In the final commit, the trampoline will simply
   'sti' after setting up td_critnest.  The other junk to handle the
   hard-disablement case will be gone.

Maybe all of the fiddling with sched_lock belongs in MD code.

IIRC, in my version, the hard interrupt enablement in fork_exit() is
only needed at boot time.  Something neglects to do it earlier in that
case, at least for some threads.  I was surprised by this -- hard
interrupts are enabled early in the boot, and everything later should
disable and enable them reentrantly.

 * Additional cpu_critical_enter()/exit() calls in CY and TIMER code.
   Bruce had additional hard interrupt disablements in these modules.

   I'm not sure why so if I need to do that as well I would like to
   know.

There are also some in sio.  The ones in sio and cy are needed because
my critical_enter() doesn't mask fast interrupts.  Something is needed
to mask them, and cpu_critical_enter() is used.  I think you don't need
these yet.  The ones in timer code are to make the timer code as hard-
real-time as possible.  I think this is important in
i8254_get_timecounter() but not elsewhere in clock.c.

 Index: i386/i386/machdep.c
 ===
 RCS file: /home/ncvs/src/sys/i386/i386/machdep.c,v
 retrieving revision 1.497
 diff -u -r1.497 machdep.c
 --- i386/i386/machdep.c   17 Feb 2002 17:40:28 -  1.497
 +++ i386/i386/machdep.c   24 Feb 2002 19:04:20 -
 ...
 @@ -270,6 +275,121 @@
  }

  /*
 + * Critical section handling.
 + *
 + *   Note that our interrupt code handles any interrupt race that occurs
 + *   after we decrement td_critnest.
 + */
 +void
 +critical_enter(void)

i386/machdep.c is a different wrong place for this and unpend().  I put it
in isa/ithread.c.  It is not very MD, and is even less isa-dependent.

 +void
 +critical_exit(void)
 +{
 +	struct thread *td = curthread;
 +	KASSERT(td->td_critnest > 0, ("bad td_critnest value!"));
 +	if (--td->td_critnest == 0) {
 +		if (td->td_savecrit != (critical_t)-1) {
 +			cpu_critical_exit(td->td_savecrit);
 +			td->td_savecrit = (critical_t)-1;
 +		} else {
 +			/*
 +			 * We may have to schedule pending interrupts.  Create
 +			 * conditions similar to an interrupt context and call
 +			 * unpend().
 +			 */
 +			if (PCPU_GET(int_pending) && td->td_intr_nesting_level == 0) {

I'm trying to kill td_intr_nesting_level.  I think/hope you don't need it.
In ithread_schedule(), we begin with interrupts hard-enabled but don't
have any other locks, so things are delicate -- an mtx_unlock_spin()
immediately after the mtx_lock_spin() would do too much without the above
test.  Perhaps add a critical_enter()/critical_exit() wrapper to sync the
software interrupt disablement with the hardware disablement.  Xfastintr*
in -current already does this.

 Index: i386/isa/apic_vector.s
 ===
 RCS file: /home/ncvs/src/sys/i386/isa/apic_vector.s,v
 retrieving revision 1.75
 diff -u -r1.75 apic_vector.s
 --- i386/isa/apic_vector.s5 Jan 2002 08:47:10 -   1.75
 +++ i386/isa/apic_vector.s24 Feb 2002 17:58:34 -

Please do this without so many changes to the assembly code.  Especially
not the reordering and style changes.  I think I see how to do this
(see 

Re: Patch for critical_enter()/critical_exit() interrupt assembly revamp, please review!

2002-02-24 Thread Matthew Dillon


:
:On Sun, 24 Feb 2002, Matthew Dillon wrote:
:
: * debug.critical_mode sysctl.  This will not be in the final commit,
:   nor will any of the code that tests the variable.  The final commit
:   will use code as if the critical_mode were '1'.
:
:Won't it be needed for longer on non-i386 arches?

No, 99% of it is in i386-specific files, so it won't affect other arches.

: * Additional cpu_critical_enter/exit() calls around icu_lock.  Since
:   critical_enter() no longer disables interrupts, special care must
:   be taken when dealing with the icu_lock spin mutex because it is
:   the one thing the interrupt code needs to be able to defer the
:   interrupt.
:
:clock_lock needs these too, at least in my version where the soft
:critical_enter() doesn't mask fast interrupts.

I wasn't sure whether it was due to that, or whether you just wanted
the two I8254 accesses to be right next to each other.  Since I am
masking fast interrupts I think we are safe.

: * MACHINE_CRITICAL_ENTER define.   This exists to maintain compatibility
:..
:
:   I would much prefer it if the other architectures were brought around
:   to use this new mechanism.  The old mechanism makes assumptions
:   in regards to hard disablement that are no longer correct for i386.
:
:The old mechanism basically forces mtx_lock_spin() to do the hard disablement.
:I never liked this.

In my case, the term 'dislike' would be an understatement.  Every time
I looked at what critical_enter() was doing I got angry.  Heh.

: * Trampoline 'sti'.  In the final commit, the trampoline will simply
:   'sti' after setting up td_critnest.  The other junk to handle the
:   hard-disablement case will be gone.
:
:Maybe all of the fiddling with sched_lock belongs in MD code.

I think it wound up outside of MD because the only other place to
put it is in cpu_switch() (assembly), and people are understandably
trying to avoid imposing more assembly on all the architectures.
I'm willing to leave it be for now but I agree that it is really
ugly.

:IIRC, in my version, the hard interrupt enablement in fork_exit() is
:only needed at boot time.  Something neglects to do it earlier in that
:case, at least for some threads.  I was surprised by this -- hard
:interrupts are enabled early in the boot, and everything later should
:disable and enable them reentrantly.

I couldn't figure out who was leaving hard interrupts enabled.  Did
you ever figure out who it was?  At the moment I am able to enable
interrupts in the trampoline code just before the call to 
fork_exit(), but it has to be disabled on entry to the trampoline
code because the task's td_critnest count is still 0 at that
point and we can't afford to execute a real (fast) interrupt's
service routine until we clean up the sched_lock.

: * Additional cpu_critical_enter()/exit() calls in CY and TIMER code.
:   Bruce had additional hard interrupt disablements in these modules.
:
:   I'm not sure why so if I need to do that as well I would like to
:   know.
:
:There are also some in sio.  The ones in sio and cy are needed because
:my critical_enter() doesn't mask fast interrupts.  Something is needed
:to mask them, and cpu_critical_enter() is used.  I think you don't need
:these yet.  The ones in timer code are to make the timer code as hard-
:real-time as possible.  I think this is important in
:i8254_get_timecounter() but not elsewhere in clock.c.

Ok, that's what I thought.  Since I am masking fast interrupts I
believe we are safe there too.

In regards to i8254_get_timecounter() the 8254 latches the full
count on the first access so we are theoretically safe.  I'll
change my timecounter to use the 8254 to see if it still works
as expected.

:...
:  /*
: + * Critical section handling.
: + *
: + *  Note that our interrupt code handles any interrupt race that occurs
: + *  after we decrement td_critnest.
: + */
: +void
: +critical_enter(void)
:
:i386/machdep.c is a different wrong place for this and unpend().  I put it
:in isa/ithread.c.  It is not very MD, and is even less isa-dependent.

Yah.  I'm all ears as to where to put it :-)  There's no good place
to put inline versions of critical_enter() or critical_exit() either.
machine/cpufunc.h doesn't have access to the struct thread.

p.s. all discussion here and below, except for the bintr/eintr cleanup, would
be for a second work/commit round after I commit the current stuff.

:I'm trying to kill td_intr_nesting_level.  I think/hope you don't need it.
:In ithread_schedule(), we begin with interrupts hard-enabled but don't
:have any other locks, so things are delicate -- an mtx_unlock_spin()
:immediately after the mtx_lock_spin() would do too much without the above
:test.  Perhaps add a critical_enter()/critical_exit() wrapper to sync the
:software interrupt disablement with the 

Re: Patch for critical_enter()/critical_exit() interrupt assembly revamp, please review!

2002-02-24 Thread Bruce Evans

On Sun, 24 Feb 2002, Matthew Dillon wrote:

 :On Sun, 24 Feb 2002, Matthew Dillon wrote:
 : * Additional cpu_critical_enter/exit() calls around icu_lock.  Since
 :   critical_enter() no longer disables interrupts, special care must
 :   be taken when dealing with the icu_lock spin mutex because it is
 :   the one thing the interrupt code needs to be able to defer the
 :   interrupt.
 :
 :clock_lock needs these too, at least in my version where the soft
 :critical_enter() doesn't mask fast interrupts.

 I wasn't sure whether it was due to that, or whether you just wanted
 the two I8254 accesses to be right next to each other.  Since I am
 masking fast interrupts I think we are safe.

Hmm, I thought I wrote more about this somewhere.  We are fairly safe,
but we may be interrupted for a few microseconds, and i8254_get_timecount()
has depends on the latency being at most 20 usec (the hard-coded 20
was correctly parametrized as TIMER0_LATCH_COUNT in the old assembler
version.  This has rotted to TIMER0_LATCH_COUNT being #defined,
documented but not used).  Actually, we have more serious latency
problems here.  i8254_get_timecount() also depends on clk0 interrupts
not being disabled for 20 usec _before_ it is called (interrupt handlers
can call it, but they must not have interrupts enabled at the time of
the call).  -current probably violates this by running with spinlocks
held for more than 20 usec and calling i8254_get_timecount() from at
least the witness code on old machines which have the least to spare.
Your changes help, but clk0 interrupts will probably be delayed by 20
usec in some cases.

 :IIRC, in my version, the hard interrupt enablement in fork_exit() is
 :only needed at boot time.  Something neglects to do it earlier in that
 :case, at least for some threads.  I was surprised by this -- hard
 :interrupts are enabled early in the boot, and everything later should
 :disable and enable them reentrantly.

 I couldn't figure out who was leaving hard interrupts enabled.  Did
 you ever figure out who it was?  At the moment I am able to enable

No; I don't remember even trying.

 interrupts in the trampoline code just before the call to
 fork_exit(), but it has to be disabled on entry to the trampoline
 code because the task's td_critnest count is still 0 at that
 point and we can't afford to execute a real (fast) interrupt's
 service routine until we clean up the sched_lock.

I think td_critnest should be set a little earlier.  Not sure why it
isn't already.

 In regards to i8254_get_timecounter() the 8254 latches the full
 count on the first access so we are theoretically safe.  I'll
 change my timecounter to use the 8254 to see if it still works
 as expected.

I use pcaudio for stress testing the i8254 timecounter.  It sets the
clock frequency to 16 KHz.  Setting HZ to this should work almost as
well in -current.  HZ can be set even faster.

 :  /*
 : + * Critical section handling.
 :...
 :i386/machdep.c is a different wrong place for this and unpend().  I put it
 :in isa/ithread.c.  It is not very MD, and is even less isa-dependent.

 Yah.  I'm all ears as to where to put it :-)  There's no good place
 to put inline versions of critical_enter() or critical_exit() either.
 machine/cpufunc.h doesn't have access to the struct thread.

I would keep it in isa/sched_ithd.c until the correct MI parts are found.

 p.s. all discussion here and below, except for the bintr/eintr cleanup, would
 be for a second work/commit round after I commit the current stuff.

I hope that leaves only 10-20K of changes :-).

 : Index: i386/isa/apic_vector.s
 : ===
 : RCS file: /home/ncvs/src/sys/i386/isa/apic_vector.s,v
 : retrieving revision 1.75
 : diff -u -r1.75 apic_vector.s
 : --- i386/isa/apic_vector.s 5 Jan 2002 08:47:10 -   1.75
 : +++ i386/isa/apic_vector.s 24 Feb 2002 17:58:34 -
 :
 :Please do this without so many changes to the assembly code.  Especially
 :not the reordering and style changes.  I think I see how to do this
 :(see call_fast_unpend())

 No such luck.  In order to be able to mask fast interrupts I
 had to move all the macros from between fast and slow ints to
 above fast ints.

Erm, macros don't need to be defined before macros that invoke them.

 I also decided to tab-out the backslashes in the icu_vector
 code, because I couldn't read the code easily with the
 backslashes slammed up against the instructions :-(

This ugliness has been maintained since 1992 to keep diffs somewhat
readable.  I really don't want to change it now.  Even the existence
of the macros is mostly a relic from 1992.

 : +void
 : +call_fast_unpend(int irq)
 : +{
 : +  fastunpend[irq]();
 : +}
 : +
 :
 :I think this can be done with few changes to the assembler by
 :using INTREN(irq) followed by a call to the usual Xfastintr[irq]