:
:On Sun, 24 Feb 2002, Matthew Dillon wrote:
:
:> * debug.critical_mode sysctl.  This will not be in the final commit,
:>   nor will any of the code that tests the variable.  The final commit
:>   will use code as if the critical_mode were '1'.
:
:Won't it be needed for longer on non-i386 arches?
    No, 99% of it is in i386-specific files; it won't affect other arches.

:> * Additional cpu_critical_enter/exit() calls around icu_lock.  Since
:>   critical_enter() no longer disables interrupts, special care must
:>   be taken when dealing with the icu_lock spin mutex because it is
:>   the one thing the interrupt code needs to be able to defer the
:>   interrupt.
:
:clock_lock needs these too, at least in my version where the soft
:critical_enter() doesn't mask fast interrupts.

    I wasn't sure whether it was due to that, or whether you just wanted
    the two i8254 accesses to be right next to each other.  Since I am
    masking fast interrupts I think we are safe.

:> * MACHINE_CRITICAL_ENTER define.  This exists to maintain compatibility
:..
:>
:>   I would much prefer it if the other architectures were brought around
:>   to use this new mechanism.  The old mechanism makes assumptions
:>   in regards to hard disablement that are no longer correct for i386.
:
:The old mechanism basically forces mtx_lock_spin() to do the hard
:disablement.  I never liked this.

    In my case, the term 'dislike' would be an understatement.  Every
    time I looked at what critical_enter() was doing I got angry.  Heh.

:> * Trampoline 'sti'.  In the final commit, the trampoline will simply
:>   'sti' after setting up td_critnest.  The other junk to handle the
:>   hard-disablement case will be gone.
:
:Maybe all of the fiddling with sched_lock belongs in MD code.

    I think it wound up outside of MD because the only other place to
    put it is in cpu_switch() (assembly), and people are understandably
    trying to avoid imposing more assembly on all the architectures.
    I'm willing to leave it be for now, but I agree that it is really
    ugly.

:IIRC, in my version, the hard interrupt enablement in fork_exit() is
:only needed at boot time.  Something neglects to do it earlier in that
:case, at least for some threads.
:I was surprised by this -- hard
:interrupts are enabled early in the boot, and everything later should
:disable and enable them reentrantly.

    I couldn't figure out who was leaving hard interrupts enabled.  Did
    you ever figure out who it was?

    At the moment I am able to enable interrupts in the trampoline code
    just before the call to fork_exit(), but they have to be disabled on
    entry to the trampoline code because the task's td_critnest count is
    still 0 at that point and we can't afford to execute a real (fast)
    interrupt's service routine until we clean up the sched_lock.

:> * Additional cpu_critical_enter()/exit() calls in CY and TIMER code.
:>   Bruce had additional hard interrupt disablements in these modules.
:>
:>   I'm not sure why, so if I need to do that as well I would like to
:>   know.
:
:There are also some in sio.  The ones in sio and cy are needed because
:my critical_enter() doesn't mask fast interrupts.  Something is needed
:to mask them, and cpu_critical_enter() is used.  I think you don't need
:these yet.  The ones in timer code are to make the timer code as hard-
:real-time as possible.  I think this is important in
:i8254_get_timecounter() but not elsewhere in clock.c.

    Ok, that's what I thought.  Since I am masking fast interrupts I
    believe we are safe there too.  In regards to i8254_get_timecounter(),
    the 8254 latches the full count on the first access, so we are
    theoretically safe.  I'll change my timecounter to use the 8254 to
    see if it still works as expected.

:...
:> /*
:> + * Critical section handling.
:> + *
:> + * Note that our interrupt code handles any interrupt race that occurs
:> + * after we decrement td_critnest.
:> + */
:> +void
:> +critical_enter(void)
:
:i386/machdep.c is a different wrong place for this and unpend().  I put
:it in isa/ithread.c.  It is not very MD, and is even less isa-dependent.

    Yah.  I'm all ears as to where to put it :-)  There's no good place
    to put inline versions of critical_enter() or critical_exit() either.
    machine/cpufunc.h doesn't have access to struct thread.

    p.s. all discussion here and below, except for the bintr/eintr
    cleanup, would be for a second work/commit round after I commit the
    current stuff.

:I'm trying to kill td_intr_nesting_level.  I think/hope you don't need
:it.  In ithread_schedule(), we begin with interrupts hard-enabled but
:don't have any other locks, so things are delicate -- an
:mtx_unlock_spin() immediately after the mtx_lock_spin() would do too
:much without the above test.  Perhaps add a critical_enter()/
:critical_exit() wrapper to sync the software interrupt disablement
:with the hardware disablement.  Xfastintr* in -current already does
:this.

    I noticed that it didn't seem to be used much.  I am currently
    leaving interrupts disabled in both fast and normal interrupts, and
    we both bump td_critnest in the fast interrupt code already.  So I
    agree that a simple bump of td_critnest in the scheduled interrupt
    code ought to be sufficient to allow us to kill
    td_intr_nesting_level.  There are also a few places where I can use
    movl instead of incl and decl.

:> Index: i386/isa/apic_vector.s
:> ===================================================================
:> RCS file: /home/ncvs/src/sys/i386/isa/apic_vector.s,v
:> retrieving revision 1.75
:> diff -u -r1.75 apic_vector.s
:> --- i386/isa/apic_vector.s	5 Jan 2002 08:47:10 -0000	1.75
:> +++ i386/isa/apic_vector.s	24 Feb 2002 17:58:34 -0000
:
:Please do this without so many changes to the assembly code.  Especially
:not the reordering and style changes.  I think I see how to do this
:(see call_fast_unpend()).

    No such luck.  In order to be able to mask fast interrupts I had to
    move all the macros from between the fast and slow ints to above the
    fast ints.
    I also decided to tab-out the backslashes in the icu_vector code,
    because I couldn't read the code easily with the backslashes slammed
    up against the instructions :-(

:> @@ -417,6 +508,41 @@
:> 	INTR(30,intr30,)
:> 	INTR(31,intr31,)
:> MCOUNT_LABEL(eintr)
:> +
:> +MCOUNT_LABEL(bfunpend)
:> +	FAST_UNPEND(0,fastunpend0)
:>...
:> +	FAST_UNPEND(31,fastunpend31)
:> +MCOUNT_LABEL(efunpend)
:
:The interrupt routines must go between the special labels bintr and
:eintr so that mcount understands them.

    Ok, I'll get rid of bfunpend and efunpend and just put the whole
    mess around bintr and eintr.

:> +void
:> +call_fast_unpend(int irq)
:> +{
:> +	fastunpend[irq]();
:> +}
:> +
:
:I think this can be done with few changes to the assembler by using
:INTREN(irq) followed by a call to the usual Xfastintr[irq] entry
:point, or (perhaps after a few adjustments) even to the driver's entry
:point.  Efficiency is not very important, since most entries are
:handled directly and the overhead for INTREN() is dominant anyway.  We
:just need to be careful to provide a normal interrupt frame for clock
:interrupt handlers (it needs to have %cs in it so that hardclock() can
:tell if the interrupt was from user mode).  OTOH, non-clock fast
:interrupt handlers don't care about the frame at all, so just calling
:them (with interrupts disabled) works right.

    I was going to do this, but the trap frame is passed to the fast
    interrupt function.  I could either do an inline 'int' instruction
    or simulate the trap frame.  For the slow interrupt unpend code I of
    course just call sched_ithd() directly.

:RELENG_4 still has complications related to this and I was pleased to
:see them go away in -current.  See vec0, vec8 and BUILD_VEC() in
:icu_ipl.s (the vecN routines are stubs to avoid duplication of Xintr*).
:-current made things simpler for normal interrupt handlers, and fast
:interrupt handlers are simpler at this level except for the problem
:with the frame.
:
:Bruce

    Yup!
					-Matt
					Matthew Dillon
					<[EMAIL PROTECTED]>