On Sat, Jan 11, 2020 at 05:41:00PM +0100, Martin Pieuchot wrote:
> Before converting network timeouts to run in a thread context they were
> executed in a soft-interrupt handler.  This design implied that timeouts
> were serialized.

Yep.

> The current "softclock" thread runs on CPU0 to limit border effects due
> to the conversion from soft-interrupt to thread context.

Define "border effects".  Do you mean limiting the delay between when
softclock() yields and softclock_thread() runs?

> However we should raise the IPL level of this thread to ensure no other
> timeout can run in the middle of another one.

This makes sense, though I have a few questions.  Sorry in advance if
some of this is obvious; my understanding of the "rules" governing
interrupt context is still limited.
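
For my own benefit, here is what I understand splsoftclock()/splx()
to buy us; a generic sketch of the idiom, not specific to this diff:

	int s;

	s = splsoftclock();	/* softclock() can't run on this CPU now */
	/* ... no timeout callback fires via the soft interrupt here ... */
	splx(s);		/* back to the previous IPL */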

1. If softclock() takes more than a full tick to complete, it gets
   interrupted by hardclock(9), which schedules another softclock()
   run.  If the current softclock() then wakes up softclock_thread()
   before returning, which of the two runs first: the pending
   softclock() soft interrupt or softclock_thread()?  (See the sketch
   after these questions for the chain I have in mind.)

2. If both process and vanilla timeouts run at IPL_SOFTCLOCK, what is
   the real difference between them?  Is there still a reason to
   distinguish the two (a sketch of both flavours follows these
   questions)?  Would it make sense to run *all* timeouts from the
   softclock thread once we have a chance to fork it from process 0?

3. Is there a way to prioritize the softclock thread over other
   threads on the primary CPU, so that the scheduler runs it as soon
   as possible after softclock() returns, to reduce latency?
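
To make question 1 concrete, the chain I have in mind looks roughly
like the following.  This is paraphrased from my reading of
kern_clock.c and kern_timeout.c, so the details may be off:

	/* hardclock(9), paraphrased: */
	if (timeout_hardclock_update())
		softintr_schedule(softclock_si);	/* queue softclock() */

	/* tail of softclock(), paraphrased: */
	int needsproc;

	mtx_enter(&timeout_mutex);
	/* ... run plain timeouts, move TIMEOUT_PROC ones to timeout_proc ... */
	needsproc = !CIRCQ_EMPTY(&timeout_proc);
	mtx_leave(&timeout_mutex);
	if (needsproc)
		wakeup(&timeout_proc);	/* softclock_thread() becomes runnable */

And for question 2, the distinction as I understand it today, with
made-up callbacks purely for illustration:

/*
 * timeout_set(9): the callback runs in the softclock() soft
 * interrupt and must not sleep.
 * timeout_set_proc(9): the callback runs in softclock_thread()
 * and may grab sleeping locks.
 * Everything named example_* below is invented, not from the tree.
 */
struct timeout example_to, example_proc_to;

void
example_cb(void *arg)
{
	/* soft-interrupt context: no sleeping here */
}

void
example_proc_cb(void *arg)
{
	/* thread context: sleeping is fine */
}

void
example_init(void)
{
	timeout_set(&example_to, example_cb, NULL);
	timeout_set_proc(&example_proc_to, example_proc_cb, NULL);

	timeout_add_sec(&example_to, 1);
	timeout_add_sec(&example_proc_to, 1);
}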

> Index: kern/kern_timeout.c
> ===================================================================
> RCS file: /cvs/src/sys/kern/kern_timeout.c,v
> retrieving revision 1.70
> diff -u -p -r1.70 kern_timeout.c
> --- kern/kern_timeout.c       3 Jan 2020 20:11:11 -0000       1.70
> +++ kern/kern_timeout.c       11 Jan 2020 16:33:40 -0000
> @@ -554,6 +554,7 @@ softclock_thread(void *arg)
>       struct cpu_info *ci;
>       struct sleep_state sls;
>       struct timeout *to;
> +     int s;
>  
>       KERNEL_ASSERT_LOCKED();
>  
> @@ -565,6 +566,7 @@ softclock_thread(void *arg)
>       KASSERT(ci != NULL);
>       sched_peg_curproc(ci);
>  
> +     s = splsoftclock();
>       for (;;) {
>               sleep_setup(&sls, &timeout_proc, PSWP, "bored");
>               sleep_finish(&sls, CIRCQ_EMPTY(&timeout_proc));
> @@ -579,6 +581,7 @@ softclock_thread(void *arg)
>               tostat.tos_thread_wakeups++;
>               mtx_leave(&timeout_mutex);
>       }
> +     splx(s);
>  }
>  
>  #ifndef SMALL_KERNEL
> 
