On 28.01.16 20:31, Maxim Uvarov wrote:
I have some thoughts and questions about the timer implementation in
linux-generic. Current implementation:

    sigev.sigev_notify = SIGEV_THREAD;
    sigev.sigev_notify_function = timer_notify;
    sigev.sigev_value.sival_ptr = tp;
    timer_create(CLOCK_MONOTONIC, &sigev, &tp->timerid);
timer_create() is usually called when the timer pool is created, that is,
on the main (control) thread, so it can consume only the CPU time of the
CPUs allowed for the control thread. The notify function walks all
scheduled timers from all worker threads and handles any that have
expired. So if the main (control) thread is assigned only one CPU, the
notify function can run only on that one CPU, since timer_create() is
executed only in the main thread. That is one of the reasons I sent a
patch series showing how it can be done, see:
https://lists.linaro.org/pipermail/lng-odp/2016-January/019734.html

[lng-odp] [PATCH 0/2] linux-generic: main control thread on CPU0
[lng-odp] [PATCH 1/2] linux-generic: cpumask_task: use cpumask got at init
[lng-odp] [PATCH 2/2] linux-generic: init: assign affinity for main thread

The notify function cannot run as a signal handler in each thread, since
it is the handler for all timers in one pool and receives the signals one
by one. If the system does not leave the notify function enough time to
finish, the resolution is effectively set incorrectly.
then:

    timer_settime(tp->timerid, 0, &ispec, NULL);

where:

    static void timer_notify(sigval_t sigval)
    {
        odp_timer_pool *tp = sigval.sival_ptr;
        uint64_t prev_tick = odp_atomic_fetch_inc_u64(&tp->cur_tick);

        /* Attempt to acquire the lock, check if the old value was clear */
        if (odp_spinlock_trylock(&tp->itimer_running)) {
            /* Scan timer array, looking for timers to expire */
            (void)odp_timer_pool_expire(tp, prev_tick);
            odp_spinlock_unlock(&tp->itimer_running);
        }
    }

Now, what I see from our test case:

1. We have a bunch of workers.
2. Each worker starts a timer.
3. Because it's SIGEV_THREAD, a new thread running the notify function is
   created on every timer expiration.

Usually it works well, until there is load on the CPU (something like a
busy-loop app). Then the kernel is constantly creating threads, i.e.
executing clone() calls.

Based on that, I have questions which are not quite clear to me:

1. Why was SIGEV_THREAD used?
2. When every worker triggers a bunch of handler threads, they will fight
   for CPU time, with context switches between all those threads. Is there
   a significant slowdown compared to a single thread or to signal usage?
3. What is the priority of the timer handler relative to the workers, and
   what is the CPU affinity of the handler thread? Should it be SCHED_FIFO,
   i.e. do we need to specify those thread attributes?

I think that creating a thread each time only to increment an atomic
counter is very expensive. So we could rewrite that code to use
SIGEV_SIGNAL, or start a thread manually and use SIGEV_THREAD_ID plus a
semaphore. If we think about core isolation, then we probably have to
work with signals; I don't know whether core isolation supports several
threads on one core. Or we could even move all timer actions to a
separate core so as not to disturb the worker cores.

Thank you,
Maxim.
_______________________________________________
lng-odp mailing list
lng-odp@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/lng-odp
--
Regards,
Ivan Khoronzhuk