I have some thoughts and questions about the timer implementation in linux-generic.

Current implementation:

    sigev.sigev_notify          = SIGEV_THREAD;
    sigev.sigev_notify_function = timer_notify;
    sigev.sigev_value.sival_ptr = tp;
    timer_create(CLOCK_MONOTONIC, &sigev, &tp->timerid);

then:

    timer_settime(tp->timerid, 0, &ispec, NULL);

where:
static void timer_notify(sigval_t sigval)
{
    /* The timer pool pointer that was stored in sigev_value above */
    odp_timer_pool *tp = sigval.sival_ptr;
    uint64_t prev_tick = odp_atomic_fetch_inc_u64(&tp->cur_tick);

    /* Attempt to acquire the lock, check if the old value was clear */
    if (odp_spinlock_trylock(&tp->itimer_running)) {
        /* Scan timer array, looking for timers to expire */
        (void)odp_timer_pool_expire(tp, prev_tick);
        odp_spinlock_unlock(&tp->itimer_running);
    }
}

Now, here is what I see from our test case:
1. We have a bunch of workers.
2. Each worker starts a timer.
3. Because it's SIGEV_THREAD, a new thread is started for the notifier function on each timer expiration.

Usually it works well, until there is load on the CPU (something like a busy-loop app). Then the kernel creates a lot of threads, i.e. executes a clone() call for every notification.

Based on that, I have some questions which are not quite clear to me:
1. Why was SIGEV_THREAD used?

2. When each worker spawns a bunch of threads (timer handlers), they will fight for CPU time, with context switches between all those threads. Is there a significant slowdown compared to using one thread or signals?

3. What is the priority of the timer handler relative to the worker? What about the CPU affinity of the handler thread? Should it be SCHED_FIFO, i.e. do we need to specify those thread attributes (see the sketch below)?
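
I don't know the right answer here, but if we keep SIGEV_THREAD, here is a rough sketch of what specifying the attributes could look like. This is not ODP code: create_rt_timer() and priority 1 are made-up examples, and SCHED_FIFO needs privileges such as CAP_SYS_NICE:

    #include <pthread.h>
    #include <sched.h>
    #include <signal.h>
    #include <string.h>
    #include <time.h>

    static int create_rt_timer(void *tp, void (*fn)(union sigval), timer_t *id)
    {
        pthread_attr_t attr;
        struct sched_param param = { .sched_priority = 1 };
        struct sigevent sigev;

        pthread_attr_init(&attr);
        /* Without PTHREAD_EXPLICIT_SCHED the policy below is ignored */
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
        pthread_attr_setschedparam(&attr, &param);

        memset(&sigev, 0, sizeof(sigev));
        sigev.sigev_notify            = SIGEV_THREAD;
        sigev.sigev_notify_function   = fn;
        sigev.sigev_notify_attributes = &attr;
        sigev.sigev_value.sival_ptr   = tp;

        return timer_create(CLOCK_MONOTONIC, &sigev, id);
    }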

I think that creating a thread each time only to increment an atomic counter is very expensive. So we could rewrite that code to use SIGEV_SIGNAL, or start a thread manually and use SIGEV_THREAD_ID + a semaphore. A minimal sketch of the signal variant is below.
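
Something like this (again not ODP code; handle_tick(), start_timer() and the 1 ms period are placeholders I made up): one pre-created thread consumes the timer signal with sigwaitinfo(), so nothing is cloned per expiration and overruns simply coalesce.

    #include <pthread.h>
    #include <signal.h>
    #include <string.h>
    #include <time.h>

    #define TIMER_SIG SIGRTMIN   /* real-time signal reserved for the timer */

    /* Hypothetical stand-in for the real per-tick work */
    static void handle_tick(void *tp)
    {
        (void)tp;   /* increment cur_tick and expire timers here */
    }

    static void *tick_thread(void *arg)
    {
        sigset_t set;
        siginfo_t si;

        (void)arg;
        sigemptyset(&set);
        sigaddset(&set, TIMER_SIG);

        for (;;) {
            /* Sleep until the timer signal arrives; no clone() per tick */
            if (sigwaitinfo(&set, &si) == TIMER_SIG)
                handle_tick(si.si_value.sival_ptr);
        }
        return NULL;
    }

    static int start_timer(void *tp, timer_t *timerid)
    {
        struct sigevent sigev;
        struct itimerspec ispec;
        sigset_t set;
        pthread_t tid;

        /* Block TIMER_SIG before spawning tick_thread; the new thread
         * inherits the mask and receives the signal via sigwaitinfo() */
        sigemptyset(&set);
        sigaddset(&set, TIMER_SIG);
        pthread_sigmask(SIG_BLOCK, &set, NULL);

        memset(&sigev, 0, sizeof(sigev));
        sigev.sigev_notify          = SIGEV_SIGNAL;
        sigev.sigev_signo           = TIMER_SIG;
        sigev.sigev_value.sival_ptr = tp;
        if (timer_create(CLOCK_MONOTONIC, &sigev, timerid))
            return -1;

        memset(&ispec, 0, sizeof(ispec));
        ispec.it_interval.tv_nsec = 1000000;   /* 1 ms, arbitrary */
        ispec.it_value.tv_nsec    = 1000000;
        if (timer_settime(*timerid, 0, &ispec, NULL))
            return -1;

        return pthread_create(&tid, NULL, tick_thread, NULL);
    }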

If we think about core isolation, then we probably have to work with signals. I don't know whether core isolation supports several threads on one core. Or we could even move all timer actions to a separate core so as not to disturb the worker cores; the affinity part could look like the sketch below.
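
The pinning itself would be simple; a hedged sketch (pin_to_core() is mine, and the core number would just be an example):

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    /* Pin a thread (e.g. the tick thread above) to one dedicated core
     * so timer processing stays off the worker cores */
    static int pin_to_core(pthread_t tid, int core)
    {
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(core, &set);
        /* Returns 0 on success, an errno value on failure */
        return pthread_setaffinity_np(tid, sizeof(set), &set);
    }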

Thank you,
Maxim.