On 29 January 2016 at 05:16, Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
wrote:

>
>
> On 29.01.16 00:54, Bill Fischofer wrote:
>
>> This is how you implement timers in HW as well.
>>
> A separate HW block operates a scan loop that constantly searches for
> timers to expire and creates events for those that do.
> The rest of the system operates undisturbed.  For a SW analog in manycore
> systems you'd have service thread(s) running on dedicated core(s) doing the
> same.
>
> Actually this can be emulated in linux-generic: instead of a HW block,
> the pool of timers would be handled on one of the control CPUs.
> Each timer pool, no matter whether it is created on the main thread or a
> worker thread, would be created with CPU affinity according to the
> control cpumask.
> The only question is which CPU, and who decides which one; on
> linux-generic let it always be CPU0, but with a warning that CPU0 can be
> shared with a worker thread (or maybe exclude it? that was proposed
> several times already, but rejected).
>
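
For reference, a minimal sketch of the service-thread emulation Ivan
describes could look like the following (illustrative only;
scan_all_timer_pools() is a hypothetical placeholder for walking the
pools and calling odp_timer_pool_expire() on each):

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <time.h>

    /* Hypothetical helper: walk all timer pools and expire timers */
    extern void scan_all_timer_pools(void);

    static void *timer_scan_thread(void *arg)
    {
        int cpu = *(int *)arg;  /* control CPU, e.g. CPU0 */
        cpu_set_t set;
        struct timespec res = { .tv_sec = 0, .tv_nsec = 100000 };

        /* Pin the scan loop to the chosen control CPU so worker
         * cores are never interrupted. */
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

        for (;;) {
            scan_all_timer_pools();
            nanosleep(&res, NULL);  /* sleep one timer resolution */
        }
        return NULL;
    }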

The choice of which cores etc. to run the tests on should maybe come from
the platform side; for the validation tests it could be a possibly
generated file mapping the cores to Linux, workers and control, so that
it is easy to change the mapping used.

For linux-generic we could even map it out at configure time, generating
that definition file based on the number of cores found, as some sort of
default.
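
As a strawman, the generated definition file could be as simple as this
(all names hypothetical, shown here for a 4-core machine):

    /* cpu_map.h - generated at configure time */
    #define ODP_CPU_COUNT       4
    #define ODP_CONTROL_CPUMASK 0x1  /* CPU0: Linux + control threads */
    #define ODP_WORKER_CPUMASK  0xe  /* CPU1-CPU3: worker threads */

The validation tests would then pick their control and worker cpumasks
from these definitions instead of hard-coding them.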


>
>
>> On Thu, Jan 28, 2016 at 12:41 PM, Stuart Haslam <stuart.has...@linaro.org>
>> wrote:
>>
>>     On Thu, Jan 28, 2016 at 09:31:52PM +0300, Maxim Uvarov wrote:
>>      > I have some thoughts and questions about the timer implementation
>>      > in linux-generic.
>>      >
>>      > Current implementation:
>>      >
>>      >     sigev.sigev_notify          = SIGEV_THREAD;
>>      >     sigev.sigev_notify_function = timer_notify;
>>      >     sigev.sigev_value.sival_ptr = tp;
>>      >     timer_create(CLOCK_MONOTONIC, &sigev, &tp->timerid);
>>      > then:
>>      >     timer_settime(tp->timerid, 0, &ispec, NULL);
>>      >
>>      > where:
>>      > timer_notify(sigval_t sigval)
>>      > {
>>      >     uint64_t prev_tick = odp_atomic_fetch_inc_u64(&tp->cur_tick);
>>      >     /* Attempt to acquire the lock, check if the old value was clear */
>>      >     if (odp_spinlock_trylock(&tp->itimer_running)) {
>>      >         /* Scan timer array, looking for timers to expire */
>>      >         (void)odp_timer_pool_expire(tp, prev_tick);
>>      >         odp_spinlock_unlock(&tp->itimer_running);
>>      >     }
>>      >
>>      > }
>>      >
>>      > Now what I see from our test case:
>>      > 1. We have a bunch of workers.
>>      > 2. Each worker starts a timer.
>>      > 3. Because the timer action is SIGEV_THREAD, a new thread is
>>      > started for the notifier function.
>>      >
>>      > Usually it works well, until there is load on the CPU (something
>>      > like a busy-loop app). Then a lot of threads are being created by
>>      > the kernel, i.e. clone() calls.
>>      >
>>      > Based on that, I have questions which are not quite clear to me:
>>      > 1. Why was SIGEV_THREAD used?
>>      >
>>      > 2. When each worker runs a bunch of threads (timer handlers),
>>      > they will fight for CPU time and context switches between all
>>      > those threads. Is there a significant slowdown compared to using
>>      > one thread or signals?
>>      >
>>      > 3. What is the priority of the timer handler relative to the
>>      > worker? What is the CPU affinity of the handler thread? Should it
>>      > be SCHED_FIFO? I.e. do we need to specify those thread attrs?
>>      >
>>      > I think that creating a thread each time only to increment an
>>      > atomic counter is very expensive. So we can rewrite that code to
>>      > use SIGEV_SIGNAL, or start a thread manually and use
>>      > SIGEV_THREAD_ID + a semaphore.
>>      >
>>      > If we think about core isolation, then we probably have to work
>>      > with signals. I don't know whether core isolation supports
>>      > several threads on one core. Or we could even move all timer
>>      > actions to a separate core so as not to disturb the worker cores.
>>      >
>>      > Thank you,
>>      > Maxim.
>>
>>     +1
>>
>>     This is basically what was suggested here:
>>
>>     https://bugs.linaro.org/show_bug.cgi?id=1615#c18
>>
>>     --
>>     Stuart.
>>
> --
> Regards,
> Ivan Khoronzhuk
>
>
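
On Maxim's SIGEV_THREAD_ID + semaphore suggestion in the quoted mail, a
rough and untested sketch of the shape it could take (the thread body
below would be started once with pthread_create(); the expire call is a
placeholder for the existing pool logic):

    #define _GNU_SOURCE
    #include <signal.h>
    #include <semaphore.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    #ifndef sigev_notify_thread_id
    /* glibc does not always expose this field under its kernel name */
    #define sigev_notify_thread_id _sigev_un._tid
    #endif

    static sem_t scan_sem;

    static void timer_sig_handler(int sig)
    {
        (void)sig;
        sem_post(&scan_sem);  /* async-signal-safe; no real work here */
    }

    static void *scan_thread(void *arg)
    {
        timer_t timerid;
        struct sigevent sigev = { 0 };
        struct sigaction sa = { .sa_handler = timer_sig_handler };

        (void)arg;
        sem_init(&scan_sem, 0, 0);
        sigaction(SIGALRM, &sa, NULL);

        /* Deliver the timer signal to this pre-created thread only */
        sigev.sigev_notify = SIGEV_THREAD_ID;
        sigev.sigev_signo = SIGALRM;
        sigev.sigev_notify_thread_id = syscall(SYS_gettid);
        timer_create(CLOCK_MONOTONIC, &sigev, &timerid);
        /* timer_settime() with the pool resolution, as today */

        for (;;) {
            sem_wait(&scan_sem);  /* one wakeup per tick, no clone() */
            /* bump the tick counter and expire timers here */
        }
        return NULL;
    }

The handler only posts the semaphore, so nothing is cloned per tick, and
the scan runs on a single thread whose affinity and scheduling policy
(e.g. SCHED_FIFO) we fully control.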



-- 
Mike Holmes
Technical Manager - Linaro Networking Group
Linaro.org <http://www.linaro.org/> │ Open source software for ARM SoCs
"Work should be fun and collaborative, the rest follows"
