On Tue, Oct 21, 2014 at 10:24 AM, Gilles Chanteperdrix
<[email protected]> wrote:

> On 10/21/2014 10:14 AM, Philippe Gerum wrote:
> > On 10/20/2014 10:24 PM, Ronny Meeus wrote:
> >> Hello
> >>
> >> We are using the xenomai-forge mercury core together with the pSOS
> >> interface.
> >> What I observe is that our application spends a lot of time in timer
> >> handling.
> >> Please note that the application consists of many threads (>150) that
> >> use things like pSOS event timers (periodic and one-shot), etc.
> >>
> >> When a timer is used by the application, two things basically happen:
> >> - a Linux timer is started that sends a signal to the internal timer
> >>   thread;
> >> - the timer is added to a list sorted on timeout value.
> >>
> >> When the internal timer thread receives the signal generated by the
> >> Linux kernel, it scans through the list and expires all the timers
> >> whose timeout has elapsed. In the case of a periodic timer, it is
> >> re-inserted into the sorted list at the correct location.
> >>
> >> In this implementation, the timer processing is in fact done twice:
> >> once in the timer list (inside Xenomai) and once in Linux.
> >>
> >> Just as a test, I changed the code so that the internal timer thread
> >> ticks at a fixed rate of 10ms (based on a single Linux timer) and just
> >> performs the same action as before: handling all timers that have
> >> expired. This way, the timer handling is done only once: inserting
> >> into the sorted list. With this change, the load of our application is
> >> reduced a lot.
> >> I understand that this approach also has disadvantages:
> >> - it is less accurate (10ms precision);
> >> - if there are only a few timers with long timeouts, the overhead of
> >>   the polling might also be significant.
> >>
> >> Questions:
> >> - are any improvements planned in this area?
> >> - do other people also observe this high load?
> >>
> >> Any thoughts/ideas for improvements are welcome.
> >>
> >
> > To observe the same load, one would have to run 150 timers in parallel,
> > each serving a separate thread, which may not be the best approach
> > performance-wise, although I understand that coping with the original
> > design is likely a prerequisite anyway.
> >
> > We have to keep threaded timers, since application handlers may run
> > async-unsafe code indirectly through calls to the emulator. So the
> > options I can see involve processing timers less frequently, as you
> > experimented with, or decreasing the insertion time of each timer.
> >
> > Although the first option cannot be generalized, we could provide a way
> > to tune the internal timer thread to wake up periodically, as you did,
> > instead of running event-driven. This would work for applications
> > running over emulators with tick-based clocks, which only need coarse
> > timing. This "batching" option would be enabled on a per-application
> > basis, using a command-line switch, for instance.
>

I think this is still a valid option, since it is also what pSOS, for
example, provides.
If you want, I can clean up my patch and send it to the list.
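
For illustration, here is a minimal sketch of what I tested, not the
actual patch: one thread ticks at a fixed 10ms rate and expires whatever
is due, instead of arming one Linux timer per pSOS timer. struct xtimer
and the timer_list_*() helpers are hypothetical stand-ins for the
Copperplate internals:

#include <stdbool.h>
#include <time.h>

#define TICK_NS 10000000 /* fixed 10ms tick */

struct xtimer {
        struct timespec deadline;
        struct timespec period;  /* zero for one-shot timers */
        bool periodic;
        void (*handler)(struct xtimer *t);
};

/* Hypothetical sorted-list helpers; the real list lives in Copperplate. */
extern struct xtimer *timer_list_peek(void); /* earliest deadline */
extern void timer_list_pop(void);
extern void timer_list_insert(struct xtimer *t);

static bool is_due(const struct timespec *d, const struct timespec *now)
{
        return d->tv_sec < now->tv_sec ||
                (d->tv_sec == now->tv_sec && d->tv_nsec <= now->tv_nsec);
}

static void *timer_thread(void *arg)
{
        struct timespec next;
        struct xtimer *t;

        (void)arg;
        clock_gettime(CLOCK_MONOTONIC, &next);

        for (;;) {
                /* Sleep until the next fixed tick. */
                next.tv_nsec += TICK_NS;
                if (next.tv_nsec >= 1000000000) {
                        next.tv_nsec -= 1000000000;
                        next.tv_sec++;
                }
                clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);

                /* Expire everything due at this tick; periodic timers go
                 * back into the sorted list at their next deadline. */
                while ((t = timer_list_peek()) && is_due(&t->deadline, &next)) {
                        timer_list_pop();
                        t->handler(t);
                        if (t->periodic) {
                                t->deadline.tv_sec += t->period.tv_sec;
                                t->deadline.tv_nsec += t->period.tv_nsec;
                                if (t->deadline.tv_nsec >= 1000000000) {
                                        t->deadline.tv_nsec -= 1000000000;
                                        t->deadline.tv_sec++;
                                }
                                timer_list_insert(t);
                        }
                }
        }
        return NULL;
}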


> > The second option, which is already queued on the todo list, is about
> > introducing a smarter algorithm for dealing with a large number of
> > outstanding timers in Copperplate, instead of the plain list used
> > currently. We already have this code for dealing with Cobalt timers;
> > we still have to move it to userland.
>

Is the idea here still that Linux timers are used, and that you just
replace the plain list with a heap, for example?
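
Just so we are talking about the same design, this is how I picture the
event-driven variant: a single Linux timer, re-armed to the earliest
deadline kept in the heap. A sketch only; heap_min() is a made-up
accessor rather than the actual bheap.h API, and struct xtimer is the
same placeholder as in my sketch above:

/* Re-arm the single Linux timer to the earliest outstanding deadline. */
static void rearm_host_timer(timer_t tid)
{
        struct xtimer *t = heap_min();
        struct itimerspec its = { { 0, 0 }, { 0, 0 } };

        if (t == NULL)
                return; /* nothing outstanding, leave the timer disarmed */

        its.it_value = t->deadline; /* absolute expiry time */
        timer_settime(tid, TIMER_ABSTIME, &its, NULL);
}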


> I believe the bheap.h binary heap implementation can be used directly
> in user-space. You may want to add a realloc of the bheap at insertion
> time to cope with varying numbers of timers instead of having a hard
> limit; this should require minimal changes (allocate the buckets array
> separately from the bheap object, which would also remove the weird
> macro wrapping).
>
>
Another option would be to combine the bheap with the fixed ticking
(under a command-line option, like you proposed). This would be the
ideal solution in our case, I think.
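
Concretely (still with hypothetical placeholders, heap_min()/heap_pop()/
heap_insert() and an advance_deadline() helper, rather than the real
bheap.h primitives), the expiry loop from my fixed-tick sketch above
would become:

/* Fixed-tick expiry with the sorted list replaced by a min-heap: each
 * re-insertion of a periodic timer now costs O(log n) instead of the
 * O(n) sorted-list insertion. */
while ((t = heap_min()) != NULL && is_due(&t->deadline, &next)) {
        heap_pop();
        t->handler(t);
        if (t->periodic) {
                advance_deadline(t); /* deadline += period */
                heap_insert(t);
        }
}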


> --
>                                                                 Gilles.
>