Philippe Gerum wrote:
> Jan Kiszka wrote:
>> BTW, I'm still having difficulty understanding why the nucleus, on
>> the one hand, accepts nanoseconds as time(out) values in periodic
>> mode and converts them back internally, but, on the other hand,
>> forces users to convert periodic ticks to nanoseconds themselves.
>> Did I ask you about this issue before? If so, my apologies in
>> advance - I don't remember it anymore. ;)
>
> In periodic mode, the nucleus accepts jiffies/ticks directly; only
> aperiodic mode accepts nanoseconds as time inputs, which is
> consistent if you consider that the tick value in aperiodic mode is
> the most precise one the system can give you, i.e. 1 ns. I'm pretty
> sure I did not answer your question, though :o> Could you be a bit
> more specific?
OK, here is my scenario: I want to provide RTDM driver developers with
two functions that return the current time
a) in some internal tick unit (TSC or wallclock, mode-dependent) and
b) in nanoseconds.
The developer should then be able to call timed API functions (e.g.
task_sleep_until or sem_wait_with_timeout) by providing either
nanoseconds or internal ticks. The first variant (nanoseconds) would
require no further internal conversion in RTDM for aperiodic mode; the
latter would save it in periodic mode. Each would be inefficient for the
other mode, because values would be uselessly converted back and forth.
Both variants would also require on-the-fly checks of the currently
active timer mode. And all of that is due to the varying meaning of
nucleus timeout parameters. Or am I wrong?
I would personally vote for a more consistent approach: either ticks or
nanoseconds - in both modes. Actually, I would prefer the tick variant
at the nucleus level. That would at least allow driver code to pass
pre-converted timeout values to the nucleus, saving a few cycles in
critical paths - which can add up to quite a bit (some microseconds) on
low-end systems. I think you have already heard this argument from me
some time ago... ;)
Jan