Jan Kiszka wrote:
Philippe Gerum wrote:

Jan Kiszka wrote:


BTW, I'm still having difficulty understanding the motivation: the nucleus on the one hand accepts nanoseconds as time(out) values in periodic mode and converts them back internally, but on the other hand users are forced to transform periodic ticks to ns on their own. Did I ask you about this issue before? If so, please accept my apology in advance; I don't remember it anymore. ;)


In periodic mode, the nucleus accepts jiffies/ticks directly; only the aperiodic mode accepts nanoseconds as time inputs, which is consistent if you consider that the tick value in aperiodic mode is the most precise one the system can give you, i.e. 1 ns. I'm pretty sure I did not answer your question, though :o> Could you be a bit more specific?


Ok, here is my scenario: I want to provide RTDM driver developers with two functions that return the current time

  a) in some internal tick units (TSC or wallclock, mode-dependent) and
  b) in nanoseconds.

Then the developer should be able to call timed API functions (e.g. task_sleep_until or sem_wait_with_timeout) by providing either nanoseconds or internal ticks. The first variant (nanoseconds) would spare RTDM any further internal conversion in aperiodic mode; the latter would do the same in periodic mode.
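To make the idea concrete, here is a minimal sketch of what the two time sources could look like. All names (rtdm_clock_read_ns, rtdm_clock_read_ticks) and the 1 ms periodic tick are my own assumptions for illustration, not existing RTDM API; the clock is simulated by a plain variable:

```c
#include <assert.h>
#include <stdint.h>

typedef enum { TM_APERIODIC, TM_PERIODIC } timer_mode_t;

static timer_mode_t timer_mode = TM_PERIODIC;
static const uint64_t tick_period_ns = 1000000ULL; /* assumed 1 ms periodic tick */
static uint64_t now_ns = 5000000ULL;               /* simulated clock: 5 ms */

/* b) current time in nanoseconds, mode-independent */
uint64_t rtdm_clock_read_ns(void)
{
    return now_ns;
}

/* a) current time in internal tick units; the meaning of one unit
 * depends on the active timer mode */
uint64_t rtdm_clock_read_ticks(void)
{
    if (timer_mode == TM_PERIODIC)
        return now_ns / tick_period_ns; /* jiffies */
    return now_ns;                      /* aperiodic: one tick == 1 ns */
}
```

Whichever representation the driver then feeds back into a timed service, one of the two modes gets a conversion for free.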

I still don't get why you would bother providing two different time specs. Is this really needed?

Either one alone would be inefficient for one of the modes, because values would be uselessly converted back and forth. And both variants would require on-the-fly checks of the currently active timer mode. All of that is due to the varying meaning of nucleus timeout parameters. Or am I wrong?
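The on-the-fly check I mean would look roughly like this. This is a hypothetical sketch (the function name and the 1 ms tick period are assumed): every timed service taking nanoseconds would have to branch on the timer mode and, in periodic mode, pay a division on each call:

```c
#include <assert.h>
#include <stdint.h>

typedef enum { TM_APERIODIC, TM_PERIODIC } timer_mode_t;

static timer_mode_t timer_mode = TM_PERIODIC;
static const uint64_t tick_period_ns = 1000000ULL; /* assumed 1 ms tick */

/* Convert a nanosecond timeout to whatever unit the nucleus timer
 * currently runs on. The branch on timer_mode is the per-call check,
 * and the division is the cost paid in periodic mode. */
uint64_t ns_to_nucleus_timeout(uint64_t ns)
{
    if (timer_mode == TM_PERIODIC)
        /* round up so we never sleep shorter than requested */
        return (ns + tick_period_ns - 1) / tick_period_ns;
    return ns; /* aperiodic: already in ns, passes through */
}
```

In aperiodic mode the value passes through untouched, so the conversion only burns cycles in periodic mode, but the mode check itself sits in every critical path.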


The current spec regarding timeouts has been chosen on a simple observation: applications tend to use the time representation of the RTOS they rely on as their internal time base too, so if the RTOS deals with ticks, the app does the same for its own housekeeping. In this respect, most of the traditional RTOS APIs take ticks as input.

Nanoseconds have been chosen for the aperiodic mode because TSC values are inherently non-portable architecture-wise, and because theoretically, CPU clock speed may even differ across CPUs on some SMP systems.

I would personally vote for a more consistent way: either ticks or nanoseconds - in both modes. Actually, I would prefer the tick variant at nucleus level.

What would such a tick represent in aperiodic mode, time-wise?

This would at least open the possibility for driver development to pass pre-converted timeout values to the nucleus, saving a few cycles in critical paths, which can amount to a lot (some microseconds) on low-end systems. I think you already heard this argument from me some time ago... ;)

Yes, I remember our conversation, and it had an immediate consequence: Gilles completely reworked the conversion routines and provided far better replacements for them in fusion.

IOW, I'm not sure that the actual optimization you would get from such approach would be worth the effort.

--

Philippe.
