Philippe Gerum wrote:
Jan Kiszka wrote:

Philippe Gerum wrote:

Jan Kiszka wrote:


BTW, I'm still having difficulties understanding the motivation: the nucleus on the one hand accepts nanoseconds as time(out) values in periodic mode and converts them back internally, but on the other hand users are forced to transform periodic ticks to ns on their own. Did I ask you about this issue before? If so, please accept my apologies in advance, I don't remember it anymore. ;)


In periodic mode, the nucleus accepts jiffies/ticks directly; only the aperiodic mode accepts nanoseconds as time inputs, which is consistent if you consider that the tick value in aperiodic mode is the most precise one the system can give you, i.e. 1 ns. I'm pretty sure I did not answer your question, though :o> Could you be a bit more specific?


Ok, here is my scenario: I want to provide RTDM driver developers with two functions that return the current time

  a) in some internal tick units (TSC or wallclock, mode-dependent) and
  b) in nanoseconds.

Then the developer should be able to call timed API functions (e.g. task_sleep_until or sem_wait_with_timeout) by providing either nanoseconds or internal ticks. The first variant (nanoseconds) would not require any further internal conversion by RTDM in aperiodic mode; the latter would save it in periodic mode.
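Roughly like this, just to illustrate the idea (the names below are made up, nothing of this exists yet):

typedef unsigned long long rtdm_time_t;

/* a) internal tick units: TSC in aperiodic mode, wallclock ticks in
      periodic mode - whatever the nucleus timer uses natively. */
rtdm_time_t rtdm_clock_read_raw(void);

/* b) nanoseconds, independent of the current timer mode. */
rtdm_time_t rtdm_clock_read_ns(void);

The timed services would then accept either representation, so a driver can pick the one that needs no conversion under the configured timer mode, e.g. task_sleep_until(rtdm_clock_read_raw() + delay_in_raw_units).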


I still don't get why you would bother providing two different time specs? Is this really needed?

I think it's time to repeat the measurements, e.g. rt_timer_read() vs. rt_timer_read_tsc(). It will certainly not make a big difference on GHz machines, but on anything around a Pentium I, we once measured a significant overhead. I will try to organise some more concrete numbers tomorrow.
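The measurement itself is trivial; something along these lines should already give an order of magnitude (I'm assuming the RTIME rt_timer_read(void) / rt_timer_read_tsc(void) prototypes and the header path, both may differ between releases):

#include <stdio.h>
#include <native/timer.h>   /* header path assumed */

#define LOOPS 100000

int main(void)
{
    RTIME start, end;
    volatile RTIME sink;
    int i;

    start = rt_timer_read_tsc();
    for (i = 0; i < LOOPS; i++)
        sink = rt_timer_read();         /* mode-dependent readout */
    end = rt_timer_read_tsc();
    printf("rt_timer_read:     %llu tsc per call\n",
           (unsigned long long)(end - start) / LOOPS);

    start = rt_timer_read_tsc();
    for (i = 0; i < LOOPS; i++)
        sink = rt_timer_read_tsc();     /* raw TSC readout */
    end = rt_timer_read_tsc();
    printf("rt_timer_read_tsc: %llu tsc per call\n",
           (unsigned long long)(end - start) / LOOPS);

    return 0;
}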


Both would be inefficient for one of the modes, because values will be converted uselessly back and forth. Both variants would require on-the-fly checks of the currently active timer mode. And that's all because of the varying meaning of nucleus timeout parameters. Or am I wrong?


The current spec regarding timeouts was chosen based on a simple observation: applications tend to use the time representation of the RTOS they rely on as their internal time base too, so if the RTOS deals with ticks, the app does the same for its own housekeeping. In this respect, most traditional RTOS APIs take ticks as input.

Nanoseconds have been chosen for the aperiodic mode because TSC values are inherently non-portable architecture-wise, and because theoretically, CPU clock speed may even differ across CPUs on some SMP systems.

Simple example, based on native skin:
Call rt_sem_p with a timeout provided in nanoseconds.

aperiodic mode
--------------
timeout = 1000000; // 1 ms
...
rt_sem_p(&sem, timeout);

periodic mode
-------------
timeout = 1000000; // 1 ms
...
rt_sem_p(&sem, rt_timer_ns2ticks(timeout));

Now you have two different code fragments serving the same purpose, each correct only under a particular timer mode. That's no big problem for an application which knows how it configured the timer, but anything at lower levels always has to check for the current mode.
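Spelled out, that lower-level check looks roughly like this (timer_is_periodic() is a made-up placeholder for whatever mode query would be used; the conversion helper is the real one from the example above):

#include <native/timer.h>   /* assumed header for SRTIME and rt_timer_ns2ticks() */

int timer_is_periodic(void);   /* placeholder, does not exist as such */

static inline SRTIME to_nucleus_timeout(SRTIME timeout_ns)
{
    if (timer_is_periodic())
        return rt_timer_ns2ticks(timeout_ns);  /* periodic mode wants ticks */

    return timeout_ns;                         /* aperiodic mode wants ns */
}

so every timed call ends up as rt_sem_p(&sem, to_nucleus_timeout(timeout)), paying for the check and a possible conversion each time.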


I do not yet understand the portability problem of TSC values, but the inconsistency issue on SMP systems was new to me and makes your choice a bit more comprehensible. Does moving to nanoseconds really solve the problem? The clocks used in aperiodic mode will not run synchronised, nor will they be started at (almost) the same time, will they? So all absolute times (taken on CPU A and applied on CPU B) might still be inconsistent.


I would personally vote for a more consistent way: either ticks or nanoseconds - in both modes. Actually, I would prefer the tick variant at nucleus level.


What would such tick represent in aperiodic mode, time-wise?

For me, ticks are whatever the system timer uses internally in the respective mode: TSC values, then?! That would make it more understandable to the developer that one always has to convert human-readable time into internal units.

Well, I just read through xntimer_do_timers again, and I remembered it correctly: you don't actually use nanoseconds as the tick abstraction in aperiodic mode. Otherwise, you would store all timer dates as nanoseconds and convert the current time to ns before comparing it against the pending timers. The way it is now, it should not provide any advantage on SMP boxes, should it?


This would open the possibility, at least for driver development, to pass pre-converted timeout values to the nucleus - saving a few cycles in critical paths, which can add up to a lot (some microseconds) on low-end systems. I think you already heard this argument from me some time ago... ;)


Yes, I remember our conversation, and it had an immediate consequence: Gilles completely reworked the conversion routines and provided far better replacements for them in fusion.

IOW, I'm not sure that the actual optimization you would get from such an approach would be worth the effort.
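Just to make my point above concrete: the idea is merely to move the conversion out of the hot path, roughly like this (driver structure and names are made up, expressed here with the native-skin calls from your example):

#include <native/timer.h>   /* assumed header for RTIME, rt_timer_ns2ticks() */
#include <native/sem.h>     /* assumed header for RT_SEM, rt_sem_p() */

struct my_dev {
    RTIME timeout_raw;      /* timeout already in the nucleus' native unit */
};

/* configuration path, called rarely: convert once */
static void my_dev_set_timeout(struct my_dev *dev, RTIME timeout_ns)
{
    dev->timeout_raw = rt_timer_ns2ticks(timeout_ns);
}

/* hot path, called per I/O: no mode check, no conversion */
static int my_dev_wait(struct my_dev *dev, RT_SEM *sem)
{
    return rt_sem_p(sem, dev->timeout_raw);
}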


Ok, let's wait for the numbers. Maybe all my energy for arguing would have been better spent on coding. ;)

Jan
