On 5/17/2023 4:21 AM, Jukka Laitinen wrote:

Hi,

I just observed the behaviour mentioned in the subject;

I tried just calling in a loop:

"

    sem_t sem = SEM_INITIALIZER(0);

    int ret;

    ret = nxsem_tickwait_uninterruptible(&sem, 1);

"

, and never posting the sem from anywhere. The function returns -ETIMEDOUT properly on every call.

But when measuring the time actually spent in the wait, I randomly see that the sleep time is sometimes less than one systick.

If I set the systick to 10ms, I typically see (correct) sleep times between 10000 and 20000 us. But sometimes (very randomly) the sleep time is between 0 and 10000 us. Even in these error cases the return value is correct (-110, -ETIMEDOUT).

When sleeping for 2 ticks, I randomly see sleep times between 10000 and 20000 us; for 3 ticks, between 20000 and 30000 us. So, randomly, the sleep is exactly one systick too short.

I looked through the implementation of "nxsem_tickwait_uninterruptible" itself and didn't see a problem there. (Actually, I think there is a bug in the -EINTR case: after an interruption it should always sleep at least one more tick, and currently it doesn't. But that is unrelated to this issue; in my test there was no -EINTR.)

I believe the problem might be somewhere in sched/wdog/ , but so far couldn't track down what causes it.

Has anyone else seen the same issue?

Br,

Jukka


If I understand properly what you are seeing, then it is normal and correct behavior for an arbitrary (asynchronous) timer.  See https://cwiki.apache.org/confluence/display/NUTTX/Short+Time+Delays for an explanation.

NuttX timers have always worked that way, and this has confused people who use the timers near the limits of their resolution.  A solution is to use a very high-resolution timer in tickless mode.

