anchao commented on code in PR #15044:
URL: https://github.com/apache/nuttx/pull/15044#discussion_r1870473293
##########
drivers/timers/arch_alarm.c:
##########
@@ -46,6 +46,7 @@ static FAR struct oneshot_lowerhalf_s *g_oneshot_lower;
#ifndef CONFIG_SCHED_TICKLESS
static clock_t g_current_tick;
+static clock_t g_base_tick;
Review Comment:
This is related to the implementation of wd_timer. In the latest wd_timer
implementation, the CLOCK_REALTIME path is not compatible with a hardware
clock that does not start from 0.
1. Since clock_systime_timespec() has been adapted to call up_timer_gettime(),
the system time is now obtained directly from the current hardware timer
(a simplified model follows the call chain below):
```
struct timespec time;
clock_gettime(CLOCK_REALTIME, &time)
|
->nxclock_gettime()
|
->clock_systime_timespec()
|
->up_timer_gettime()
```
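
Here is a minimal model of that path, assuming (as the call chain suggests) that the REALTIME value now amounts to g_basetime plus whatever up_timer_gettime() reads from the hardware counter. The `*_model` names below are illustrative stand-ins, not the actual NuttX code:
```c
#include <time.h>

static struct timespec g_basetime_model;   /* wall time captured at boot */

/* Stand-in for up_timer_gettime(): returns the raw hardware counter as a
 * timespec.  On hardware whose counter does not start from 0, the value
 * already contains a large offset at boot.
 */

static void up_timer_gettime_model(struct timespec *ts)
{
  ts->tv_sec  = 100000;   /* e.g. the counter already held ~100000 s */
  ts->tv_nsec = 0;
}

/* Simplified clock_gettime(CLOCK_REALTIME) after the adaptation:
 * basetime + raw hardware time, so the hardware offset leaks into the
 * REALTIME value while the scheduler tick (with g_base_tick) starts at 0.
 */

static void clock_realtime_model(struct timespec *ts)
{
  struct timespec hw;

  up_timer_gettime_model(&hw);
  ts->tv_sec  = g_basetime_model.tv_sec + hw.tv_sec;
  ts->tv_nsec = g_basetime_model.tv_nsec + hw.tv_nsec;
  if (ts->tv_nsec >= 1000000000)
    {
      ts->tv_sec  += 1;
      ts->tv_nsec -= 1000000000;
    }
}

int main(void)
{
  struct timespec now;

  clock_realtime_model(&now);  /* tv_sec is far ahead of the ticks elapsed since boot */
  return 0;
}
```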
If the hardware wall clock does not start from 0, this value deviates greatly
from the current system tick, so when a CLOCK_REALTIME timeout is set for
wd_timer, the tick calculation will be wrong:
```
time.tv_sec += 2;
pthread_mutex_timedlock(&mutex, &time)
|
->pthread_mutex_take
|
->mutex_clocklock
|
->nxmutex_clocklock
|
->nxsem_clockwait
|
->wd_start_realtime
|
->clock_realtime2absticks
```
https://github.com/apache/nuttx/blob/master/include/nuttx/wdog.h#L275-L278

https://github.com/apache/nuttx/blob/3e3701b2721c216cea47b72835a13966da52b555/sched/clock/clock_realtime2absticks.c#L63

The clock_realtime2absticks() implementation only subtracts the g_basetime
offset and ignores the hardware wall clock, so after clock_time2ticks() a very
large tick deviation is obtained.
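
To make the deviation concrete, here is a rough, self-contained model of that conversion; the 100000 s hardware offset, the 100 tick/s rate and all variable names are illustrative assumptions, not values from the actual code:
```c
#include <stdint.h>

#define TICK_PER_SEC 100               /* assume a 10 ms tick for the example */

int main(void)
{
  uint64_t hw_offset_sec = 100000;     /* hardware counter at boot (not 0)    */
  uint64_t uptime_sec    = 5;          /* seconds actually elapsed since boot */
  uint64_t basetime_sec  = 1700000000; /* g_basetime, set at boot             */

  /* What clock_gettime(CLOCK_REALTIME) now returns */

  uint64_t realtime_sec  = basetime_sec + hw_offset_sec + uptime_sec;

  /* The caller asks for a 2 s timeout */

  uint64_t abstime_sec   = realtime_sec + 2;

  /* clock_realtime2absticks() style: (abstime - g_basetime) in ticks.
   * The raw hardware offset survives the subtraction.
   */

  uint64_t absticks      = (abstime_sec - basetime_sec) * TICK_PER_SEC;

  /* The scheduler tick domain starts from 0 thanks to g_base_tick */

  uint64_t current_tick  = uptime_sec * TICK_PER_SEC;

  /* Expected delay: 2 s -> 200 ticks.  Actual: 10000200 ticks, i.e. the
   * whole hardware offset is added to the timeout.
   */

  uint64_t delay_ticks   = absticks - current_tick;

  (void)delay_ticks;
  return 0;
}
```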
The correct approach would be to additionally remove the hardware wall time and
wait using a relative tick offset, but if that were implemented,
wd_start_abstick() would no longer have any performance advantage, and every
timer using wd_start_abstick() would need to be changed.
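
For illustration, a sketch of that relative-offset variant under the same assumptions as the model above (illustrative names, not the real wd_start()/clock_realtime2absticks() code): the deadline is first reduced against the current CLOCK_REALTIME reading, so a constant hardware offset cancels out before the tick conversion:
```c
#include <stdint.h>

#define TICK_PER_SEC 100   /* same illustrative tick rate as above */

/* Convert a CLOCK_REALTIME deadline into an absolute scheduler tick by
 * going through a relative delay (deadline minus "now"), so a constant
 * hardware offset present in both values cancels out.
 */

static uint64_t realtime_deadline_to_abstick(uint64_t deadline_sec,
                                             uint64_t realtime_now_sec,
                                             uint64_t current_tick)
{
  uint64_t delay_sec = deadline_sec > realtime_now_sec ?
                       deadline_sec - realtime_now_sec : 0;

  return current_tick + delay_sec * TICK_PER_SEC;
}

int main(void)
{
  uint64_t realtime_now_sec = 1700000000ULL + 100000 + 5; /* basetime + hw offset + uptime */
  uint64_t deadline_sec     = realtime_now_sec + 2;       /* 2 s timeout      */
  uint64_t current_tick     = 5 * TICK_PER_SEC;           /* ticks since boot */

  uint64_t abstick = realtime_deadline_to_abstick(deadline_sec,
                                                  realtime_now_sec,
                                                  current_tick);

  /* abstick == current_tick + 200: the hardware offset no longer matters,
   * but every wd_start_abstick() caller would need this extra clock read
   * first, which is the performance concern mentioned above.
   */

  (void)abstick;
  return 0;
}
```
This is only a sketch of the trade-off: the extra "read the clock now" step is exactly what wd_start_abstick() currently avoids.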