On 09/04/2021 at 16:12, Rasmus Villemoes wrote:
On powerpc, time as measured by get_timer() ceases to pass when
interrupts are disabled (since on powerpc get_timer() returns the
value of a volatile variable that gets updated via a timer
interrupt). That in turn means the watchdog_reset() function provided
by CONFIG_WDT ceases to work due to the ratelimiting it imposes.

Normally, interrupts are just disabled very briefly. However, during
bootm, they are disabled for good prior to decompressing the kernel
image, which can be a somewhat time-consuming operation. Even when we
manage to decompress the kernel and do the other preparation steps and
hand over control to the kernel, the kernel also takes some time
before it is ready to assume responsibility for handling the
watchdog. The end result is that the board gets reset prematurely.

The ratelimiting isn't really strictly needed (prior to DM WDT, no
such thing existed), so just disable it when we know that time no
longer passes and have watchdog_reset() (e.g. called from
decompression loop) unconditionally reset the watchdog timer.


Do we need to make it that complicated? I think that before the generic implementation, powerpc had no rate limiting at all for pinging the watchdog, so why not go back to that behaviour unconditionally?

I mean we could simply set reset_period to 0 at all times for powerpc (and change the test to time_after_eq() instead of time_after()).
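To make the suggestion concrete, here is a small stand-alone model of the ratelimit check, using the proposed time_after_eq() comparison. Names such as fake_time, watchdog_reset_model and pings are illustrative stand-ins, not the actual U-Boot symbols; fake_time plays the role of get_timer(0), which stops advancing on powerpc while interrupts are disabled.

```c
#include <assert.h>

/* Toy model of the ratelimit check in the generic watchdog_reset().
 * All names are illustrative, not the exact U-Boot symbols. */
typedef unsigned long ulong;

/* Linux/U-Boot-style wrap-safe time comparisons */
#define time_after(a, b)    ((long)((b) - (a)) < 0)
#define time_after_eq(a, b) ((long)((b) - (a)) <= 0)

ulong fake_time;               /* stands in for get_timer(0); frozen when IRQs are off */
ulong next_reset;              /* earliest time the next ping is allowed */
ulong reset_period = 1000;     /* ratelimit interval in ms */
int pings;                     /* counts calls that actually pinged the hardware */

void watchdog_reset_model(void)
{
	ulong now = fake_time;

	/* ratelimit: skip the ping unless enough time has passed */
	if (time_after_eq(now, next_reset)) {
		next_reset = now + reset_period;
		pings++;   /* stands in for wdt_reset(gd->watchdog_dev) */
	}
}
```

With time frozen and a nonzero reset_period, only the first call pings; every later call sees the same "now" and is silently dropped, which is the premature-reset scenario described above. With reset_period set to 0, next_reset always equals now, so every call pings even when time stands still, but only because time_after_eq() accepts equality; the strict time_after() would still drop every ping, which is why the comparison has to change along with the period.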



Signed-off-by: Rasmus Villemoes <rasmus.villem...@prevas.dk>
---

I previously sent a patch to change the ratelimiting to be based on
get_ticks() instead of get_timer(), but that has gone nowhere
[1]. This is an alternative which only affects powerpc (and only
boards that have enabled CONFIG_WDT). I hope the watchdog maintainers
will accept at least one of these, or suggest a third alternative, so
I don't have to keep some out-of-tree patch applied without knowing if
that's the direction upstream will take.

[1] 
https://patchwork.ozlabs.org/project/uboot/patch/20200605111657.28773-1-rasmus.villem...@prevas.dk/

