On Tue, May 11, 2021 at 11:31 AM Greg Gallagher <g...@embeddedgreg.com> wrote:
> On Tue, May 11, 2021 at 11:25 AM Jan Kiszka <jan.kis...@siemens.com> wrote:
>> On 11.05.21 16:57, Grau, Gunter via Xenomai wrote:
>> >> -----Original Message-----
>> >> From: Gunter Grau <gunter.g...@philips.com>
>> >> Sent: Friday, March 5, 2021 09:22
>> >> To: xenomai@xenomai.org
>> >> Cc: r...@xenomai.org; Grau, Gunter <gunter.g...@philips.com>
>> >> Subject: [PATCH] ipipe_tsc: Use WRITE_ONCE for updating last_tsc
>> >>
>> >> For the update of the 8-byte last_tsc value, gcc 8.3 generates two
>> >> separate store instructions (see __ipipe_update_tsc):
>> >>
>> >> e2500001  subs r0, r0, #1
>> >> e5830000  str  r0, [r3]
>> >> e2c11000  sbc  r1, r1, #0
>> >> e5831004  str  r1, [r3, #4]
>> >>
>> >> This may lead to a wrong reading of the counter in the assembler part
>> >> __ipipe_tsc_get if a CPU core reads the value while only half of the
>> >> last_tsc field has been updated.
>> >>
>> >> __ipipe_freerunning_32:
>> >> 	ldr	r0, .LCfr32_cntr_addr
>> >> /* User-space entry-point: r0 is the hardware counter virtual address */
>> >> 	myldrd	r2, r3, r1, .LCfr32_last_tsc
>> >> #ifndef CONFIG_CPU_BIG_ENDIAN
>> >> /* Little endian */
>> >> 	ldr	r0, [r0]
>> >> 	cmp	r2, r0
>> >> 	adc	r1, r3, #0
>> >>
>> >> The issue can be seen when enabling the DEBUG_TIMEKEEPING option in the
>> >> kernel and may lead to a jump backwards in time for the monotonic timer
>> >> in Linux:
>> >>
>> >> WARNING: Underflow in clocksource 'ipipe_tsc' observed, time update ignored.
>> >>          Please report this, consider using a different clocksource, if possible.
>> >>          Your kernel is probably still fine.
>> >>
>> >> Fix this by using the WRITE_ONCE macro to prevent the compiler from
>> >> tearing the write, since this field is accessed from different CPU
>> >> cores. After this change the resulting assembler uses a double-word
>> >> store instruction, which should fix the issue:
>> >>
>> >> e2506001  subs r6, r0, #1
>> >> e2c17000  sbc  r7, r1, #0
>> >> e1c360f0  strd r6, [r3]
>> >>
>> >> Signed-off-by: Gunter Grau <gunter.g...@philips.com>
>> >> ---
>> >>  arch/arm/kernel/ipipe_tsc.c | 2 +-
>> >>  1 file changed, 1 insertion(+), 1 deletion(-)
>> >>
>> >> diff --git a/arch/arm/kernel/ipipe_tsc.c b/arch/arm/kernel/ipipe_tsc.c
>> >> index 7e58118254b8..6d9e1cefa964 100644
>> >> --- a/arch/arm/kernel/ipipe_tsc.c
>> >> +++ b/arch/arm/kernel/ipipe_tsc.c
>> >> @@ -199,7 +199,7 @@ void __ipipe_tsc_update(void)
>> >>
>> >>  	/* Update last_tsc, in order to remain compatible with legacy
>> >>  	   user-space 32 bits free-running counter implementation */
>> >> -	ipipe_tsc_value->last_tsc = __ipipe_tsc_get() - 1;
>> >> +	WRITE_ONCE(ipipe_tsc_value->last_tsc, __ipipe_tsc_get() - 1);
>> >>  }
>> >>  EXPORT_SYMBOL(__ipipe_tsc_get);
>> >>
>> >> --
>> >> 2.17.1
>> >>
>> >
>> > Hi,
>> >
>> > I wanted to ask if someone could have a look at this patch.
>> > I sent two patches for the ipipe at that time but did not see any
>> > follow-ups regarding this one.
>> >
>> > The other one was just a goodie I wondered about while investigating
>> > the issue above.
>> >
>> > This was really a pain, and I wanted to save others from having the
>> > same issues. The observed issues were hanging processes (looping in
>> > gettimeofday) or RCU stalls on different cores.
>> >
>> > If I see this correctly, the issue is present in every ipipe patch from
>> > Xenomai 2.6 to 3.1; I haven't checked older ones. If the compiler
>> > decides to generate such code (which it is allowed to), you will see
>> > sporadic issues of the same type that are not really reproducible.
>> >
>> > So please have a look and consider accepting this.
>> >
>> > Thanks in advance,
>> > Gunter
>> >
>>
>> Sorry, this may have fallen through the cracks.
>>
>> Greg, did you have a look at this, or is this already in your queue?
>>
>> Jan
>>
>> --
>> Siemens AG, T RDA IOT
>> Corporate Competence Center Embedded Linux
>
> Sorry, this one is in my queue; I've tested it and it looks good. I'll
> apply it tonight and get new patches out.
>
> Thanks
>
> Greg

Applied, thanks for the patch. Sorry it took so long; new patches will be
pushed out this weekend.

-Greg
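For readers unfamiliar with the pattern the patch relies on, here is a
minimal userspace sketch of what WRITE_ONCE/READ_ONCE accomplish. The macro
definitions below mirror the kernel's volatile-access approach for scalar
sizes only, and the shared_tsc/update_tsc/read_tsc names are illustrative
stand-ins, not the actual ipipe code:

/*
 * Minimal sketch (not the ipipe code itself): a 64-bit value written on
 * one CPU and read on another. The volatile access forbids the compiler
 * from re-reading, caching, or gratuitously splitting the access.
 */
#include <stdint.h>
#include <stdio.h>

#define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)       (*(volatile __typeof__(x) *)&(x))

static uint64_t shared_tsc;	/* stands in for ipipe_tsc_value->last_tsc */

/* Writer side: roughly what __ipipe_tsc_update() does with last_tsc. */
static void update_tsc(uint64_t now)
{
	/*
	 * A plain "shared_tsc = now - 1" allowed gcc to emit two 32-bit
	 * str instructions on ARM32; the volatile store keeps the access
	 * intact and, in the reported case, made gcc emit a single strd.
	 */
	WRITE_ONCE(shared_tsc, now - 1);
}

/* Reader side: another CPU sampling the cached value. */
static uint64_t read_tsc(void)
{
	return READ_ONCE(shared_tsc);
}

int main(void)
{
	update_tsc(0x100000000ULL);	/* value crossing the 32-bit boundary */
	printf("last_tsc = %llu\n", (unsigned long long)read_tsc());
	return 0;
}

One caveat worth keeping in mind: for a 64-bit value on a 32-bit core,
volatile accesses discourage but do not architecturally guarantee a single
store instruction, which is presumably why the commit message says the
double-word store "should" fix the issue rather than "will".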