Hi Paul,

On Fri, Oct 21, 2016 at 08:04:17PM +1100, Paul Mackerras wrote:
> This fixes a race condition where one thread that is entering or
> leaving a power-saving state can inadvertently ignore the lock bit
> that was set by another thread, and potentially also clear it.
>
> The core_idle_lock_held function is called when the lock bit is
> seen to be set.  It polls the lock bit until it is clear, then
> does a lwarx to load the word containing the lock bit and thread
> idle bits so it can be updated.  However, it is possible that the
> value loaded with the lwarx has the lock bit set, even though an
> immediately preceding lwz loaded a value with the lock bit clear.
> If this happens then we go ahead and update the word despite the
> lock bit being set, and when called from pnv_enter_arch207_idle_mode,
> we will subsequently clear the lock bit.
>
> No identifiable misbehaviour has been attributed to this race.
>
> This fixes it by checking the lock bit in the value loaded by the
> lwarx.  If it is set then we just go back and keep on polling.
>
> Fixes: b32aadc1a8ed
This fixes code that has been around since the 4.2 kernel. Should this
be marked for stable as well?

> Signed-off-by: Paul Mackerras <pau...@ozlabs.org>
> ---
>  arch/powerpc/kernel/idle_book3s.S | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/arch/powerpc/kernel/idle_book3s.S b/arch/powerpc/kernel/idle_book3s.S
> index 0d8712a..72dac0b 100644
> --- a/arch/powerpc/kernel/idle_book3s.S
> +++ b/arch/powerpc/kernel/idle_book3s.S
> @@ -90,6 +90,7 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_ARCH_300)
>   * Threads will spin in HMT_LOW until the lock bit is cleared.
>   * r14 - pointer to core_idle_state
>   * r15 - used to load contents of core_idle_state
> + * r9 - used as a temporary variable
>   */
>
> core_idle_lock_held:
> @@ -99,6 +100,8 @@ core_idle_lock_held:
> 	bne	3b
> 	HMT_MEDIUM
> 	lwarx	r15,0,r14
> +	andi.	r9,r15,PNV_CORE_IDLE_LOCK_BIT
> +	bne	core_idle_lock_held
> 	blr
>
> /*
> --
> 2.7.4

--
Thanks and Regards
gautham.
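
For anyone reading along: the pattern the patch arrives at can be sketched in portable C11 atomics. This is only an illustrative analogue, not the kernel's actual code; the names (clear_thread_bit, LOCK_BIT, the bit layout) are hypothetical, with the lwarx modeled as a plain re-load and the stwcx. as a compare-and-swap:

```c
#include <stdatomic.h>

/* Hypothetical lock bit, standing in for PNV_CORE_IDLE_LOCK_BIT. */
#define LOCK_BIT 0x100u

/*
 * Clear this thread's idle bit in the shared core_idle_state word.
 * Mirrors the fixed sequence: poll while the lock bit is held, load
 * the value to be updated, and re-check the lock bit in that freshly
 * loaded value, because another thread may have taken the lock between
 * the polling load and the load we actually update from.
 */
static unsigned int clear_thread_bit(_Atomic unsigned int *core_idle_state,
                                     unsigned int thread_bit)
{
    unsigned int old, new;

    for (;;) {
        /* The HMT_LOW poll loop: spin until the lock bit looks clear. */
        while (atomic_load(core_idle_state) & LOCK_BIT)
            ;

        /* The "lwarx": load the value we intend to modify. */
        old = atomic_load(core_idle_state);

        /*
         * The fix: the lock bit may have been set again since the poll
         * above, so check it in this value and go back to polling if so.
         */
        if (old & LOCK_BIT)
            continue;

        new = old & ~thread_bit;

        /* The "stwcx.": succeeds only if the word is still unchanged. */
        if (atomic_compare_exchange_weak(core_idle_state, &old, new))
            return new;
    }
}
```

Without the `old & LOCK_BIT` re-check, the update could be built from a value that already has the lock bit set and then written back with that thread's idle bit cleared and, on the pnv_enter_arch207_idle_mode path, the lock bit cleared too.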