Hi Greg,

On 2018/10/13 15:30, Greg Kroah-Hartman wrote:
> On Sat, Oct 13, 2018 at 03:22:08PM +0800, Gao Xiang wrote:
>> Hi Greg,
>>
>> On 2018/10/13 15:04, Greg Kroah-Hartman wrote:
>>> On Sat, Oct 13, 2018 at 02:47:29PM +0800, Gao Xiang wrote:
>>>> It is better to use smp_cond_load_relaxed instead
>>>> of busy waiting for bit_spinlock.
>>>
>>> Why?  I think we need some kind of "proof" that this is true before
>>> being able to accept a patch like this, don't you agree?
>>
>> There are some earlier materials which discuss smp_cond_load_*:
>> https://patchwork.kernel.org/patch/10335991/
>> https://patchwork.kernel.org/patch/10325057/
>>
>> On ARM64, they implement a function called "cmpwait", which uses
>> hardware instructions to monitor a value change; I think it is more
>> energy efficient than just doing an open-coded busy loop...
>>
>> And it seems smp_cond_load_* is already used in the current kernel,
>> such as:
>> ./kernel/locking/mcs_spinlock.h
>> ./kernel/locking/qspinlock.c
>> ./kernel/sched/core.c
>> ./kernel/smp.c
>>
>> For other architectures like x86/arm64, I think they could implement
>> smp_cond_load_* later.
>
> And have you benchmarked this change to show that it provides any
> benefit?
>
> You need to do that...
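For reference, the direction I have in mind is roughly the sketch below
(untested, and the helper names are only illustrative, not the actual
patch): replace the open-coded cpu_relax() wait on the lock bit with
smp_cond_load_relaxed(), so that an architecture such as arm64 can plug
in its cmpwait-based implementation instead of pure spinning:

/*
 * Rough, untested sketch (not the actual patch): wait for a
 * bit_spinlock-style lock bit to be cleared.  Function names and
 * context are illustrative only.
 */
#include <linux/bitops.h>	/* test_bit(), BIT() */
#include <asm/barrier.h>	/* smp_cond_load_relaxed() */
#include <asm/processor.h>	/* cpu_relax() */

/* current pattern: open-coded busy wait on the lock bit */
static inline void wait_bit_busy(int bitnum, unsigned long *addr)
{
	do {
		cpu_relax();
	} while (test_bit(bitnum, addr));
}

/* proposed pattern: let the architecture monitor the lock word */
static inline void wait_bit_cond_load(int bitnum, unsigned long *addr)
{
	/*
	 * The generic fallback still spins with cpu_relax(), but arm64
	 * overrides smp_cond_load_relaxed() with its cmpwait-based
	 * implementation, which should be more energy efficient than
	 * pure spinning.
	 */
	smp_cond_load_relaxed(addr, !(VAL & BIT(bitnum)));
}

Since the generic fallback behaves much like the open-coded loop, any
measurable win would presumably come only from architectures that
override smp_cond_load_relaxed().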
OK, it is indeed my responsibility to test this patch on ARM64.
I will test it later on ARM64 to see whether it makes any performance
difference, after I have handled the EROFS product landing work
(perhaps weeks later; there is a lot of urgent stuff for the current
product that I need to solve...).

Or, if some warm-hearted folks are interested in it, I'm very happy to
see other implementations or comments about it. :)

Thanks,
Gao Xiang

>
> thanks,
>
> greg k-h
>