From: Tal Zilcer <t...@ezchip.com>

Since the CTOP is SMT hardware multi-threaded, we need to hint the HW
that now would be a very good time to do a hardware thread context
switch. This is done by issuing the schd.rw instruction (binary coded
here so as not to require a specific revision of GCC to build the
kernel). schd.rw means that the thread becomes eligible for execution
by the threads scheduler again only after all pending read/write
transactions have completed.
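For illustration only (not part of this patch), the kind of busy-wait
loop cpu_relax() serves looks roughly like the sketch below; the flag
and helper names are made up. On CTOP every pass through the loop
issues schd.rw, so the hardware thread yields until its pending
read/write transactions complete and the core's other HW threads can
run in the meantime.

/* Illustrative sketch only -- a generic busy-wait, not code from this
 * patch. "ready" is a hypothetical flag set by another CPU/HW thread;
 * cpu_relax() is the macro defined in asm/processor.h below.
 */
static void wait_for_ready(volatile int *ready)
{
	while (!*ready)
		cpu_relax();	/* on CTOP: emit schd.rw, yield the HW thread */
}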
Implement cpu_relax_lowlatency() with barrier(), since with the current
semantics of cpu_relax() it may take a while until the yielded CPU gets
back (a sketch of such a latency-sensitive spin follows the patch).

Signed-off-by: Noam Camus <no...@ezchip.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Acked-by: Vineet Gupta <vgu...@synopsys.com>
---
 arch/arc/include/asm/processor.h |    9 +++++++++
 1 files changed, 9 insertions(+), 0 deletions(-)

diff --git a/arch/arc/include/asm/processor.h b/arch/arc/include/asm/processor.h
index 7266ede..50f9bae 100644
--- a/arch/arc/include/asm/processor.h
+++ b/arch/arc/include/asm/processor.h
@@ -58,12 +58,21 @@ struct task_struct;
  * get optimised away by gcc
  */
 #ifdef CONFIG_SMP
+#ifndef CONFIG_EZNPS_MTM_EXT
 #define cpu_relax()	__asm__ __volatile__ ("" : : : "memory")
 #else
+#define cpu_relax()	\
+	__asm__ __volatile__ (".word %0" : : "i"(CTOP_INST_SCHD_RW) : "memory")
+#endif
+#else
 #define cpu_relax()	do { } while (0)
 #endif
 
+#ifndef CONFIG_EZNPS_MTM_EXT
 #define cpu_relax_lowlatency()	cpu_relax()
+#else
+#define cpu_relax_lowlatency()	barrier()
+#endif
 
 #define copy_segments(tsk, mm)	do { } while (0)
 #define release_segments(mm)	do { } while (0)
-- 
1.7.1
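For comparison, a rough sketch (again illustrative only, with made-up
names) of the latency-sensitive kind of spin that cpu_relax_lowlatency()
is meant for, e.g. optimistic spinning on a lock owner. Here a plain
compiler barrier is preferable on EZNPS: parking the HW thread with
schd.rw would delay noticing that the owner has released the lock.

/* Illustrative sketch only -- "owner" is a hypothetical field cleared
 * by the lock holder on release; the helper name is made up.
 */
static void spin_on_owner(volatile int *owner)
{
	while (*owner)
		cpu_relax_lowlatency();	/* compiler barrier only on EZNPS */
}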