Re: [PATCH 2/4] powerpc/64: context switch avoid reservation-clearing instruction

2017-06-14 Thread Michael Ellerman
Nicholas Piggin  writes:

> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 803c3bc274c4..1f0688ad09d7 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2875,6 +2875,12 @@ context_switch(struct rq *rq, struct task_struct *prev,
>   rq_unpin_lock(rq, rf);
>   spin_release(&rq->lock.dep_map, 1, _THIS_IP_);
>  
> + /*
> +  * Some architectures require that a spin lock is taken before
> +  * _switch. The rq_lock satisfies this condition. See powerpc
> +  * _switch for details.
> +  */
> +
>   /* Here we just switch the register state and the stack. */
>   switch_to(prev, next, prev);
>   barrier();

I dropped this hunk; if you want to merge it you can resend it and get
an ack from Peterz.

cheers


[PATCH 2/4] powerpc/64: context switch avoid reservation-clearing instruction

2017-06-08 Thread Nicholas Piggin
There is no need to break the reservation in _switch, because we are
guaranteed that the context switch path will include a larx/stcx.

Comment the guarantee and remove the reservation clear from _switch.

This is worth 1-2% in context switch performance.

Signed-off-by: Nicholas Piggin 
---
 arch/powerpc/kernel/entry_64.S | 11 +++--------
 kernel/sched/core.c            |  6 ++++++
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 91f9fdc2d027..273a35926534 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -521,15 +521,10 @@ _GLOBAL(_switch)
 #endif /* CONFIG_SMP */
 
/*
-* If we optimise away the clear of the reservation in system
-* calls because we know the CPU tracks the address of the
-* reservation, then we need to clear it here to cover the
-* case that the kernel context switch path has no larx
-* instructions.
+* The kernel context switch path must contain a spin_lock,
+* which contains larx/stcx, which will clear any reservation
+* of the task being switched.
 */
-BEGIN_FTR_SECTION
-   ldarx   r6,0,r1
-END_FTR_SECTION_IFSET(CPU_FTR_STCX_CHECKS_ADDRESS)
 
 BEGIN_FTR_SECTION
 /*
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 803c3bc274c4..1f0688ad09d7 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2875,6 +2875,12 @@ context_switch(struct rq *rq, struct task_struct *prev,
rq_unpin_lock(rq, rf);
	spin_release(&rq->lock.dep_map, 1, _THIS_IP_);
 
+   /*
+* Some architectures require that a spin lock is taken before
+* _switch. The rq_lock satisfies this condition. See powerpc
+* _switch for details.
+*/
+
/* Here we just switch the register state and the stack. */
switch_to(prev, next, prev);
barrier();
-- 
2.11.0