Nicholas Piggin wrote:
---
 arch/powerpc/platforms/pseries/lpar.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
index 835e7f661a05..a961a7ebeab3 100644
--- a/arch/powerpc/platforms/pseries/lpar.c
+++ b/arch/powerpc/platforms/pseries/lpar.c
@@ -1828,8 +1828,11 @@ void hcall_tracepoint_unregfunc(void)

 /*
  * Since the tracing code might execute hcalls we need to guard against
- * recursion. H_CONFER from spin locks must be treated separately though
- * and use _notrace plpar_hcall variants, see yield_to_preempted().
+ * recursion, but this always seems risky -- __trace_hcall_entry might be
+ * ftraced, for example. So warn in this case.

__trace_hcall_[entry|exit] are no longer traced since they now carry the 'notrace' annotation (see the sketch below the quoted hunk).

+ *
+ * H_CONFER from spin locks must be treated separately though and use _notrace
+ * plpar_hcall variants, see yield_to_preempted().
  */
 static DEFINE_PER_CPU(unsigned int, hcall_trace_depth);

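(Aside: roughly what 'notrace' buys us. The real definition lives in include/linux/compiler_types.h and the exact attribute depends on the compiler and on how ftrace is wired up on the architecture, so treat this as an illustrative sketch only.)

/*
 * Illustrative only: the common fallback definition of 'notrace'.
 * The attribute tells the compiler not to emit the profiling call
 * that the function tracer hooks into, so a notrace function cannot
 * itself be ftraced.
 */
#define notrace __attribute__((__no_instrument_function__))

/* Signature as in the hunk below; with the annotation, the concern
 * about __trace_hcall_entry being ftraced no longer applies. */
notrace void __trace_hcall_entry(unsigned long opcode, unsigned long *args);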
@@ -1843,7 +1846,7 @@ notrace void __trace_hcall_entry(unsigned long opcode, unsigned long *args)

        depth = this_cpu_ptr(&hcall_trace_depth);

-       if (*depth)
+       if (WARN_ON_ONCE(*depth))
                goto out;

I don't think this will be helpful. The hcall trace depth tracking exists for the tracepoint itself, and I suspect this warning would be triggered quite easily. Since we already have recursion protection, I don't think we should warn here.
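To illustrate why that path is expected to be hit, here is a plain userspace sketch of the pattern (the names and the simple counter are made up for the example; in the kernel the counter is the per-CPU hcall_trace_depth and the consumer is whatever is attached to the hcall tracepoints). As soon as a trace consumer issues an hcall of its own, the entry hook runs again with the depth already raised, so taking the early-return path is normal operation rather than a bug:

#include <stdio.h>

/* Stand-in for the per-CPU hcall_trace_depth counter. */
static unsigned int trace_depth;

static void trace_consumer(unsigned long opcode);

/*
 * Simplified model of the entry hook: bail out if we are already
 * inside the trace path, otherwise raise the depth and call the
 * tracepoint consumer.
 */
static void hcall_entry_hook(unsigned long opcode)
{
        if (trace_depth) {
                /* Reached whenever a consumer makes an hcall itself;
                 * with the proposed patch this path would also WARN. */
                printf("recursion guard hit for opcode %#lx\n", opcode);
                return;
        }

        trace_depth++;
        trace_consumer(opcode);
        trace_depth--;
}

/* Model of an hcall: every traced hcall goes through the entry hook. */
static void do_hcall(unsigned long opcode)
{
        hcall_entry_hook(opcode);
        /* ... the actual hypervisor call would go here ... */
}

/*
 * A trace consumer that itself performs an hcall; this is exactly the
 * "tracing code might execute hcalls" case the guard exists for.
 */
static void trace_consumer(unsigned long opcode)
{
        printf("tracing hcall %#lx\n", opcode);
        do_hcall(0x2a);         /* nested hcall from the trace path */
}

int main(void)
{
        do_hcall(0x4);          /* the nested call hits the guard */
        return 0;
}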


- Naveen
