Re: [PATCH] trace: adjust code layout in get_recursion_context

2017-08-23 Thread Ingo Molnar

* Peter Zijlstra  wrote:

> diff --git a/kernel/events/internal.h b/kernel/events/internal.h
> index 486fd78eb8d5..981e4163e16c 100644
> --- a/kernel/events/internal.h
> +++ b/kernel/events/internal.h
> @@ -206,16 +206,15 @@ static inline unsigned long perf_aux_size(struct ring_buffer *rb)
>  
>  static inline int get_recursion_context(int *recursion)
>  {
> - int rctx;
> -
> - if (in_nmi())
> - rctx = 3;
> - else if (in_irq())
> - rctx = 2;
> - else if (in_softirq())
> - rctx = 1;
> - else
> - rctx = 0;
> + unsigned int pc = preempt_count();
> + int rctx = 0;
> +
> + if (pc & SOFTIRQ_OFFSET)
> + rctx++;
> + if (pc & HARDIRQ_MASK)
> + rctx++;
> + if (pc & NMI_MASK)
> + rctx++;

Just a nit: if this ever gets beyond the proof of concept stage please rename
'pc' to something like 'count', because 'pc' stands for so many other things
(program counter, etc.), which makes it all look a bit weird ...

Thanks,

Ingo


Re: [PATCH] trace: adjust code layout in get_recursion_context

2017-08-22 Thread Peter Zijlstra
On Tue, Aug 22, 2017 at 07:00:39PM +0200, Jesper Dangaard Brouer wrote:
> >  static inline int get_recursion_context(int *recursion)
> >  {
> > +   unsigned int pc = preempt_count();
> > int rctx;
> >  
> > -   if (in_nmi())
> > +   if (pc & NMI_MASK)
> > rctx = 3;
> > -   else if (in_irq())
> > +   else if (pc & HARDIRQ_MASK)
> > rctx = 2;
> > -   else if (in_softirq())
> > +   else if (pc & SOFTIRQ_OFFSET)
> 
> Hmmm... shouldn't this be SOFTIRQ_MASK?

No, that was actually broken, this is correct. See the comment near
__local_bh_disable_ip(). We use the low SOFTIRQ bit to indicate if we're
in softirq and the rest of the bits to 'disable' softirq.

The code should've been using in_serving_softirq().
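
For reference, a stand-alone user-space sketch of the preempt_count layout
that argument relies on (constants as I recall them from
include/linux/preempt.h of that era, so double-check against your tree); it
shows why testing SOFTIRQ_MASK would wrongly treat a context that merely has
softirqs disabled as softirq context:

#include <stdio.h>

/* rough layout: preempt depth in bits 0-7, softirq in 8-15,
 * hardirq in 16-19, NMI in bit 20 */
#define SOFTIRQ_SHIFT   8
#define HARDIRQ_SHIFT   16
#define NMI_SHIFT       20

#define SOFTIRQ_OFFSET  (1UL << SOFTIRQ_SHIFT)     /* "serving a softirq" bit */
#define SOFTIRQ_MASK    (0xffUL << SOFTIRQ_SHIFT)  /* also counts bh-disable nesting */
#define HARDIRQ_MASK    (0xfUL << HARDIRQ_SHIFT)
#define NMI_MASK        (1UL << NMI_SHIFT)

static int rctx(unsigned long pc)
{
        if (pc & NMI_MASK)
                return 3;
        if (pc & HARDIRQ_MASK)
                return 2;
        if (pc & SOFTIRQ_OFFSET)        /* serving bit only, not SOFTIRQ_MASK */
                return 1;
        return 0;
}

int main(void)
{
        /* local_bh_disable() in task context bumps the count by
         * 2 * SOFTIRQ_OFFSET; the serving bit stays clear, so this
         * must still be recursion level 0 */
        printf("bh disabled, task ctx: %d\n", rctx(2 * SOFTIRQ_OFFSET));

        /* actually executing a softirq handler: level 1 */
        printf("serving softirq:       %d\n", rctx(SOFTIRQ_OFFSET));
        return 0;
}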

> perf_swevent_get_recursion_context  /proc/kcore
>        │
>        │
>        │    Disassembly of section load0:
>        │
>        │    ffffffff811465c0 <perf_swevent_get_recursion_context>:
>  13.32 │      push   %rbp
>   1.43 │      mov    $0x14d20,%rax
>   5.12 │      mov    %rsp,%rbp
>   6.56 │      add    %gs:0x7eec3b5d(%rip),%rax
>   0.72 │      lea    0x34(%rax),%rdx
>   0.31 │      mov    %gs:0x7eec5db2(%rip),%eax
>   2.46 │      mov    %eax,%ecx
>   6.86 │      and    $0x7fffffff,%ecx
>   0.72 │      test   $0x100000,%eax
>        │    ↓ jne    40
>        │      test   $0xf0000,%eax
>   0.41 │    ↓ je     5b
>        │      mov    $0x8,%ecx
>        │      mov    $0x2,%eax
>        │    ↓ jmp    4a
>        │40:   mov    $0xc,%ecx
>        │      mov    $0x3,%eax
>   2.05 │4a:   add    %rcx,%rdx
>  16.60 │      mov    (%rdx),%ecx
>   2.66 │      test   %ecx,%ecx
>        │    ↓ jne    6d
>   1.33 │      movl   $0x1,(%rdx)
>   1.54 │      pop    %rbp
>   4.51 │    ← retq
>   3.89 │5b:   shr    $0x8,%ecx
>   9.53 │      and    $0x1,%ecx
>   0.61 │      movzbl %cl,%eax
>   0.92 │      movzbl %cl,%ecx
>   4.30 │      shl    $0x2,%rcx
>  14.14 │    ↑ jmp    4a
>        │6d:   mov    $0xffffffff,%eax
>        │      pop    %rbp
>        │    ← retq
>        │      xchg   %ax,%ax
> 


Bah, that's fairly disgusting code but I'm failing to come up with
anything significantly better with the current rules.

However, if we change the rules on what *_{enter,exit} do, such that we
always have all the bits set, we can. The below patch (100% untested)
gives me the following asm, which looks better (of course, given how
things go today, it'll both explode and be slower once you get it to
work).

Also, it's not strictly correct in that while we _should_ not have nested
hardirqs, they _could_ happen and push the SOFTIRQ count out of the 2
bit 'serving' range.

In any case, something to play with...
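
For context, a stand-alone sketch (not the actual patch) of the counting
variant quoted at the top of the thread: if hardirq/NMI entry also left the
lower context bits set, as proposed here, the recursion level is simply the
number of context bits present.  Constants as I recall them from
include/linux/preempt.h:

#include <assert.h>

#define SOFTIRQ_OFFSET  (1U << 8)
#define HARDIRQ_OFFSET  (1U << 16)
#define HARDIRQ_MASK    (0xfU << 16)
#define NMI_MASK        (1U << 20)

static int get_recursion_level(unsigned int pc)
{
        int rctx = 0;

        rctx += !!(pc & SOFTIRQ_OFFSET);
        rctx += !!(pc & HARDIRQ_MASK);
        rctx += !!(pc & NMI_MASK);
        return rctx;
}

int main(void)
{
        /* NMI that hit a hardirq which itself hit a softirq: all three
         * context bits set under the proposed rules -> level 3 */
        assert(get_recursion_level(NMI_MASK | HARDIRQ_OFFSET | SOFTIRQ_OFFSET) == 3);

        /* plain task context -> level 0 */
        assert(get_recursion_level(0) == 0);
        return 0;
}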

$ objdump -dr defconfig-build/kernel/events/core.o | awk '/<[^>]*>:$/ {p=0} /<perf_swevent_get_recursion_context>:/ {p=1} {if (p) print $0}'
0000000000004450 <perf_swevent_get_recursion_context>:
    4450:   55                      push   %rbp
    4451:   48 c7 c6 00 00 00 00    mov    $0x0,%rsi
                        4454: R_X86_64_32S      .data..percpu+0x20
    4458:   48 89 e5                mov    %rsp,%rbp
    445b:   65 48 03 35 00 00 00    add    %gs:0x0(%rip),%rsi        # 4463 <perf_swevent_get_recursion_context+0x13>
    4462:   00 
                        445f: R_X86_64_PC32     this_cpu_off-0x4
    4463:   65 8b 15 00 00 00 00    mov    %gs:0x0(%rip),%edx        # 446a <perf_swevent_get_recursion_context+0x1a>
                        4466: R_X86_64_PC32     __preempt_count-0x4
    446a:   89 d0                   mov    %edx,%eax
    446c:   c1 e8 08                shr    $0x8,%eax
    446f:   83 e0 01                and    $0x1,%eax
    4472:   89 c1                   mov    %eax,%ecx
    4474:   83 c0 01                add    $0x1,%eax
    4477:   f7 c2 00 00 0f 00       test   $0xf0000,%edx
    447d:   0f 44 c1                cmove  %ecx,%eax
    4480:   81 e2 00 00 10 00       and    $0x100000,%edx
    4486:   83 fa 01                cmp    $0x1,%edx
    4489:   83 d8 ff                sbb    $0xffffffff,%eax
    448c:   48 63 d0                movslq %eax,%rdx
    448f:   48 8d 54 96 2c          lea    0x2c(%rsi,%rdx,4),%rdx
    4494:   8b 0a                   mov    (%rdx),%ecx
    4496:   85 c9                   test   %ecx,%ecx
    4498:   75 08                   jne    44a2 <perf_swevent_get_recursion_context+0x52>
    449a:   c7 02 01 00 00 00       movl   $0x1,(%rdx)
    44a0:   5d                      pop    %rbp
    44a1:   c3                      retq   
    44a2:   b8 ff ff ff ff          mov    $0xffffffff,%eax
    44a7:   5d                      pop    %rbp
    44a8:   c3                      retq   
    44a9:   0f 1f 80 00 00 00 00    nopl   0x0(%rax)

---
diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h
index c683996110b1..480babbbc2a5 100644
--- a/include/linux/hardirq.h
+++ b/include/linux/hardirq.h
@@ -35,7 +35,7 @@ static inline void rcu_nmi_exit(void)
 #define __irq_enter()  \
do {

Re: [PATCH] trace: adjust code layout in get_recursion_context

2017-08-22 Thread Jesper Dangaard Brouer
On Tue, 22 Aug 2017 19:00:39 +0200
Jesper Dangaard Brouer  wrote:

> On Tue, 22 Aug 2017 17:20:25 +0200
> Peter Zijlstra  wrote:
> 
> > On Tue, Aug 22, 2017 at 05:14:10PM +0200, Peter Zijlstra wrote:  
> > > On Tue, Aug 22, 2017 at 04:40:24PM +0200, Jesper Dangaard Brouer wrote:   
> > >  
> > > > In an XDP redirect applications using tracepoint xdp:xdp_redirect to
> > > > diagnose TX overrun, I noticed perf_swevent_get_recursion_context()
> > > > was consuming 2% CPU. This was reduced to 1.6% with this simple
> > > > change.
> > > 
> > > It is also incorrect. What do you suppose it now returns when the NMI
> > > hits a hard IRQ which hit during a Soft IRQ?
> > 
> > Does this help any? I can imagine the compiler could struggle to CSE
> > preempt_count() seeing how its an asm thing.  
> 
> Nope, it does not help (see assembly below, with perf percentages).
> 
> But I think I can achieve what I want by a simple unlikely(in_nmi())
> annotation.

Like:

diff --git a/kernel/events/internal.h b/kernel/events/internal.h
index 486fd78eb8d5..e1a7ac7bd686 100644
--- a/kernel/events/internal.h
+++ b/kernel/events/internal.h
@@ -208,7 +208,7 @@ static inline int get_recursion_context(int *recursion)
 {
int rctx;
 
-   if (in_nmi())
+   if (unlikely(in_nmi()))
rctx = 3;
else if (in_irq())
rctx = 2;

Testing this shows I get the expected result, although the 2% is only
reduced to 1.85% (and not to 1.6% as before).
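
For reference, a rough stand-alone sketch of what the unlikely() hint amounts
to; in the kernel it is, as far as I recall from include/linux/compiler.h, a
thin wrapper around __builtin_expect(), which only steers block placement and
branch layout and does not change semantics:

#define unlikely(x)     __builtin_expect(!!(x), 0)

/* toy stand-in for get_recursion_context(): the hint asks gcc to lay the
 * NMI path out of line, like the 4d:/59: blocks in the annotation below */
static inline int pick_level(int nmi, int irq, int softirq)
{
        if (unlikely(nmi))
                return 3;
        else if (irq)
                return 2;
        else if (softirq)
                return 1;
        return 0;
}

int main(void)
{
        /* common softirq case stays on the straight-line path */
        return pick_level(0, 0, 1) == 1 ? 0 : 1;
}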


perf_swevent_get_recursion_context  /proc/kcore
       │
       │    Disassembly of section load0:
       │
       │    ffffffff811465c0 <perf_swevent_get_recursion_context>:
  4.94 │      push   %rbp
  2.56 │      mov    $0x14d20,%rax
 14.81 │      mov    %rsp,%rbp
  3.47 │      add    %gs:0x7eec3b5d(%rip),%rax
  0.91 │      lea    0x34(%rax),%rdx
  1.46 │      mov    %gs:0x7eec5db2(%rip),%eax
  8.04 │      test   $0x100000,%eax
       │    ↓ jne    59
  3.11 │      test   $0xf0000,%eax
       │    ↓ jne    4d
  0.37 │      test   $0xff,%ah
  1.83 │      setne  %cl
  9.87 │      movzbl %cl,%eax
  2.01 │      movzbl %cl,%ecx
  1.65 │      shl    $0x2,%rcx
  4.39 │3c:   add    %rcx,%rdx
 29.62 │      mov    (%rdx),%ecx
  2.93 │      test   %ecx,%ecx
       │    ↓ jne    65
  0.55 │      movl   $0x1,(%rdx)
  2.56 │      pop    %rbp
  4.94 │    ← retq
       │4d:   mov    $0x8,%ecx
       │      mov    $0x2,%eax
       │    ↑ jmp    3c
       │59:   mov    $0xc,%ecx
       │      mov    $0x3,%eax
       │    ↑ jmp    3c
       │65:   mov    $0xffffffff,%eax
       │      pop    %rbp
       │    ← retq



> > ---
> >  kernel/events/internal.h | 7 ---
> >  1 file changed, 4 insertions(+), 3 deletions(-)
> > 
> > diff --git a/kernel/events/internal.h b/kernel/events/internal.h
> > index 486fd78eb8d5..e0b5b8fa83a2 100644
> > --- a/kernel/events/internal.h
> > +++ b/kernel/events/internal.h
> > @@ -206,13 +206,14 @@ perf_callchain(struct perf_event *event, struct pt_regs *regs);
> >  
> >  static inline int get_recursion_context(int *recursion)
> >  {
> > +   unsigned int pc = preempt_count();
> > int rctx;
> >  
> > -   if (in_nmi())
> > +   if (pc & NMI_MASK)
> > rctx = 3;
> > -   else if (in_irq())
> > +   else if (pc & HARDIRQ_MASK)
> > rctx = 2;
> > -   else if (in_softirq())
> > +   else if (pc & SOFTIRQ_OFFSET)  
> 
> Hmmm... shouldn't this be SOFTIRQ_MASK?
> 
> > rctx = 1;
> > else
> > rctx = 0;  
> 
> perf_swevent_get_recursion_context  /proc/kcore
>        │
>        │
>        │    Disassembly of section load0:
>        │
>        │    ffffffff811465c0 <perf_swevent_get_recursion_context>:
>  13.32 │      push   %rbp
>   1.43 │      mov    $0x14d20,%rax
>   5.12 │      mov    %rsp,%rbp
>   6.56 │      add    %gs:0x7eec3b5d(%rip),%rax
>   0.72 │      lea    0x34(%rax),%rdx
>   0.31 │      mov    %gs:0x7eec5db2(%rip),%eax
>   2.46 │      mov    %eax,%ecx
>   6.86 │      and    $0x7fffffff,%ecx
>   0.72 │      test   $0x100000,%eax
>        │    ↓ jne    40
>        │      test   $0xf0000,%eax
>   0.41 │    ↓ je     5b
>        │      mov    $0x8,%ecx
>        │      mov    $0x2,%eax
>        │    ↓ jmp    4a
>        │40:   mov    $0xc,%ecx
>        │      mov    $0x3,%eax
>   2.05 │4a:   add    %rcx,%rdx
>  16.60 │      mov    (%rdx),%ecx
>   2.66 │      test   %ecx,%ecx
>        │    ↓ jne    6d
>   1.33 │      movl   $0x1,(%rdx)
>   1.54 │      pop    %rbp
>   4.51 │    ← retq
>   3.89 │5b:   shr    $0x8,%ecx
>   9.53 │      and    $0x1,%ecx
>   0.61 │      movzbl %cl,%eax
>   0.92 │      movzbl %cl,%ecx
>   4.30 │      shl    $0x2,%rcx
>  14.14 │    ↑ jmp    4a
>        │6d:   mov    $0xffffffff,%eax
>        │      pop    %rbp
>        │    ← retq
>        │      xchg   %ax,%ax
> 
> 
> 



-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


Re: [PATCH] trace: adjust code layout in get_recursion_context

2017-08-22 Thread Jesper Dangaard Brouer
On Tue, 22 Aug 2017 17:20:25 +0200
Peter Zijlstra  wrote:

> On Tue, Aug 22, 2017 at 05:14:10PM +0200, Peter Zijlstra wrote:
> > On Tue, Aug 22, 2017 at 04:40:24PM +0200, Jesper Dangaard Brouer wrote:  
> > > In an XDP redirect applications using tracepoint xdp:xdp_redirect to
> > > diagnose TX overrun, I noticed perf_swevent_get_recursion_context()
> > > was consuming 2% CPU. This was reduced to 1.6% with this simple
> > > change.  
> > 
> > It is also incorrect. What do you suppose it now returns when the NMI
> > hits a hard IRQ which hit during a Soft IRQ?  
> 
> Does this help any? I can imagine the compiler could struggle to CSE
> preempt_count() seeing how its an asm thing.

Nope, it does not help (see assembly below, with perf percentages).

But I think I can achieve what I want by a simple unlikely(in_nmi()) annotation.

> ---
>  kernel/events/internal.h | 7 ---
>  1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/kernel/events/internal.h b/kernel/events/internal.h
> index 486fd78eb8d5..e0b5b8fa83a2 100644
> --- a/kernel/events/internal.h
> +++ b/kernel/events/internal.h
> @@ -206,13 +206,14 @@ perf_callchain(struct perf_event *event, struct pt_regs *regs);
>  
>  static inline int get_recursion_context(int *recursion)
>  {
> + unsigned int pc = preempt_count();
>   int rctx;
>  
> - if (in_nmi())
> + if (pc & NMI_MASK)
>   rctx = 3;
> - else if (in_irq())
> + else if (pc & HARDIRQ_MASK)
>   rctx = 2;
> - else if (in_softirq())
> + else if (pc & SOFTIRQ_OFFSET)

Hmmm... shouldn't this be SOFTIRQ_MASK?

>   rctx = 1;
>   else
>   rctx = 0;

perf_swevent_get_recursion_context  /proc/kcore
       │
       │
       │    Disassembly of section load0:
       │
       │    ffffffff811465c0 <perf_swevent_get_recursion_context>:
 13.32 │      push   %rbp
  1.43 │      mov    $0x14d20,%rax
  5.12 │      mov    %rsp,%rbp
  6.56 │      add    %gs:0x7eec3b5d(%rip),%rax
  0.72 │      lea    0x34(%rax),%rdx
  0.31 │      mov    %gs:0x7eec5db2(%rip),%eax
  2.46 │      mov    %eax,%ecx
  6.86 │      and    $0x7fffffff,%ecx
  0.72 │      test   $0x100000,%eax
       │    ↓ jne    40
       │      test   $0xf0000,%eax
  0.41 │    ↓ je     5b
       │      mov    $0x8,%ecx
       │      mov    $0x2,%eax
       │    ↓ jmp    4a
       │40:   mov    $0xc,%ecx
       │      mov    $0x3,%eax
  2.05 │4a:   add    %rcx,%rdx
 16.60 │      mov    (%rdx),%ecx
  2.66 │      test   %ecx,%ecx
       │    ↓ jne    6d
  1.33 │      movl   $0x1,(%rdx)
  1.54 │      pop    %rbp
  4.51 │    ← retq
  3.89 │5b:   shr    $0x8,%ecx
  9.53 │      and    $0x1,%ecx
  0.61 │      movzbl %cl,%eax
  0.92 │      movzbl %cl,%ecx
  4.30 │      shl    $0x2,%rcx
 14.14 │    ↑ jmp    4a
       │6d:   mov    $0xffffffff,%eax
       │      pop    %rbp
       │    ← retq
       │      xchg   %ax,%ax



-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


Re: [PATCH] trace: adjust code layout in get_recursion_context

2017-08-22 Thread Peter Zijlstra
On Tue, Aug 22, 2017 at 05:14:10PM +0200, Peter Zijlstra wrote:
> On Tue, Aug 22, 2017 at 04:40:24PM +0200, Jesper Dangaard Brouer wrote:
> > In an XDP redirect applications using tracepoint xdp:xdp_redirect to
> > diagnose TX overrun, I noticed perf_swevent_get_recursion_context()
> > was consuming 2% CPU. This was reduced to 1.6% with this simple
> > change.
> 
> It is also incorrect. What do you suppose it now returns when the NMI
> hits a hard IRQ which hit during a Soft IRQ?

Does this help any? I can imagine the compiler could struggle to CSE
preempt_count() seeing how it's an asm thing.

---
 kernel/events/internal.h | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/kernel/events/internal.h b/kernel/events/internal.h
index 486fd78eb8d5..e0b5b8fa83a2 100644
--- a/kernel/events/internal.h
+++ b/kernel/events/internal.h
@@ -206,13 +206,14 @@ perf_callchain(struct perf_event *event, struct pt_regs *regs);
 
 static inline int get_recursion_context(int *recursion)
 {
+   unsigned int pc = preempt_count();
int rctx;
 
-   if (in_nmi())
+   if (pc & NMI_MASK)
rctx = 3;
-   else if (in_irq())
+   else if (pc & HARDIRQ_MASK)
rctx = 2;
-   else if (in_softirq())
+   else if (pc & SOFTIRQ_OFFSET)
rctx = 1;
else
rctx = 0;
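
As an aside, a stand-alone illustration (purely user-space, not kernel code)
of the CSE point above: when every in_*() test re-reads a value the compiler
cannot prove unchanged -- the per-cpu preempt count is read through an
asm-based accessor on x86, as far as I know -- each test costs a load, while
caching it once in 'pc' as in the hunk above lets the value be reused:

/* stand-in for the per-cpu __preempt_count; 'volatile' forces a reload on
 * every access, loosely mimicking an asm read the compiler cannot CSE */
static volatile unsigned int fake_preempt_count;

static int level_reread(void)           /* three separate loads */
{
        if (fake_preempt_count & (1U << 20))
                return 3;
        if (fake_preempt_count & (0xfU << 16))
                return 2;
        if (fake_preempt_count & (1U << 8))
                return 1;
        return 0;
}

static int level_cached(void)           /* one load, reused */
{
        unsigned int pc = fake_preempt_count;

        if (pc & (1U << 20))
                return 3;
        if (pc & (0xfU << 16))
                return 2;
        if (pc & (1U << 8))
                return 1;
        return 0;
}

int main(void)
{
        fake_preempt_count = 1U << 8;
        return level_reread() == level_cached() ? 0 : 1;
}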


Re: [PATCH] trace: adjust code layout in get_recursion_context

2017-08-22 Thread Peter Zijlstra
On Tue, Aug 22, 2017 at 04:40:24PM +0200, Jesper Dangaard Brouer wrote:
> In an XDP redirect applications using tracepoint xdp:xdp_redirect to
> diagnose TX overrun, I noticed perf_swevent_get_recursion_context()
> was consuming 2% CPU. This was reduced to 1.6% with this simple
> change.

It is also incorrect. What do you suppose it now returns when the NMI
hits a hard IRQ which hit during a Soft IRQ?

> @@ -208,12 +208,12 @@ static inline int get_recursion_context(int *recursion)
>  {
>   int rctx;
>  
> + if (in_softirq())
> + rctx = 1;
>   else if (in_irq())
>   rctx = 2;
> + else if (in_nmi())
> + rctx = 3;
>   else
>   rctx = 0;
>  
> 
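
To make that failure mode concrete, a stand-alone sketch (not from the
thread) of the reordered checks quoted above: an NMI that interrupts a
hardirq which itself interrupted a softirq has all three context bits set,
and with in_softirq() tested first it lands in recursion level 1 instead of
3.  Constants as I recall them from include/linux/preempt.h:

#include <stdio.h>

#define SOFTIRQ_OFFSET  (1U << 8)
#define SOFTIRQ_MASK    (0xffU << 8)
#define HARDIRQ_OFFSET  (1U << 16)
#define HARDIRQ_MASK    (0xfU << 16)
#define NMI_MASK        (1U << 20)

static int reordered_rctx(unsigned int pc)
{
        if (pc & SOFTIRQ_MASK)          /* in_softirq() tested first */
                return 1;
        else if (pc & HARDIRQ_MASK)     /* in_irq() */
                return 2;
        else if (pc & NMI_MASK)         /* in_nmi() */
                return 3;
        return 0;
}

int main(void)
{
        unsigned int pc = NMI_MASK | HARDIRQ_OFFSET | SOFTIRQ_OFFSET;

        /* prints rctx = 1, so the NMI-level event would reuse (and trample)
         * the softirq recursion slot instead of getting slot 3 */
        printf("rctx = %d\n", reordered_rctx(pc));
        return 0;
}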


[PATCH] trace: adjust code layout in get_recursion_context

2017-08-22 Thread Jesper Dangaard Brouer
In XDP redirect applications using the tracepoint xdp:xdp_redirect to
diagnose TX overruns, I noticed perf_swevent_get_recursion_context()
was consuming 2% CPU. This was reduced to 1.6% with this simple
change.

Looking at the annotated asm code, it was clear that the unlikely-case
in_nmi() test was chosen (by the compiler) as the most likely
event/branch.  This small adjustment makes the compiler (gcc version
7.1.1 20170622 (Red Hat 7.1.1-3)) treat in_nmi() as an unlikely branch.

Signed-off-by: Jesper Dangaard Brouer 
---
 kernel/events/internal.h |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/events/internal.h b/kernel/events/internal.h
index 486fd78eb8d5..56aa462760fa 100644
--- a/kernel/events/internal.h
+++ b/kernel/events/internal.h
@@ -208,12 +208,12 @@ static inline int get_recursion_context(int *recursion)
 {
int rctx;
 
-   if (in_nmi())
-   rctx = 3;
+   if (in_softirq())
+   rctx = 1;
else if (in_irq())
rctx = 2;
-   else if (in_softirq())
-   rctx = 1;
+   else if (in_nmi())
+   rctx = 3;
else
rctx = 0;
 


