https://gcc.gnu.org/bugzilla/show_bug.cgi?id=115500

            Bug ID: 115500
           Summary: RISC-V: Performance regression on 1bit test
           Product: gcc
           Version: 13.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: target
          Assignee: unassigned at gcc dot gnu.org
          Reporter: syq at gcc dot gnu.org
  Target Milestone: ---

```x.c
#include <stdio.h>

int f32(int);

int main() {
        for(int i=0; i<1e9; i++) {
                f32(i);
        }
}
```

```f32.c
int f32(int x) {
        if (x & 0x80000)        /* test bit 19 */
                return 100;
        return 1000;
}
```
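
The build commands are not stated in the report; a plausible reproduction, assuming -O2 and keeping f32 in its own translation unit so the call cannot be inlined, is:

```sh
# Hypothetical reproduction steps -- the exact flags used in the report are not given.
gcc -O2 -c f32.c
gcc -O2 x.c f32.o -o bit-test
time ./bit-test
```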

I tested it on:
isa             : rv64imafdc_zicntr_zicsr_zifencei_zihpm
mmu             : sv39
uarch           : sifive,bullet0
mvendorid       : 0x489
marchid         : 0x8000000000000007
mimpid          : 0x20181004
hart isa        : rv64imafdc_zicntr_zicsr_zifencei_zihpm

With GCC 12, the time cost is:
   real    0m7.140s
   user    0m7.134s
   sys     0m0.005s

With GCC 13, the time cost is:
   real    0m9.298s
   user    0m9.291s
   sys     0m0.005s


The difference comes down to the code generated for the single-bit test:
   0:   814d                    srli    a0,a0,0x13
   2:   8905                    andi    a0,a0,1
   4:   e501                    bnez    a0,c <.L3>
vs 
   0:   02c51793                slli    a5,a0,0x2c
   4:   0007c563                bltz    a5,e <.L3>
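
The second sequence folds the bit test into two instructions: shifting left by 0x2c (44) moves bit 19 (the 0x80000 bit) into bit 63, so a single branch-if-negative tests it, whereas the first sequence needs three instructions. A minimal C sketch of that idiom (the function name is illustrative, not compiler output):

```c
#include <stdint.h>

/* Illustrative only: expresses the slli/bltz idiom in C.  On RV64,
 * shifting the 32-bit argument left by 44 places bit 19 at bit 63,
 * so a signed "less than zero" test replaces srli + andi + bnez.
 * Relies on the usual two's-complement behaviour of the final cast. */
int f32_shift_trick(int x) {
        if ((int64_t)((uint64_t)x << 44) < 0)
                return 100;
        return 1000;
}
```

This returns the same result as f32 for every int input, since any bit above bit 19 is shifted out of the 64-bit value.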
