https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87728

            Bug ID: 87728
           Summary: inline asm not optimized on GIMPLE
           Product: gcc
           Version: 9.0
            Status: UNCONFIRMED
          Keywords: missed-optimization
          Severity: normal
          Priority: P3
         Component: tree-optimization
          Assignee: unassigned at gcc dot gnu.org
          Reporter: amonakov at gcc dot gnu.org
  Target Milestone: ---

PR 63900 points out a case where RTL CSE fails to clean up redundant loads in
the presence of BLKmode accesses, but there really isn't anything in that
testcase that GCC shouldn't be able to clean up in GIMPLE. It seems GIMPLE
optimizations are too conservative with regard to asm statements.

For the simple testcase

int f()
{
    int a=0, b;
    asm("# %0 %1" : "=m"(b) : "m"(a));
    return a;
}

the asm is dead (its "=m" output b is never read, and an "m" input does not
modify a), so this should become 'return 0;', but in the .optimized dump we
still have

f ()
{
  int b;
  int a;
  int _4;

  <bb 2> :
  a = 0;
  __asm__("# %0 %1" : "=m" b : "m" a);
  _4 = a;
  a ={v} {CLOBBER};
  b ={v} {CLOBBER};
  return _4;
}
