https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95663

--- Comment #20 from Jan Hubicka <hubicka at gcc dot gnu.org> ---
I really think that with -fdelete-null-pointer-checks we should optimize away the
pointer adjustment, relying on the fact that the program will segfault.
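
To illustrate the kind of pattern I mean (a minimal sketch; the class names
are mine, not from the testcase): the base-to-derived static_cast below emits
a null-guarded pointer adjustment, and since the result is immediately
dereferenced at a small offset, the guard is redundant once we are allowed to
rely on the segfault:

struct A { int a; };
struct B { int b; };
struct C : A, B { };

int
get (B *b)
{
  /* Emitted roughly as: b ? (C *)((char *)b - sizeof (A)) : 0,
     followed by a load.  If b were NULL, the load would segfault
     anyway, so the adjustment could be done unconditionally.  */
  return static_cast<C *> (b)->a;
}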

I was wondering: -fdelete-null-pointer-checks currently requires the pointer to
be precisely 0.  We are already iffy here, since the access is at a non-zero
offset, but because infer_nonnull_range_by_dereference uses check_loadstore:

static bool
check_loadstore (gimple *, tree op, tree, void *data)
{
  if (TREE_CODE (op) == MEM_REF || TREE_CODE (op) == TARGET_MEM_REF)
    {
      /* Some address spaces may legitimately dereference zero.  */
      addr_space_t as = TYPE_ADDR_SPACE (TREE_TYPE (op));
      if (targetm.addr_space.zero_address_valid (as))
        return false;

      /* Only the base address (operand 0) is compared against the
         pointer; the MEM_REF offset (operand 1) is never looked at.  */
      return operand_equal_p (TREE_OPERAND (op, 0), (tree)data, 0);
    }
  return false;
}

which completely ignores the MEM_REF offset, we actually turn accesses that are
arbitrarily far from NULL into traps.  We also ignore handled components, so I
think we miss this, for example, for variable array accesses.
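
To make the first problem concrete (the struct is hypothetical, just to show
the shape): the load below is roughly 1MB past p, yet check_loadstore treats
it exactly like a dereference of p itself, so the null check gets deleted for
an access arbitrarily far from NULL:

struct big { char pad[1 << 20]; int x; };

int
f (struct big *p)
{
  int v = p->x;   /* load ~1MB past p */
  if (p == 0)     /* deleted: the load above "proves" p != 0 */
    return -1;
  return v;
}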

However, if we had a --param null-pointer-zone defaulting to, say, 4k (a page
size), we could optimize all accesses that are near the NULL pointer.  This
would let us optimize this case correctly, and it would also, for example,
simplify ipa-pta, which currently special-cases 0 as null but assumes that any
other constant may point to any global variable.  Small constants are common,
so this should be a noticeable optimization.
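
A rough sketch of the rule I have in mind (the parameter name, its default,
and the helper are all hypothetical):

#include <stdint.h>

/* With --param null-pointer-zone=4096, an access of ACCESS_SIZE bytes
   at constant OFFSET from a pointer implies the pointer is non-null
   whenever the whole access lands in the first unmapped page.  */

static bool
access_implies_nonnull (uint64_t offset, uint64_t access_size)
{
  const uint64_t null_pointer_zone = 4096;  /* hypothetical default */
  return offset + access_size <= null_pointer_zone;
}

ipa-pta could reuse the same zone: any constant below null-pointer-zone cannot
be the address of a global, rather than only treating 0 specially.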

For the clang binary this logic adds a great many traps, which makes the code
noticeably uglier than what clang generates on its own sources.
