https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104475

--- Comment #16 from Richard Biener <rguenth at gcc dot gnu.org> ---
The odd thing is that we do

      /* Pointer constants other than null smaller than param_min_pagesize
         might be the result of erroneous null pointer addition/subtraction.
         Unless zero is a valid address set size to zero.  For null pointers,
         set size to the maximum for now since those may be the result of
         jump threading.  Similarly, for values >= param_min_pagesize in
         order to support (type *) 0x7cdeab00.  */
      if (integer_zerop (ptr)
          || wi::to_widest (ptr) >= param_min_pagesize)
        pref->set_max_size_range ();

so if we plainly dereference nullptr we will not diagnose the access, but if
we dereference a constant address between zero and param_min_pagesize we will.

The machinery unfortunately doesn't propagate this decision, so the
diagnostic itself is quite unhelpful (or would have to replicate the
above).  The code also doesn't catch upcasting of nullptr, which would
result in small "negative" pointers.

I have a patch improving the diagnostic by means of printing a note like

/home/tjmaciei/dev/gcc/include/c++/13.0.0/bits/atomic_base.h:655:34: warning:
'unsigned int __atomic_fetch_and_4(volatile void*, unsigned int, int)' writing
4 bytes into a region of size 0 overflows the destination
[-Wstringop-overflow=]
In member function 'void QFutureInterfaceBase::setThrottled(bool)':
cc1plus: note: destination object is likely at address zero

Amending each and every "into a region of size 0" case would be tedious, since
the API used there doesn't pass down the object the note refers to.
