https://gcc.gnu.org/bugzilla/show_bug.cgi?id=121936
--- Comment #9 from Richard Smith <richard-gccbugzilla at metafoo dot co.uk> ---
(In reply to Iain Sandoe from comment #8)
> Richard Smith coined the term "Refinements" which (AFAIU) describes the idea
> that one can have ODR-compatible versions of a function with different
> behaviours (e.g. optimised differently).
FWIW, while I've tried to popularize this term and idea, it isn't mine.
I believe the term was coined by Sanjoy Das:
https://www.playingwithpointers.com/blog/ipo-and-derefinement.html
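
As a minimal sketch of the refinement idea (the function and the flags
here are illustrative, not taken from this bug):

// refine.h -- included, unchanged, in every TU: one ODR definition.
inline int sum_squares(const int *p, int n) {
  int acc = 0;
  for (int i = 0; i < n; ++i)
    acc += p[i] * p[i];
  return acc;
}

// tu1.cpp might be compiled at -O2: its copy of sum_squares may be
// vectorized and may be provably free of stores through memory.
// tu2.cpp might be compiled at -O0: its copy is a plain out-of-line
// function that spills locals to the stack.
// Both copies are valid "refinements" of the same source, and the
// linker keeps exactly one of them, so a fact derived from tu1's
// refined copy cannot safely be assumed of whichever copy survives.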
> We are, perhaps, used to the idea that "behaviour" is what we see at an ABI
> boundary and that the sum of ODR + ABI is enough to guarantee equivalence.
>
> However, early in the optimiser - perhaps we are looking across an ABI
> boundary (to another function) but at a point where the ABI mandates have
> not been applied?
Yes. In general, it's not safe to use a property of symbol A when
determining properties of symbol B, *except* when you know that
selecting a given definition of symbol B means you also select a
definition of symbol A with the same property.

It's tempting to think that the ODR gives that guarantee (in the cases
where it applies), but it doesn't, at least not for properties that can
differ between different translations of ODR-equivalent token sequences,
perhaps translations performed by vastly different implementations of
C++. That includes anything dependent on evaluation order or on the
contract checking mode, as well as properties that only become true
after optimization (e.g., after DCE you might find that a function
doesn't store through memory, even though the unoptimized version does).
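
The evaluation-order case can be made concrete with a self-contained
sketch (hypothetical code, not from this bug): two conforming
translations of this exact token sequence may observably differ, so
neither copy's behaviour can be inferred from the other's.

#include <cstdio>

int g() { std::puts("g"); return 1; }
int h() { std::puts("h"); return 2; }
int sum(int a, int b) { return a + b; }

int main() {
  // The order in which g() and h() are called below is unspecified:
  // one correct translation prints "g" then "h", another prints "h"
  // then "g". ODR-equivalent copies of this code therefore need not
  // have identical observable behaviour.
  std::printf("%d\n", sum(g(), h()));
  return 0;
}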