https://gcc.gnu.org/bugzilla/show_bug.cgi?id=121936
--- Comment #12 from Richard Smith <richard-gccbugzilla at metafoo dot co.uk> ---
(In reply to Andrew Pinski from comment #10)
> So if I understand this, the general gist is vague linkage functions need
> to be treated the same as weak functions when it comes to any IPA
> optimizations except for inlining?

Not exactly the same: you can inline the function, or you can clone it into a
local version if you'd prefer not to inline. And then it's correct to use
properties of the local code, because you know that's what you'll actually use.

> I don't see how that can be scalable at all, or even better yet, it seems
> like this makes things worse for optimization in general for C++ code.

Worse for optimization than miscompiling in some cases (especially when
combining code produced by different compilers, or at different optimization
levels, etc.)? I suppose that depends on how you prioritize correctness in
corner cases versus execution speed. But LLVM has (to the best of my
knowledge) fully addressed this, and it doesn't seem to have been a problem
at scale.

> Especially when it comes to, say, nothrow detection or pure/const detection.

If you can be sure that the property you detected is a property of the
original program, and not a property introduced by refinement, I think it's
correct to make use of that. E.g., if the source function only calls
non-throwing functions and doesn't itself contain any throwing operations, it
seems correct to me to mark it nothrow. Determining that after optimization
seems incredibly difficult to me, though.
