https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101923

--- Comment #5 from Petar Ivanov <dartdart26 at gmail dot com> ---
(In reply to Andrew Pinski from comment #4)
> Hmm
> 
>   __tmp = MEM[(union _Any_data & {ref-all})&f];
>   MEM[(union _Any_data * {ref-all})&f] = MEM[(union _Any_data & {ref-all})&moved];
>   MEM[(union _Any_data * {ref-all})&moved] = __tmp;
>   __tmp ={v} {CLOBBER};
>   _13 = MEM[(void (*<Te9f8>) (const union _Any_data & {ref-all}, const struct Car &) &)&f + 24];
>   _14 = MEM[(void (*<Te9f8>) (const union _Any_data & {ref-all}, const struct Car &) &)&moved + 24];
>   MEM[(void (*<Te9f8>) (const union _Any_data & {ref-all}, const struct Car &) &)&f + 24] = _14;
>   MEM[(void (*<Te9f8>) (const union _Any_data & {ref-all}, const struct Car &) &)&moved + 24] = _13;
> 
> So a missed optimization at the gimple level.
> But note that the arm64 compiler on godbolt is a few months old (20210528).
> There might have been fixes since then which already improve this.

I see, thank you.
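
For context, the swap above looks like what falls out of implementing
std::function's move constructor as "construct empty, then swap": the 16-byte
_Any_data buffer and the invoker pointer at offset 24 are exchanged in both
directions. A minimal sketch of that pattern; the names and layout here are
illustrative stand-ins for the libstdc++ internals, not the actual source:

  #include <utility>

  union AnyData {                // stand-in for union _Any_data
      void*         ptr;
      unsigned char buf[16];     // 16-byte small-object buffer
  };

  struct Function {             // stand-in for std::function's layout
      AnyData functor;          // offset 0: swapped via the MEM copies above
      void (*manager)(AnyData&, const AnyData&);
      void (*invoker)(const AnyData&);  // offset 24: the _13/_14 loads/stores

      void swap(Function& other) noexcept {
          std::swap(functor, other.functor);
          std::swap(manager, other.manager);
          std::swap(invoker, other.invoker);
      }

      // Moving as "construct empty, then swap" still writes back into the
      // moved-from object, which is the redundancy left after optimization.
      Function(Function&& other) noexcept
          : functor{}, manager{nullptr}, invoker{nullptr} { swap(other); }
  };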

Do you think the x86 results on Quick Bench are worth improving? From a user's
perspective, the expectation is that moving a std::function is at least as fast
as copying it.
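
To make the comparison concrete, the benchmark I have in mind is along these
lines (a hypothetical Google Benchmark sketch, not the exact Quick Bench
snippet from the report):

  #include <benchmark/benchmark.h>
  #include <functional>
  #include <utility>

  static void CopyFunction(benchmark::State& state) {
      for (auto _ : state) {
          std::function<int(int)> f = [](int x) { return x + 1; };
          std::function<int(int)> g = f;             // copy
          benchmark::DoNotOptimize(g);
      }
  }
  BENCHMARK(CopyFunction);

  static void MoveFunction(benchmark::State& state) {
      for (auto _ : state) {
          std::function<int(int)> f = [](int x) { return x + 1; };
          std::function<int(int)> g = std::move(f);  // move: expected to be
          benchmark::DoNotOptimize(g);               // no slower than the copy
      }
  }
  BENCHMARK(MoveFunction);

  BENCHMARK_MAIN();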

Could you please advise how I should proceed with this report? Should the fix
go into libstdc++, or should this be treated as a compiler (missed-optimization)
issue?

Thank you!
