Hi Richard,

>> That tune is only used by an obsolete core. I ran the memcpy and memset
>> benchmarks from Optimized Routines on xgene-1 with and without LDP/STP.
>> There is no measurable penalty for using LDP/STP. I'm not sure why it was
>> ever added given it does not do anything useful. I'll post a separate patch
>> to remove it to reduce the maintenance overhead.
Patch: https://gcc.gnu.org/pipermail/gcc-patches/2024-January/644442.html

> Is that enough to justify removing it though?  It sounds from:
>
>   https://gcc.gnu.org/pipermail/gcc-patches/2018-June/500017.html
>
> like the problem was in more balanced code, rather than memory-limited
> things like memset/memcpy.
>
> But yeah, I'm not sure if the intuition was supported by numbers
> in the end.  If SPEC also shows no change then we can probably drop it
> (unless someone objects).

SPECINT didn't show any difference either, so LDP doesn't have a measurable
penalty. It doesn't look like the original commit was ever backed up by
benchmarks...

> Let's leave this patch until that's resolved though, since I think as it
> stands the patch does leave -Os -mtune=xgene1 worse off (bigger code).
> Handling the tune in the meantime would also be OK.

Note it was incorrectly handling -Os: it should still form LDP in that case
and take advantage of longer and faster inlined memcpy/memset instead of
calling a library function.

> /* Default the maximum to 256-bytes when considering only libcall vs
>    SIMD broadcast sequence.  */
>
> ...this comment should be deleted along with the code it's describing.
> Don't respin just for that though :)

I've fixed that locally.

Cheers,
Wilco