[Bug tree-optimization/81540] tree-switch-conversion leads to code bloat
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81540

--- Comment #4 from Georg-Johann Lay ---
(In reply to Jakub Jelinek from comment #3)
> Estimating the size for non-switch-converted switches is going to be
> really hard.

The point is that with -fno-tree-switch-conversion there is no switch at
all.  As the switch is sparse, just a few comparisons or conditional loads
can handle it.

> Anyway, isn't -Os about code size (which is shorter when switch converted)
> rather than total size of all sections?

Well, .rodata is something that has to be allocated just like .text.  It's
not executable, but it consumes target memory.  If you'd replace the
computation of sin() by an insanely big lookup table and advertise it as
"code size shrunk", users still wouldn't like it ;-)
[Bug tree-optimization/81540] tree-switch-conversion leads to code bloat
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81540

Jakub Jelinek changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |jakub at gcc dot gnu.org

--- Comment #3 from Jakub Jelinek ---
Estimating the size for non-switch-converted switches is going to be really
hard.

Anyway, isn't -Os about code size (which is shorter when switch converted)
rather than total size of all sections?
[Bug tree-optimization/81540] tree-switch-conversion leads to code bloat
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81540

Georg-Johann Lay changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Target|avr                         |avr,x86_64

--- Comment #2 from Georg-Johann Lay ---
Similar on x86_64:

$ gcc-8 cswtch.c -Os -c && size -A cswtch.o
cswtch.o  :
section    size   addr
.text        20      0
.rodata     200      0
[snipped irrelevant sections]

$ gcc-8 cswtch.c -Os -c -fno-tree-switch-conversion && size -A cswtch.o
cswtch.o  :
section    size   addr
.text        35      0

So with switch conversion the total size (.text + .rodata, 220 vs. 35
bytes) grows by a factor of > 5 there.
[Bug tree-optimization/81540] tree-switch-conversion leads to code bloat
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81540

Richard Biener changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|UNCONFIRMED                 |NEW
   Last reconfirmed|                            |2017-07-25
     Ever confirmed|0                           |1

--- Comment #1 from Richard Biener ---
I think it lacks a cost model for cases like this.  Or applying perfect
hashing to get the size of the lookup array down.