http://gcc.gnu.org/bugzilla/show_bug.cgi?id=48696
Summary: Horrible bitfield code generation on x86
Product: gcc
Version: unknown
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: other
AssignedTo: [email protected]
ReportedBy: [email protected]
gcc (tried 4.5.1 and 4.6.0) generates absolutely horrid code for some common
bitfield accesses due to minimizing the access size.
Trivial test case:
struct bad_gcc_code_generation {
        unsigned type:6,
                 pos:16,
                 stream:10;
};

int show_bug(struct bad_gcc_code_generation *a)
{
        a->type = 0;
        return a->pos;
}
will generate code like this on x86-64 with -O2:
        andb    $-64, (%rdi)
        movl    (%rdi), %eax
        shrl    $6, %eax
        movzwl  %ax, %eax
        ret
where the problem is the byte-sized write followed by the 32-bit read.
Most (all?) modern x86 CPUs will come to a screeching halt when they see a
read that hits a store buffer entry but cannot be fully forwarded from it. The
penalty can be quite severe, and this is _very_ noticeable in profiles.
This code would be _much_ faster either by using an "andl" (making the store
size match the following load, so it forwards through the store buffer), or by
doing the load first.
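For concreteness, a hand-written sketch of what either fix might look like
(illustrative only, not actual gcc output):

        # Variant 1: widen the store to match the 32-bit load,
        # so the load forwards cleanly from the store buffer.
        andl    $-64, (%rdi)
        movl    (%rdi), %eax
        shrl    $6, %eax
        movzwl  %ax, %eax
        ret

        # Variant 2: keep the byte-sized store, but issue the load
        # first so it never has to forward from the pending store.
        movl    (%rdi), %eax
        andb    $-64, (%rdi)
        shrl    $6, %eax
        movzwl  %ax, %eax
        ret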
(The above code snippet is not the real code I noticed this on, obviously, but
real code definitely sees this, and profiling shows very clearly how the 32-bit
load from memory basically stops cold due to the partial store buffer hit.)
Using non-native accesses to memory is fine for loads (so narrowing the access
for a pure load is fine), but for stores or read-modify-write instructions it's
almost always a serious performance problem to try to "optimize" the memory
operand size to something smaller.
Yes, the constants often shrink, but the code becomes _much_ slower unless you
can guarantee that there are no loads of the original access size that follow
the write.
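As a source-level illustration of the same rule, a hand-widened variant of the
test case might look like the sketch below (the function name is hypothetical,
and it assumes gcc's usual little-endian x86-64 layout: type in bits 0-5 and
pos in bits 6-21 of a single 32-bit unit):

#include <string.h>

/* Hypothetical hand-widened variant: every access is 32-bit, so the
 * store can forward fully to any following 32-bit load.
 * Assumes type occupies bits 0-5 and pos bits 6-21 of one 32-bit unit. */
int show_bug_widened(struct bad_gcc_code_generation *a)
{
        unsigned int w;

        memcpy(&w, a, sizeof w);        /* one 32-bit load */
        w &= ~0x3fu;                    /* a->type = 0, done at full width */
        memcpy(a, &w, sizeof w);        /* one 32-bit store */
        return (w >> 6) & 0xffff;       /* a->pos */
}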