http://gcc.gnu.org/bugzilla/show_bug.cgi?id=54571
Bug #: 54571
Summary: Missed optimization converting between bit sets
Classification: Unclassified
Product: gcc
Version: unknown
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: middle-end
AssignedTo: unassig...@gcc.gnu.org
ReportedBy: r...@gcc.gnu.org

When converting a bit set from one domain to another, code such as

  if (old & OLD_X) new |= NEW_X;
  if (old & OLD_Y) new |= NEW_Y;

is common. If OLD_X and NEW_X are single bits, then this conversion need not
include any conditional code: one can mask out OLD_X and shift it left or
right to become NEW_X, or, vice versa, shift left or right first and then
mask out NEW_X. Indeed, it's probably preferable to perform the mask with
the smaller of OLD_X and NEW_X, in order to maximize the possibility of
having a valid immediate operand for the logical insn.

A test case that would seem to cover all the cases, including converting
logical not to bitwise not:

  int f1(int x, int y) { if (x & 1) y |= 1; return y; }
  int f2(int x, int y) { if (x & 1) y |= 2; return y; }
  int f3(int x, int y) { if (x & 2) y |= 1; return y; }

  int g1(int x, int y) { if (!(x & 1)) y |= 1; return y; }
  int g2(int x, int y) { if (!(x & 1)) y |= 2; return y; }
  int g3(int x, int y) { if (!(x & 2)) y |= 1; return y; }

I'll also note that for the (presumably) preferred alternatives:

  int h1(int x, int y) { return (x & 1) | y; }
  int h2(int x, int y) { return ((x & 1) << 1) | y; }
  int h3(int x, int y) { return ((x & 2) >> 1) | y; }

  int k1(int x, int y) { return (~x & 1) | y; }
  int k2(int x, int y) { return ((~x & 1) << 1) | y; }
  int k3(int x, int y) { return ((~x >> 1) & 1) | y; }

there's some less-than-optimal code generated for k1 and k2.