I think there might be a problem in the way gcc-3.4.0 handles constants
with a single bit set when they are used in a test.
When I compile the following code (with -O0):
    if (a & 0x40)
    {
        function();
    }
I end up with:
    mov     &a, r15
    clrc
    rrc     r15
    rra     r15
    rra     r15
    rra     r15
    rra     r15
    rra     r15
    and     #llo(1), r15
    jeq     .L2
    call    #function
.L2:
which works, but is very inefficient: the value is shifted right six
times just to test a single bit.
If I use a constant with more than one bit set I get:
    mov     &fred, r15
    and     #llo(72), r15
    jeq     .L2
    call    #function
.L2:
which is closer to what I would expect, though I would have thought it
could use the bit instruction to produce smaller/faster code.
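For what it's worth, this is roughly the shape I had in mind (just a
sketch; I'm assuming the assembler accepts an immediate source with an
absolute destination for bit here, and the single-bit 0x40 case would
reduce the same way):

    bit     #llo(72), &fred
    jeq     .L2
    call    #function
.L2:

Since bit only updates the status flags, both the extra mov into r15
and the shift sequence would disappear.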
(Incidentally, I've searched through a fairly large chunk of compiler
output and not found a single 'bit' instruction, which makes me wonder
whether the compiler ever generates it at all.)
I have tried turning optimisation on; it makes no difference at -O1,
-O2 or -O3.
I have observed the same code generation for a test inside a while
loop, and the same pattern occurs even with constants that the
constant generators (r2/r3) can supply, such as 0x08.
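For reference, here is a minimal complete version of the sort of thing
I am compiling ('a' and 'function' are placeholder names; the extern
declarations are only there to make it self-contained):

    /* Compile with -O0; -O1/-O2/-O3 give the same pattern. */
    extern volatile unsigned int a;     /* value read from elsewhere */
    extern void function(void);

    void test(void)
    {
        if (a & 0x40)   /* single bit set: produces the shift sequence */
        {
            function();
        }

        if (a & 0x48)   /* more than one bit set: produces mov/and/jeq */
        {
            function();
        }

        if (a & 0x08)   /* single bit from the constant generator:
                           still produces the shift sequence */
        {
            function();
        }
    }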
Has anyone else observed these problems or developed a fix/workaround?
Regards
Phil.