uint32_t r;

/* 16x16 -> 32-bit widening multiply */
void foo(uint16_t u1, uint16_t u2)
{
    r = (uint32_t)u1 * (uint32_t)u2;
}
I've been looking at the code generated for the case above:
 1  foo:
 2      push    r11
 3      push    r10
 4      mov     r15, r10
 5      mov     r14, r12
 6      clr     r11
 7      clr     r13
 8      call    #__umulhisi3
 9      mov     r14, &_r
10      mov     r15, &_r+2
11      pop     r10
12      pop     r11
13      ret
I've noticed that even though a 16x16 multiply routine is being
called, the two operands are being passed as 32-bit values. Why
pass two 32-bit values to a library function whose purpose is to
multiply two 16-bit values? IOW, why are lines 6 and 7 there?
(A plain-C model of what the call sequence appears to do is
sketched below.)
--
Grant Edwards grante Yow! .. are the STEWED
at PRUNES still in the HAIR
visi.com DRYER?