Hi,

Richard wrote:
> However, inside the compiler we really want to represent this as a
> shift.
...
> Ideally this would be handled inside the mid-end expansion of an 
> extract, but in the absence of that I think this is best done inside the 
> extv expansion so that we never end up with a real extract in that case.

Yes, the mid-end could be improved - it turns out this is due to the expansion
of bitfields: all variations of (x & mask) >> N are optimized into shifts early on.

However, it turns out Combine can already transform these zero/sign_extends
into shifts, so we do end up with good code. With the latest patch I get:

typedef struct { int x : 6, y : 6, z : 20; } X;

int f (int x, X *p) { return x + p->z; }

        ldr     w1, [x1]
        add     w0, w0, w1, asr 12
        ret

So this case looks alright.

> Sounds good. I'll get those setup and running and will report back on 
> findings. What's
> the preferred way to measure codesize? I'm assuming by default the code pages 
> are 
> aligned so smaller differences would need to trip over the boundary to 
> actually show up.

You can use the size command on the binaries:

$ size /bin/ls
   text    data     bss     dec     hex filename
 107271    2024    3472  112767   1b87f /bin/ls

As you can see, it shows the text size in bytes. It is not rounded up to a page,
so it is an accurate measure of the codesize. Generally -O2 size is the most
useful to check (since that is what most applications build with), but
-Ofast -flto can be useful as well (the global inlining means you get
instruction combinations which appear less often with -O2).
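To compare codesize across builds it can help to pull out just the text column.
A sketch, assuming the default Berkeley output format of size (the binary names
are placeholders):

```shell
# Berkeley `size` output is a header line followed by one line per binary,
# with the text size as the first column. NR==2 selects the first binary.
size /bin/ls | awk 'NR==2 { print $1 }'

# Hypothetical comparison of two builds of the same application:
#   size app.old app.new | awk 'NR>1 { print $1, $6 }'
```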

Cheers,
Wilco
