Richard Kenner wrote:
>> That doesn't explain why the bit value isn't normalized to be smaller
>> than BITS_PER_UNIT; any whole bytes could be incorporated into the
>> variably sized offset.
>
> It can't be normalized to BITS_PER_UNIT, only to DECL_OFFSET_ALIGN,
> since we are asserting that DECL_FIELD_OFFSET is aligned to
> DECL_OFFSET_ALIGN.

That doesn't make sense to me. It seems to me that we can normalize it
however we please; ultimately, all these representations just give us a
way of computing the first bit of the field. We presently choose to
normalize to DECL_OFFSET_ALIGN, but we could just as well choose to
normalize to BITS_PER_UNIT. So long as we can compute the starting
offset of the field, why does it matter what the normalization constant is?
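
To make the arithmetic concrete, here is a minimal sketch in C, with
plain integers standing in for DECL_FIELD_OFFSET, DECL_FIELD_BIT_OFFSET,
and DECL_OFFSET_ALIGN (the names, numbers, and helpers below are
illustrative stand-ins, not GCC's actual representation):

  #include <assert.h>
  #include <stdio.h>

  #define BITS_PER_UNIT 8

  /* Toy stand-in for a field position: a byte offset plus a bit
     offset.  Not GCC's real representation.  */
  struct field_pos { unsigned offset; unsigned bit; };

  /* The one thing every normalization must preserve.  */
  static unsigned
  first_bit (struct field_pos p)
  {
    return p.offset * BITS_PER_UNIT + p.bit;
  }

  /* Fold whole chunks of LIMIT_BITS (assumed to be a multiple of
     BITS_PER_UNIT) out of the bit offset into the byte offset, so
     that bit < LIMIT_BITS afterwards.  */
  static void
  normalize (struct field_pos *p, unsigned limit_bits)
  {
    unsigned chunks = p->bit / limit_bits;
    p->offset += chunks * (limit_bits / BITS_PER_UNIT);
    p->bit -= chunks * limit_bits;
  }

  int
  main (void)
  {
    unsigned offset_align = 4;        /* offset alignment, in bytes */
    struct field_pos a = { 4, 20 };   /* bit < 32, so normalized to it */
    struct field_pos b = a;

    /* Renormalizing to BITS_PER_UNIT leaves the first bit alone...  */
    normalize (&b, BITS_PER_UNIT);    /* b becomes { 6, 4 } */
    assert (first_bit (a) == first_bit (b));
    printf ("first bit: %u == %u\n", first_bit (a), first_bit (b));

    /* ...but the byte offset stops being a multiple of the offset
       alignment (6 % 4 != 0), which is the invariant at issue.  */
    printf ("offset %u mod align %u = %u\n", b.offset, offset_align,
            b.offset % offset_align);
    return 0;
  }

Either normalization yields the same first bit; the one observable
difference is whether the byte offset stays a multiple of the offset
alignment, which is exactly the invariant under discussion.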
--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713