On Thursday, 2 August 2012 at 14:52:58 UTC, Andrei Alexandrescu wrote:
On 8/2/12 9:48 AM, monarch_dodra wrote:
By forcing the developer to choose the bitfield size (32 or 64), you ARE forcing him to make a choice dependent on the machine's characteristics.

I think that's backwards.

I think that when specifying bitfields, you're already going really low level, meaning you need explicit and full control. As for portability, once the size is specified, the only remaining questions are endianness and the order of the fields.

Zlib header: http://tools.ietf.org/html/rfc1950

A zlib stream has the following structure: CMF, FLG

CMF:
    bits 0 to 3  CM      Compression method
    bits 4 to 7  CINFO   Compression info
FLG:
    bits 0 to 4  FCHECK  (check bits for CMF and FLG)
    bit  5       FDICT   (preset dictionary)
    bits 6 to 7  FLEVEL  (compression level)

This easily becomes:

//assumes a little-endian host; fields are allocated from the least significant bit
import std.bitmanip;

struct ZlibHeader
{
    mixin(bitfields!(
        ubyte, "CM",     4,
        ubyte, "CINFO",  4,
        ubyte, "FCHECK", 5,
        ubyte, "FDICT",  1,
        ubyte, "FLEVEL", 2)); //4+4+5+1+2 = exactly 16 bits
}

Now, if there were a typo in the declaration (wrong number of bits), or if the endianness were in question, the layout would come out completely different: the fields could end up as much as 3 to 7 bytes off (24 bits, or worse, 56 bits if the bitfield is assumed to be 64 bits wide), or cover a much larger area than they should. That is unacceptable, especially when you need to get low level with the code.
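
For what it's worth, a minimal guard sketch against exactly that kind of typo (again assuming the ZlibHeader struct above; as far as I can tell std.bitmanip.bitfields already rejects field widths that don't sum to 8, 16, 32 or 64 bits, so the extra size check is cheap insurance):

//compile-time sanity check, assuming the ZlibHeader struct above
static assert(ZlibHeader.sizeof == 2,
    "the zlib header must occupy exactly 16 bits");

//a typo such as `ubyte, "FCHECK", 4` makes the widths sum to 15,
//which bitfields!(...) should itself reject at compile time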
