Jeff King <p...@peff.net> writes:

>> > You could also write the second line like:
>> > 
>> >   bufno %= ARRAY_SIZE(hexbuffer);
>> > 
>> > which is less magical (right now the set of buffers must be a power of
>> > 2). I expect the compiler could turn that into a bitmask itself.
>> ...
>
> I think it would be preferable to just fix it inline in each place.

Yeah, probably.

My initial reaction to this was

 char *sha1_to_hex(const unsigned char *sha1)
 {
-       static int bufno;
+       static unsigned int bufno;
        static char hexbuffer[4][GIT_SHA1_HEXSZ + 1];
        return sha1_to_hex_r(hexbuffer[3 & ++bufno], sha1);

"ah, we do not even need bufno as uint; it could be ushort or even
uchar".  If this were a 256 element ring buffer and the index were
uchar, we wouldn't even be having this discussion, and "3 &" is a
way to get a fake type that is a 2-bit unsigned integer that wraps
around when incremented.
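
To illustrate (a throwaway snippet, not proposed code): with an
unsigned counter, masking with 3 and reducing modulo 4 pick the
same slot as the counter keeps climbing, which is all that fake
type buys us:

  #include <stdio.h>

  int main(void)
  {
          unsigned int bufno = 0;
          int i;

          /* both columns cycle 1, 2, 3, 0, 1, 2, 3, 0 */
          for (i = 0; i < 8; i++) {
                  bufno++;
                  printf("mask=%u mod=%u\n", 3 & bufno, bufno % 4);
          }
          return 0;
  }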

But being explicit is a good idea, I would think, especially since
we know we can rely on compilers being intelligent enough to turn
the modulo into a bitmask anyway.

Isn't size_t often wider than uint, by the way?  It somehow makes me
feel dirty to use it when we know we only care about the bottom two
bits, especially with the explicit "bufno %= ARRAY_SIZE(hexbuffer)",
but I may simply be superstitious in this case.  I dunno.
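
For reference, spelling Peff's suggestion out in full, with the
counter made unsigned as in the diff above, would read something
like this (untested):

  char *sha1_to_hex(const unsigned char *sha1)
  {
          static unsigned int bufno;
          static char hexbuffer[4][GIT_SHA1_HEXSZ + 1];

          bufno++;
          bufno %= ARRAY_SIZE(hexbuffer);
          return sha1_to_hex_r(hexbuffer[bufno], sha1);
  }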
