On Mon, May 03, 2021 at 06:45:04PM +0100, James Murray wrote:
> >     Prefer 'unsigned int' to 'int'.
> 
> Maybe this has been covered before, but is there a good rationale for
> this?
> 
> I have often found that I make more mistakes when using unsigned types
> for general variables. In my own work I have been preferring to use
> signed variables for safety unless I specifically need it to be
> unsigned, such as for addresses or bit fields.

The two schools are:

1/ unsigned is preferred to signed when the behavior on overflow must
be well defined (e.g. a ring buffer). C says unsigned integers cannot
overflow: their arithmetic simply wraps around modulo 2^N. There's also
the obvious case of a bit field that needs the bit which would otherwise
be the sign bit.
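
As an illustration, here is a minimal ring buffer sketch (the names,
the fixed power-of-two size and the int payload are made up for the
example), where head and tail are free-running unsigned counters that
rely on this wrap-around:

#define RING_SIZE 256u // power of two, so "% RING_SIZE" matches the wrap

// Since unsigned arithmetic wraps modulo 2^N and RING_SIZE divides 2^N,
// the difference head - tail is always the number of queued items, and
// counter % RING_SIZE stays a valid slot index, even after the counters
// themselves have wrapped.
struct ring {
    unsigned int head;
    unsigned int tail;
    int slots[RING_SIZE];
};

static unsigned int
ring_count(const struct ring *r)
{
    return r->head - r->tail;
}

static int
ring_push(struct ring *r, int value)
{
    if (ring_count(r) == RING_SIZE) {
        return -1; // full
    }

    r->slots[r->head % RING_SIZE] = value;
    r->head++;
    return 0;
}

static int
ring_pop(struct ring *r, int *value)
{
    if (ring_count(r) == 0) {
        return -1; // empty
    }

    *value = r->slots[r->tail % RING_SIZE];
    r->tail++;
    return 0;
}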

Note that a good way to exploit this is when an unsigned integer type,
e.g. size_t, is used to store a size:

// In ascending order.
for (size_t i = 0; i < size; i++) { ... }

// In descending order: when i is 0, the decrement wraps around to
// SIZE_MAX, which makes i < size false and cleanly ends the loop.
// It also does the right thing when size is 0.
for (size_t i = size - 1; i < size; i--) { ... }

Though confusing at first, it is actually the semantically correct way
to deal with such cases.
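
For completeness, here is the descending loop in a small self-contained
program (the array contents are made up for the example):

#include <stdio.h>

int
main(void)
{
    int values[] = { 10, 20, 30, 40 };
    size_t size = sizeof(values) / sizeof(values[0]);

    // Prints 40 30 20 10, then stops once i wraps to SIZE_MAX.
    for (size_t i = size - 1; i < size; i--) {
        printf("%d\n", values[i]);
    }

    return 0;
}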

2/ signed is preferred to unsigned in order to exploit the optimization
opportunities that undefined behavior gives the compiler. Signed integer
overflow is undefined behavior, so the compiler doesn't have to handle
such cases at all, and can optimize some operations on the assumption
that overflow simply never happens.
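
To make the difference concrete, here is a small made-up example (the
function names are mine) of the kind of assumption this allows:

#include <limits.h>
#include <stdio.h>

// The compiler may fold this to "return 1": signed overflow is
// undefined, so it is allowed to assume x + 1 never wraps.
static int
always_greater_signed(int x)
{
    return (x + 1) > x;
}

// No such freedom here: unsigned arithmetic wraps, so for
// x == UINT_MAX the result really is 0 and the comparison must stay.
static int
always_greater_unsigned(unsigned int x)
{
    return (x + 1) > x;
}

int
main(void)
{
    printf("%d\n", always_greater_unsigned(UINT_MAX)); // 0
    printf("%d\n", always_greater_signed(42)); // 1
    return 0;
}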

-- 
Richard Braun
