Late to the party, but…
Avoiding unsigned is usually recommended to prevent inadvertent wraparound
(often loosely called underflow):
unsigned big = 200;
unsigned small = 100;
auto result = small - big; // wraps to 4294967196 (with 32-bit unsigned), not -100
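With plain int the same arithmetic yields -100, which a later range check can
actually catch:

int big = 200;
int small = 100;
auto result = small - big; // result is -100; an "if (result < 0)" check rejects it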
This is particularly bad when doing math on buffer offsets and sizes, where it
can result in out-of-bounds (OOB) bugs. I believe Apple’s media frameworks code
has a “no unsigned usage” rule for exactly that reason. I’m surprised that no
one has raised it in this discussion.
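To make that failure mode concrete, here is a minimal sketch of the classic
bounds-check bug (the function and parameter names are hypothetical, not from
any real framework):

#include <cstddef>

// Hypothetical helper: should return true only if `length` bytes starting
// at `offset` fit inside a buffer of `bufferSize` bytes.
bool canRead(size_t bufferSize, size_t offset, size_t length)
{
    // BUG: if offset > bufferSize, the unsigned subtraction wraps to a
    // huge value, the check passes, and the caller reads out of bounds.
    return bufferSize - offset >= length;
}

bool canReadSafely(size_t bufferSize, size_t offset, size_t length)
{
    // Guard the subtraction so it can never wrap.
    return offset <= bufferSize && length <= bufferSize - offset;
}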
Simon
> On Jan 24, 2023, at 2:00 AM, Myles Maxfield via webkit-dev
> <[email protected]> wrote:
>
> Hello!
>
> I recently learned that the C++ core guidelines recommend against using
> unsigned to avoid negative values. Section 4.4 on page 73 of The C++
> Programming Language says unsigned types should be used for bitfields and not
> in an attempt to ensure values are positive. Some talks by people on the C++
> standards committee (e.g., Herb Sutter) recommend against using unsigned
> types simply because the value is expected to be positive.
>
> Should we be avoiding unsigneds for these purposes? WebKit uses unsigneds all
> over the place, and I’m assuming a fair number of them are there to indicate
> that negative values are avoided. The C++ recommendation goes against my
> intuition that the type is there for clarity, to indicate expectations about
> the meaning and behavior of its value. But if it’s standard practice to just
> use int instead, perhaps we should update the style guide?
>
> What do you think?
>
> —Myles
_______________________________________________
webkit-dev mailing list
[email protected]
https://lists.webkit.org/mailman/listinfo/webkit-dev