[Sorry, I forgot to respond to this]

On 2023-10-06 16:01, Martin Uecker wrote:
On Friday, 2023-10-06 at 06:50 -0400, Siddhesh Poyarekar wrote:
On 2023-10-06 01:11, Martin Uecker wrote:
On Thursday, 2023-10-05 at 15:35 -0700, Kees Cook wrote:
On Thu, Oct 05, 2023 at 04:08:52PM -0400, Siddhesh Poyarekar wrote:
2. How would you handle signedness of the size field?  The size gets
converted to sizetype everywhere it is used and overflows/underflows may
produce interesting results.  Do you want to limit the types to unsigned or
do you want to add a disclaimer in the docs?  The former seems like the
*right* thing to do given that it is a new feature; best to enforce the
cleaner habit at the outset.
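
For illustration, a minimal sketch of the conversion hazard, assuming
a typical LP64 target:

#include <stdio.h>
#include <stddef.h>

int main(void)
{
    int counter = -1;            /* signed size field */
    size_t sz = (size_t)counter; /* sign-converts: -1 becomes SIZE_MAX */
    printf("%zu\n", sz);         /* 18446744073709551615 on LP64 */
    return 0;
}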

The Linux kernel has a lot of "int" counters, so the goal is to catch
negative offsets at runtime with the sanitizer, just like too-large
offsets, and report 0 for __bdos. Refactoring all these to be unsigned
is going to take time, since at least some of them use the negative
values as special values unrelated to array indexing. :(

So, perhaps if unsigned counters are worth enforcing, can this be a
separate warning the kernel can turn off initially?
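
A sketch of the intended behavior; the __counted_by__ spelling follows
the proposal under discussion, so the exact syntax may differ:

#include <stddef.h>

struct s {
    int counter;                           /* signed, as in the kernel */
    char buf[] __counted_by__((counter));
};

size_t remaining(struct s *p)
{
    /* Goal: a negative counter makes this report 0 rather than a
       huge value, so fortified callers fail closed. */
    return __builtin_dynamic_object_size(p->buf, 1);
}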


I think unsigned counters are much more problematic than signed ones
because wraparound errors are more difficult to find.

With unsigned you could potentially diagnose wraparound, but only if we
add -fsanitize=unsigned-overflow *and* add a mechanism to mark intentional
wraparound *and* everybody adds this annotation after carefully screening
their code *and* rewrites all operations such as (counter - 3) + 5
where the wraparound in the intermediate expression is harmless.

For this reason, I do not think we should ever enforce some rule that
the counter has to be unsigned.
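
A minimal example of the kind of code that would need such rewriting:

#include <stdio.h>

int main(void)
{
    unsigned int counter = 1;
    /* The intermediate (counter - 3) wraps, but adding 5 wraps back,
       so the final value 3 is correct; a blanket unsigned-overflow
       sanitizer would still flag the intermediate step. */
    unsigned int adjusted = (counter - 3) + 5;
    printf("%u\n", adjusted);    /* prints 3 */
    return 0;
}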

What we could do is detect *storing* negative values into the
counter at run-time using UBSan (but if negative values are
used for special cases, one should also be able to turn this
off).

All of the object size detection relies on object sizes being sizetype.
The closest we could do with that is detect (sz != SIZE_MAX && sz >
SIZE_MAX / 2), since allocators typically cannot allocate more than
SIZE_MAX / 2.
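
Spelled out as a hypothetical helper (plausible_size is an
illustrative name, not an actual GCC interface):

#include <stdint.h>
#include <stddef.h>

static size_t plausible_size(size_t sz)
{
    /* SIZE_MAX means "size unknown" for __bos/__bdos; anything above
       SIZE_MAX / 2 cannot be a real allocation on typical targets. */
    if (sz != SIZE_MAX && sz > SIZE_MAX / 2)
        return 0;
    return sz;
}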

I was talking about the counter in:

struct {
   int counter;
   char buf[] __counted_by__((counter));
};

which could be checked to be positive either when stored to or
when buf is used.
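
Open-coded, the store-time check might look like the following
sketch, giving the struct above a tag; set_counter is a hypothetical
helper, and the sanitizer would emit the check automatically:

struct s {
   int counter;
   char buf[] __counted_by__((counter));
};

void set_counter(struct s *p, int n)
{
   if (n < 0)
      __builtin_trap();   /* UBSan would report instead of trapping */
   p->counter = n;
}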

And yes, we could also check the size of buf.  Not sure what is
done for VLAs now, but I guess it could be similar.
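
For reference, UBSan already has -fsanitize=vla-bound, which checks
at run time that a VLA bound is positive:

/* Compile with: gcc -fsanitize=vla-bound vla.c */
void f(int n)
{
    char buf[n];   /* the run-time check fires here if n <= 0 */
    buf[0] = 0;
}

int main(void) { f(-1); return 0; }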

Right now all object sizes are cast to sizetype, and the generated
dynamic expressions are such that overflows result in the computed
object size being zero. Non-generated expressions (like we could get
with __counted_by__) will simply be cast; there's probably scope for
improvement here: we could wrap them in an expression that returns 0
if the size exceeds SIZE_MAX / 2, since that's typically the limit
for allocators. We use that heuristic elsewhere in the __bos/__bdos
logic too.
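
Roughly, the wrapped form would behave like this sketch
(wrap_counted_size is illustrative, not the actual GCC-internal
expression):

#include <stdint.h>
#include <stddef.h>

static size_t wrap_counted_size(size_t sz)
{
    /* Sizes above SIZE_MAX / 2 (e.g. a sign-converted negative
       counter) collapse to 0, the "no space left" answer for
       __bdos. */
    return sz <= SIZE_MAX / 2 ? sz : 0;
}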

Thanks,
Sid
