Re: [openssl-dev] [openssl.org #4094] Nonsensical pointer comparison in PACKET_buf_init
On 2015-10-17 01:46, Ben Laurie via RT wrote:
> On Fri, 16 Oct 2015 at 01:32 Matt Caswell via RT <r...@openssl.org> wrote:
>> On 15/10/15 20:53, Alexander Cherepanov via RT wrote:
>>> On 2015-10-15 15:41, Matt Caswell via RT wrote:
>>>> The purpose of the sanity check is not then for security, but to guard
>>>> against programmer error. For a correctly functioning program this test
>>>> should never fail. For an incorrectly functioning program it may do. It
>>>> is not guaranteed to fail because the test could be compiled away but,
>>>> most likely, it will. We can have some degree of confidence that the
>>>> test works and does not get compiled away in most instances because, as
>>>> you point out, an explicit check for it appears in packettest.c and, to
>>>> date, we have had no reported failures.
>>>
>>> What was not entirely clear from the original bug report is that, while
>>> the check is not compiled away, it's compiled into something completely
>>> different from what is written in the source. Specifically, the check
>>> "buf + len < buf" is optimized into "len >> 63" on 64-bit platforms, i.e.
>>> "(ssize_t)len < 0" or "len > SIZE_MAX / 2". This is not a check for
>>> overflow at all; it doesn't even depend on the value of "buf".
>>>
>>> If this is what was intended then it's better to write it explicitly. If
>>> this is not what was intended then some other approach is required.
>>
>> I'd say that is an instance of the compiler knowing better than us how
>> big |len| would have to be in order to trigger an overflow. Those rules
>> are going to be platform specific so we should not attempt to second
>> guess them, but instead let the optimiser do its job.

Matt, I'm confused. In your previous email you yourself (correctly) explained why this check does not guard against the pointer overflowing.
AIUI this check is not some clever trick; it's just an ordinary simplification of "a + b < a" into "b < 0" by subtracting a common term from both sides (which is correct only if there is no overflow), with the additional twist that unsigned integers are treated as signed. (IMHO this is a bug in compilers and I've just reported it in gcc -- https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999. But it doesn't really matter for our discussion.)

On 2015-10-17 01:46, Ben Laurie via RT wrote:
> If it is, then the compiler is wrong, surely? e.g. if buf is 0xfff...fff,
> and len is 1, you get an overflow, which the optimised version does not
> catch.

Right.

> What I'm not understanding from this thread is what the argument is against
> avoiding undefined behaviour?

I guess it's not clear what the best way to fix it is. You cannot check for a pointer overflow directly -- there is no such notion in the C standards. Perhaps it's possible with casts to uintptr_t.

-- 
Alexander Cherepanov

___
openssl-dev mailing list
To unsubscribe: https://mta.openssl.org/mailman/listinfo/openssl-dev
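The uintptr_t idea mentioned above can be sketched as follows (my illustration, hypothetical names, not OpenSSL code; note that pointer-to-integer conversion is only implementation-defined, so this is portable in practice rather than in principle):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical replacement for "buf + len < buf". Converting the
 * pointer to uintptr_t first makes the arithmetic ordinary unsigned
 * arithmetic, where wraparound is well-defined, so the compiler has
 * no undefined-behaviour grounds to simplify the check away.
 * Returns 1 if adding len to buf's address would wrap. */
static int addr_would_wrap(const unsigned char *buf, size_t len) {
    uintptr_t p = (uintptr_t)buf;
    return len > UINTPTR_MAX - p;
}
```

A caller would reject the buffer up front, e.g. `if (addr_would_wrap(buf, len)) return 0;`, before doing any pointer arithmetic at all.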
Re: [openssl-dev] [openssl.org #4094] Nonsensical pointer comparison in PACKET_buf_init
On 2015-10-15 15:41, Matt Caswell via RT wrote:
> The purpose of the sanity check is not then for security, but to guard
> against programmer error. For a correctly functioning program this test
> should never fail. For an incorrectly functioning program it may do. It
> is not guaranteed to fail because the test could be compiled away but,
> most likely, it will. We can have some degree of confidence that the
> test works and does not get compiled away in most instances because, as
> you point out, an explicit check for it appears in packettest.c and, to
> date, we have had no reported failures.

What was not entirely clear from the original bug report is that, while the check is not compiled away, it's compiled into something completely different from what is written in the source. Specifically, the check "buf + len < buf" is optimized into "len >> 63" on 64-bit platforms, i.e. "(ssize_t)len < 0" or "len > SIZE_MAX / 2". This is not a check for overflow at all; it doesn't even depend on the value of "buf".

If this is what was intended then it's better to write it explicitly. If this is not what was intended then some other approach is required.

> The biggest packet that I can think of that we will ever have to deal
> with within libssl would be a handshake message. This has a maximum
> length of 0xffffff plus 4 bytes of message header, i.e. 0x1000003. There
> could be an argument to say we should test against this to ensure that
> |len| is not too big.
>
> However that does not necessarily guard against the pointer overflowing.
> In an incorrect program where len is just a bit bigger than the actual
> size of buf this could, theoretically, be sufficient to overflow the
> pointer.

Right. That's exactly one of the problems with the current code the way it is compiled by optimizing compilers.

A couple of other remarks.

1.
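Matt's suggestion of bounding |len| explicitly could look something like this (a sketch with hypothetical names, not the actual PACKET code; the caller-supplied bound is a design choice). Checking len against a known maximum never forms buf + len, so there is no pointer arithmetic for the optimizer to reason about:

```c
#include <stddef.h>

/* A hypothetical PACKET-like structure and init routine (my sketch,
 * not the real OpenSSL code). */
typedef struct {
    const unsigned char *curr;
    size_t remaining;
} demo_packet;

/* Reject NULL buffers and lengths above an explicit caller-supplied
 * bound. No out-of-bounds pointer is ever computed, so no undefined
 * behaviour can license the check's removal. Returns 1 on success. */
static int demo_packet_init(demo_packet *pkt, const unsigned char *buf,
                            size_t len, size_t max_len) {
    if (buf == NULL || len > max_len)
        return 0;   /* programmer error: reject */
    pkt->curr = buf;
    pkt->remaining = len;
    return 1;
}
```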
The mere fact that the (sub)expression "buf + len" is evaluated unconditionally permits a compiler to assume that it is a valid pointer (it's UB otherwise) and hence that it points inside (or one past the end of) an array. This, in turn, permits the compiler to remove some other checks against "buf + len" or "len" even if they are well written. I'm not saying that any current compiler actually does this; I'm just saying that undefined behavior is a very delicate thing and it's better not to have it in your program.

2. If you want to simplify checking openssl for undefined behavior, it's better to fix even non-critical cases of it. Otherwise everybody has to research it, maintain their own blacklists of things to ignore, etc. It's similar to compiler warnings: if you want to get more value from compiler warnings you have to keep your project warning-free, fixing even minor and spurious ones.

HTH

-- 
Alexander Cherepanov
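Remark 1 suggests never forming buf + len in a check at all. One well-defined alternative (my sketch, hypothetical helper name) is to keep the buffer's end pointer and compare lengths instead of pointers:

```c
#include <stddef.h>

/* Instead of asking "does buf + len overflow?" (which evaluates a
 * possibly-invalid pointer), compare against the buffer's known end.
 * end - buf is well-defined C as long as buf and end point into the
 * same array, and no out-of-bounds pointer is ever formed.
 * Returns 1 if len bytes starting at buf fit within [buf, end). */
static int fits_in_buffer(const unsigned char *buf,
                          const unsigned char *end, size_t len) {
    return len <= (size_t)(end - buf);
}
```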