On Feb 1, 2026, at 1:11 AM, Amit <[email protected]> wrote:
> The description of the CVE is: A heap-based buffer overflow was found
> in the __vsyslog_internal function of the glibc library. This function
> is called by the syslog and vsyslog functions. This issue occurs when
> the openlog function was not called, or called with the ident argument
> set to NULL,
> "and the program name (the basename of argv[0]) is bigger than 1024
> bytes,"
> resulting in an application crash or local privilege escalation. This
> issue affects glibc 2.36 and newer.
>
> So, the function didn't check that the length of the input exceeds
> 1024 bytes

That is completely false.  The problem isn't that *an argument to the
program* exceeded a limit.  The problem is that *the string pointed to
by the global variable __progname* exceeded a limit.

> So, the point is that we should not focus on how it was fixed.

No, we *should* focus on that, because *you* are making a suggestion
about how to fix it, and we need to determine what's the best way to
fix it.

> And it was fixed the way it was fixed because you can't put a limit in
> the current code,
> because if you do then many existing code/software can break.

Were you the one who fixed it?  If not, how do you know the reason why
it was fixed?

*I* suspect it was fixed because the code was *intended* to be able to
handle arbitrary-length input, and there was a *bug* in that code.

> "So, we need a totally new security oriented standard."

That wouldn't have helped here.

> The comment from the changed code is:
> /* We already know that bufs is too small to use for this log message.
> + The next vsnprintf into bufs is used only to calculate the total
> + required buffer length. We will discard bufs contents and allocate
> + an appropriately sized buffer later instead.
> */
>
> So, they decided to allocate a larger buffer for larger input.
>
> So, they modified their buffer size based on the input size.
>
> Now, for the sake of the conversation, what would happen if the program
> name was 1G in length. In this case, the allocation would have failed.

How do you know that?

My laptop has 64GB of memory; if it were running Linux rather than
macOS, that code would probably have succeeded.

And, on an Ubuntu 24.04 VM with 8GB of memory, I tried a small test
program that allocates an 8GB buffer, fills it with 'a', sets the last
octet of that buffer to '\0', and then does a strcmp of it with
argv[0].  (A sketch of that program is at the end of this message.)
The malloc() *didn't* fail, but the program got killed, probably by
the OOM killer.  The swapfile is about 924MB, so that's not enough for
the 8GB buffer - but the allocation was allowed anyway.

I tried the same program on the laptop, and it eventually completed,
with only about 22GB of physical memory being used.  That machine's
swapfiles have about 15GB, with about 1GB free.

> So, putting a limit is better than allocating a buffer according to
> the input size.

The code *already allocated a buffer as necessary*, because it might
have to send the entire message in a UDP packet, so it can't just send
it out over a socket in pieces.

The size of the buffer can't be easily calculated from the size of the
input, as one of the inputs to the function is a *printf-style format*,
so the buffer size is a complicated function of the priority, the
timestamp length, the program name length, the process ID, the format,
and all the arguments to the format.  Yes, that means it's a *varargs
function*, so you don't even know, without looking at the format,
*what the arguments to the function are*.
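To make the quoted comment concrete, the general pattern it describes -
use vsnprintf once to *measure*, then allocate and format for real -
looks something like the following.  This is a minimal sketch, not the
actual glibc code; format_message() and the variable names are mine
(bufs just echoes the comment above):

#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char *format_message(const char *fmt, ...)
{
    char bufs[1024];
    va_list ap, ap2;
    int needed;
    char *buf;

    va_start(ap, fmt);
    va_copy(ap2, ap);           /* we may need the arguments twice */

    /* First pass: vsnprintf returns the length the full result
       *would* have had, even if it didn't fit in bufs.  That's the
       only way to learn the size, because it depends on the format
       and on the arguments. */
    needed = vsnprintf(bufs, sizeof bufs, fmt, ap);
    va_end(ap);

    if (needed < 0) {           /* formatting error */
        va_end(ap2);
        return NULL;
    }
    if ((size_t)needed < sizeof bufs) {
        va_end(ap2);
        return strdup(bufs);    /* it fit; no big allocation needed */
    }

    /* Second pass: allocate exactly what the first pass measured,
       and format again into that buffer. */
    buf = malloc((size_t)needed + 1);
    if (buf != NULL)
        vsnprintf(buf, (size_t)needed + 1, fmt, ap2);
    va_end(ap2);
    return buf;
}

Note where the size comes from: it's the return value of vsnprintf
itself.  There's no earlier point at which you could compute it from
the lengths of the inputs.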
So imposing a limit on all the components of the message is just too
complicated.  It *could* impose a limit on the *total size of the
message*, but that's different.

Running out of address space is not a major security issue, except for
the possibility of a DoS.  Handling *that* problem is more complicated;
it would require a way of knowing how much more memory you could
successfully allocate *and use* in your process - i.e., not just "how
much will the memory allocator be able to hand me, whether using it
provokes an OOM error or not", but "how much can I get and use without
an OOM error".

That's not necessarily a simple percentage of RAM; it may also include
swap space, and it also involves available *address space* (especially
on 32-bit systems, but possibly even on 64-bit systems).

Furthermore, you probably also want some allocations to stop before
they exhaust so much memory that there's none left for error-recovery
code that, for example, pops up an application dialog saying "sorry,
that file's too big to maintain an index of the locations of all the
packets in the file" (yes, Wireshark has entered the chat), closes the
file, and frees up the space.

That's an interesting problem, but it's not one with some simple "just
pick some arbitrary sizes for strings and buffers" solution.
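And, for completeness, the test program I mentioned above was more or
less this (a from-memory sketch, not the exact code I ran):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    size_t size = 8ULL * 1024 * 1024 * 1024;    /* 8GB */
    char *buf = malloc(size);

    if (buf == NULL) {
        fprintf(stderr, "malloc of %zu bytes failed\n", size);
        return 1;
    }
    memset(buf, 'a', size - 1);     /* touch every page */
    buf[size - 1] = '\0';           /* NUL-terminate so strcmp is safe */

    /* Compare with argv[0], so the compiler can't just optimize the
       whole buffer away. */
    printf("strcmp returned %d\n", strcmp(buf, argv[0]));
    free(buf);
    return 0;
}

The malloc() succeeds on the 8GB VM because Linux overcommits by
default; it's the memset() touching all the pages that, presumably,
gets the process killed.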
