On Jan 31, 2026, at 9:17 AM, Amit via austin-group-l at The Open Group <[email protected]> wrote:
> Already in UNIX/LInux, you can't give an input of more than 4096
> characters on the terminal

The buffering for that input is usually done in OS kernel-mode code, which usually means that it's done in unpaged memory, and may even mean that it's done with memory from a fixed-size pool allocated at system startup time. To prevent exhaustion of that memory, a fixed limit is imposed.

Userland code isn't subject to the unpaged-memory limitation and may also not be subject to the fixed-amount-of-preallocated-virtual-memory limitation.

> Also, there are limits on the maximum length of a file name,

File name size limitations in those APIs (which typically involve a system call) are a combination of 1) the design of the file system and 2) in-kernel limits similar to the above, if any.

*Userland* code does not necessarily impose such limitations.

> In my opinion, denial of service is better than getting hacked. In
> DoS, no personal information is leaked. But if someone gets hacked
> then his/her personal information can be leaked.

So what would be an example in which not imposing a limit on the size of arguments to qsort() would result in a system being hacked, rather than, say, either 1) slow performance due to paging or 2) exhaustion of resources (RAM, swap space, address space), with memory allocations failing as a result?

> Inputs to pointers should not be NULL. But most of the inputs in
> POSIX/glibc are dynamic, we can't put a hard limit, and that's why I
> came up with this percentage of RAM idea.
>
> But even then, we can put limits at the application level - like, name
> of a person - we can limit this to 50 characters. If someone's name is
> more than 50 characters then we can't provide service to that person.
> But in general, no one will have a name of more than 50 characters.

A character may take more than one octet, and a limit of 50 octets, for a name in which all characters require 3 octets in UTF-8, means a limit of 16 characters. So presumably you mean a limit of 50 *characters*, which would be a limit of 150 to 200 octets, depending on whether a name could contain a character that requires 4 octets in UTF-8 or would only contain characters requiring 3 or fewer octets.

> We can also put a limit on the maximum length of a URL, maybe like, 1024
> or 2048 characters. I don't think that in general URLs will have more
> than 1024 characters.

I've seen URLs with components that seem to use base-64 or some other form of encoding to include session keys or some other flavor of binary data, so I wouldn't make that assumption.

Furthermore, a file: URL must be able to hold a full-length path, so if PATH_MAX is 1024 or larger, a 1024-octet limit on URLs would be insufficient, as it wouldn't leave room for the URL scheme.

And, again, I don't see what security benefits imposing size limits would provide. Yes, if an implementation of a POSIX API happens to involve a fixed-size buffer, that particular implementation should make sure it doesn't overrun the buffer, which may involve imposing limits on arguments passed to it, but another way to avoid that security issue would be not to have fixed-size buffers.
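As a rough illustration of that last point (my own sketch, not anything taken from the standard, and it ignores percent-encoding of the path entirely), here is the file: URL case done without a fixed-size buffer: the buffer is sized from the input, so no arbitrary limit is needed, and an allocation failure is reported cleanly rather than anything being overrun.

    /*
     * Sketch only: build a file: URL from a path by sizing the buffer
     * from the input instead of using a fixed-size array.  Percent-
     * encoding of special characters in the path is deliberately
     * ignored here; the point is only the buffer-sizing strategy.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static char *path_to_file_url(const char *path)
    {
        /* "file://" + path + terminating NUL; no PATH_MAX, no 1024. */
        size_t len = strlen("file://") + strlen(path) + 1;
        char *url = malloc(len);

        if (url == NULL)
            return NULL;    /* resource exhaustion => clean failure */
        snprintf(url, len, "file://%s", path);
        return url;
    }

    int main(int argc, char **argv)
    {
        char *url = path_to_file_url(argc > 1 ? argv[1] : "/tmp/example");

        if (url == NULL) {
            perror("malloc");
            return 1;
        }
        printf("%s\n", url);
        free(url);
        return 0;
    }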

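And to make the octet-vs-character arithmetic earlier in this message concrete, a minimal sketch that counts both for a UTF-8 string; it assumes valid UTF-8 in a UTF-8 execution character set, and it treats "character" as "code point", which glosses over combining sequences.

    /*
     * Sketch only: octet count vs. character count for a UTF-8 string.
     * Assumes valid UTF-8 and a UTF-8 execution character set.
     */
    #include <stdio.h>
    #include <string.h>

    static size_t utf8_code_points(const char *s)
    {
        size_t n = 0;

        for (; *s != '\0'; s++) {
            /* Count every byte that is not a UTF-8 continuation byte. */
            if (((unsigned char)*s & 0xC0) != 0x80)
                n++;
        }
        return n;
    }

    int main(void)
    {
        /* A 10-character Japanese name; each character is 3 octets in UTF-8. */
        const char *name = "\u540d\u524d\u306f\u3068\u3066\u3082\u9577\u3044\u3067\u3059";

        printf("octets: %zu, characters: %zu\n",
               strlen(name), utf8_code_points(name));
        return 0;
    }

With every character taking 3 octets, this prints 30 octets for 10 characters, the same ratio that turns a 50-octet limit into a 16-character one.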