On Wed, 2013-04-17 at 23:00 +0300, Eli Zaretskii wrote:
> I'd be surprised if this were a real problem nowadays.  E.g., the
> Windows C runtime is documented to allow up to 512 FILE streams, which
> can be enlarged to 2048 by calling a function.  The max number of file
> descriptors is also 2048.
GNU make is still used on some pretty ancient UNIX versions, but of
course they probably aren't using -j512 either.  I don't know whether
it's a problem in reality.

> > Also, a stream is much more resource-heavy than a file descriptor, as it
> > implies buffering etc. in addition to the open file.  We wouldn't use
> > the buffering, but it's still there.
>
> What's wrong with using the buffering?

Nothing, really; we just don't need it.  We don't write to the
temporary files ourselves: the jobs we invoke write to them.  After a
job exits, we seek to the beginning of its temporary file, then pump
its exact contents into our stdout (and/or stderr) as quickly and
efficiently as possible, because this is done while holding the lock
and so can block other jobs from finishing.  That's why we use
read(2) and write(2) with a big buffer.

There's no particular reason I know of that we couldn't use, say,
fread()/fwrite() instead, other than efficiency.  One assumes that a
stream interface introduces an extra copy on both the read and the
write side (instead of kernel->buffer->kernel, we would have
kernel->stream->buffer->stream->kernel), but I don't have any firm
opinion on how much difference it would make: it would require some
testing.

Of course, there's no reason we have to use fread()/fwrite() even if
we keep the FILE*: it can be converted to a file descriptor (on POSIX)
or a HANDLE (on Windows) for more efficiency.

_______________________________________________
Bug-make mailing list
Bug-make@gnu.org
https://lists.gnu.org/mailman/listinfo/bug-make