Micah Cowan <[email protected]> writes:

> Well, but this problem is the same one suffered by any program
> anywhere that uses stdio.

I wouldn't call it a problem, but it feels suboptimal in the case of
Wget's download.  Most programs use stdio buffering so that they don't
need to worry about the size of the chunks in which they write out the
data.  Some programs avoid stdio because they have different needs,
e.g. access to the underlying file descriptors.  In the case of storing
the downloaded chunks, Wget uses stdio primarily for portability, not
for its buffering layer.

> But your mention of stdio brings an important point: if folks want to
> buffer the data to "page size" chunks before writing,

If someone really needs to do that, he can redirect the output to
stdout and implement custom buffering in a simple separate program in
the pipeline.  But I seriously doubt that this is ever needed in
practice.

Hrvoje
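P.S. For the record, such reblocking doesn't even require writing a new
program: dd(1) already collects its input into fixed-size output blocks.
A sketch, assuming a 4096-byte page size; the URL and output file name
are placeholders:

```shell
# Reblock Wget's stdout into page-sized (4096-byte) writes.
# The URL and "file" are placeholders for illustration.
wget -O - http://example.com/file | dd obs=4096 of=file
```

dd gathers its input into obs-sized blocks before each write, so the
file is written in page-sized chunks (plus one short final block).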
