"Tony Lewis" <[EMAIL PROTECTED]> writes:

> Hrvoje Niksic wrote:
>
>>     Please be aware that Wget needs to know the size of the POST
>>     data in advance.  Therefore the argument to @code{--post-file}
>>     must be a regular file; specifying a FIFO or something like
>>     @file{/dev/stdin} won't work.
>
> There's nothing that says you have to read the data after you've
> started sending the POST. Why not just read the --post-file before
> constructing the request so that you know how big it is?

I don't understand what you're proposing.  Reading the whole file into
memory is too memory-intensive for large files (one could presumably
POST really huge files, CD images or whatever).

What the current code does is: determine the file size, send
Content-Length, read the file in chunks (up to the promised size) and
send those chunks to the server.  But that works only with regular
files.  It would be really nice to be able to say something like:

    mkisofs blabla | wget http://burner/localburn.cgi --post-file /dev/stdin
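
For reference, a minimal sketch of what the current logic amounts to --
not Wget's actual code, the socket is simplified to a FILE * and the
helper name is made up -- which also shows why the pipe case fails: the
size has to come from stat() before a single body byte is sent.

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    /* Hypothetical sketch of the --post-file logic: stat() the file to
       learn its size, promise that size in Content-Length, then stream
       the body in fixed-size chunks.  stat() only reports a meaningful
       size for regular files, so FIFOs and /dev/stdin are rejected. */
    static int post_regular_file(FILE *sock, const char *path)
    {
        struct stat st;
        if (stat(path, &st) < 0 || !S_ISREG(st.st_mode))
            return -1;                  /* not a regular file: give up */

        fprintf(sock, "Content-Length: %ld\r\n\r\n", (long) st.st_size);

        FILE *in = fopen(path, "rb");
        if (!in)
            return -1;

        char buf[8192];
        long remaining = (long) st.st_size;
        while (remaining > 0) {
            size_t want = remaining < (long) sizeof buf
                          ? (size_t) remaining : sizeof buf;
            size_t n = fread(buf, 1, want, in);
            if (n == 0)
                break;                  /* short read: promise broken */
            fwrite(buf, 1, n, sock);
            remaining -= (long) n;
        }
        fclose(in);
        return remaining == 0 ? 0 : -1;
    }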

>> My first impulse was to bemoan Wget's antiquated HTTP code which
>> doesn't understand "chunked" transfer.  But, coming to think of it,
>> even if Wget used HTTP/1.1, I don't see how a client can send
>> chunked requests and interoperate with HTTP/1.0 servers.
>
> How do browsers figure out whether they can do a chunked transfer or
> not?

I haven't checked, but I'm 99% convinced that browsers simply don't
give a shit about non-regular files.
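
Coming back to the chunked idea for a moment: with chunked transfer
coding every chunk carries its own size, so nothing has to be known in
advance and reading from a pipe would just work.  A rough sketch of
sending such a body (again not real Wget code; it assumes the request
headers, including Transfer-Encoding: chunked, were already written,
and that the server actually speaks HTTP/1.1):

    #include <stdio.h>

    /* Hypothetical sketch of a chunked request body: read until EOF,
       prefix each chunk with its size in hex, terminate with a zero
       chunk.  No total size is needed up front -- but an HTTP/1.0
       server will not understand any of it. */
    static void post_chunked(FILE *sock, FILE *in)
    {
        char buf[8192];
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
            fprintf(sock, "%zx\r\n", n);   /* chunk size in hex */
            fwrite(buf, 1, n, sock);
            fputs("\r\n", sock);
        }
        fputs("0\r\n\r\n", sock);          /* last chunk, no trailers */
    }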
