found it. the server sends a Content-Encoding header, which causes hget
to add a decompression filter, so you get a tarball as output.

<- Content-Type: application/x-gzip
<- Content-Encoding: gzip

from the w3c:

The Content-Encoding entity-header field is used as a modifier to the
media-type. When present, its value indicates what additional content
codings have been applied to the entity-body, and thus what decoding
mechanisms must be applied in order to obtain the media-type referenced
by the Content-Type header field. Content-Encoding is primarily used to
allow a document to be compressed without losing the identity of its
underlying media type.

this is clearly silly, as the file is already compressed, and decompressing
it will not yield the indicated Content-Type (application/x-gzip) but a
bare tarball.
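a minimal sketch of the effect (in python, standing in for hget's filter;
the in-memory .tar.gz is a stand-in for the server's file): applying the
Content-Encoding gzip decode to a body whose Content-Type is already
application/x-gzip strips the compression and leaves a plain tar stream.

```python
import gzip
import io
import tarfile

# build a .tar.gz in memory, standing in for the file the server serves
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    data = b"hello"
    info = tarfile.TarInfo("hello.txt")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))
body = buf.getvalue()

# the body really is gzip data (magic bytes 1f 8b), matching
# Content-Type: application/x-gzip
assert body[:2] == b"\x1f\x8b"

# what the decompression filter does on seeing Content-Encoding: gzip
decoded = gzip.decompress(body)

# the result is no longer a gzip file but a bare tarball
# (POSIX ustar magic at offset 257)
assert decoded[257:262] == b"ustar"
```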

maybe the w3c is wrong, or is ignored in practice, or we need to handle
gzip specially. the problem is that some webservers compress the data on
the fly: you request an html file and get gzip back, and that's why hget
uncompresses.
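one way to handle gzip specially, as a hedged sketch (a hypothetical
helper, not hget's actual code): apply the transparent decode only when
the entity's own media type is not itself gzip, so a compressed html page
still gets decoded but a .tar.gz is saved untouched.

```python
# hypothetical helper, not hget's actual code: decide whether the
# Content-Encoding gzip filter should be applied
def should_decompress(content_type, content_encoding):
    if content_encoding != "gzip":
        return False
    # special case: the payload's own media type is already gzip,
    # so decoding would change the file the user asked for
    return content_type not in ("application/gzip", "application/x-gzip")

# an html page compressed on the fly gets decoded
print(should_decompress("text/html", "gzip"))            # True
# a tarball served as-is is left alone
print(should_decompress("application/x-gzip", "gzip"))   # False
```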

--
cinap
