On Mon, 20 Nov 2017, Tim Rühsen wrote:
There has already been a discussion about that (starting here:
http://lists.gnu.org/archive/html/bug-wget/2017-11/msg0.html).
It looks like we didn't fix it correctly.
Hmm, but between that report and now no new release has been published,
and the reported bug
On Fri, 3 Nov 2017, Tim Rühsen wrote:
On 11/03/2017 06:37 AM, James Cloos wrote:
"TR" == Tim Rühsen writes:
TR> I downloaded/tested thousands of web pages and they behave as if
TR> 'Content-Encoding: gzip' is a compression for the transport.
TR> Uncompressing it 'on-the-fly' and saving th
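(Whether a server really applies 'Content-Encoding: gzip' as transport
compression can be checked with a plain header dump. A minimal sketch
using curl, with example.com standing in for a real URL:

  $ curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' \
        https://example.com/ | grep -i '^content-encoding'
  Content-Encoding: gzip

If the grep prints nothing, the server sent the body uncompressed and
wget's new decompression code is never triggered.)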
Hi Tim,
On Wednesday, 1 November 2017 17:27:58 CET Jens Schleusener wrote:
Hi,
the new "wget" release 1.19.2 has got a new feature:
"gzip Content-Encoding decompression"
But that feature - at least for my self-conmpiled binary - leads to a
problem if one downloads gzip
Hi,
the new "wget" release 1.19.2 has got a new feature:
"gzip Content-Encoding decompression"
But that feature - at least for my self-conmpiled binary - leads to a
problem if one downloads gzip-compressed tarballs from sites that send for
e.g. an HTTP response header containing lines like
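(The archived message is truncated here. Judging from the rest of the
thread, the header lines in question are presumably of this form; the
URL is a placeholder, and the headers can be inspected with wget's own
-S option:

  $ wget -S https://example.org/foo.tar.gz 2>&1 | \
        grep -iE '^ *content-(type|encoding)'
    Content-Type: application/x-gzip
    Content-Encoding: gzip

With such a response, wget 1.19.2 decompresses the body on the fly, so
the file saved as foo.tar.gz is presumably no longer gzip data at all.)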
...ent configurations of a
self-administered Apache/Varnish system. Does anyone know a "simple"
(batch) tool to "simulate" real browser behaviour for that purpose? My
current test approach using Firefox with Firebug/PageSpeed and/or
Wireshark is probably realistic but a little bit troublesome
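(A minimal batch sketch along those lines, scripting curl with
browser-like request headers; the URLs, User-Agent string, and grepped
header names are placeholders to be adapted to the actual Varnish
setup:

  $ for url in https://example.org/ https://example.org/main.css; do
      echo "== $url"
      curl -s -o /dev/null -D - \
           -A 'Mozilla/5.0' \
           -H 'Accept-Encoding: gzip, deflate' \
           "$url" | grep -iE '^(content-encoding|content-length|via|age)'
    done

This only checks response headers, not rendering behaviour, but it is
scriptable and avoids the manual Firefox/Wireshark round trips.)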
Hi,
sorry, the wget behaviour described below may not be a real bug:
I often use the wget option
--page-requisites
("-p"), but for some test purposes I now also added the option
--header='Accept-Encoding: gzip, deflate'
Now wget downloads and saves, e.g., a file named index.html (not ind
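(The snippet is cut off here. A minimal reproduction of the setup
described above, with example.com as a placeholder URL, would be:

  $ wget -p --header='Accept-Encoding: gzip, deflate' https://example.com/

With that extra header wget announces gzip support to the server, so
the server may deliver index.html compressed; whether wget then stores
the raw gzip data or the decompressed page depends on the wget version,
which is what the rest of this thread is about.)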