More often than not, when I try to recursively download a website with wget,
it downloads a single `index.html.gz` and then stops. wget can't read gzipped
files, so it finds no links to follow recursively. I ended up using a wget
fork[1] that was last updated ten years ago; it works fine, but I find it odd
that such a basic feature never made it into mainline wget.
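
For anyone hitting the same wall, the workaround I reach for first is to ask
the server not to compress the response in the first place. This is only a
sketch (`https://example.com/` is a placeholder), and it only helps when the
server honors `Accept-Encoding`:

```
# Request an uncompressed response so wget can parse the HTML for
# links. Works only if the server honors Accept-Encoding.
wget --recursive --header="Accept-Encoding: identity" https://example.com/
```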

Please add a feature for automatically detecting and uncompressing gzipped 
webpages before crawling them.
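
The detection side seems simple enough: gzip streams always start with the
magic bytes `1f 8b`. A rough sketch of the check (against the saved
`index.html.gz` from the example above) could look like this:

```
# Sketch: gzip data begins with the magic bytes 0x1f 0x8b.
# If the saved page matches, decompress it before parsing for links.
if [ "$(head -c 2 index.html.gz | od -An -tx1 | tr -d ' ')" = "1f8b" ]; then
    gunzip -f index.html.gz    # leaves index.html for link extraction
fi
```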

[1] https://github.com/ptolts/wget-with-gzip-compression