Steven M. Schweda wrote:

   But the real question is: If a Web page has links to other files, how
is Wget supposed to package all that stuff into _one_ file (which _is_
what -O will do), and still make any sense out of it?

Even more: how is Wget supposed to properly post-process the saved data, which may well be a mixture of HTML pages and binary files?

From my perspective, the main problem with -O is that Wget users do not seem to understand its semantics. -O behaves like stdio redirection in the shell (or like a pipeline, as in "wget -O - | someothercommand"), and it comes with some non-negligible limitations (e.g. in the post-processing of the saved data). -O was never meant to provide "rename the saved files after download and postprocessing" semantics.
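To make the redirection semantics concrete: with multiple URLs, -O simply concatenates every retrieved body into the single named file, just as shell redirection would. A minimal local sketch of that behavior (using cat in place of actual downloads, so it runs without a network):

```shell
# wget -O everything URL1 URL2  writes both bodies, back to back,
# into "everything" -- analogous to plain shell redirection:
printf 'page one\n' > a.html
printf 'page two\n' > b.html
cat a.html b.html > everything   # one file, both documents run together
cat everything
```

The resulting file contains both documents glued together, which is rarely what a user hoping for "save under this name" wanted, and which no post-processing step can sensibly untangle.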

Perhaps we should make this clear in the manpage, and provide an additional option which simply renames the saved files, after download and postprocessing, according to a given pattern. IIRC, Hrvoje suggested doing exactly this some time ago. What do you guys think?
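To illustrate the intended semantics of such an option (which does not exist yet; the "pattern" here is purely hypothetical): the renaming would happen only after Wget has finished saving and link-converting its files, e.g.:

```shell
# Sketch of "rename after download and postprocessing" semantics.
# Stand-ins for files wget would have saved and already post-processed:
mkdir -p demo && cd demo
touch index.html logo.png

# Apply a (hypothetical) rename pattern -- here, prefixing "mirror-":
for f in *; do
  mv "$f" "mirror-$f"
done
```

Unlike -O, each file keeps its own identity, so link conversion and content-type handling work normally; only the final on-disk names change.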

--
Aequam memento rebus in arduis servare mentem...

Mauro Tortonesi                          http://www.tortonesi.com

University of Ferrara - Dept. of Eng.    http://www.ing.unife.it
GNU Wget - HTTP/FTP file retrieval tool  http://www.gnu.org/software/wget
Deep Space 6 - IPv6 for Linux            http://www.deepspace6.net
Ferrara Linux User Group                 http://www.ferrara.linux.it
