Hello,
I want to archive an HTML page and « all the files that are necessary to
properly display » it (Wget manual), plus all the linked images
(<a href="linked_image_url"><img src="inlined_image_url"></a>). I tried
most options and features: recursive archiving, including and excluding
directories, and…
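A minimal sketch of the baseline invocation under discussion, with a
placeholder URL: --page-requisites fetches the inlined images (img src),
but not the separate files that the surrounding a href points to.

    wget --page-requisites --convert-links http://www.example.org/gallery.html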
On Tuesday, March 7, 2006 at 8:56:15 +0100, Hrvoje Niksic wrote:
Alain Bench [EMAIL PROTECTED] writes:
… unusable nonsense … totally weird.
So far Google didn't help me much. Or rather, it discouraged me with
« Borland supports only the C locale »-style statements. But I didn't
find any official doc.
Alain Bench [EMAIL PROTECTED] writes:
Sure: I installed Subversion, and began learning it, just to get my
hands on the said trunk (I found no tar.gz snapshots?).
There are no tar.gz snapshots yet. They would be easy to autogenerate;
it's just that no one has volunteered to set up such a script.
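For illustration only, a snapshot script of the kind mentioned could be
as small as this sketch; the repository URL and file names are
placeholders, not the real wget repository address.

    #!/bin/sh
    # Hypothetical nightly snapshot generator: export a clean copy of
    # the trunk (no .svn directories), then roll it into a dated tarball.
    DATE=`date +%Y%m%d`
    svn export svn://svn.example.org/wget/trunk wget-$DATE &&
    tar czf wget-$DATE.tar.gz wget-$DATE &&
    rm -rf wget-$DATE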
Frank McCown wrote:
I'm afraid wget won't do exactly what you want it to do. Future
versions of wget may enable you to specify a wildcard to select which
files you'd like to download, but I don't know when you can expect
that behavior.
The more I use wget, the more I like it, even if I use…
I have another opinion about that limitation. Could it be considered as
a bug?
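The workaround weighed in this thread adds one level of recursion with
a wildcard accept list; a sketch with a placeholder URL (-A takes
comma-separated suffixes or shell-style patterns):

    wget --page-requisites --convert-links -r -l 1 \
         -A '*.jpg,*.png,*.gif' http://www.example.org/gallery.html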
Tony Lewis wrote:
Mauro Tortonesi wrote:
I would like to read other users' opinions before deciding which
course of action to take, though.
Other users have suggested adding a command-line option for -a two or
three times in the past:
- 2002-11-24: Steve Friedl [EMAIL PROTECTED]
Hi,
Jean-Marc MOLINA wrote:
I have another opinion about that limitation. Could it be considered as
a bug? From the Types of Files section of the manual we can read:
« Note that these two options do not affect the downloading of HTML
files; Wget must load all the HTMLs to know where to… »
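The behavior that manual passage describes can be seen with an accept
list that matches no HTML at all; the URL is a placeholder, and the
clean-up detail depends on the wget version:

    wget -r -l 1 -A '*.png' http://www.example.org/page.html
    # page.html is downloaded anyway so its links can be parsed, and
    # may then be removed afterwards, since it does not match the
    # accept list (the exact clean-up behavior depends on the version).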
Denis Solovyov [EMAIL PROTECTED] writes:
Is it true or false that if --connect-timeout is set to a value larger
than the timeout implemented by the system libraries, it makes no
sense, because the system timeout will take precedence (i.e. it will
fire earlier than wget's internal timeout)?
True.
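In other words, a large --connect-timeout is only a ceiling; a small
one still takes effect. A sketch, with a placeholder URL:

    wget --connect-timeout=10 --tries=3 http://www.example.org/big.iso
    # 10 seconds takes effect because it is below the system's own TCP
    # connect timeout; a value above the system timeout would never be
    # reached, exactly as confirmed above.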