Hrvoje Niksic wrote:
I think you have a point there -- -A shouldn't so blatantly invalidate
-p. That would be IMHO the best fix to the problem you're
encountering.
Frank mentioned that limitation in his first reply.
thomas [EMAIL PROTECTED] writes:
I tried adding '-r -l1 -A.pdf', but that removes the HTML page and all the
'-p' files.
How about -r -l1 -R.html? That would download the HTML page and the linked
content, but not other HTML files.
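For concreteness, that suggestion would look something like this (the URL is
only a placeholder):

  wget -r -l1 -R.html http://example.com/page.html

-r -l1 limits the recursion to one level from the starting page, and -R.html
puts files ending in .html on the reject list.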
Well, that doesn't work in most real-case situations: .html files are a minority nowadays, so you'd still get all the dynamic pages (.php, URLs with no extension, etc.).
thomas [EMAIL PROTECTED] writes:
I feel like the desired behavior is closer to -p than -r. It seems
kind of unnatural to me that --accept totally overrides -p, but on
the other hand the current -A behavior is important in the context
of -r.
I think you have a point there -- -A shouldn't so blatantly invalidate
-p. That would be IMHO the best fix to the problem you're encountering.
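To make the conflict concrete, a sketch of the failing combination (the URL is
hypothetical):

  wget -p -r -l1 -A.pdf http://example.com/page.html

As thomas describes, the accept list wins: Wget fetches the page and its
requisites, then deletes everything that doesn't match -A.pdf, so only the
PDFs survive.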
Tobias Tiederle wrote:
I just set up my compile environment for Wget again.
When I did regex support, I had the same problem with exclusion, so I
introduced a new parameter, --follow-excluded-html (which is of course
the default); you can turn it off with --no-follow-excluded-html...
See
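As a sketch of how the proposed switch might combine with the existing
accept/reject options (the URL is hypothetical, and the option exists only in
Tobias's patch, not in stock Wget):

  wget -r -l1 -A.jpg --no-follow-excluded-html http://example.com/page.html

the intent being that Wget would no longer follow HTML pages that the
exclusion rules reject.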
Mauro Tortonesi wrote:
Although I really dislike the name --no-follow-excluded-html, I
certainly agree on the necessity to introduce such a feature into
wget.
Can we come up with a better name (and reach consensus on that)
before I include this feature in wget 1.11?
I agree no shouldn't be
Tobias Tiederle wrote:
Hi,
Jean-Marc MOLINA wrote:
I just set up my compile environment for Wget again.
When I did regex support, I had the same problem with exclusion, so I
introduced a new parameter, --follow-excluded-html (which is of course
the default); you can turn it off with --no-follow-excluded-html...
Hello,
I want to archive an HTML page and « all the files that are necessary to
properly display » it (Wget manual), plus all the linked images
(<a href=linked_image_url><img src=inlined_image_url></a>). I tried most
options and features: recursive archiving, including and excluding
directories and
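With stock Wget, the closest approximation I can think of combines page
requisites with one level of recursion, something like (URL hypothetical):

  wget -p -r -l1 -k http://example.com/page.html

-p fetches the files needed to display the page, -r -l1 additionally fetches
everything linked from it (the linked images, but also any other linked
pages), and -k rewrites the links to point at the local copies.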
Frank McCown wrote:
I'm afraid wget won't do exactly what you want it to do. Future
versions of wget may enable you to specify a wildcard to select which
files you'd like to download, but I don't know when you can expect
that behavior.
The more I use wget, the more I like it, even if I use
Frank McCown wrote:
I'm afraid wget won't do exactly what you want it to do. Future
versions of wget may enable you to specify a wildcard to select which
files you'd like to download, but I don't know when you can expect
that behavior.
I have another opinion about that limitation. Could it be considered as a bug?
Hi,
Jean-Marc MOLINA wrote:
I have another opinion about that limitation. Could it be considered as a
bug? From the Types of Files section of the manual we can read: « Note
that these two options do not affect the downloading of HTML files; Wget
must load all the HTMLs to know where to go at all ».
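One practical consequence of that rule: to keep any HTML alongside the
accepted files, the HTML suffix has to go on the accept list as well, for
example (URL hypothetical):

  wget -r -l1 -A '.html,.pdf' http://example.com/page.html

which keeps both, at the cost of keeping every retrieved .html page rather
than just the starting one.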