On 20.07.2006 15:22, Mauro Tortonesi wrote:
Stefan Melbinger wrote:
By the way, FTP transfers shouldn't be downloaded as a whole in this
mode either.
Well, the semantics of --spider for FTP are still not very clear to me.
At the moment, I was considering whether to simply perform an FTP listing
in case --spider is given, or to disable
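For illustration only, here is a rough Python sketch (not wget code) of
what a listing-based FTP check could look like; the host, path, and
credentials are made-up placeholders:

# Illustrative sketch only (not wget code): verify that an FTP URL exists
# by asking the server for a directory listing instead of retrieving the
# file.  Host, path, and credentials are made-up placeholders.
from ftplib import FTP, error_perm

def ftp_link_ok(host, path, user="anonymous", password="guest@example.com"):
    """Return True if the last path component appears in its directory listing."""
    directory, _, name = path.rpartition("/")
    with FTP(host) as ftp:
        ftp.login(user, password)
        try:
            # NLST transfers file names only, never file contents.
            names = ftp.nlst(directory or "/")
        except error_perm:
            return False
    return name in (n.rsplit("/", 1)[-1] for n in names)

print(ftp_link_ok("ftp.example.com", "/pub/somefile.tar.gz"))

The point being that a listing (or SIZE) round-trip is enough to confirm
the file exists, without transferring its contents.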
On 20.07.2006 14:32, Mauro Tortonesi wrote:
Stefan Melbinger wrote:
Hi,
As you might have noticed, I was trying to use wget as a tool to check
for dead links on big websites. The combination of -r and --spider is
working in the new alpha version; however, wget is still downloading ALL
files (no matter if they are parseable for further links or not).
Hello there,
I think I've got the most recent version of wget running now (SVN
trunk), and I think there is another problem:
When the report about the broken links is printed at the end, the
duration of the operation is always 0 seconds:
"FINISHED --11:56:10--
Downloaded: 62 files, 948K in
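For comparison, the duration in that summary would be expected to be the
wall-clock time of the whole run. A trivial sketch (plain Python,
placeholder URLs, not wget internals) of that kind of measurement:

# Not wget internals -- just a minimal illustration of measuring the
# wall-clock duration of a whole run, which is what the FINISHED summary
# is expected to report.  The loop merely stands in for the real downloads.
import time

start = time.monotonic()
for url in ("http://example.com/a", "http://example.com/b"):
    time.sleep(0.1)  # placeholder for one download / link check
elapsed = time.monotonic() - start
print(f"FINISHED --{time.strftime('%H:%M:%S')}--")
print(f"Total wall-clock time: {elapsed:.1f}s")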
Hi again,
I don't think that non-existent robots.txt files should be reported as
broken links (as long as they are not referenced by some page).
Current output, if spanning 2 hosts (e.g., -D
www.domain1.com,www.domain2.com):
-
Found 2 broken links.
http://www.domain1.com/robots.txt
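One way to express that rule -- as a rough sketch, not wget's actual
logic or data structures -- is to count a URL as a broken link only if
some crawled page actually referenced it; robots.txt probes that wget
issues on its own then never enter the report. All names and URLs below
are illustrative:

# Illustrative only -- not wget's data structures.  `failed_urls` holds every
# URL that came back 404, and `referrers` maps a URL to the set of pages that
# linked to it; robots.txt probes issued by the crawler itself have no
# referrer, so they drop out of the broken-link report.
failed_urls = {
    "http://www.domain1.com/robots.txt",
    "http://www.domain1.com/old-page.html",
}
referrers = {
    "http://www.domain1.com/old-page.html": {"http://www.domain1.com/index.html"},
}

broken_links = [url for url in sorted(failed_urls) if referrers.get(url)]
print(f"Found {len(broken_links)} broken links.")
for url in broken_links:
    print(url)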
Hi,
As you might have noticed, I was trying to use wget as a tool to check
for dead links on big websites. The combination of -r and --spider is
working in the new alpha version; however, wget is still downloading ALL
files (no matter if they are parseable for further links or not),
instead of
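If I read the request correctly, the desired behaviour is: fetch and
parse only documents that can contain further links (HTML), and merely
probe every other URL without downloading its body. A rough Python
sketch under that assumption (placeholder URL, HEAD requests, not
wget's implementation):

# Not wget's implementation -- a rough sketch of the requested behaviour:
# GET and parse only pages that can contain further links (HTML), and only
# probe everything else with a HEAD request, so large binaries are never
# downloaded.  The URL below is a placeholder.
import urllib.request
from urllib.error import HTTPError, URLError

def probe(url):
    """HEAD a URL; return (status, content_type) without fetching the body."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status, resp.headers.get("Content-Type", "")
    except HTTPError as err:
        return err.code, ""
    except URLError:
        return None, ""

url = "http://example.com/"
status, ctype = probe(url)
if status is None or status >= 400:
    print("broken link:", url)
elif ctype.startswith("text/html"):
    # Only now is the body worth downloading, to extract further links.
    html = urllib.request.urlopen(url, timeout=10).read()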