For the record, the real --domains value was
'www.euroskop.cz,www2.euroskop.cz,rozcestnik.euroskop.cz'.
In this case, that doesn't change the output, though.
Have a nice day,
Stefan
On 28.08.2006 16:44, Mauro Tortonesi wrote:
Stefan Melbinger wrote:
Hello everyone,
I'm having trouble
Hello everyone,
I'm having trouble with the newest trunk version of wget (revision 2187).
Command-line arguments:
wget
--recursive
--spider
--no-parent
--no-directories
--follow-ftp
--retr-symlinks
--no-verbose
--level='2'
--span-hosts
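Assembled into one command (using the --domains value mentioned at the top of the thread), the invocation would look roughly as follows; this is only a sketch, and the start URL is an assumed placeholder, not taken from the original mail:

# sketch of the combined options; the start URL below is assumed
wget --recursive --spider --no-parent --no-directories \
  --follow-ftp --retr-symlinks --no-verbose --level=2 \
  --span-hosts --domains='www.euroskop.cz,www2.euroskop.cz,rozcestnik.euroskop.cz' \
  http://www.euroskop.cz/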
Hello,
Could somebody please tell me what's wrong with my way of compiling wget
as described on the Wget development page [1]?
Here's what I get after running autoheader and autoconf, when I try to
run configure:
./configure --prefix=$HOME/wget
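For reference, the steps described amount to roughly this sequence (a sketch assuming a fresh checkout of the SVN trunk; the trailing make step is implied, not quoted from the mail):

autoheader                       # regenerate config.h.in
autoconf                         # regenerate the configure script
./configure --prefix=$HOME/wget  # configure an install under $HOME/wget
make                             # build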
Hi,
On 21.07.2006 12:45, www.mail wrote:
The following command crashes wget 1.11-alpha-1 on Windows 2000 SP4:
wget --output-document=- --no-content-disposition http://www.google.com/
Just FYI: This works in the current SVN trunk version, at least on
Windows XP Professional SP2.
Hi,
As you might have noticed, I was trying to use wget as a tool to check
for dead links on big websites. The combination of -r and --spider is
working in the new alpha version; however, wget is still downloading ALL
files (no matter whether they are parseable for further links or not),
instead of
Hi again,
I don't think that non-existent robots.txt files should be reported as
broken links (as long as they are not referenced by some page).
Current output, if spanning over 2 hosts (e.g., -D
www.domain1.com,www.domain2.com):
-
Found 2 broken links.
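A run of the kind described would look roughly like this (a sketch: the domains are the placeholders from the -D example above, and the start URL is assumed):

# placeholder domains and assumed start URL, for illustration only
wget -r --spider --span-hosts -D www.domain1.com,www.domain2.com http://www.domain1.com/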
Hello there,
I think I've got the most recent version of wget running now (SVN
trunk), and I think there is another problem:
When the report about the broken links is printed at the end, the
duration of the operation is always 0 seconds:
FINISHED --11:56:10--
Downloaded: 62 files, 948K in
On 20.07.2006 14:32, Mauro Tortonesi wrote:
Stefan Melbinger wrote:
As you might have noticed, I was trying to use wget as a tool to check
for dead links on big websites. The combination of -r and --spider is
working in the new alpha version; however, wget is still downloading
ALL files
On 20.07.2006 15:22, Mauro Tortonesi wrote:
Stefan Melbinger wrote:
By the way, FTP transfers shouldn't be downloaded as a whole in this
mode, either.
Well, the semantics of --spider for FTP are still not very clear to me.
At the moment, I was considering whether to simply perform FTP
would have to be parsed completely,
while pictures and other files should only be HEAD-checked for existence
(in order to save bandwidth)...
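For a single URL, an existence check of that sort can be illustrated as follows (an assumed example URL, not taken from the thread; --spider checks that the resource exists without saving the body):

# assumed example URL, for illustration only
wget --spider http://www.example.com/picture.jpg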
Using --spider and --spider -r was not the right way to do this, I fear.
Any help is appreciated, thanks in advance!
Greets,
Stefan Melbinger
Hi,
First of all, thanks for the quick answer! :)
On 18.07.2006 17:34, Mauro Tortonesi wrote:
Stefan Melbinger wrote:
I need to check whole websites for dead links, with output easy to
parse for lists of dead links, statistics, etc... Does anybody have
experience with that problem