Hi,

First of all, thanks for the quick answer! :)

On 18.07.2006 17:34, Mauro Tortonesi wrote:
> Stefan Melbinger wrote:
>> I need to check whole websites for dead links, with output that is easy to parse for lists of dead links, statistics, etc. Does anybody have experience with this problem, or has perhaps used the --spider mode for this before (as suggested by some pages)?
>
> Historically, wget never really supported recursive --spider mode. Fortunately, this has been fixed in 1.11-alpha-1:

How will wget behave when started in recursive --spider mode? It will have to download, parse, and then delete/forget HTML pages in order to know where to go next, but what happens with images and large files like videos, for example? Will wget check whether they exist?
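For context, the kind of invocation I have in mind is roughly the following (untested; the URL and log file name are just placeholders, and I am assuming the 1.11-alpha-1 build you mentioned):

  # recursive spider run: follow links without keeping any files,
  # writing all output to a log file for later parsing
  wget --spider -r -o wget-spider.log http://www.example.com/

  # rough idea for pulling failures out of the log afterwards;
  # the exact log format may differ, so the pattern would need adjusting
  grep '404 Not Found' wget-spider.log

Does that look like a sensible way to use the new recursive spider support, or is there a better-suited set of options?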

Thanks a lot,
  Stefan

PS: The background for my question is that my company wants to check large websites for dead links (without using any commercial software). Hours of Google searching left me with wget, which seems to have the best fundamentals for the job...
