Hello,

I need to check whole websites for dead links, with output that is easy to parse into lists of dead links, statistics, etc. Does anybody have experience with this problem, or has anybody perhaps used the --spider mode for this before (as suggested on some web pages)?
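To be clear, by --spider mode I mean the non-downloading existence check, roughly like this (example.com is just a placeholder):

    # Check a single URL for existence without saving anything locally.
    wget --spider http://example.com/somepage.html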

For this to work, all HTML pages would have to be downloaded and parsed completely, while images and other files should only be checked for existence via HEAD requests (in order to save bandwidth)...

Using --spider and --spider -r was not the right way to do this, I fear.
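For reference, what I tried was essentially this (example.com stands in for the real site; the -o option is only there so the log can be parsed afterwards):

    # Recursively spider the site without saving any files, writing
    # the full log to spider.log for later parsing.
    wget --spider -r -o spider.log http://example.com/

    # The log could then be searched for failed requests, e.g. 404
    # responses (the exact wording depends on the wget version):
    grep -B 2 ' 404 ' spider.log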

Any help is appreciated, thanks in advance!

Greets,
  Stefan Melbinger
