"Tony Lewis" <[EMAIL PROTECTED]> writes:

> Hrvoje Niksic wrote:
>
>> I'm curious: what is the use case for this?  Why would you want to
>> save the unfollowed links to an external file?
>
> I use this to determine what other websites a given website refers to.
>
> For example:
> wget http://directory.google.com/Top/Regional/North_America/United_States/California/Localities/H/Hayward/ \
>     --mirror -np --unfollowed-links=hayward.out
>
> By looking at hayward.out, I get a list of all the websites that the
> directory refers to. Before I use the file, I sort it and throw away
> the Google and DMOZ links. Everything else is supposed to be
> something interesting about Hayward.

I see.  Hmm.. if you have to post-process the list anyway, wouldn't it
be more useful to have a list of *all* encountered URLs?  It might be
nice to accompany that output with the exit status of each retrieval,
so people could easily grep for 404s, along the lines of the sketch
below.
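
For instance, assuming a hypothetical report format of one "URL
STATUS" pair per line in a file called report.out (no such output
exists in Wget today, and all the file names here are made up), the
post-processing you describe could be as simple as:

    # Hypothetical: report.out holds one "URL STATUS" pair per line.
    sort -u report.out > sorted.out
    # Drop the directory-internal links, as in your Hayward example:
    grep -v -e 'google\.com' -e 'dmoz\.org' sorted.out > external.out
    # With statuses included, dead links fall out of a simple grep:
    grep ' 404$' external.out

The point being that a single flat file with statuses would cover both
your filtering step and the hunt for broken links.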

A comprehensive reporting facility has often been requested.  Perhaps
something should be done about it for the next release.
