For that you might consider using "wget", which can recursively retrieve
web sites (among other things). See the following for more info:
http://www.gnu.org/software/wget/wget.html
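
For example, something along these lines should do a recursive fetch (a
rough sketch; the exact options depend on your wget version, and
http://www.example.com/ is just a placeholder for your starting URL):

  wget -r -l 5 -np -p http://www.example.com/

Here -r turns on recursive retrieval, -l 5 limits recursion to five levels,
-np keeps wget from wandering up into parent directories, and -p also grabs
the images and stylesheets each page needs. Check "wget --help" or the
manual for the full option list.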
Mark Friedman wrote:
>
> Is there a way to configure htdig to just spider and collect pages and
> documents, without doing any of the index/search-related stuff?
>
> Thanks in advance.
>
> -Mark
--
Tim Peterman - Web Master,
IT&P Unix Support Group Technical Lead
Lockheed Martin EIS/NE&SS, Moorestown, NJ