ht://Dig is doing exactly what it's told to: ignoring pages that are marked off-limits
for search engines. What I think you would be best off doing is creating a text file
with all the links in it and then pointing 'start_url' at that text file (just
links, no <a href> tags). It will then dig the links but not index the listing file
itself.
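Roughly, it could look something like this (just a sketch; the file name and URLs are
made up, and if I remember right the backquotes are the mechanism ht://Dig's attribute
files use to pull a value in from a file):

  In htdig.conf:

    start_url: `${common_dir}/allnames_urls`

  And ${common_dir}/allnames_urls would contain nothing but one URL per line:

    http://www.fanac.org/some/page1.html
    http://www.fanac.org/some/page2.html

Since that file is read locally as a list of starting points rather than crawled as a
document, it should not turn up in the search results itself.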
You should be able to find documentation on this (I can't remember where it's at right
now).
Hope this helps,
Atle
>>> "Fanac Webmaster" <[EMAIL PROTECTED]> 04/14/00 04:31PM >>>
On the fanac.org site there is a cross reference listing of all the names that
I have been able to find that have been mentioned in the various documents that
the site holds. I do not want this listing to be indexed by ht://Dig because
all of these documents are reachable by other paths, so I put <META NAME="robots"
CONTENT="noindex,nofollow"> in the header section of its index. However, I do
want the search engine to be able to find the entries in the index, so I
created an HTML document (allnames.html) that looks like this: