At least IMO, true operational requirements for any such system would be
quite user-specific.  The full set of user requirements would tend to
include (a rough sketch follows the list):

    Scheduling capabilities.
    Varying frequencies, perhaps even within the same URL.
    Inclusion and/or exclusion of specified nodes within a URL.
    Statistics-recording capabilities.
    Varying underlying-database formats.
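Just to make that concrete, here's one way those per-URL settings might be
captured.  Python's sqlite3 is used purely as a stand-in for whatever
Perl/DBI-style backend a user prefers, and the table/column names are
hypothetical:

    import sqlite3

    def create_schema(db_path="urls.db"):
        """Hypothetical per-URL settings table for the requirements above."""
        conn = sqlite3.connect(db_path)
        conn.executescript("""
            CREATE TABLE IF NOT EXISTS crawl_targets (
                url            TEXT PRIMARY KEY,   -- start URL for this entry
                active         INTEGER DEFAULT 1,  -- 0 = removed; can't creep back in by mistake
                dig_interval   INTEGER DEFAULT 7,  -- days between digs (per-URL frequency)
                include_nodes  TEXT DEFAULT '',    -- patterns the dig is limited to
                exclude_nodes  TEXT DEFAULT '',    -- patterns to skip within the URL
                last_dig       TEXT,               -- timestamp of the last run
                last_doc_count INTEGER             -- simple statistics recording
            );
        """)
        conn.commit()
        return conn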

If this were implemented AT ALL, I'd see it in the form of an independent
sub-system, with output taking the form of .conf files which would then be
processed through an htdig/htmerge script.  The format of the underlying
database would be "up to the user", perhaps accessed through something
similar to Perl/DBI.

In a message dated 1/4/01 11:43:05 AM US Mountain Standard Time, 
[EMAIL PROTECTED] writes:

<< When using a very large list of URLs to index, it can get pretty tough to
 keep track of which sites to index or not.  In other words, a site that I
 remove from the list could end up back in by error and be indexed again.

 I can think of a simple way to take care of that, some sort of database
 program to maintain that list, but would it be possible to add some smarts
 to htdig to take care of that?
  >>
