If we want to be safe, we could implement rules like:
1) Only tag a server as down after N consecutive failed connections.
2) Only "remember" the server as down for M minutes.
   After M minutes have passed, go back to rule 1).
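The two rules above could be sketched roughly like this (a hypothetical illustration in Python, not htdig's actual C++ code; the class and method names are made up for this example):

```python
import time


class ServerStatus:
    """Track per-host failures per the proposed rules:
    tag a host "down" only after N consecutive failed connections,
    and forget that state after M minutes so the host is retried."""

    def __init__(self, n_failures=3, down_minutes=10, clock=time.monotonic):
        self.n_failures = n_failures            # rule 1: failures before tagging down
        self.down_seconds = down_minutes * 60   # rule 2: how long "down" is remembered
        self.clock = clock                      # injectable clock, eases testing
        self.failures = {}    # host -> consecutive failure count
        self.down_until = {}  # host -> time when the "down" tag expires

    def is_down(self, host):
        expiry = self.down_until.get(host)
        if expiry is None:
            return False
        if self.clock() >= expiry:
            # M minutes elapsed: forget the tag and fall back to rule 1
            del self.down_until[host]
            self.failures[host] = 0
            return False
        return True

    def record_failure(self, host):
        self.failures[host] = self.failures.get(host, 0) + 1
        if self.failures[host] >= self.n_failures:
            self.down_until[host] = self.clock() + self.down_seconds

    def record_success(self, host):
        # any successful connection clears both counters
        self.failures[host] = 0
        self.down_until.pop(host, None)
```

The digger would then call is_down() before each URL fetch and simply skip URLs on hosts currently tagged down, instead of timing out on every one.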

Frank


On Sun, 18 Jul 1999, Frank Guangxin Liu wrote:

> 
> 
> Normally the update htdig is pretty fast, but this time
> it took several days. Further investigation showed that a
> major web server was down during the update dig. It appears
> that htdig tries to update each URL on that server
> and times out on each one. Since that is a major server
> with tens of thousands of files, it took htdig a long time
> to check through (update) all of them (and then time out on
> each file). I would suggest modifying "htdig" so that
> once it finds a server is down, it "remembers" this
> for the current digging session and skips digging that server
> instead of incurring numerous timeouts.
> 
> Even for an initial dig, it would be better for htdig
> to "remember" when a server is down (whether the server
> machine itself is down or the httpd process on it is down).
> 
> Frank
> 
> 
> 
> ------------------------------------
> To unsubscribe from the htdig mailing list, send a message to
> [EMAIL PROTECTED] containing the single word "unsubscribe" in
> the SUBJECT of the message.
> 
