Check my follow-up post for solutions:
If we want to be safe, we could implement something like:
1) Only tag a server as down after N failed connections.
2) Only "remember" the server as down for M minutes.
   After M minutes, go back to rule 1).
M and N can be configurable if you are so inclined; see the sketch below.
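
Just to make the idea concrete, here is a rough sketch of the bookkeeping
in C++. The class and names are made up for illustration; this is not
htdig's actual code, and the real digger would hook recordFailure() and
recordSuccess() into its connection handling however it sees fit.

    // Hypothetical sketch of the N-failures / M-minutes rule above; not
    // htdig's actual code. Tracks per-server failure counts and skips
    // servers marked down, forgetting the mark after M minutes.
    #include <ctime>
    #include <map>
    #include <string>

    class ServerStatus {
    public:
        ServerStatus(int maxFailures, int downMinutes)
            : maxFailures_(maxFailures), downSeconds_(downMinutes * 60L) {}

        // Rule 1: tag a server as down only after N failed connections.
        void recordFailure(const std::string &server) {
            Entry &e = entries_[server];
            if (++e.failures >= maxFailures_)
                e.downSince = std::time(0);
        }

        // A successful connection wipes the server's failure history.
        void recordSuccess(const std::string &server) {
            entries_.erase(server);
        }

        // Rule 2: "remember" the server as down for only M minutes;
        // after that, reset and fall back to rule 1.
        bool isDown(const std::string &server) {
            std::map<std::string, Entry>::iterator it = entries_.find(server);
            if (it == entries_.end() || it->second.downSince == 0)
                return false;
            if (std::time(0) - it->second.downSince >= downSeconds_) {
                entries_.erase(it);   // M minutes elapsed: start over
                return false;
            }
            return true;              // still in the down window: skip it
        }

    private:
        struct Entry {
            Entry() : failures(0), downSince(0) {}
            int failures;
            std::time_t downSince;    // 0 means not currently marked down
        };
        std::map<std::string, Entry> entries_;
        int maxFailures_;
        long downSeconds_;
    };

The digger would ask isDown() before opening each connection and simply
skip the server while the answer is yes, which also addresses Albert's
concern below: after M minutes a briefly-down server gets retried.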
Frank
On Mon, 19 Jul 1999, Albert Desimone jr wrote:
>
> On Sun, 18 Jul 1999, Frank Guangxin Liu wrote:
>
> > the files). I would suggest modifying "htdig" so that
> > once it finds a server is down, it "remembers" this
> > for the current digging session and skips digging this server
> > instead of hitting numerous timeouts.
>
> And then Geoff Hutchinson said:
>
> > This is a good point.
>
> Which, of course, it is. However, under some circumstances I can see
> where remembering a downed server may not be ideal. For example, you have
> a link to a URL on a server, and the server is down for only a very short
> period, or a network hiccup or something equally minor inhibits retrieval
> of robots.txt on the server. Then a little further into the dig a URL to
> the same server is encountered, and the server is now back up (not an
> unlikely scenario when you index across hundreds of servers, as many
> of us do). Hitting it again could be beneficial.
>
> Trade-off question, I guess. (Just thinking out loud here.)
>
> -bd