Sim IJskes - QCG wrote:
Hello,

does anybody know, or know how to calculate, how long it would take (taking the complete system into account) from a call to a Remote to an error in the following scenario:

A workstation with a TCP-based Endpoint. Service registered. Remote handle retrieved by the client. Physical network disconnect on the server side.

(From the TCP side of things this would constitute complete packet loss with no other indications: no ICMP, for instance, and no interface takedown at the client, since the cable disconnect is on the server side.)

With TCP, keepalive is the only mechanism that can reveal a complete physical-layer loss to a "reading" socket. Write retries will eventually produce a timeout. The typical TCP timers for these things are around 3 minutes, depending on local configuration; on Linux, for example, you can "echo" new timer values into the kernel /proc tree.
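To avoid waiting on those kernel timers, the client can enable keepalive and also put an application-level read timeout on the socket. A minimal sketch, assuming a plain java.net.Socket (the host, port, and 30-second timeout here are placeholders for illustration, not values from this thread):

    import java.io.InputStream;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    public class KeepAliveProbe {
        public static void main(String[] args) throws Exception {
            Socket s = new Socket();
            s.connect(new InetSocketAddress("server.example.com", 4160), 10_000);

            // Ask the OS to send keepalive probes on an otherwise idle
            // connection. When the probes begin is kernel-controlled; on
            // Linux see /proc/sys/net/ipv4/tcp_keepalive_time,
            // tcp_keepalive_intvl and tcp_keepalive_probes.
            s.setKeepAlive(true);

            // Independently, bound how long a blocking read() may wait, so
            // the application sees an error well before the TCP-level
            // timers fire.
            s.setSoTimeout(30_000);

            try (InputStream in = s.getInputStream()) {
                in.read(); // blocks for at most 30 seconds
            } catch (SocketTimeoutException e) {
                // Treat as "peer possibly gone": retry, rediscover, or
                // fail the call up to the application.
                System.err.println("read timed out: " + e.getMessage());
            } finally {
                s.close();
            }
        }
    }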

This is another reason why I tend to use smart proxies with leases in them, with the service holding a lease listener. In the end, I've found that a lease reveals the "lost client" case, which needs cleanup actions performed, most uniformly. Not only that, but a lease-renewal failure in the client can trigger rediscovery and retrieval of an appropriate replacement server instance as well, as the sketch below illustrates.
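A rough sketch of the client side of that pattern, assuming the lease comes from the smart proxy and using net.jini.lease.LeaseRenewalManager to drive renewals; ProxyLeaseMonitor and rediscoverService() are hypothetical names standing in for whatever lookup/discovery logic the client actually uses:

    import net.jini.core.lease.Lease;
    import net.jini.lease.LeaseListener;
    import net.jini.lease.LeaseRenewalEvent;
    import net.jini.lease.LeaseRenewalManager;

    // Hypothetical client-side monitor: keeps the lease embedded in a
    // smart proxy alive, and treats a renewal failure as "server lost".
    public class ProxyLeaseMonitor implements LeaseListener {
        private final LeaseRenewalManager lrm = new LeaseRenewalManager();

        public void monitor(Lease proxyLease) {
            // Renew indefinitely; if a renewal fails, notify() is called.
            lrm.renewUntil(proxyLease, Lease.FOREVER, this);
        }

        // Invoked by the LeaseRenewalManager when it can no longer renew.
        public void notify(LeaseRenewalEvent ev) {
            System.err.println("lease renewal failed: " + ev.getException());
            rediscoverService();
        }

        private void rediscoverService() {
            // Placeholder: e.g. use a ServiceDiscoveryManager lookup to
            // find and bind to another instance of the service.
        }
    }

On the service side, the expiry of an unrenewed lease signals the "lost client" uniformly, regardless of how the network actually failed, so cleanup can be driven from lease expiration rather than from TCP error timing.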

Gregg Wonderly
