On 2005-05-02, Tom Lane <[EMAIL PROTECTED]> wrote:
> While that isn't an unreasonable issue on its face, I think it really
> boils down to this: the OP is complaining because he thinks the
> connection-loss timeout mandated by the TCP RFCs is too long. Perhaps
> the OP knows network engineering far better than the authors of those
> RFCs, or perhaps not. I'm not convinced that Postgres ought to provide
> a way to second-guess the TCP stack ...
Speaking as someone who _does_ know network engineering, I would say that
yes, Postgres absolutely should do so. The TCP keepalive timeout _is not
intended_ to do this job; virtually every application-level protocol manages
its own timeouts independently of TCP. (The few exceptions, such as telnet,
tend to be purely interactive protocols that rely on the user to figure out
that something got stuck.)

One way to handle this is to have an option, set by the client, that causes
the server to send some ignorable message after the connection has been idle
for a given period while waiting for the client. If the idleness was due to
network partitioning or a similar failure, this ensures that the connection
breaks within a known time. This is safer than simply having the backend
abort after a given idle period.

If you want comparisons from other protocols, just look around - SMTP, ssh,
IRC, BGP, NNTP, FTP, and many, many more protocols all use timeouts (or in
some cases keepalive messages) with intervals much shorter than the TCP
keepalive timeout itself.

-- 
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services

---------------------------(end of broadcast)---------------------------
TIP 7: don't forget to increase your free space map settings
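[Editor's note: the application-level keepalive scheme described above can be sketched roughly as below. This is a minimal illustration, not the PostgreSQL protocol: the `PING` no-op byte, the `serve_once` helper, and the timeout values are all hypothetical, chosen only to show how an ignorable message sent after an idle interval bounds the time it takes to notice a dead peer.]

```python
import select
import socket

PING = b"\x00"  # hypothetical ignorable no-op message the client discards

def serve_once(conn, idle_timeout=0.1, max_pings=3):
    """Wait for client data. After each idle_timeout with no traffic,
    send an ignorable PING; a failed send, or too many unanswered pings,
    means the connection is declared dead within a bounded, known time -
    rather than waiting out the (much longer) TCP keepalive timeout."""
    pings = 0
    while True:
        ready, _, _ = select.select([conn], [], [], idle_timeout)
        if ready:
            data = conn.recv(4096)
            if not data:
                return "closed"          # orderly shutdown by the peer
            return ("data", data)
        pings += 1
        if pings > max_pings:
            return "timeout"             # peer idle too long; give up
        try:
            conn.sendall(PING)           # the "ignorable message" - if the
        except OSError:                  # network is partitioned, TCP will
            return "broken"              # eventually error out on the send

# demo with a local socket pair standing in for client and backend
server, client = socket.socketpair()
client.sendall(b"query")
print(serve_once(server))                # ('data', b'query')
client.close()
print(serve_once(server))                # closed
```

Note that the ping is sent by the server but is safe for the client to ignore; its only purpose is to force traffic onto the wire so that a partitioned connection fails at the sender within `idle_timeout * max_pings` instead of the RFC-mandated keepalive interval.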