Joe Conway <[EMAIL PROTECTED]> writes:
> ... it seems that the connection timeout can only be specified to the
> nearest second. Given that, is there any reason not to use time()
> instead of gettimeofday()?
As the code stands, it's pretty necessary. Since we'll go around the loop multiple times, usually in much less than a second per iteration, the timeout resolution will be really poor if we only measure each iteration to the nearest second.

> It looks like there is a great deal of complexity added to the function
> just to accommodate the fact that gettimeofday returns seconds and
> microseconds as distinct members of the result struct.

It is ugly coding; if you can think of a better way, go for it. It might work to measure time relative to the start of the whole process, or to the timeout target, rather than accumulating adjustments to the "remains" count each time through the loop. In other words, something like:

    at start:
        targettime = time() + specified-timeout

    each time we are about to wait:
        set select() timeout to targettime - time()

This bounds the error at 1 second, which is probably good enough (you might want to add 1 to targettime to ensure the error is in the conservative direction of not timing out too soon).

			regards, tom lane

---------------------------(end of broadcast)---------------------------
TIP 6: Have you searched our list archives?

http://archives.postgresql.org