W. Richard Stevens states in TCP/IP Illustrated, Vol. 1 that many
BSD-based implementations use a 500 msec (2 Hz) timer to measure RTT. I
had once heard or read that Linux uses a 100 Hz timer for much more
accurate RTT estimates. Is this true?
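
(To spell out the arithmetic I am assuming: the measurement granularity
is simply the tick period, 1/HZ seconds, so a 2 Hz coarse timer measures
RTT in 500 msec steps while a 100 Hz timer measures it in 10 msec steps.
A trivial check:)

/* Tick period for a given timer frequency; the values are my assumption
 * about the two timers in question (2 Hz BSD coarse timer per Stevens,
 * 100 Hz jiffies on Linux/i386). */
#include <stdio.h>

int main(void)
{
    int freqs[] = { 2, 100 };

    for (int i = 0; i < 2; i++)
        printf("timer at %3d Hz -> tick = %3d msec\n",
               freqs[i], 1000 / freqs[i]);
    return 0;
}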
The code (2.2.14 kernel) seems to indicate that RTO is limited to
between 1/5 sec and 120 sec. With gigabit Ethernet it would be nice to
be able to lower RTO to around 10 msec for some "real-time"
applications. Since packet losses in a controlled environment are very
low anyway, a fast timeout would not add much overhead, since it would
hardly ever go off. Hopefully some technique that is not based on a
fixed coarse timer is used for RTT estimation.
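
In case it helps frame the question, below is a minimal user-space
sketch of the standard Jacobson/Karels estimation that I assume the
kernel's tcp_rtt_estimator() is a fixed-point variant of, using the
clamp values I believe come from include/net/tcp.h (TCP_RTO_MIN = HZ/5,
TCP_RTO_MAX = 120*HZ). This is my own simplification, not the kernel's
actual code:

/* Simplified RTT/RTO estimation in jiffies (HZ=100 -> 1 jiffy = 10 ms).
 * The kernel keeps srtt and mdev scaled in fixed point; this sketch
 * uses plain values just to show the shape of the computation. */
#include <stdio.h>
#include <stdlib.h>

#define HZ          100          /* i386 default in 2.2 kernels */
#define TCP_RTO_MIN (HZ / 5)     /* 200 msec floor */
#define TCP_RTO_MAX (120 * HZ)   /* 120 sec ceiling */

struct rtt_state {
    long srtt;    /* smoothed RTT */
    long rttvar;  /* RTT variation */
    long rto;     /* retransmission timeout */
};

/* Feed one RTT measurement m (in jiffies) into the estimator. */
static void rtt_sample(struct rtt_state *s, long m)
{
    if (s->srtt == 0) {
        s->srtt = m;                 /* first sample initialises state */
        s->rttvar = m / 2;
    } else {
        long err = m - s->srtt;
        s->srtt += err / 8;                        /* srtt   += err/8          */
        s->rttvar += (labs(err) - s->rttvar) / 4;  /* rttvar += (|err|-rttvar)/4 */
    }

    s->rto = s->srtt + 4 * s->rttvar;
    if (s->rto < TCP_RTO_MIN) s->rto = TCP_RTO_MIN;
    if (s->rto > TCP_RTO_MAX) s->rto = TCP_RTO_MAX;
}

int main(void)
{
    struct rtt_state s = { 0, 0, 0 };
    long samples[] = { 2, 1, 1, 3, 1 };   /* 10-30 msec RTTs at HZ=100 */

    for (int i = 0; i < 5; i++) {
        rtt_sample(&s, samples[i]);
        printf("sample=%ld  srtt=%ld  rttvar=%ld  rto=%ld jiffies (%ld msec)\n",
               samples[i], s.srtt, s.rttvar, s.rto, s.rto * 1000 / HZ);
    }
    return 0;
}

Even though each sample is measured in 10 msec jiffies, the computed RTO
never drops below the 20-jiffy (200 msec) floor, which is exactly the
limit I would like to be able to lower.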
Gautam H. Thaker
Distributed Processing Lab; Lockheed Martin Adv. Tech. Labs
A&E 3W; 1 Federal Street; Camden, NJ 08102
856-338-3907, fax 856-338-4144 email: [EMAIL PROTECTED]