At 01:36 PM 12/1/99 -0800, Cary FitzGerald wrote:
>Christian:
>
>Is this something that you think is an inherent flaw in DNS?  Will this
>new class of servers be less susceptible to congestion?

Cary,

We should really conduct this debate on a more appropriate list, e.g.
end2end-interest. But the answer is yes, we have a basic architecture
problem. The stability of the Internet, today, is ensured by the TCP
congestion avoidance mechanisms. In case of congestion, the routers drop
packets, and as soon as enough packets have been dropped, enough TCP
connections diminish the size of their windows and the congestion is eased.
A corollary is that congested parts of the Internet have to constantly drop
a rather large fraction of packets in order to ensure that TCP regulates
itself. My numbers tell me that the prevalent drop rate is between 1 and
5%, depending on the network, etc.
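
To make the mechanism concrete, here is a minimal sketch, in Python, of a
loss-driven sender: the congestion window grows by one segment per round
trip and is halved whenever a drop is detected. The 10% drop probability
and the round-trip count are illustrative assumptions, not measurements.

    # Minimal sketch of loss-driven regulation (AIMD): additive increase of
    # the congestion window each round trip, multiplicative decrease on a
    # drop. The 10% drop probability and 50 round trips are assumptions
    # chosen only to illustrate the behaviour.
    import random

    cwnd = 1.0                       # congestion window, in segments
    for rtt in range(50):
        if random.random() < 0.10:   # router dropped a packet this round trip
            cwnd = max(1.0, cwnd / 2.0)   # multiplicative decrease
        else:
            cwnd = cwnd + 1.0             # additive increase
    print("window after 50 round trips: %.1f segments" % cwnd)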

The drop rate has an inconvenient effect on DNS. DNS requests are sent
over UDP, and rely on timers to repeat the transaction if either the
request or the response is lost. Since either of those two packets can be
dropped, the observed transaction repeat rate will be about twice the
packet loss rate. DNS implementations use fixed timers, set to 2 or 3
seconds, so each transaction failure induces an additional 2 or 3 second
delay. A single DNS query requires 2 or 3 transactions. With two
transactions per query, each repeated at twice the loss rate, we can
deduce that packet loss rates between 1 and 5% imply that roughly 4 to
20% of DNS queries experience at least one retransmission.
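
As a back-of-the-envelope check on those figures, here is a small Python
sketch of the arithmetic. The two packets per transaction, two
transactions per query and 2 second timer are the assumptions quoted
above, not new measurements.

    # Sketch of the arithmetic above. The loss rates, two packets per
    # transaction, two transactions per query and the 2 second timer are
    # the figures used in the text, treated here as assumptions.
    for loss_rate in (0.01, 0.05):
        repeat_rate = 2 * loss_rate              # request or response lost
        transactions_per_query = 2
        # probability that at least one transaction in the query is repeated
        p_retry = 1 - (1 - repeat_rate) ** transactions_per_query
        extra_delay = p_retry * 2.0              # seconds, with a 2 s timer
        print("loss %2.0f%%: %4.1f%% of queries retry, ~%.2f s mean extra delay"
              % (loss_rate * 100, p_retry * 100, extra_delay))
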
-- Christian Huitema
