Matthew Luckie wrote:
 > hmm, we looked at how other protocols handled the ENOBUFS case from
 > ip_output.
 >
 > tcp_output calls tcp_quench on this error.
 >
 > while the interface may not be able to send any more packets than it
 > does currently, closing the congestion window back to 1 segment
 > seems a severe way to handle this error, knowing that the network
 > did not drop the packet due to congestion.  Ideally, there might be
 > some form of blocking until such time as a mbuf comes available.
 > This sounds as if it will be much easier come FreeBSD 5.0

TCP will almost never encounter this scenario, since it's self-clocking.
The NIC is very rarely the bottleneck resource for a given network
connection. Have you looked at mean queue lengths for NICs? They are
typically zero or one. The NIC will only be the bottleneck if you are
sending at a higher rate than line speed and your burst time is too long
to be absorbed by the queue.

 > I'm aware that if people are hitting this condition, they need to
 > increase the number of mbufs to get maximum performance.

No. ENOBUFS in ip_output almost always means that your NIC queue is
full, which isn't controlled through mbufs. You can make the queue
longer, but that won't help if you're sending too fast.

 > This section of code has previously been discussed here:
 > http://docs.freebsd.org/cgi/getmsg.cgi?fetch=119188+0+archive/2000/freebsd-net/20000730.freebsd-net
 > and has been in use for many years (a

This is a slightly different problem than you describe. What Archie saw
was an ENOBUFS being handled like a loss inside the network, even though
the sender has information locally that can allow it to make smarter
retransmission decisions.

Lars
-- 
Lars Eggert <[EMAIL PROTECTED]>               Information Sciences Institute
http://www.isi.edu/larse/              University of Southern California
