I have a problem, and I believe the cause is incorrect behavior
in the Linux TCP stack.
A Linux box (Linux 2.0.36) has a poor Internet connection (14,400 bps)
with frequent frame loss.
The httpd and smtp servers work fine with small files (up to about
100K), but cannot send larger ones. I did some bug hunting, and I
believe this is because the Linux kernel slows the TCP connection down:
after each frame loss (and retransmission), tcp_timer.c doubles rto but
never decreases it.
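
For reference, here is a tiny user-space simulation of the pattern I am
describing (my own sketch, not the actual 2.0.36 source; HZ and the
ceiling are stand-ins, not necessarily the kernel's real values):

#include <stdio.h>

/* Stand-ins for the kernel's tick rate and rto ceiling. */
#define HZ      100
#define MAX_RTO (120 * HZ)

/* Each retransmit timeout doubles rto; nothing ever lowers it. */
static unsigned long double_rto(unsigned long rto)
{
    rto <<= 1;                /* exponential backoff */
    if (rto > MAX_RTO)
        rto = MAX_RTO;        /* clamped, but never reduced */
    return rto;
}

int main(void)
{
    unsigned long rto = 3 * HZ;   /* a plausible starting timeout */
    int loss;

    /* A handful of consecutive losses pushes rto to its ceiling,
       which is fatal on a 14,400 bps link with frequent loss. */
    for (loss = 1; loss <= 8; loss++) {
        rto = double_rto(rto);
        printf("after loss %d: rto = %lu ticks\n", loss, rto);
    }
    return 0;
}
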
I am not a kernel hacker, just a system administrator who has to
resolve the problem, but I am afraid I will have to fix the code
myself.
I am going to do the following (a rough sketch follows the list):
1. Decrease the connection speed not after every retransmit, but only
when a certain percentage of retransmissions is reached (e.g., 30%).
2. Increase the connection speed when a certain percentage of frames
(e.g., 90%) is sent without retransmission.
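
To make the idea concrete, here is a rough user-space sketch of that
heuristic (all names, the window size, and the thresholds below are
illustrative; this is not kernel code):

#include <stdio.h>

struct rto_stats {
    unsigned long sent;       /* frames sent in the current window */
    unsigned long retrans;    /* how many of them were retransmitted */
    unsigned long rto;        /* current retransmission timeout */
};

#define WINDOW    100         /* sample size before deciding */
#define SLOW_PCT   30         /* point 1: back off past 30% retransmits */
#define SPEED_PCT  90         /* point 2: speed up past 90% clean frames */

static void account_frame(struct rto_stats *s, int was_retransmitted)
{
    s->sent++;
    if (was_retransmitted)
        s->retrans++;

    if (s->sent < WINDOW)
        return;               /* not enough samples yet */

    if (s->retrans * 100 >= SLOW_PCT * s->sent)
        s->rto <<= 1;         /* too many retransmits: slow down */
    else if ((s->sent - s->retrans) * 100 >= SPEED_PCT * s->sent)
        s->rto >>= 1;         /* mostly clean: recover some speed */

    s->sent = 0;              /* start a new sample window */
    s->retrans = 0;
}

int main(void)
{
    struct rto_stats s = { 0, 0, 300 };   /* arbitrary starting rto */
    int i;

    for (i = 0; i < 200; i++)
        account_frame(&s, i % 2 == 0);    /* simulate 50% retransmits */
    printf("rto after two windows at 50%% retransmits: %lu\n", s.rto);
    return 0;
}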

I have the following questions:
1. Is the problem perhaps already resolved in Linux 2.1.x?
2. Does my plan violate any RFC?
3. Can the sock structure be changed without recompiling all
applications?
4. I am not familiar with the code; how can I tell that a frame was
transmitted without retransmission?
5. What precisely does the rto field of the sock structure mean? Where
can I find additional documentation on the sock structure?


Any additional comments are appreciated.
Vyacheslav Zavadsky




