[EMAIL PROTECTED] wrote:
I've missed part of this conversation, but here are my two cents on this specific
question: just keep increasing the amount of data you send in bursts, and the
speed of those bursts, until you hit a certain target error rate, e.g. 2% or
whatever. After bumping up against failures, you should be able to get a sense
of an optimal rate. Be sensitive to TCP congestion at the same time; I back off
if the round-trip time starts spiking.

I want to second the RTT-based congestion-avoidance approach. Given that it
is *the* idea behind TCP/Vegas, it is nothing new, but the nice thing
about it is that it works very well for consumer Internet connections.

The reason is that their bandwidth is typically capped by queuing
traffic shapers (as opposed to actual hardware limits). So once the
shaper starts queuing packets, the sender can detect it by watching
the RTT go up.
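
A sketch of what this can look like on the sender side, a la Vegas: remember
the lowest RTT ever seen (roughly the path delay with empty queues) and treat
any sustained excess over it as the shaper's queue filling up. The 30 ms
threshold is just an illustrative number:

    class RttCongestionDetector:
        def __init__(self, queue_delay_threshold_ms=30.0):
            self.base_rtt_ms = float("inf")   # lowest RTT seen ~= no queuing
            self.threshold_ms = queue_delay_threshold_ms

        def on_rtt_sample(self, rtt_ms):
            self.base_rtt_ms = min(self.base_rtt_ms, rtt_ms)
            queuing_delay_ms = rtt_ms - self.base_rtt_ms
            # True means the shaper has started queuing our packets: back off
            return queuing_delay_ms > self.threshold_ms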

It can also be detected by the recipient, which allows for faster
(pre-)congestion detection. This, however, requires both sides to first
synchronize their clocks, and it's really only worth doing if the link
has very high latency.
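
And the receiver-side variant, assuming every packet carries a sender
timestamp. The unknown constant clock offset cancels out if you only look at
the rise over the smallest one-way delay observed, but clock drift does not,
which is part of why the clock synchronization mentioned above becomes
worthwhile on high-latency links:

    class OneWayDelayMonitor:
        def __init__(self, rise_threshold_ms=30.0):
            self.min_owd_ms = float("inf")
            self.rise_threshold_ms = rise_threshold_ms

        def on_packet(self, send_ts_ms, recv_ts_ms):
            # owd includes the unknown sender/receiver clock offset; only its
            # rise over the minimum seen so far is meaningful
            owd_ms = recv_ts_ms - send_ts_ms
            self.min_owd_ms = min(self.min_owd_ms, owd_ms)
            # True means queuing delay is building up; the receiver can then
            # tell the sender to slow down before losses appear
            return (owd_ms - self.min_owd_ms) > self.rise_threshold_ms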

Alex

_______________________________________________
p2p-hackers mailing list
p2p-hackers@zgp.org
http://zgp.org/mailman/listinfo/p2p-hackers
_______________________________________________
Here is a web page listing P2P Conferences:
http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences
