On 30 Aug 2013, at 18:36, Terry Gilsenan <terry.gilse...@interoil.com> wrote:

> The killer on high latency links is the tcp-window and the continual wait for 
> ack. With links above 1000ms this compounded delay reduces the available 
> bandwidth to a very small percentage of the interface speed (eg:256kbps on a 
> 2mbps link). Without this I have seen UDP data streams that will approach the 
> actual interface speed. (eg: 1.8mbps on a 2mbps link)

Tuning a TCP implementation to match the physical characteristics of such links 
can deliver close to 100% bandwidth utilisation.
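
To put a rough number on the figures quoted above: TCP throughput tops out at around window/RTT, so a stock 64 KB window over a 2 second round trip gives roughly 256 kbps no matter how fast the link is. Below is a back-of-the-envelope sketch (Python, with made-up link figures, not anything measured) of sizing the socket buffers to the bandwidth-delay product:

#!/usr/bin/env python3
# Rough sketch: size TCP socket buffers to the bandwidth-delay product
# (BDP) so the window never becomes the bottleneck on a long fat link.
# The 2 Mbps / 2 s figures are illustrative assumptions.

import socket

LINK_BPS = 2_000_000     # 2 Mbps link (assumption)
RTT_SECONDS = 2.0        # ~1000 ms each way (assumption)

# Bandwidth-delay product: bytes that must be in flight to fill the pipe.
bdp_bytes = int(LINK_BPS / 8 * RTT_SECONDS)   # 500,000 bytes here

# For comparison, a default 64 KB window on the same link caps throughput
# at 65535 / 2.0 s ~= 32 KB/s ~= 256 kbps -- the figure quoted above.

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp_bytes)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)
print(f"BDP: {bdp_bytes} bytes; socket buffers requested at that size")

Note that windows larger than 64 KB also need TCP window scaling enabled at both ends (it usually is these days), and on Linux the kernel will silently clamp those setsockopt values to net.core.rmem_max / wmem_max, so the system-wide limits have to be raised above the BDP too.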

Of course this takes a bit more work than just pumping out UDP packets at wire 
speed and hoping that none of them get dropped and that the receiver never needs 
to say slow down or stop. It is, however, much less work than kludging some 
datagram streaming protocol to sit alongside an MTA. :-)

Dropped, duplicated or out-of-sequence packets don't matter much for audio or 
video streaming. It's a very different story for email or transferring 
documents, which is why those mainly use a connection-oriented transport. Once 
support for handling dropped, duplicated or out-of-sequence packets gets added 
on top of a datagram transport protocol such as UDP, you end up with something 
remarkably similar to TCP. Why reinvent the wheel?
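
To make that concrete: even a toy "reliable UDP" sender already needs sequence numbers, acknowledgements and a retransmission timer. A rough stop-and-wait sketch in Python (hypothetical peer address and framing, purely illustrative):

#!/usr/bin/env python3
# Toy stop-and-wait sender over UDP: sequence numbers, ACKs and a
# retransmission timer -- the bare minimum before it resembles TCP.

import socket
import struct

PEER = ("203.0.113.10", 9000)   # placeholder address
TIMEOUT = 2.0                   # retransmit interval, seconds

def send_reliably(sock, seq, payload):
    """Send one datagram and block until the matching ACK arrives."""
    packet = struct.pack("!I", seq) + payload
    while True:
        sock.sendto(packet, PEER)
        try:
            data, _ = sock.recvfrom(16)
        except socket.timeout:
            continue                      # lost packet or lost ACK: resend
        if len(data) >= 4 and struct.unpack("!I", data[:4])[0] == seq:
            return                        # correct ACK; duplicates ignored

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(TIMEOUT)
for seq, chunk in enumerate([b"part one", b"part two", b"part three"]):
    send_reliably(sock, seq, chunk)

And that still lacks sliding windows, flow control, congestion control and connection teardown, all of which TCP already gives you.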
