On Tue, 27 May 2014, Dave Taht wrote:

On Tue, May 27, 2014 at 4:27 PM, David Lang <da...@lang.hm> wrote:
On Tue, 27 May 2014, Dave Taht wrote:

There is a phrase in this thread that is beginning to bother me.

"Throughput". Everyone assumes that throughput is a big goal - and it
certainly is - and latency is also a big goal - and it certainly is -
but by specifying what you want from "throughput" as a compromise with
latency is not the right thing...

If what you want is actually "high speed in-order packet delivery" -
say, for example, a movie, youtube, or a video conference - excessive
latency with high throughput really, really makes in-order packet
delivery at high speed tough.


the key word here is "excessive", that's why I said that for max throughput
you want to buffer as much as your latency budget will allow you to.

Again I'm trying to make a distinction between "throughput", and "packets
delivered-in-order-to-the-user." (for-which-we-need-a-new-word-I think)

The buffering should not be in-the-network, it can be in the application.

Take our hypothetical video stream for example. I am 20ms RTT from netflix.
If I artificially inflate that by adding 50ms of in-network buffering,
that means a loss can take 120ms to recover from.

If instead, I keep a 3*RTT buffer in my application, and expect that I have 5ms
worth of network-buffering, instead, I recover from a loss in 40ms.

(please note, it's late, I might not have got the math entirely right)
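The back-of-the-envelope numbers above can be sketched with a simple model. This is an assumption-laden sketch (it charges roughly two effective round trips per loss: one to detect it via duplicate ACKs, one to deliver the retransmission), so the constants come out slightly different from the figures quoted:

```python
def loss_recovery_ms(path_rtt_ms, queue_delay_ms):
    """Rough time to recover from a single loss: the effective RTT is
    the physical RTT plus the standing queueing delay, and recovery
    takes about two such round trips (detect the loss, then deliver
    the retransmission)."""
    effective_rtt = path_rtt_ms + queue_delay_ms
    return 2 * effective_rtt

# 20 ms path with 50 ms of in-network buffering
print(loss_recovery_ms(20, 50))   # 140

# same path with only 5 ms of network buffering
print(loss_recovery_ms(20, 5))    # 50
```

Either way, the qualitative point holds: standing queue delay multiplies straight into loss-recovery time.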

But you aren't going to be tuning the retry wait time per connection. What is the retry time that is set in your stack? It's something huge, to survive international connections with satellite paths (several seconds worth). If your server-to-eyeball buffering is shorter than this, you will get a window where you aren't fully utilizing the connection.
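For reference, the stack's retransmission timeout is computed per RFC 6298 from smoothed RTT samples, with a floor of one second in the RFC (Linux uses a 200 ms floor instead). A rough sketch of the update step, showing why the timeout stays "huge" relative to a fast path:

```python
def update_rto(srtt, rttvar, rtt_sample, alpha=1/8, beta=1/4, g=0.0):
    """One RFC 6298 update step: smooth the RTT estimate and its
    variance, then derive the retransmission timeout (RTO)."""
    if srtt is None:            # first sample ever seen
        srtt = rtt_sample
        rttvar = rtt_sample / 2
    else:
        rttvar = (1 - beta) * rttvar + beta * abs(srtt - rtt_sample)
        srtt = (1 - alpha) * srtt + alpha * rtt_sample
    rto = srtt + max(g, 4 * rttvar)
    return srtt, rttvar, max(rto, 1.0)   # RFC 6298 floors RTO at 1 second

srtt = rttvar = None
for sample in (0.020, 0.022, 0.019):     # samples from a ~20 ms path
    srtt, rttvar, rto = update_rto(srtt, rttvar, sample)
print(rto)   # 1.0: the floor dominates on a fast path
```

Even on a 20 ms path, a timeout-driven recovery costs at least the floor, which is why loss detection via duplicate ACKs (and keeping queues short so those arrive quickly) matters so much.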

So yes, I do think that if your purpose is to get the maximum possible in-order packets delivered, you end up making different decisions than if you are just trying to stream an HD video, or do other normal things.

The problem is thinking that this absolute throughput is representative of normal use.

As physical RTTs grow shorter, the advantages of smaller buffers grow larger.

You don't need 50ms queueing delay on a 100us path.

Many applications buffer for seconds because their buffer needs to cover
at least 2*(actual buffering+RTT) on the path.

For something like streaming video, there's nothing wrong with the application buffering aggressively (assuming you have the space to do so on the client side); the more you have gotten transmitted to the client, the longer it can survive a disruption of its network.

There's nothing wrong with having an hour of buffered data between the server and the viewer's eyes. Now, this buffering should not be in the network devices, it should be in the client app, but this isn't because there's something wrong with buffering; it's just because the client device has so much more available space to hold stuff.
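The point about client-side space can be put in numbers: how long playback survives a total stall depends only on what's already buffered and the stream's bitrate. A trivial sketch (the figures are illustrative, not from the thread):

```python
def survivable_outage_s(buffered_bytes, video_bitrate_bps):
    """How long playback can continue from the client-side buffer
    alone if the network stalls completely."""
    return buffered_bytes * 8 / video_bitrate_bps

# e.g. 64 MB already buffered of a 5 Mbit/s stream
print(survivable_outage_s(64 * 2**20, 5_000_000))   # roughly 107 seconds
```

A router queue holding the same data would add those seconds as delay for everyone; the client-side buffer adds none.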

David Lang


You eventually lose a packet, and you have to wait a really long time
until a replacement arrives. Stuart and I showed that at the last IETF.
And you get the classic "buffering" song playing....


Yep, and if you buffer too much, your "lost packet" is actually still in
flight and eating bandwidth.

David Lang


low latency makes recovery from a loss in an in-order stream much, much
faster.

Honestly, for most applications on the web, what you want is high
speed in-order packet delivery, not "bulk throughput". There is a
whole class of apps (bittorrent, file transfer) that don't need that,
and we have protocols for those....



On Tue, May 27, 2014 at 2:19 PM, David Lang <da...@lang.hm> wrote:

the problem is that paths change, they mix traffic from streams, and in
other ways the utilization of the links can change radically in a short
amount of time.

If you try to limit things to exactly the ballistic throughput, you are
not going to be able to exactly maintain this state; you are either going
to overshoot (too much traffic, requiring dropping packets to maintain
your minimal buffer), or you are going to undershoot (too little traffic
and your connection is idle).

Since you can't predict all the competing traffic throughout the
Internet, if you want to maximize throughput, you want to buffer as much
as you can tolerate for latency reasons. For most apps, this is more
than enough to cause problems for other connections.
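The "as much as you can tolerate" sizing can be made concrete with the bandwidth-delay product. A minimal sketch (function and parameter names are mine, not from the thread): enough in flight to fill the pipe, plus whatever standing queue the latency budget permits:

```python
def buffer_bytes(link_bps, rtt_s, latency_budget_s):
    """Bytes needed in flight to fill the pipe (the bandwidth-delay
    product), plus the largest standing queue the latency budget
    allows on top of the physical RTT."""
    bdp = link_bps / 8 * rtt_s
    slack = link_bps / 8 * latency_budget_s
    return bdp, bdp + slack

# 100 Mbit/s link, 20 ms RTT, willing to add 30 ms of queueing delay
bdp, max_buf = buffer_bytes(100e6, 0.020, 0.030)
print(int(bdp), int(max_buf))   # 250000 625000
```

Everything beyond the bandwidth-delay product buys no goodput; it only converts the latency budget into standing delay for every flow sharing the link.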

David Lang


 On Mon, 26 May 2014, David P. Reed wrote:

Codel and PIE are excellent first steps... but I don't think they are
the best eventual approach. I want to see them deployed ASAP in CMTSes
and server load balancing networks... it would be a disaster to not
deploy the far better option we have today immediately at the point of
most leverage. The best is the enemy of the good.

But, the community needs to learn once and for all that throughput and
latency do not trade off. We can in principle get far better latency
while maintaining high throughput.... and we need to start thinking
about that. That means that the framing of the issue as AQM is
counterproductive.

On May 26, 2014, Mikael Abrahamsson <swm...@swm.pp.se> wrote:


On Mon, 26 May 2014, dpr...@reed.com wrote:

I would look to queue minimization rather than "queue management"
(which implies queues are often long) as a goal, and think harder about
the end-to-end problem of minimizing total end-to-end queueing delay
while maximizing throughput.

As far as I can tell, this is exactly what CoDel and PIE try to do.
They try to find a decent tradeoff between having queues to make sure
the pipe is filled, and not making these queues big enough to seriously
affect interactive performance.
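That tradeoff is visible in CoDel's control law: it only reacts when the queueing delay stays above a small target for a whole interval, so bursts that merely fill the pipe are tolerated. A much-simplified sketch of the drop decision (the real algorithm also speeds up dropping with the square root of the drop count; TARGET and INTERVAL below are CoDel's published defaults):

```python
TARGET = 0.005     # 5 ms of standing queueing delay is acceptable
INTERVAL = 0.100   # roughly a worst-case RTT

class CodelSketch:
    def __init__(self):
        self.first_above = None    # when sojourn time first exceeded TARGET

    def should_drop(self, sojourn_s, now_s):
        """Drop only if queueing delay has stayed above TARGET for at
        least INTERVAL; brief bursts are left alone."""
        if sojourn_s < TARGET:
            self.first_above = None     # queue drained; reset
            return False
        if self.first_above is None:
            self.first_above = now_s
        return now_s - self.first_above >= INTERVAL

q = CodelSketch()
print(q.should_drop(0.020, 0.0))    # False: just went above target
print(q.should_drop(0.020, 0.15))   # True: above target for a full interval
print(q.should_drop(0.001, 0.20))   # False: queue drained
```

The design choice is exactly the one described above: a queue that exists briefly to keep the pipe full is fine; one that persists for an RTT or more is a standing queue and gets trimmed.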

The latter part looks like what LEDBAT does?
<http://tools.ietf.org/html/rfc6817>

Or are you thinking about something else?



-- Sent from my Android device with K-@ Mail. Please excuse my brevity.



_______________________________________________
Cerowrt-devel mailing list
Cerowrt-devel@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/cerowrt-devel
