On 30/11/16 04:56 PM, John Leslie wrote:
    Better would be to identify latency-sensitive flows by a specific signal,
and continue to deliver a "fair share" of packets "immediately" and marked
to show the congestion.

    Consider voice traffic. Voice can be intelligible at rather low data
rates; but sounds much better at higher data rates. For conferencing uses,
low delay is pretty critical.

To a capacity planner, latency is the enemy in /every/ kind of flow: it's time lost you can never get back, and it's an opportunity cost that reduces the maximum bandwidth you can achieve, notably in stop-and-wait and delayed-ack schemes. Even in completely asynchronous schemes, the data a transmitter can keep in flight is capped by the product of the delay and the bandwidth, so added delay cuts the achievable rate.
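To put a number on that opportunity cost, here's a toy calculation (function name is mine) for the window-limited case: throughput can never exceed window over round-trip time, so every millisecond of queueing delay directly shrinks the achievable rate.

```python
# Illustrative only: upper bound on throughput for a window-limited
# (e.g. stop-and-wait) sender. Rate <= window / RTT.

def max_throughput_bps(window_bytes: float, rtt_seconds: float) -> float:
    """Bits per second a sender can achieve with a fixed window."""
    return (window_bytes * 8) / rtt_seconds

# A 64 KiB window over a 10 ms path:
clean = max_throughput_bps(64 * 1024, 0.010)
# The same window once 40 ms of queueing delay is added:
queued = max_throughput_bps(64 * 1024, 0.050)
```

Five times the delay, one fifth the bandwidth, with not a byte more offered.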

If I induce latency by trying to queue more data than the pipe can transfer per unit time, I'm harming myself and, incidentally, everyone else. It's therefore important that I know, as soon as possible, that I have reached a limit, so I can cut my offered load and not indulge in self-flagellation. That's only for monks, and rather old-fashioned ones at that!
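The reaction I mean is the classic one: back off multiplicatively the moment congestion is signalled, and only probe upward gently while the path is clear. A minimal sketch (names and parameters are my own, not any particular implementation):

```python
# AIMD-style adjustment of offered load: additive increase while the
# path is clear, multiplicative decrease on a congestion signal.

def adjust_offered_load(rate: float, congestion_signaled: bool,
                        increase: float = 1.0, decrease: float = 0.5) -> float:
    """Return the next offered rate given the current congestion state."""
    if congestion_signaled:
        return rate * decrease   # cut promptly; queueing more only adds delay
    return rate + increase       # probe for spare capacity
```

The earlier the signal arrives, the less self-inflicted queueing delay I accumulate before that first cut.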

If I have a flow that requires timed delivery, like voice, I can then dedicate more of my available bandwidth to it, and less to bulk-data flows, at the times when I need to deliver the stream. In between those, I'll prioritize other stuff. If I get close to the point where the voice flow will necessarily be damaged, I'll first apply mitigations like compression, then signal the recipient and drop to a narrower dynamic range and schemes like lossy compression.
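That policy amounts to a priority scheduler: timed packets jump the line, bulk fills the gaps. A toy sketch of the idea (my own construction, not any real AQM):

```python
# Toy strict-priority scheduler: voice packets (priority 0) are always
# sent before bulk packets (priority 1); sequence number breaks ties
# so each class stays in order.

import heapq

def schedule(packets):
    """packets: iterable of (priority, seq, payload) tuples.
    Returns payloads in transmission order."""
    heap = list(packets)
    heapq.heapify(heap)
    order = []
    while heap:
        _priority, _seq, payload = heapq.heappop(heap)
        order.append(payload)
    return order
```

A real scheduler would be weighted rather than strict, so bulk flows can't be starved indefinitely, but the shape of the decision is the same.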

    Sudden reductions in data rate are annoying, but unexpected silence
while you wait for delayed packets is much worse; and losing track of
how much delay there actually is can be fatal.
Like you, I really want to know as soon as I can what's happening, so I can adapt. One of the flows I worry about is real-time interaction between video gamers, and that's a similar mix of low-bandwidth latency-sensitive actions and quiet periods. Plus some really demanding users!

    Ideally, each packet would carry an indication of a latency target,
beyond which it's less useful and congestion needs to be indicated.

I can't swear it is the case, but some games may do the equivalent of just that. A large class certainly do feed back latency and transfer time indications to their servers, so they can dynamically adapt, much as I've described here.

--dave

--
David Collier-Brown,         | Always do right. This will gratify
System Programmer and Author | some people and astonish the rest
dav...@spamcop.net           |                      -- Mark Twain

_______________________________________________
aqm mailing list
aqm@ietf.org
https://www.ietf.org/mailman/listinfo/aqm