On Thu, Jan 23, 2014 at 1:15 AM, Dave Taht <dave.t...@gmail.com> wrote:
> On Thu, Jan 23, 2014 at 1:10 AM, Fred Baker (fred) <f...@cisco.com> wrote:
>> No, you're not blowing smoke. I'm not sure I would compare the behavior to
>> PMTUD, though: in PMTUD the endpoint is given a magic number and manages to
>> it, whereas in this case it is given the results of its behavior, and it
>> manages to improve on that.
>>
>> But this is what I have rambled on about in threads relating to the size of
>> a buffer. Folks would really like to have a magic way to calculate the
>> buffer size (an amount of memory) they need to install in a router or
>> switch, and it isn't that easy, because it has a lot to do with where in the
>> network a system is located and how it is used by the applications that use
>> it. But AQM, in the end, isn't about buffer size. It is about buffer
>> occupancy. In the ideal case, if there are N sessions active on the
>> bottleneck link in a path, we would like each to obtain 1/N of the
>> bottleneck's capacity, which is to say that it should be able to maximize
>> its throughput, while keeping an average of zero packets standing in the
>> queue (minimizing both latency and variation in latency). If you know your
>> math, you know that the ideal goal isn't actually achievable. But that
>> doesn't stop us from trying to asymptotically approach it.
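
To put rough numbers on that ideal (the figures and helper names below are
mine, purely for illustration): each flow's ceiling is the fair share of the
bottleneck, and any *standing* queue only adds delay once the link is busy.

def fair_share_bps(bottleneck_bps, n_flows):
    """Ideal per-flow throughput when n_flows share one bottleneck."""
    return bottleneck_bps / n_flows

def standing_queue_delay_s(standing_bytes, bottleneck_bps):
    """Extra latency every packet pays for a persistent standing queue."""
    return standing_bytes * 8.0 / bottleneck_bps

# Assumed example: a 20 Mbit/s bottleneck shared by 4 flows.
print(fair_share_bps(20e6, 4) / 1e6)                  # -> 5.0 Mbit/s each
print(standing_queue_delay_s(64 * 1024, 20e6) * 1e3)  # -> ~26 ms of added delay

The point of the second number: once the link is saturated, that 64 KB of
standing queue buys no extra throughput, only delay.
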
>
> I prefer to think of the goal as keeping a minimum of 1 packet in the queue,
> not an average of 0.

And that's not strictly true either. In the case of wifi, and other bundling
technologies like those used in cable, you want to keep a minimum of a "good"
aggregate of packets in the queue for that technology.
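
As a rough sketch of that rule of thumb (the per-technology numbers here are
illustrative guesses of mine, not taken from any spec or implementation):

def min_queue_target_bytes(link_type, mtu=1500):
    """Smallest queue occupancy that keeps the transmitter from starving.

    One packet is enough for a plain serial link; links that bundle frames
    (802.11n/ac aggregation, DOCSIS request/grant) want roughly one "good"
    aggregate's worth so the transmit opportunity isn't wasted.
    """
    if link_type == "ethernet":
        return mtu            # a single packet keeps the link busy
    if link_type == "wifi":
        return 32 * mtu       # very roughly one A-MPDU's worth (illustrative)
    if link_type == "cable":
        return 8 * mtu        # very roughly one grant's worth (illustrative)
    raise ValueError("unknown link type: %r" % link_type)

print(min_queue_target_bytes("wifi"))   # -> 48000 bytes in this toy model

The design point is the same either way: the minimum target is whatever keeps
the link (or the aggregate) from going idle, not zero.
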

>
>>
>> On Jan 17, 2014, at 3:51 PM, David Collier-Brown <dave...@rogers.com> wrote:
>>
>> I've been reading through the internet-drafts, and one paragraph struck me
>> as very illuminating. This is therefore a sanity-check before I go full-hog
>> down a particular path...
>>
>> The comment is from Baker and Fairhurst,
>> https://datatracker.ietf.org/doc/draft-ietf-aqm-recommendation/ and the
>> paragraph is [emphases added]
>>
>> The point of buffering in the network is to absorb data bursts and to
>> transmit them during the (hopefully) ensuing bursts of silence. This is
>> essential to permit the transmission of bursty data. Normally small queues
>> are preferred in network devices, with sufficient queue capacity to absorb
>> the bursts. The counter-intuitive result is that maintaining normally-small
>> queues can result in higher throughput as well as lower end-to-end delay.
>> In summary, queue limits should not reflect the steady state queues we want
>> to be maintained in the network; instead, they should reflect the size of
>> bursts that a network device needs to absorb.
>>
>> All of a sudden we're talking about the kinds of queues I know a little
>> about (:-))
>> ---
>>
>> I'm going to suggest that these are queues and associated physical buffers
>> that do two things:
>>
>> hold packets that arrive at a bottleneck for as long as it takes to send
>> them out a link slower than the one they came in on, and
>> hold bursts of packets that arrive adjacent to each other until they can be
>> sent out with normal spacing, with some small amount of time between them
>>
>>
>> In an illustration of Dave Taht's, the first looks something like this
>>
>> -------------------------+
>>     |X|            |X|      +-------------------
>>     |X|            |X|         |XXX|    |XXX|
>>     |X|            |X|      +-------------------
>> -------------------------+
>>
>> At the choke-point there is a buffer at least big enough to give the packet
>> a chance to wheel from line into column (:-)) and start down the smaller
>> pipe.
>>
>> The speed at which the acks come back, the frequency of drops, and any
>> explicit congestion notifications slow the sender until it no longer
>> overloads the skinnier pipe, thus spacing out the packets in the fatter pipe.
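
That feedback loop is easy to caricature. A toy sketch (not any real TCP, just
additive-increase/multiplicative-decrease reacting to a stand-in congestion
signal) of the sender's rate settling into a sawtooth around the bottleneck:

def aimd_trace(bottleneck_pps, rounds=400, start=100.0, step=10.0):
    rate, trace = start, []
    for _ in range(rounds):
        if rate > bottleneck_pps:   # stand-in for a drop or an ECN mark
            rate /= 2.0             # multiplicative decrease on a signal
        else:
            rate += step            # additive increase otherwise
        trace.append(rate)
    return trace

tail = aimd_trace(1000)[200:]
print("oscillates between ~%.0f and ~%.0f pkt/s around a 1000 pkt/s bottleneck"
      % (min(tail), max(tail)))

It never settles exactly on the bottleneck rate, which is exactly why some
buffer is needed to absorb the overshoot.
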
>>
>> Various causes [Leland] can slow or speed up the packets in the fat pipe,
>> making it possible for several to arrive adjacent to each other, followed by
>> a gap. The second purpose of a buffer is to hold these bursts while things
>> space themselves back out.
>>
>> At minimum, they need to be big enough to do the speed matching, and at
>> maximum, big enough to spread a burst back into a normal progression,
>> always assuming that acks, drops, and explicit congestion notifications are
>> slowing the sender to the speed of the slowest part of the network.
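
One way to read those two bounds as a back-of-envelope calculation (all the
inputs are assumed values you would have to measure, not constants from any
spec):

def speed_matching_bytes(in_bps, out_bps, burst_duration_s):
    """Backlog that builds while traffic arrives faster than it can leave."""
    return max(0.0, (in_bps - out_bps) * burst_duration_s / 8)

def burst_absorption_bytes(largest_observed_burst_bytes, headroom=1.2):
    """Room for the biggest burst actually observed, plus some headroom."""
    return largest_observed_burst_bytes * headroom

# e.g. a 100 Mbit/s feed into a 20 Mbit/s uplink, with bursts lasting ~5 ms:
print(speed_matching_bytes(100e6, 20e6, 0.005))   # -> 50000 bytes
print(burst_absorption_bytes(30 * 1024))          # -> ~36 KB for a 30 KB burst
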
>> ---
>>
>> If I'm right about this, we can draw some helpful conclusions
>>
>> buffer sizes can be set based on measurements:
>>
>> speed differences, which are pretty static, plus
>> observed burstiness
>>
>> dropping and ECN marking can be used to match senders to the slowest speed in the path
>>
>> The latter suddenly sounds a bit like path MTU discovery, except that it's
>> more dynamic, and varies with both the path and what's happening in various
>> parts of it.
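
For what it's worth, here is a sketch of what that measurement-driven sizing
could look like; the trace data is invented, and the only idea being shown is
sizing to a high percentile of observed backlog rather than to the worst case:

def backlog_percentile(arrivals_bytes, drain_bytes_per_interval, pct=0.99):
    """Backlog per interval given arrivals and a fixed drain rate."""
    backlog, peaks = 0, []
    for arrived in arrivals_bytes:
        backlog = max(0, backlog + arrived - drain_bytes_per_interval)
        peaks.append(backlog)
    peaks.sort()
    return peaks[int(pct * (len(peaks) - 1))]

# Made-up 10 ms samples: mostly quiet, with an occasional burst.
samples = [3000, 2500, 40000, 3000, 2800, 55000, 3100, 2600, 2900, 48000] * 50
limit = backlog_percentile(samples, drain_bytes_per_interval=25000)
print("99th-percentile backlog = %d bytes" % limit)
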
>>
>> To me, as a capacity/performance nerd, this sounds a lot more familiar and
>> manageable. My question to you, before I start madly scribbling on the
>> internet drafts is:
>>
>>     Am I blowing smoke?
>>
>> --dave
>>
>> --
>> David Collier-Brown,         | Always do right. This will gratify
>> System Programmer and Author | some people and astonish the rest
>> dav...@spamcop.net           |                      -- Mark Twain
>> (416) 223-8968
>>
>
>
>
> --
> Dave Täht
>
> Fixing bufferbloat with cerowrt: 
> http://www.teklibre.com/cerowrt/subscribe.html



-- 
Dave Täht

Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
_______________________________________________
aqm mailing list
aqm@ietf.org
https://www.ietf.org/mailman/listinfo/aqm
