OK, wow, this conversation got long. And I'm still 20 messages behind.

Two points, and I'm going to go back to work, and maybe I'll try to
summarize a table
of the competing viewpoints, as there's far more than BDP of
discussion here, and what
we need is sqrt(bdp) to deal with all the different conversational flows. :)

On Tue, Nov 27, 2018 at 1:24 AM Luca Muscariello
<luca.muscarie...@gmail.com> wrote:
>
> I think this is a very good contribution to the discussion at the defense
> about the comparison between
> SFQ with longest-queue drop and FQ_Codel.
>
> A congestion-controlled protocol such as TCP, or others including QUIC,
> LEDBAT and so on,
> needs at least the BDP in the transmission queue to get full link efficiency,
> i.e. so that the queue never empties out.

No, I think it needs a BDP in flight.

I think some of the confusion here is that your TCP stack needs to
keep around a BDP in order to deal with
retransmits, but that lives in another set of buffers entirely.
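To make the distinction concrete, here is a quick back-of-the-envelope BDP calculation (the link rate, RTT, and MTU below are hypothetical numbers chosen for illustration, not anything from this thread):

```python
# Hypothetical example: a 100 Mbit/s bottleneck with an 80 ms RTT.
link_rate_bps = 100e6   # bottleneck capacity, bits per second
rtt_s = 0.080           # round-trip time, seconds
mtu = 1500              # assumed Ethernet-sized packets, bytes

bdp_bits = link_rate_bps * rtt_s   # data that must be in flight for full utilization
bdp_bytes = bdp_bits / 8
bdp_packets = bdp_bytes / mtu

print(f"BDP = {bdp_bytes:.0f} bytes ~= {bdp_packets:.0f} packets")
```

This is the amount the sender must keep *in flight* (and retain for possible retransmission); it is not a statement about how much must sit queued at the bottleneck.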

> This gives a rule of thumb for sizing buffers which is also very practical
> and, thanks to flow isolation, becomes very accurate.
>
> Which is:
>
> 1) Find a way to keep the number of backlogged flows at a reasonable value.
> This largely depends on the minimum fair rate an application may need in the
> long term.
> We discussed some of the mechanisms available in the literature to achieve
> that.
>
> 2) Fix the largest RTT you want to serve at full utilization and size the
> buffer using BDP * N_backlogged.
> Or the other way round: check how much memory you can use
> in the router/line card/device and, for a fixed N, compute the largest RTT
> you can serve at full utilization.

My own take on the whole BDP argument is that *so long as the flows in
that BDP are thoroughly mixed* you win.

>
> 3) There is still some memory to dimension for sparse flows in addition to
> that, but this is not based on BDP.
> It is enough to compute the total utilization of sparse flows and use
> the same simple model Toke has used
> to compute the (de)prioritization probability.
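As a rough illustration of the sparse-flow side of step 3 (this is my hedged reading of the commonly cited FQ-CoDel sparseness condition, not necessarily the exact model referred to above): a flow stays sparse while it sends less than one quantum of data per scheduling round of the backlogged flows.

```python
def sparse_rate_threshold(link_rate_bps, n_active, quantum=1514, pkt=1514):
    """Approximate rate (bits/s) below which a flow stays 'sparse'.

    Assumption: one scheduling round serves ~quantum bytes for each of the
    n_active backlogged flows plus this one; a flow that delivers at most
    one packet per round keeps its sparse status.
    """
    round_s = (n_active + 1) * quantum * 8 / link_rate_bps
    return pkt * 8 / round_s

# e.g. on a 100 Mbit/s link with 9 backlogged flows, a flow stays sparse
# below roughly link_rate / 10.
print(sparse_rate_threshold(100e6, 9) / 1e6, "Mbit/s")
```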
>
> This procedure would allow sizing not only FQ_codel but also SFQ.
> It would be interesting to compare the two under this buffer sizing.
> It would also be interesting to compare another mechanism that we
> mentioned during the defense,
> which is AFD plus a sparse-flow queue. This is, BTW, already available in
> Cisco Nexus switches for data centres.
>
> I think that the codel part would still provide the ECN feature, which all
> the others cannot have.
> However the others, especially the last one, can be implemented in silicon
> at reasonable cost.
>
>
> On Mon 26 Nov 2018 at 22:30, Jonathan Morton <chromati...@gmail.com> wrote:
>>
>> > On 26 Nov, 2018, at 9:08 pm, Pete Heist <p...@heistp.net> wrote:
>> >
>> > So I just thought to continue the discussion- when does the CoDel part of 
>> > fq_codel actually help in the real world?
>>
>> Fundamentally, without Codel the only limits on the congestion window would 
>> be when the sender or receiver hit configured or calculated rwnd and cwnd 
>> limits (the rwnd is visible on the wire and usually chosen to be large 
>> enough to be a non-factor), or when the queue overflows.  Large windows 
>> require buffer memory in both sender and receiver, increasing costs on the 
>> sender in particular (who typically has many flows to manage per machine).
>>
>> Queue overflow tends to result in burst loss and head-of-line blocking in 
>> the receiver, which is visible to the user as a pause and subsequent jump in 
>> the progress of their download, accompanied by a major fluctuation in the 
>> estimated time to completion.  The lost packets also consume capacity 
>> upstream of the bottleneck which does not contribute to application 
>> throughput.  These effects are independent of whether overflow dropping 
>> occurs at the head or tail of the bottleneck queue, though recovery occurs 
>> more quickly (and fewer packets might be lost) if dropping occurs from the 
>> head of the queue.
>>
>> From a pure throughput-efficiency standpoint, Codel allows using ECN for 
>> congestion signalling instead of packet loss, potentially eliminating packet 
>> loss and associated head-of-line blocking entirely.  Even without ECN, the 
>> actual cwnd is kept near the minimum necessary to satisfy the BDP of the 
>> path, reducing memory requirements and significantly shortening the recovery 
>> time of each loss cycle, to the point where the end-user may not notice that 
>> delivery is not perfectly smooth, and implementing accurate completion time 
>> estimators is considerably simplified.
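The "shorter recovery time" claim is easy to see with a Reno-style AIMD sketch (an illustrative model I'm adding, not part of the quoted mail; cwnd values are hypothetical): after a loss, cwnd halves and climbs back about one packet per RTT, so the cycle length scales with the peak cwnd — which CoDel keeps near the BDP instead of near BDP plus a bloated buffer.

```python
def reno_recovery_rtts(cwnd_peak_pkts):
    """RTTs for a Reno-like flow to climb back to its peak after halving.

    Simplified AIMD model: cwnd drops to peak/2, then grows ~1 packet/RTT.
    """
    return cwnd_peak_pkts / 2

bdp_pkts = 667  # e.g. 100 Mbit/s * 80 ms at 1500-byte packets
print(reno_recovery_rtts(bdp_pkts))       # cwnd held near BDP by CoDel
print(reno_recovery_rtts(4 * bdp_pkts))   # cwnd inflated by a bloated FIFO
```

The bloated case takes four times as long per loss cycle, which is exactly the visible "pause and jump" behaviour described above.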
>>
>> An important use-case is where two sequential bottlenecks exist on the path, 
>> the upstream one being only slightly higher capacity but lacking any queue 
>> management at all.  This is presently common in cases where home CPE 
>> implements inbound shaping on a generic ISP last-mile link.  In that case, 
>> without Codel running on the second bottleneck, traffic would collect in the 
>> first bottleneck's queue as well, greatly reducing the beneficial effects of 
>> FQ implemented on the second bottleneck.  In this topology, the overall 
>> effect is inter-flow as well as intra-flow.
>>
>> The combination of Codel with FQ is done in such a way that a separate 
>> instance of Codel is implemented for each flow.  This means that congestion 
>> signals are only sent to flows that require them, and non-saturating flows 
>> are unmolested.  This makes the combination synergistic, where each 
>> component offers an improvement to the behaviour of the other.
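The per-flow arrangement described above can be sketched in a few lines (a greatly simplified model, not the Linux implementation: real CoDel also uses a control law to schedule successive drops, and fq_codel shares queues by hash):

```python
from collections import defaultdict, deque

class CodelState:
    """One CoDel instance: signals only when a queue's delay stays above
    target for a whole interval (i.e. a standing queue, not a burst)."""
    def __init__(self, target=0.005, interval=0.100):
        self.target, self.interval = target, interval
        self.first_above = None   # deadline set when delay first exceeds target

    def should_signal(self, sojourn_s, now_s):
        if sojourn_s < self.target:       # queue draining fine: reset state
            self.first_above = None
            return False
        if self.first_above is None:      # start the grace interval
            self.first_above = now_s + self.interval
            return False
        return now_s >= self.first_above  # standing queue: drop or ECN-mark

queues = defaultdict(deque)        # flow hash -> that flow's packet queue
codel = defaultdict(CodelState)    # flow hash -> that flow's own CoDel state
```

Because each entry in `codel` is independent, a saturating flow accumulates sojourn time and gets signalled, while a sparse flow in a neighbouring queue never does — the synergy described above.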
>>
>>  - Jonathan Morton
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>



-- 

Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-831-205-9740