On Tue, Mar 4, 2014 at 5:53 PM, Scheffenegger, Richard <r...@netapp.com> wrote:
> First of all, thanks to the note takers.
>
> We've had quite some discussion around the AQM evaluation guideline draft,
> and I believe the notes capture many of the points brought up.
>
> If you have been up and made a comment at the microphone, I would like you
> to check whether the spirit of your comment has been properly captured in
> the notes:
> http://etherpad.tools.ietf.org:9000/p/notes-ietf-89-aqm

Not even close to what I said. Is it too much to request that for
future meetings the proceedings be recorded and the comments
transcribed? The technology exists...

"Dave Taht: Care a lot about inter-flow packet loss. Bursty is really
bad. Like to have a metric on inter flow loss"

This reminds me of an old Far Side cartoon.

http://hubpages.com/hub/Gary-Larson#slide209782

Substitute "Packet loss" for "Ginger" here.

What I said was:

"I care a lot about interflow latency, jitter, and packet loss. Only
bursty packet loss is really bad.  I'd like to have
a metric on interflow latency, jitter, and packet loss."

Of these, packet loss is the *least* of my concerns. Our protocols
recover from and compensate well for non-bursty packet loss, and
packet loss IS the most common signal telling protocols to slow down,
and is thus desirable...

As an illustrative example, the cerowrt group has been working on ways
to make AQM and packet scheduling technologies work well at rates well
below 10Mbit, notably on the 768kbit uplinks common in the DSL world
(which also has weird framing derived from the bad olde days of ATM).
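For the curious, here is a minimal sketch of the sort of thing the SQM
scripts set up on such an uplink. This is NOT the actual cerowrt code;
the interface name, shaped rate, and per-packet overhead below are
placeholder guesses, not values from the test:

   IFACE=ge00                     # cerowrt's WAN interface; yours will differ
   tc qdisc del dev $IFACE root 2>/dev/null
   tc qdisc add dev $IFACE root handle 1: htb default 1
   tc class add dev $IFACE parent 1: classid 1:1 htb rate 700kbit \
      linklayer atm overhead 40   # compensate for ATM cell framing on DSL
   tc qdisc add dev $IFACE parent 1:1 fq_codel

Shaping a hair below the 768kbit line rate moves the queue out of the
modem and into a box where fq_codel can actually manage it.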

Below 100Mbit, TCP behavior is dominated by certain constants -
notably the initial window, be it 3, 4, or 10, but also MTU * IWx in
relation to MSS, the availability of pacing for on/off traffic with a
large cwnd, etc.
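To make that concrete, a quick back-of-envelope on what IW10 alone
does at these rates (1500-byte MTU assumed):

   echo $(( 10 * 1500 * 8 ))                  # 120000 bits in one initial window
   echo $(( 10 * 1500 * 8 * 1000 / 768000 ))  # ~156 ms just to serialize it

That is one flow, before any competing traffic, eating ~156ms of queue
in a single burst.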

There is a string of recent tests put up here:

http://richb-hanover.com/

The first graph shows bufferbloat in all its glory on the link - well
over 2 seconds of delay, and goodput of about 1.6Mbit on the download.

The remainder of the graphs are of variants of nfq_codel and fq_codel
setups, but the core result was that after applying the cerowrt SQM
system (scheduling and AQM), goodput was way, way up and latency way,
way down compared to the bufferbloated alternative: nearly triple the
download goodput, and 1/50th the latency. (The remaining debate is
over how best to get better interflow results, and the differences
there are not much above a percentage point.)

- Packet loss on this link after applying AQM was well over 35%! But
as it is not bursty and latency is held low, the link remains markedly
useful: all the flows work pretty well, and the low-rate flows are
doing fine...
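That 35% is not as crazy as it sounds if you do the fair-share math. A
rough sketch (the flow count and RTT here are made-up round numbers
for illustration, not measurements from the test):

   echo $(( 768000 / 4 ))              # ~192 kbit/s fair share with 4 bulk flows
   echo $(( 192000 / 8 / 1500 ))       # ~16 full-size packets/sec per flow
   echo $(( 192000 / 8 * 60 / 1000 ))  # BDP at 60ms RTT: 1440 bytes, ~1 packet

With a fair-share BDP of about one packet, TCP cannot run a cwnd small
enough to match, so it persistently overshoots and the AQM has to keep
dropping to push it back.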

Thread for ongoing discussion here:

https://lists.bufferbloat.net/pipermail/cerowrt-devel/2014-February/002370.html

Packet captures seem to show that the Mac's TCP is not reducing its
window to a reasonable value, nor is it reducing MSS to something more
appropriate for the link rate. I'd recommend looking at the packet
captures from that test to get a feel for how slow start, fast
recovery, and dup acks interact at these timescales.
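If you want to do that without drowning in the capture, something
along these lines pulls out just the loss-recovery events (tshark is
wireshark's CLI; the capture filename is whatever you saved):

   tshark -r test.cap -Y 'tcp.analysis.retransmission || tcp.analysis.duplicate_ack' \
      -T fields -e frame.time_relative -e ip.src -e tcp.seq

Watching the timestamps on those events against the ~156ms
serialization delays above gives a feel for how stretched-out recovery
gets at these rates.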

Packet loss, particularly when taken as a pure percentage, is not a
good metric for most measurements. Most of the time, I don't give a
rat's arse about it.




-- 
Dave Täht

Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html

_______________________________________________
aqm mailing list
aqm@ietf.org
https://www.ietf.org/mailman/listinfo/aqm
