[Cerowrt-devel] fq_codel is *three* years old today

2015-05-14 Thread Rich Brown
Folks,

Today is the third anniversary of the announcement of a testable fq_codel (see
https://lists.bufferbloat.net/pipermail/cerowrt-devel/2014-May/002984.html
et seq.)

Here's (an approximation of) the state of the world:

- We didn't know it at the time, but we would be able to declare victory on 
CeroWrt less than three months later with the 3.10.50-1 build. Not only did 
that firmware reduce bufferbloat, but it showed that DNSSEC and IPv6 could be 
implemented in "normal" home routers without any kind of jiggery-pokery. Field 
reports at the end of 2014 showed that build was very stable - we had lots of 
reports of 80 day uptimes, and a high of ~140 days.

- fq_codel is installed in a large and growing number of places. It's
available off the shelf in OpenWrt's SQM QoS package, in the mainline Linux
kernel, and in IPFire, DD-WRT, and other router firmware.

- "Bufferbloat" is entering the lexicon. People are speaking about it in blogs 
and open literature as a known entity, not some voodoo effect that's only a 
concern to crazy people. The writers don't always get the description or 
symptoms right, but there is an acknowledgement that something could be better 
in your home (and everywhere) network connection. (See for example, 
http://www.internetsociety.org/blog/tech-matters/2015/04/measure-your-bufferbloat-new-browser-based-tool-dslreports
 ) 

- Speaking of which, the new DSLReports Speed Test has recently stirred things 
up. Not only do we have an attractive tool that we can recommend to friends, 
but people are getting a little hot under the collar when they see the crummy 
performance of the router that they just paid dearly for. See, for example, 
http://www.dslreports.com/forum/r30051856-Connectivity-Buffer-Bloat

- Now that we've shown that fq_codel conquers bufferbloat, we're finding
further optimizations. There's a lot of effort going into cake, which
promises higher-speed processing, and into corner cases that can be improved.

- And of course, Dave Täht is taking on another big project: "Making Wi-Fi 
Fast".

What else has happened in the past year?

Best,

Rich


Re: [Cerowrt-devel] [Cake] openwrt build with latest cake and other qdiscs

2015-05-14 Thread Alan Jenkins

On 14/05/15 11:53, Jonathan Morton wrote:
>> On 14 May, 2015, at 13:50, Alan Jenkins wrote:
>>
>> generic-receive-offload: on
>
> This implies that adding GRO peeling to cake might be a worthwhile priority.
>
>  - Jonathan Morton

Ah, not on my account, it seems.

# tc -stat qdisc |grep maxpacket
  maxpacket 590 drop_overlimit 0 new_flow_count 1 ecn_mark 0
  maxpacket 256 drop_overlimit 0 new_flow_count 0 ecn_mark 0
...
  maxpacket 1696 drop_overlimit 0 new_flow_count 305 ecn_mark 0
  maxpacket 1749 drop_overlimit 0 new_flow_count 274 ecn_mark 0

Alan


Re: [Cerowrt-devel] [Cake] openwrt build with latest cake and other qdiscs

2015-05-14 Thread Jonathan Morton

> On 14 May, 2015, at 16:09, Alan Jenkins  
> wrote:
> 
> On 14/05/15 11:53, Jonathan Morton wrote:
>>> On 14 May, 2015, at 13:50, Alan Jenkins 
>>>  wrote:
>>> 
>>> generic-receive-offload: on

>> This implies that adding GRO peeling to cake might be a worthwhile priority.
>> 
>>  - Jonathan Morton
> 
> Ah, not on my account, it seems.
> 
> # tc -stat qdisc |grep maxpacket
>  maxpacket 590 drop_overlimit 0 new_flow_count 1 ecn_mark 0
>  maxpacket 256 drop_overlimit 0 new_flow_count 0 ecn_mark 0
> ...
>  maxpacket 1696 drop_overlimit 0 new_flow_count 305 ecn_mark 0
>  maxpacket 1749 drop_overlimit 0 new_flow_count 274 ecn_mark 0

A maxpacket of 1749 *does* imply that GRO or GSO is in use.  Otherwise I’d 
expect to see 1514 or less.

 - Jonathan Morton



Re: [Cerowrt-devel] [Cake] openwrt build with latest cake and other qdiscs

2015-05-14 Thread Alan Jenkins

On 14/05/15 14:14, Jonathan Morton wrote:
>>> This implies that adding GRO peeling to cake might be a worthwhile priority.
>>>
>>>   - Jonathan Morton
>>
>> Ah, not on my account, it seems.
>>
>> # tc -stat qdisc |grep maxpacket
>>   maxpacket 590 drop_overlimit 0 new_flow_count 1 ecn_mark 0
>>   maxpacket 256 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>> ...
>>   maxpacket 1696 drop_overlimit 0 new_flow_count 305 ecn_mark 0
>>   maxpacket 1749 drop_overlimit 0 new_flow_count 274 ecn_mark 0
>
> A maxpacket of 1749 *does* imply that GRO or GSO is in use.  Otherwise I’d
> expect to see 1514 or less.
>
>   - Jonathan Morton


Look at the exact difference between the two maxpackets :p (without
getting into why it's there): 1749 - 1696 = 53 bytes, exactly one ATM
cell.  It must be including all the ATM estimation.  For GSO we should
be seeing 2x or more... not a multiplier of 53/48.


Alan


Re: [Cerowrt-devel] fq_codel is *three* years old today

2015-05-14 Thread Kevin Darbyshire-Bryant



On 14/05/15 13:11, Rich Brown wrote:
> Folks,
>
> Today is the third anniversary of the announcement of a testable fq_codel (see
> https://lists.bufferbloat.net/pipermail/cerowrt-devel/2014-May/002984.html
> et seq.)

Happy Birthday! :-)

> - Speaking of which, the new DSLReports Speed Test has recently stirred things
> up. Not only do we have an attractive tool that we can recommend to friends,
> but people are getting a little hot under the collar when they see the crummy
> performance of the router that they just paid dearly for. See, for example,
> http://www.dslreports.com/forum/r30051856-Connectivity-Buffer-Bloat

Not just dslreports.  Thinkbroadband (quite popular in the UK) have been
working on a new HTML5-based 'speed/latency' test (they've been
collecting data for a few months):
http://www.thinkbroadband.com/speedtest.html

It looks like it's still in progress and only gives a 'bufferbloat
rating' rather than DSLReports' pretty graphs, BUT it is adding to the
tide of bufferbloat awareness, which can only be a good thing.


My awareness and interest have certainly been piqued over the past few
months.  Dave & Jim's explanations of bufferbloat have been avid
reading/viewing here, as has the increasing alarm at the poor^H^H^H^H
non-existent support of standard CPE routing devices (I now won't touch
anything older than Linux 3.0 based).  'Friends don't let friends run
factory firmware' (especially parents) has been the motivation for
switching to OpenWrt, as have the bufferbloat improvements (though I'm
also concerned by coding monocultures).  Most things have stretched my
circa-1990 Unix sysadmin skills (I used to sysadmin before another path
took hold), but picking up little tidbits of info from this list and
others, I hope to be able to contribute in the near future.


Sincerest thanks to (in no particular order) Jim, Dave, Toke, Alan & all 
the crew here for working on the bufferbloat problem.


Kevin





Re: [Cerowrt-devel] [Cake] openwrt build with latest cake and other qdiscs

2015-05-14 Thread Dave Taht
On Thu, May 14, 2015 at 6:35 AM, Alan Jenkins wrote:
> On 14/05/15 14:14, Jonathan Morton wrote:
>>>> This implies that adding GRO peeling to cake might be a worthwhile
>>>> priority.

Strike "might".

I saw 64k-sized packets out of the GRO implementation on the mvneta
driver on the Linksys 1900AC. I think this explains a lot about the
performance of the QoS system on that box's native firmware: dropping
big chunks and not FQ-ing well.

Disabling GRO on all interfaces is hard to get right, and has a
significant cost if you have more than one interface (as in the
edgerouters).

So yes, peeling.
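
(For reference, the per-interface toggle is plain ethtool - a sketch, with
eth0 as an example name; it has to be repeated for every interface and
reapplied after a reboot:)

# show whether GRO is currently enabled on this interface
ethtool -k eth0 | grep generic-receive-offload
# turn it off
ethtool -K eth0 gro off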

Also, I note that at the higher inbound rates (e.g. 100Mbit+) policing
is not horrible.  Let me discuss that in a different mail.



>>>>   - Jonathan Morton
>>>
>>> Ah, not on my account, it seems.
>>>
>>> # tc -stat qdisc |grep maxpacket
>>>   maxpacket 590 drop_overlimit 0 new_flow_count 1 ecn_mark 0
>>>   maxpacket 256 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>>> ...
>>>   maxpacket 1696 drop_overlimit 0 new_flow_count 305 ecn_mark 0
>>>   maxpacket 1749 drop_overlimit 0 new_flow_count 274 ecn_mark 0
>>
>> A maxpacket of 1749 *does* imply that GRO or GSO is in use.  Otherwise I’d
>> expect to see 1514 or less.
>>
>>   - Jonathan Morton
>
>
> Look at the exact difference between the two maxpackets :p (without getting
> into why it's there).  It must be including all the ATM estimation.  For GSO
> we should be seeing 2x or more... not a multiplier of 53/48.
>
> Alan



-- 
Dave Täht
Open Networking needs **Open Source Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67


Re: [Cerowrt-devel] [Cake] openwrt build with latest cake and other qdiscs

2015-05-14 Thread Jonathan Morton
Ah - looking at it from that perspective, your largest packet includes a
1500 byte payload, 40 bytes of PPPoE framing, and 44 more bytes of AAL5
padding, all wrapped up in 33 ATM cells. With even slightly less overhead
or a fractionally reduced payload, you'd go down to 32 cells.
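
As a back-of-envelope check of those figures (shell arithmetic, with the
numbers taken from the paragraph above):

# 1500B payload + 40B of PPPoE framing, rounded up to whole 48-byte cells
PAYLOAD=1500; FRAMING=40
CELLS=$(( (PAYLOAD + FRAMING + 47) / 48 ))       # = 33 cells
echo "$(( CELLS * 53 )) bytes on the wire"       # = 1749, the observed maxpacket
echo "$(( CELLS * 48 - PAYLOAD - FRAMING )) bytes of padding"   # = 44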

- Jonathan Morton


Re: [Cerowrt-devel] [Bloat] better business bufferbloat monitoring tools?

2015-05-14 Thread dpreed

Tools, tools, tools.  Make it trivially easy to capture packets in the home
(don't require CeroWrt, for obvious reasons).  For example, an iPhone app that
does a tcpdump and sends it to us would be fantastic for diagnosing "make wifi
fast" issues and also bufferbloat issues.  Give feedback that is helpful to
everyone who contributes data.  (That's what made Netalyzr work so well... you
got feedback ASAP that could be used to understand your own situation.)
 
Not sure an iPhone app can be disseminated.  An Android app might be, as could
a MacBook app and a Windows app.
 
Linux/FreeBSD options: One could generate a memstick image that would boot
Linux on a standard Windows laptop to run tcpdump and upload the results, or
something that would run in Parallels or VMware Fusion on a Mac.
 
I've started looking at a hardware measurement platform for my "make WiFi
fast" work - currently it looks like a Rangeley board will do the trick.  But
that won't scale well outside my home, since it costs a few hundred bucks for
the hardware.

On Wednesday, May 13, 2015 11:30am, "Jim Gettys"  said:






On Wed, May 13, 2015 at 9:20 AM, Bill Ver Steeg (versteb) <vers...@cisco.com> wrote:
Time scales are important. Any time you use TCP to send a moderately large 
file, you drive the link into congestion. Sometimes this is for a few 
milliseconds per hour and sometimes this is for 10s of minutes per hour.

 For instance, watching a 3 Mbps video (Netflix/YouTube/whatever) on a 4 Mbps 
link with no cross traffic can cause significant bloat, particularly on older 
tail drop middleboxes.  The host code does an HTTP get every N seconds, and 
drives the link as hard as it can until it gets the video chunk. It waits a 
second or two and then does it again. Rinse and Repeat. You end up with a very 
characteristic delay plot. The bloat starts at 0, builds until the middlebox 
provides congestion feedback, then sawtooths around at about the buffer size. 
When the burst ends, the middlebox burns down its buffer and bloat goes back to 
zero. Wait a second or two and do it again.
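
(For a rough sense of the duty cycle with the numbers above - the 4-second
chunk duration is an assumed example, since N isn't specified:)

# 3 Mbps video fetched through a 4 Mbps link, in N=4 second chunks
CHUNK_S=4; VIDEO_MBPS=3; LINK_MBPS=4
BUSY_S=$(( CHUNK_S * VIDEO_MBPS / LINK_MBPS ))
echo "link saturated ${BUSY_S}s out of every ${CHUNK_S}s"    # 3s in every 4s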

It's time to do some packet traces to see what the video providers are doing.
In YouTube's case, I believe the traffic is using the new sch_fq qdisc, which
does packet pacing; but exactly how this plays out by the time packets reach
the home isn't entirely clear to me. Other video providers/CDNs may or may not
have started generating clues.


Also note that so far, no one is trying to pace the IW transmission at all.

 You can't fix this by adding bandwidth to the link. The endpoint's TCP 
sessions will simply ramp up to fill the link. You will shorten the congested 
phase of the cycle, but TCP will ALWAYS FILL THE LINK (given enough time to 
ramp up)

That has been the behavior in the past, but it's no longer safe to presume; we
shouldn't tar everyone with the same brush. Rather, we should do a bit of
science, and then try to hold the feet to the fire of those who do not "play
nice" with the network.  Some packet captures in the home can easily sort
this out.
Jim
 The new AQM (and FQ_AQM) algorithms do a much better job of controlling the 
oscillatory bloat, but you can still see ABR video patterns in the delay 
figures.

 Bvs




 -Original Message-
 From: bloat-boun...@lists.bufferbloat.net
 [mailto:bloat-boun...@lists.bufferbloat.net] On Behalf Of Dave Taht
 Sent: Tuesday, May 12, 2015 12:00 PM
 To: bloat; cerowrt-devel@lists.bufferbloat.net
 Subject: [Bloat] better business bufferbloat monitoring tools?

 One thread bothering me on dslreports.com is that some folk seem to think
you only get bufferbloat if you stress-test the network, whereas transient
bufferbloat is happening all the time, everywhere.

 On one of my main sqm'd network gateways, day in, day out, it reports about
6000 drops or ECN marks on ingress, and about 300 on egress.  Before I
doubled the bandwidth that main box got, the drop rate used to be much
higher, and a great deal of the bloat, drops, etc. has now moved into the
wifi APs deeper into the network, where I am not monitoring it effectively.
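
(Those counters come straight out of tc; anyone running sqm can read their
own with something like this - eth0 is an example interface name:)

# per-qdisc drop and ECN-mark totals since the qdisc was installed
tc -s qdisc show dev eth0 | grep -E 'dropped|ecn_mark'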

 I would love to see tools like mrtg, cacti, nagios and smokeping[1] be more 
closely integrated, with bloat related plugins, and in particular, as things 
like fq_codel and other ecn enabled aqms deploy, start also tracking congestive 
events like loss and ecn CE markings on the bandwidth tracking graphs.

 This would counteract to some extent the classic 5 minute bandwidth summaries 
everyone looks at, that hide real traffic bursts, latencies and loss at sub 5 
minute timescales.

 mrtg and cacti rely on SNMP. While loss statistics are deeply part of SNMP,
I am not aware of there being a MIB for CE events, and a quick Google search
was unrevealing. ?

 There is 

Re: [Cerowrt-devel] [Cake] openwrt build with latest cake and other qdiscs

2015-05-14 Thread Jonathan Morton
A 64k aggregate would be broken up at any speed below half a gigabit (64 kB
is roughly 524k bits, or about 1 ms of wire time at 500 Mbit/s). So the 1ms
heuristic seems sane from that perspective.

- Jonathan Morton


[Cerowrt-devel] inbound policing revisited

2015-05-14 Thread Dave Taht
While fiddling with the dslreports "cable" test (24 full-rate flows
downstream, oh my!), I went back to fiddling with policing in addition to
sqm. (My take on policing was that at lower rates it sucked, but at higher
rates it mattered less; it was just too hard to get the burst size right.)

The problem we have with inbound rate limiting and codelling is that the
bandwidth disparity between the ISP's rate and your setting is only a few
percentage points - so it remains clearly possible to build up a huge queue
at the ISP, which drains slowly and is only slowly brought back under
control by the AQM.

You can see this in the spreadsheet below, where I was pinging every 10ms
to watch what the dslreports test did (and what it reported as its
statistics; I am happy that what dslreports is reporting is somewhat
accurate, as cross-checked this way):

http://snapon.lab.bufferbloat.net/~cero3/dslreports.xlsx [1]

Probably the clearest part of the dataset is the pie tab, where you can see
the two phases of the dslreports test. PIE does quite well on inbound,
policing does better, and cake and fq_codel do poorly (24 full-rate inbound
flows! aggh! they can't clear the delay in 10 seconds). (I elided the
normal cable drop-tail test for those with heart problems, and we still
rock on outbound.)

Policing as presently defined is a brick-wall filter: above the rate, after
a burst, drop everything. You don't care about RTT, you just want to shoot
packets; there are a couple of RFCs on it.

It has the distinct advantages of incurring no delay or caching in the
router, and of using less CPU. Policers can also be improved with some
post-bufferbloat ideas (see "bobbie"; the existing Linux code is crufty and
old). And policing looks to be rather effective against tons of flows in
slow start, when they try to "fool" the ISP's shaper.


#!/bin/sh

# set downlink, run sqm, then run this to wipe out the existing downlink
# shaper and replace it with a plain ingress policer.

DOWNLINK=10
IFACE=eth2

# remove any existing ingress qdisc, then attach a fresh one
tc qdisc del dev ${IFACE} ingress
tc qdisc add dev ${IFACE} handle ffff: ingress

# match all traffic and police it to the configured rate
tc filter add dev ${IFACE} parent ffff: protocol all u32 match u32 0 0 \
    police rate ${DOWNLINK}kbit burst 100k drop flowid :1

# note that the burst parameter is finicky and depends hugely on the
# bandwidth and workload.
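
To check that the policer is attached and watch its drop counters, something
like this should work:

tc -s filter show dev eth2 parent ffff: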


[1] I had asked if I was the only one that used spreadsheets...

-- 
Dave Täht
Open Networking needs **Open Source Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67


Re: [Cerowrt-devel] Replacing CeroWrt with OpenWrt - Routing

2015-05-14 Thread Rich Brown
Dave - thanks for this overview. I'll check it out, then report to the 
group/OpenWrt wiki.

Rich


On May 13, 2015, at 10:49 AM, Dave Taht  wrote:

> What I typically do is simpler for ethernet connectivity.
> 
> kill the firewall on the sub router (ACCEPT 3 times)
> renumber the sub router
> use dhcp on the sub router's wan interface. Turn off fetching the
> default route. (option defaultroute '0')
> Enable babel on all interfaces (including wan) on the sub router
> enable babel on the main router.
> 
> done. No need for static routes.
> 
> can do same for wifi either adhoc or as a wifi client
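
For reference, the resulting wan stanza in /etc/config/network on the sub
router would look something like this (a sketch - the ifname is an example):

config interface 'wan'
        option ifname 'eth0'
        option proto 'dhcp'
        option defaultroute '0'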



[Cerowrt-devel] RE : [Bloat] better business bufferbloat monitoring tools?

2015-05-14 Thread luca.muscariello
Bill

I believe you hit the limit of what you can do with AQM w/o FQ.

Something more can be achieved with paced sources, as said in this thread.

I do not see incentives for the ABR folks to do true pacing, however.

Doing partial pacing to fix the TSO/GSO problem is of course a must, but it
won't solve the problem you mention.

See you on Monday at the conference.  I'm giving a talk right before you.

Luca




 Original message 
From: "Bill Ver Steeg (versteb)"
Date: 2015/05/14 00:54 (GMT+01:00)
To: Dave Taht
Cc: cerowrt-devel@lists.bufferbloat.net, bloat
Subject: Re: [Bloat] better business bufferbloat monitoring tools?

Dave said - It has generally been my hope that most of the big movie
streaming folk have moved to some form of pacing by now but have no data on
it. (?)

Bill VerSteeg replies - Based on my recent tests, the production ABR flows are 
still quite bursty. There has been some work done in this area, but I do not 
think bloat is top-of-mind for the ABR folks, and I do not think it has made it 
into production systems. Some of the work is in the area of pacing TCP's 
micro-bursts using sch_fq-like methods. Some has been in the area of 
application rate estimation. Some of the IW10 pacing stuff may also be useful.

I am actually giving a talk on AQM to a small ABR video conference next week. 
The executive summary of my talk is "AQM makes bursty ABR flows less impactful 
to the network buffers (and thus cross traffic), but the bursts still cause 
problems. The problems are really bad on legacy buffer management algorithms. 
The new AQM algorithms take care of most of the issues, but bursts of data make 
the new algorithms work harder and do cause some second-order problems."

The main problem that I have seen in my testing has been in the CoDel/PIE (as 
opposed to FQ_XXX) variants. When the bottleneck link drops packets as the 
elephant bursts, the mice flows suffer. Rather than completing in a handful of 
RTTs, it takes several times longer for the timeouts and rexmits to complete 
the transfer. When running FQ_Codel or FQ_PIE, the elephant flow only impacts 
itself, as the mice are on their own queues. There are also some corner cases 
when the offered load is extremely high, but these seem to be third order 
effects.

I will let the list know what the current state of the art on pacing is after 
next week's conference, but I suspect that the ABR folks are still on a 
learning curve here.

Bvs


-Original Message-
From: Dave Taht [mailto:dave.t...@gmail.com]
Sent: Wednesday, May 13, 2015 9:37 AM
To: Bill Ver Steeg (versteb)
Cc: bloat; cerowrt-devel@lists.bufferbloat.net
Subject: Re: [Bloat] better business bufferbloat monitoring tools?

On Wed, May 13, 2015 at 6:20 AM, Bill Ver Steeg (versteb)  
wrote:
> Time scales are important. Any time you use TCP to send a moderately large 
> file, you drive the link into congestion. Sometimes this is for a few 
> milliseconds per hour and sometimes this is for 10s of minutes per hour.
>
> For instance, watching a 3 Mbps video (Netflix/YouTube/whatever) on a 4 Mbps 
> link with no cross traffic can cause significant bloat, particularly on older 
> tail drop middleboxes.  The host code does an HTTP get every N seconds, and 
> drives the link as hard as it can until it gets the video chunk. It waits a 
> second or two and then does it again. Rinse and Repeat. You end up with a 
> very characteristic delay plot. The bloat starts at 0, builds until the 
> middlebox provides congestion feedback, then sawtooths around at about the 
> buffer size. When the burst ends, the middlebox burns down its buffer and 
> bloat goes back to zero. Wait a second or two and do it again.

The dslreports tests are opening 8 or more full rate streams at once.
Not pretty results.

Web browsers expend most of their flows entirely in slow start.

Etc.

I am very concerned with what 4k streaming looks like, and just got an amazon 
box to take a look at it. (but have not put out the cash for a suitable monitor)

> You can't fix this by adding bandwidth to the link. The endpoint's TCP
> sessions will simply ramp up to fill the link. You will shorten the
> congested phase of the cycle, but TCP will ALWAYS FILL THE LINK (given
> enough time to ramp up)

It is important to keep stressing this point as the memes propagate outwards.

>
> The new AQM (and FQ_AQM) algorithms do a much better job of controlling the 
> oscillatory bloat, but you can still see ABR video patterns in the delay 
> figures.

It has generally been my hope that most of the big movie streaming folk have 
moved to some form of pacing by now but have no data on it.
(?)

Certainly I'm happy with what I saw of quic and have hope that http/2 will cut 
the number of simultaneous flows in progress.

But I return to my original point in that I would like to continue to find more 
ways to make the sub 5 minute behaviors visible and comprehensible to more 
people...

> Bvs
>
>
> -Original Message--

[Cerowrt-devel] heisenbug: dslreports 16 flow test vs cablemodems

2015-05-14 Thread Dave Taht
One thing I find remarkable is that my isochronous 10ms ping flow test
(trying to measure the accuracy of the dslreports test) totally
heisenbugs the cable 16 (used to be 24) flow dslreports test.

Without it, using cake as the inbound and outbound shaper, I get a grade
of C to F, due to inbound latencies measured in the seconds.

http://www.dslreports.com/speedtest/479096

With that measurement flow, I do so much better, (max observed latency
of 210ms or so) with grades ranging from B to A...

http://www.dslreports.com/speedtest/478950

I am only sending and receiving an extra ~1 bytes/sec (100 ping
packets/sec) to get this difference between results. The uplink is
11Mbits, downlink 110 (configured for 100)

Only things I can think of are:

* ack prioritization on the modem
* hitting packet limits on the CMTS forcing drops upstream (there was
a paper on this idea, can't remember the name (?) )
* always on media access reducing grant latency
* cake misbehavior (it is well understood codel does not react fast enough here)
* cake goodness (fq of the ping making for less ack prioritization?)
* 

I am pretty sure the cable makers would not approve of someone
continuously pinging their stuff in order to get lower latency on
downloads (but it would certainly be one way to add continuous tuning
of the real rate to cake!)

Ideas?

The simple test:
# you need to be root to ping on a 10ms interval
# and please pick your own server!

$ sudo fping -c 1 -i 10 -p 10 snapon.lab.bufferbloat.net  >
vscabletest_cable.out

start a dslreports "cable" test in your browser

abort the ping (CTRL-C) when done. Post-process fping's output:

$ cat vscabletest_cable.out | cut -f3- -d, |  awk '{ print $1 }' >
vscabletest-cable.txt

import into your favorite spreadsheet and plot.


Re: [Cerowrt-devel] heisenbug: dslreports 16 flow test vs cablemodems

2015-05-14 Thread Greg White
I don't have any ideas, but I can try to cross some of yours off the
list...  :)

"* always on media access reducing grant latency"
During download test, you are receiving 95 Mbps, that works out to what,
an upstream ACK every 0.25ms?  The CM will get an opportunity to send a
piggybacked request approx every 2ms.  It seems like it will always be
piggybacking in order to send the 8 new ACKs that have arrived in the last
2ms interval.  I can't see how adding a ping packet every 10ms would
influence the behavior.
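
(The arithmetic behind that estimate, assuming 1514-byte frames and delayed
ACKs at one ACK per two segments:)

# downstream data packets per second at 95 Mbps
echo $(( 95000000 / (1514 * 8) ))    # ~7843 pkts/s
# -> roughly 3900 ACKs/s upstream, i.e. one ACK every ~0.25 ms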

"* ack prioritization on the modem"
ACK prioritization would mean that the modem would potentially delay the
pings (yours and dlsreports') in order to service the ACKs.  Not sure why
this wouldn't delay dslreports' ACKs when yours are present.

"* hitting packet limits on the CMTS..."
I'd have to see the paper, but I don't see a significant difference in DS
throughput between the two tests.  Are you thinking that there are just
enough packet drops happening to reduce bufferbloat, but not affect
throughput?  T'would be lucky.




On 5/14/15, 1:13 PM, "Dave Taht"  wrote:

>One thing I find remarkable is that my isochronous 10ms ping flow test
>(trying to measure the accuracy of the dslreports test) totally
>heisenbugs the cable 16 (used to be 24) flow dslreports test.
>
>Without, using cake as the inbound and outbound shaper, I get a grade
>of C to F, due to inbound latencies measured in the seconds.
>
>http://www.dslreports.com/speedtest/479096
>
>With that measurement flow, I do so much better, (max observed latency
>of 210ms or so) with grades ranging from B to A...
>
>http://www.dslreports.com/speedtest/478950
>
>I am only sending and receiving an extra ~1 bytes/sec (100 ping
>packets/sec) to get this difference between results. The uplink is
>11Mbits, downlink 110 (configured for 100)
>
>Only things I can think of are:
>
>* ack prioritization on the modem
>* hitting packet limits on the CMTS forcing drops upstream (there was
>a paper on this idea, can't remember the name (?) )
>* always on media access reducing grant latency
>* cake misbehavior (it is well understood codel does not react fast
>enough here)
>* cake goodness (fq of the ping making for less ack prioritization?)
>* 
>
>I am pretty sure the cable makers would not approve of someone
>continuously pinging their stuff in order to get lower latency on
>downloads (but it would certainly be one way to add continuous tuning
>of the real rate to cake!)
>
>Ideas?
>
>The simple test:
># you need to be root to ping on a 10ms interval
># and please pick your own server!
>
>$ sudo fping -c 1 -i 10 -p 10 snapon.lab.bufferbloat.net  >
>vscabletest_cable.out
>
>start a dslreports "cable" test in your browser
>
>abort the (CNTRL-C) ping when done. Post processing of fping's format
>
>$ cat vscabletest_cable.out | cut -f3- -d, |  awk '{ print $1 }' >
>vscabletest-cable.txt
>
>import into your favorite spreadsheet and plot.



Re: [Cerowrt-devel] heisenbug: dslreports 16 flow test vs cablemodems

2015-05-14 Thread Aaron Wood
ICMP prioritization over TCP?


> >Ideas?
>