Re: [Bloat] netdevconf "vikings"

2019-02-05 Thread Dave Taht
Toke Høiland-Jørgensen  writes:

> Dave Taht  writes:
>
>> speaking of toke and jesper:
>>
>> "Two networking Vikings, masters Jesper Brouer @JesperBrouer and Toke
>> Høiland-Jørgensen will give a tutorial on XDP at 0x13. Join them,
>> listen, get blessed and learn, write and run ebpf/XDP code. Dont
>> forget your laptop!"
>>
>> https://netdevconf.org/0x13/session.html?tutorial-XDP-hands-on
>>
>> But what sort of blessings do vikings bestow?
>
> Well, it's usually something about a glorious death in battle (which is
> how you get into Valhalla). Not sure how appropriate that will be, so we
> may have to improvise ;)
>
> There's a whole sub-genre of Viking metal. E.g.:
> https://www.youtube.com/watch?v=fu2bgwcv43o
>
> (The youtube comments to that video are glorious)

Great video. Nice boat. Wonderful comments.

"Songs like this make me proud to be Earth citizen. I doubt aliens can do 
this."
___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat


Re: [Bloat] Flent-farm costs

2019-02-05 Thread Rich Brown
Dave wrote:

> Costs on the "flent-farm" continue to drop. Our earliest linode
> servers cost $20/month and our two latest ones (nanoservers) cost
> $5/month. For "science!" I've been generally unwilling to
> update/change these much ...

I've been running netperf.bufferbloat.net (the netperf server that we most 
publicize) for several years. It's a modest OpenVZ VPS from RamNode in Atlanta. 
It has two failings:

- It costs ~$16/month (I don't mind this expense, but $16/month >> $5/month for 
the nanoservers)
- About every third month its traffic goes over the 4TB/month limit, and 
RamNode shuts the server off. I regularly run a script to find heavy users and 
block their IPs with iptables. (Many people run a test every five minutes for 
days at a time.) But that's a hassle, and buying an additional terabyte per 
month from RamNode costs $10/month, which gets expensive.

To address this, I stood up a new (KVM-based) VPS with RamNode (also in 
Atlanta, presumably in the same data center) that will permit more in-depth 
iptables rules. My goal is to look at connection frequency and, if someone is 
doing every-five-minute testing, limit their bandwidth to 10kbps.
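As a hedged sketch of what such a rule could look like, iptables' hashlimit match can drop NEW connections from any source IP exceeding roughly a test-every-five-minutes cadence. The port, rule name, and thresholds below are assumptions, and the script only prints the rule rather than installing it; actually throttling abusers to 10kbps (rather than dropping them) would need tc policing instead of a DROP target.

```shell
#!/bin/sh
# Sketch only: build an iptables rule that drops NEW connections to the
# netperf control port (12865, an assumption) from any source IP making
# more than ~12 connections/hour, i.e. an every-five-minute tester.
# The burst allowance lets a legitimate run of back-to-back tests through.
RULE="iptables -A INPUT -p tcp --dport 12865 -m conntrack --ctstate NEW \
-m hashlimit --hashlimit-name netperf-abuse --hashlimit-mode srcip \
--hashlimit-above 12/hour --hashlimit-burst 10 -j DROP"
echo "$RULE"   # review the rule, then run it as root to apply
```

The srcip mode keeps one token bucket per client address, so five tests in ten minutes stay under the burst allowance while a days-long every-five-minute loop eventually trips the limit.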

This raises a host of questions:

- For Science - the current netperf.bufferbloat.net is atl.richb-hanover.com; 
the new server is atl2.richb-hanover.com. Do you get similar performance from 
both servers?

- Is this plan to bandwidth-limit abusers realistic? Will it be possible to 
design rules that exclude abusers while allowing legitimate research use? (I'm 
concerned that running five tests in a row in 10 minutes might look like an 
every-five-minute abuser...)

- Should I use one of the Linode nanoservers?

- Should we move the netperf.bufferbloat.net name to use the existing flent 
server farm machines?

- Are there other approaches to supporting netperf.bufferbloat.net?

Many thanks!

Rich




Re: [Bloat] keeping the lights on at bufferbloat.net

2019-02-05 Thread Jonathan Foulkes
Thanks for sharing that, Toke; it's a very good animated explainer. I'll be 
linking to it in some of my FAQs and articles.

A key point of the video is that it illustrates capacity with large vehicles: 
they move a lot of content, but they don't themselves have 'quick' transit or 
round-trip times. That correlates nicely with the RTT and CWND metrics in 
speedtests.

Examples focusing on the 'top speed' of vehicles miss the point that the 
Internet 'speed' metric is really about capacity rather than responsiveness. 
I'll be rethinking my use of those.

Cheers,

Jonathan

> On Feb 5, 2019, at 3:48 AM, Toke Høiland-Jørgensen  wrote:
> 
> Jonathan Foulkes  writes:
> 
> One analogy I’m having her illustrate depicts an ambulance vs a Ferrari
> getting through traffic. The ’slower’ one has an advantage, it has
> traffic rules on its side ;-)
> 
> The RITE project already did the F1 car vs bus in their video back in
> 2014: https://www.youtube.com/watch?v=F1a-eMF9xdY
> 
> I like the ambulance analogy, though :)
> 
> -Toke



Re: [Bloat] $106 achieved and flent-farm status

2019-02-05 Thread Pete Heist

> On Feb 5, 2019, at 4:37 AM, Dave Taht  wrote:
> 
> Thank you mikael and jake, matt and matthew and richard! (and jon, and
> dev for trying)

+1

> and here's a puzzler for you! Both boxes are running ntp yet one box
> was still *30* seconds off.

I haven’t explored this fully, but I find it works best when NTP is configured 
identically across servers: all running systemd-timesyncd, or ntpd, or 
chronyd, and all pointed at the same server pool. When that’s the case, I 
often see clocks agree to within a few milliseconds.

For me, it looks like clocks are currently about 10ms off to London and 75ms 
off to Singapore. I’m using systemd-timesyncd with the default Debian servers: 
"0.debian.pool.ntp.org 1.debian.pool.ntp.org 2.debian.pool.ntp.org 
3.debian.pool.ntp.org".
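For reference, that pool can be pinned explicitly so every server uses the same configuration. A minimal sketch of /etc/systemd/timesyncd.conf under the stock Debian layout (adjust accordingly if you run ntpd or chronyd instead):

```ini
# /etc/systemd/timesyncd.conf (keep identical across all servers)
[Time]
NTP=0.debian.pool.ntp.org 1.debian.pool.ntp.org
FallbackNTP=2.debian.pool.ntp.org 3.debian.pool.ntp.org
```

After editing, restart with `systemctl restart systemd-timesyncd` and check sync status with `timedatectl status`.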

> irtt 1ms between california and germany has a mindboggling amount of
> loss... in america... ahh... tools…

Curiously, I see less loss at 1ms to Singapore than to London. London’s loss 
comes from the downstream, and I wasn’t seeing it four days ago. The upstream 
loss is from my NLOS uplink, as I get it straight to my next hop, which also 
has an irtt server.

Next hop router:
   packets sent/received: 4999/4864 (2.70% loss)
 server packets received: 4864/4999 (2.70%/0.00% loss up/down)

London:
   packets sent/received: 5000/4124 (17.52% loss)
 server packets received: 4817/5000 (3.66%/14.39% loss up/down)

Singapore:
   packets sent/received: 5000/4830 (3.40% loss)
 server packets received: 4830/5000 (3.40%/0.00% loss up/down)

Also curiously, RTT to london has increased for some reason, mainly with 
increased receive delay, but this could easily be from our peering provider, 
which we hope to switch soon for other reasons.

Feb. 1, 2019


$ irtt client -q -i 10ms -d 1s flent-london.bufferbloat.net
[Connecting] connecting to flent-london.bufferbloat.net
[176.58.107.8:2112] [Connected] connection established
[176.58.107.8:2112] [WaitForPackets] waiting 135ms for final packets

 Min Mean   Median  Max  Stddev
 ---    --  ---  --
RTT  31.08ms  37.09ms  37.04ms  44.98ms  2.75ms
 send delay  16.02ms  21.55ms  21.39ms  29.51ms  2.67ms
  receive delay   14.3ms  15.54ms  15.61ms   17.2ms   470µs
   
  IPDV (jitter)   49.5µs   3.27ms   2.45ms  10.85ms  2.88ms
  send IPDV   63.4µs   3.16ms   2.22ms  10.79ms  2.84ms
   receive IPDV    903ns    400µs    203µs   2.82ms    506µs
   
 send call time   21.3µs   79.1µs 146µs  28.9µs
timer error   1.84µs721µs2.22ms   501µs
  server proc. time   1.29µs   31.4µs2.23ms   222µs

duration: 1.13s (wait 135ms)
   packets sent/received: 100/100 (0.00% loss)
 server packets received: 100/100 (0.00%/0.00% loss up/down)
 bytes sent/received: 6000/6000
   send/receive rate: 48.5 Kbps / 48.8 Kbps
   packet length: 60 bytes
 timer stats: 0/100 (0.00%) missed, 7.21% error

Feb. 5, 2019


$ irtt client -q -i 10ms -d 1s flent-london.bufferbloat.net
[Connecting] connecting to flent-london.bufferbloat.net
[176.58.107.8:2112] [Connected] connection established
[176.58.107.8:2112] [WaitForPackets] waiting 176.8ms for final packets

 Min Mean   Median  Max  Stddev
 ---    --  ---  --
RTT  53.38ms  54.59ms  54.38ms  58.93ms  1.05ms
 send delay  16.81ms  18.07ms  17.87ms  22.17ms   998µs
  receive delay  35.72ms  36.52ms   36.5ms  38.07ms   333µs
   
  IPDV (jitter)   47.1µs960µs370µs   4.69ms  1.03ms
  send IPDV   13.4µs974µs646µs   4.69ms  1.04ms
   receive IPDV   5.05µs379µs287µs   1.51ms   322µs
   
 send call time   32.5µs   35.5µs  61µs  3.65µs
timer error 90ns   15.1µs 190µs  25.5µs
  server proc. time   7.47µs   13.4µs 191µs  20.6µs

duration: 1.17s (wait 176.8ms)
   packets sent/received: 99/77 (22.22% loss)
 server packets received: 99/99 (0.00%/22.22% loss up/down)
 bytes sent/received: 5940/4620
   send/receive rate: 48.0 Kbps / 37.3 Kbps
   packet length: 60 bytes
 timer stats: 1/100 (1.00%) missed, 0.15% error
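As a sanity check on how irtt splits these figures: upstream loss is measured against what the server received, and downstream loss against what made it back relative to the server's count. A small sketch reproducing the Feb. 5 London numbers (99 sent, 99 received by the server, 77 echoed back):

```shell
#!/bin/sh
# Reproduce irtt's up/down loss split from the Feb. 5 London run.
sent=99 server_rcvd=99 client_rcvd=77
# Upstream: packets that never reached the server, relative to packets sent.
up=$(awk "BEGIN { printf \"%.2f\", ($sent - $server_rcvd) * 100 / $sent }")
# Downstream: packets the server echoed that never arrived back,
# relative to what the server received.
down=$(awk "BEGIN { printf \"%.2f\", ($server_rcvd - $client_rcvd) * 100 / $server_rcvd }")
echo "loss up/down: ${up}%/${down}%"   # prints 0.00%/22.22%, matching the report
```

The same arithmetic applied to the earlier London figures (4817 of 5000 reached the server, 4124 came back) gives the 3.66%/14.39% split shown above.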



Re: [Bloat] netdevconf "vikings"

2019-02-05 Thread Toke Høiland-Jørgensen
Dave Taht  writes:

> speaking of toke and jesper:
>
> "Two networking Vikings, masters Jesper Brouer @JesperBrouer and Toke
> Høiland-Jørgensen will give a tutorial on XDP at 0x13. Join them,
> listen, get blessed and learn, write and run ebpf/XDP code. Dont
> forget your laptop!"
>
> https://netdevconf.org/0x13/session.html?tutorial-XDP-hands-on
>
> But what sort of blessings do vikings bestow?

Well, it's usually something about a glorious death in battle (which is
how you get into Valhalla). Not sure how appropriate that will be, so we
may have to improvise ;)

There's a whole sub-genre of Viking metal. E.g.:
https://www.youtube.com/watch?v=fu2bgwcv43o

(The youtube comments to that video are glorious)

-Toke


Re: [Bloat] keeping the lights on at bufferbloat.net

2019-02-05 Thread Toke Høiland-Jørgensen
Jonathan Foulkes  writes:

> One analogy I’m having her illustrate depicts an ambulance vs a Ferrari
> getting through traffic. The ’slower’ one has an advantage, it has
> traffic rules on its side ;-)

The RITE project already did the F1 car vs bus in their video back in
2014: https://www.youtube.com/watch?v=F1a-eMF9xdY

I like the ambulance analogy, though :)

-Toke