Re: [nznog] Comcom/Samknows broadband testing from a different perspective

2018-09-29 Thread Dylan Hall
Bringing this conversation back on list, Jason is accessing the M-Lab
servers via Spark.

I would expect the performance through Spark to be poorer as they don't
peer in NZ. Instead, we're exchanging traffic with them in Sydney.

We have reminded Spark that we're on the various peering exchanges and we
are of course happy to accept their routes from the route servers.

Thanks,

Dylan

On Fri, 28 Sep 2018 at 11:11, Dylan Hall  wrote:

> Do you know which node (AKL vs WLG) you're talking to, or its IP address?
>
> If not, can you flick me your address off list and I'll dig through the
> flow records and try and figure it out.
>
> The reason I ask is I'd like to see a traceroute to/from your address to
> see the path the traffic is taking.
>
> We have 10 Gbps ports on APE/AKL-IX, but only 1 Gbps on WIX/CHIX. Since
> the testing started we've started seeing discards on our WIX port during
> the evening which I'm pretty sure is being driven by the testing traffic.
> We've ordered an upgrade to 10 Gbps which I expect to get live in the next
> couple of weeks. This could explain the download test struggling to get
> close to 1 Gbps, but I think it's less likely to impact the upload test.
>
> Thanks,
>
> Dylan
>
>
> On Fri, 28 Sep 2018 at 08:49, Jason Orchard 
> wrote:
>
>> Hi
>>
>> Since putting the device on the Enable network, I'm actually not happy
>> overall with the results I'm seeing.
>>
>> There is inconsistency in the download/upload speeds, latency and
>> jitter.
>>
>> When testing with other services such as Speedtest.net, Fast.com and
>> iperf, they consistently report my connection speed as 950Mb/s
>> downstream and 500Mb/s upstream.
>>
>> Samknows is reporting my peak connection speed as 742Mbps download and
>> 429Mbps upload, but these results are not consistent with the other
>> speedtest services.
>>
>>
>>
>> According to the data provided by Samknows user analytics, they test the
>> download speed on the hour between 6 pm and 11 am daily, plus one sample
>> at 5 am. These times cover the Enable network's peak and off-peak traffic
>> periods.
>>
>>
>>
>> - The graphs below are taken over a 7-day period, sampled every 30
>>   seconds in bits per second.
>> - Has any other member of NZNOG got similar testing results to mine?
>> - What I can’t answer is how the data in the graphs below is
>>   represented by Samknows.
>>
>>
>>
>> Note:
>>
>> The Samknows device is the only connection plugged into this GPON
>> interface.
>>
>> There is no other user traffic being generated on this port other than
>> the Samknows device.
>>
>>
>>
>> [image: cid:image002.png@01D45702.A09DE340]
>>
>>
>>
>>
>>
>> *Jason Orchard*
>>
>> Senior Network Engineer | Enable Networks Limited
>>
>> DDI +64 3 741 5283
>>
>> M +64 27 666 8468
>>
>> www.enable.net.nz
>>
>>
>>
>>
>>
>>
>> *From:* nznog-boun...@list.waikato.ac.nz <
>> nznog-boun...@list.waikato.ac.nz> *On Behalf Of *brianpars...@subpico.com
>> *Sent:* Thursday, 27 September 2018 5:12 PM
>> *To:* Peter Lambrechtsen 
>> *Cc:* NZNOG ; brian.c...@waikato.ac.nz
>> *Subject:* Re: [nznog] Comcom/Samknows broadband testing from a
>> different perspective
>>
>>
>>
>> It's an OpenWRT/LEDE binary, so just shell on: opkg update, then opkg
>> install what you need, then netstat -antop | grep EST. The number of
>> established connections will surprise you, and the destinations will
>> surprise you more; mine went in the green wheely bin. On the WAN link you
>> may see lots of private source addresses, indicating a typical dinky
>> router/firewall NAT that isn't coping; similarly, the actual bandwidth
>> you pay for at the source (home) is rubbish flows plus legitimate flows.
>> Rubbish flows (in and out) break packet concurrency, increasing delays
>> and maxing out PDVs on clients (Netflix/gaming streams). I'd recommend
>> filtering or blocking them upstream of the router; it will lead to a huge
>> improvement in media quality and in the performance of your home router.
>>
>> good luck - regards brian
>>  Original Message 
>> Subject: Re: [nznog] Comcom/Samknows broadband testing from a different
>> perspective
>> From: "Peter Lambrechtsen" 
>> Date: Thu, September 27, 2018 4:37 pm
>> To: brian.c...@waikato.ac.nz
>> Cc: "NZNOG" 
>> --
>>
>> > There is a Geekzone thre

Re: [nznog] Hello Hurricane Electric, doubling the size of our domestic table?

2018-09-18 Thread Dylan Hall
Did the AKL-IX people send an email ahead of this?

I ask because we didn't see anything so I'm wondering if we just lost it in
a spam folder somewhere or if others got missed also.

Thanks,

Dylan


On 19 September 2018 at 16:23, Dave Mill  wrote:

> Megaport AKL emailed their customers multiple times about HE turning up
> their services.
>
> NZIX also emailed.
>
> So I assume it's just 'domestic' transit providers that maybe aren't
> notifying us?
>
> Dave
>
> On Wed, Sep 19, 2018 at 4:17 PM, Michael Fincham 
> wrote:
>
>> OK, so the routes are on AKL-IX now [cue a bunch more people's import
>> limits breaking again].
>>
>> Did we not learn anything from the last time this broke a whole bunch of
>> stuff?
>>
>> HE needs to be giving us some heads up before they turn up here and
>> increase the size of NZ's domestic route table by an order of magnitude...
>>
>> Hello?
>>
>> --
>> Michael
>> ___
>> NZNOG mailing list
>> NZNOG@list.waikato.ac.nz
>> https://list.waikato.ac.nz/mailman/listinfo/nznog
>>


[nznog] Interesting issue with iperf3 and UDP

2017-03-30 Thread Dylan Hall
I stumbled across an issue with iperf3 and its UDP mode recently that I
thought was worth sharing. In short, it's very bursty and, in my opinion,
broken. Use iperf (2.0.x) instead.

The interesting detail:

I recently got UFB at home (200/100 plan) and wanted to put it through its
paces. I did the usual TCP tests and it all looked good, so I decided to
try UDP to look for packet loss. Even with fairly low rates (10-50 Mbps) I
was seeing loss vary from 5% to 50%. I tried a number of different hosts
around NZ and one in the US, and although each reported different amounts
of loss, there was always loss. I ran the tests with "-l 1400" as an option
to force 1400 byte packets. Without this iperf sends 8kB packets that get
fragmented which confuses the loss figures somewhat. I repeated the tests
with iperf rather than iperf3 and everything worked perfectly.
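
A quick sketch of why "-l 1400" matters: an 8kB UDP datagram doesn't fit a
1500-byte Ethernet MTU, so IP fragments it, and losing any single fragment
discards the whole datagram, which muddies the loss figures. The numbers
below (8192-byte payload, 1500-byte MTU, IPv4) are assumptions for
illustration, not values from my captures.

```python
import math

# Assumed figures: 8192-byte iperf payload, 1500-byte MTU, IPv4.
udp_payload = 8192 + 8            # datagram plus 8-byte UDP header
frag_payload = 1500 - 20          # MTU minus 20-byte IPv4 header
frag_payload -= frag_payload % 8  # fragment offsets count in 8-byte units
fragments = math.ceil(udp_payload / frag_payload)
print(fragments)  # 6 IP fragments per datagram; lose any one, lose all 8kB
```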

I focused on one testing host, a 10G-connected server near my ISP, to
minimise the amount of transit network in the way.

The following is a little piece of a packet capture from the sending host
using iperf (2.0.5). The rate was set to 15 Mbps. I've added the Delta
field, which is the time since the previous packet was seen, in microseconds.

23:27:57.979021 IP x.x.x.x.56460 > y.y.y.y.5001: UDP, length 1400 Delta: 746
23:27:57.979766 IP x.x.x.x.56460 > y.y.y.y.5001: UDP, length 1400 Delta: 745
23:27:57.980511 IP x.x.x.x.56460 > y.y.y.y.5001: UDP, length 1400 Delta: 745
23:27:57.981254 IP x.x.x.x.56460 > y.y.y.y.5001: UDP, length 1400 Delta: 743
23:27:57.982001 IP x.x.x.x.56460 > y.y.y.y.5001: UDP, length 1400 Delta: 747
23:27:57.982749 IP x.x.x.x.56460 > y.y.y.y.5001: UDP, length 1400 Delta: 748
23:27:57.983492 IP x.x.x.x.56460 > y.y.y.y.5001: UDP, length 1400 Delta: 743
23:27:57.984238 IP x.x.x.x.56460 > y.y.y.y.5001: UDP, length 1400 Delta: 746
23:27:57.984986 IP x.x.x.x.56460 > y.y.y.y.5001: UDP, length 1400 Delta: 748
23:27:57.985731 IP x.x.x.x.56460 > y.y.y.y.5001: UDP, length 1400 Delta: 745
23:27:57.986478 IP x.x.x.x.56460 > y.y.y.y.5001: UDP, length 1400 Delta: 747

The packets are very evenly spaced.
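
As a sanity check, the deltas can be recomputed from the timestamps and
compared against the gap you'd expect for evenly paced 1400-byte packets at
15 Mbps (a quick sketch, using timestamps copied from the capture above):

```python
from datetime import datetime

def deltas_us(timestamps):
    """Microseconds between consecutive HH:MM:SS.microsecond timestamps."""
    ts = [datetime.strptime(t, "%H:%M:%S.%f") for t in timestamps]
    return [round((b - a).total_seconds() * 1e6) for a, b in zip(ts, ts[1:])]

# First few send times from the iperf (2.0.5) capture above.
stamps = ["23:27:57.979021", "23:27:57.979766", "23:27:57.980511",
          "23:27:57.981254", "23:27:57.982001"]
print(deltas_us(stamps))                   # [745, 745, 743, 747]

# Expected gap: 1400 payload bytes at 15 Mbps.
print(round(1400 * 8 / 15_000_000 * 1e6))  # 747 microseconds
```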

The following is using iperf3 (3.1.3) also at 15 Mbps.

23:28:23.913489 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 8
23:28:23.913496 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 7
23:28:23.913505 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 9
23:28:23.913513 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 8
23:28:23.913520 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 7
23:28:23.913529 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 9
23:28:23.913537 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 8
23:28:24.012445 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta:
98908
23:28:24.012458 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 13
23:28:24.012468 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 10
23:28:24.012475 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 7
23:28:24.012483 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 8
23:28:24.012492 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 9
23:28:24.012499 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 7
23:28:24.012508 IP x.x.x.x.49233 > y.y.y.y.5201: UDP, length 1400 Delta: 9

It appears to send a burst of very closely spaced packets, then take a
break, then repeat. The cycle is about 100ms.

Applying some hopefully correct maths, it appears to send about 187kB of
data in just over 1ms, then rest for almost 99ms. This gives about the
right average rate (15 Mbps), but for that first 1ms it's sending at about
1.4 Gbps.
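
Checking that arithmetic (taking "just over 1ms" as 1.1ms, which is an
estimate rather than a measured figure):

```python
BURST_BYTES = 187_000  # ~187kB per burst, from the capture
BURST_S = 0.0011       # "just over 1ms", assumed to be 1.1ms
CYCLE_S = 0.100        # one burst roughly every 100ms

burst_bps = BURST_BYTES * 8 / BURST_S
avg_bps = BURST_BYTES * 8 / CYCLE_S
print(f"burst ~{burst_bps / 1e9:.1f} Gbps")   # ~1.4 Gbps
print(f"average ~{avg_bps / 1e6:.1f} Mbps")   # ~15.0 Mbps
```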

I assume the loss I'm seeing is either buffers overflowing somewhere in
the path, or the Chorus rate-limit on UFB discarding the packets.

The old version of iperf uses a busy loop in the sender to achieve the very
precise timing for UDP mode, which has the side effect of causing 100% CPU
usage while sending. I wonder if the change in iperf3 is an attempt to make
it more efficient.

Had I Googled the issue at the beginning I would have found this is a known
problem:

https://github.com/esnet/iperf/issues/296
https://github.com/esnet/iperf/issues/386

Hopefully this will save someone else from a couple of hours of confusion :)

Thanks,

Dylan


Re: [nznog] Citylink IX communication

2017-02-27 Thread Dylan Hall
Disclaimer: I worked for CityLink a while back so feel free to consider the
following biased or well informed as you see fit :)

I think one aspect of this discussion that is often overlooked is who
participates in the peering exchanges. It's not just a handful of large
ISPs and content providers.

Take a look at:

http://nzix.net/ape-peers.html
http://nzix.net/wix-peers.html

We have a huge range of participants: universities, schools, government,
data centres, finance, researchers, broadcasters, etc. Very few of these
organisations operate networks as their core business.

If it's so important to convince them all to opt-in to the change what are
we doing to explain why they should make that change? Pointing at an RFC
and yelling loudly hardly seems likely to be accepted as a convincing
argument outside of the networking community.

Personally, I think the approach CityLink is taking seems entirely
reasonable.

I would be keen to have regular updates from CityLink regarding the amount
of uptake and any issues encountered by those making the change that might
help others considering it.

Thanks,

Dylan






On 27 February 2017 at 18:01, Nathan Ward  wrote:

>
> On 27/02/2017, at 5:50 PM, Tim Hoffman  wrote:
>
> *there may exist valid reasons in particular circumstances when the
> particular behavior is acceptable or even useful*
> I don't necessarily disagree that a migration over time may be useful, I
> disagree with the end state of an inconsistent behavior... The key here is
> having a date by which we enforce a consistent behavior….
>
>
> So let’s make up a date and push people towards it. No reason we can’t get
> to a consistent state, right? The Citylink IXes started life as community
> IXes, no reason we can’t make them community IXes again. How about your
> birthday next year?
>
> Perhaps we could talk about ways to track who is opting-in and who isn’t,
> do you have thoughts on how to achieve that? I’m not sure I can think of
> anything technically. Does Citylink intend to publish this information?
> Perhaps we can encourage people to post on the NZNOG list when they change
> their “mode”?
>
> With that information, you could channel your energies in to an email to a
> handful of operators every couple of months. That would be totally
> reasonable to copy to the list.
>
> Having said that, fair point on RFC2119 sir :)
>
>
> Others who thought of it first know who they are :)
>
> --
> Nathan Ward