Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-26 Thread Matthias Tafelmeier
On 09/26/2017 01:35 AM, Dave Täht wrote:
> > sysadmin@luke:~ $ ./irtt -i 10ms -d 30s -l 160 a.b.c.d
> > IRTT to a.b.c.d (a.b.c.d:2112)
> >
> >                        Min     Mean   Median      Max  Stddev
> >                        ---     ----   ------      ---  ------
> >                RTT  11.59ms  15.73ms  14.39ms  49.34ms  3.64ms
> >         send delay    5.9ms   9.23ms    6.8ms  43.16ms  3.48ms
> >      receive delay   5.42ms    6.5ms   7.59ms  17.88ms   937µs
> >
> >      IPDV (jitter)   1.25µs   2.52ms   4.15ms  29.16ms  2.75ms
> >          send IPDV     36ns   2.41ms    595µs  28.84ms  2.69ms
> >       receive IPDV     60ns    734µs   3.55ms   9.57ms   914µs
> >
> >     send call time   56.3µs   70.6µs             236µs  22.7µs
> >        timer error      4ns   11.3µs            9.59ms   187µs
> >  server proc. time   6.93µs   7.62µs            68.1µs  2.23µs
> >
> > duration: 30.2s (wait 148ms)
> > packets received/sent: 2996/2996 (0.00% loss)
> > bytes received/sent: 479360/479360
> > receive/send rate: 127.9 Kbps / 127.9 Kbps
> > timer stats: 4/3000 (0.13%) missed, 0.11% error
> >
> > g711.json.gz
> >

Hm, these efforts could have been synergized w/ mtr, no?

https://linux.die.net/man/8/mtr

-- 
Best regards

Matthias Tafelmeier





Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-22 Thread Toke Høiland-Jørgensen
The owd data is already being collected, so it's fairly trivial to add the 
plots...



Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-22 Thread Dave Täht
Toke Høiland-Jørgensen  writes:

> Oh, and many thanks for your work on irtt, @peteheist! We really needed such a
> tool :)

Thx very much also. I'd really like to get some owd plots out of
flent.





Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-22 Thread Toke Høiland-Jørgensen
Pete Heist  writes:

> So I'm glad! Looking forward to playing with this more soon. Thanks
> for all that refactoring too, looks like it was some real walking
> through walls...

Meh, it needed doing anyway. You just gave me a chance to repay a bit of
technical debt ;)




Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-22 Thread Pete Heist
Oh yeah, probably time for this issue thread to retire. :)

So I'm glad! Looking forward to playing with this more soon. Thanks for all 
that refactoring too, looks like it was some real walking through walls...



Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-22 Thread Toke Høiland-Jørgensen
Oh, and many thanks for your work on irtt, @peteheist! We really needed such a 
tool :)



Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-22 Thread Pete Heist

> On Nov 22, 2017, at 8:49 AM, Toke Høiland-Jørgensen 
>  wrote:
> 
> Pete Heist  writes:
> 
> >> > And this likely takes the mean value of all transactions and
> >> > summarizes it at the end of the interval, then the calculated latency
> >> > was what was plotted in flent?
> >> 
> >> Yup, that's exactly it :)
> >
> > Ok, it’ll be interesting for me to look at the differences between the
> > two going forward. Naturally doing it the udp_rr way would probably
> > result in a smoother line. The other impacts on the test might be fun
> > to explore.
> 
> Well the obvious one is that the netperf measurement uses more bandwidth
> as the latency decreases. Have been meaning to add that to the Flent
> bandwidth graphs, but now I'm not sure I'll even bother :P

True that, it ends up in a pretty tight loop with straight cabled GigE, as in 
my test bed...

> Also, the netperf measurement will stop at the first packet loss (later
> versions added in a timeout parameter that will restart it, but even
> with that we often see UDP latency graphs completely stopping after a
> few seconds of the RRUL test).

Yes, was noticing that before (one of our original motivations).

I know it’s a random connection, but I wonder how this would affect the 
throughput asymmetry I was seeing on the MBPs, for example. Would the 
driver/card grab airtime more aggressively when it’s transmitting many small 
packets, or do those get grouped together anyway? I can test it again when I 
get a chance, but I’m out of my league on the theory side here.



Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-21 Thread Toke Høiland-Jørgensen
Pete Heist  writes:

>> > And this likely takes the mean value of all transactions and
>> > summarizes it at the end of the interval, then the calculated latency
>> > was what was plotted in flent?
>> 
>> Yup, that's exactly it :)
>
> Ok, it’ll be interesting for me to look at the differences between the
> two going forward. Naturally doing it the udp_rr way would probably
> result in a smoother line. The other impacts on the test might be fun
> to explore.

Well the obvious one is that the netperf measurement uses more bandwidth
as the latency decreases. Have been meaning to add that to the Flent
bandwidth graphs, but now I'm not sure I'll even bother :P

Also, the netperf measurement will stop at the first packet loss (later
versions added in a timeout parameter that will restart it, but even
with that we often see UDP latency graphs completely stopping after a
few seconds of the RRUL test).

-Toke




Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-21 Thread Pete Heist

> On Nov 21, 2017, at 10:56 PM, Toke Høiland-Jørgensen 
>  wrote:
> 
> Pete Heist  writes:
> 
> > Trying to confirm how latency was being calculated before with the
> > UDP_RR test. Looking at its raw output, I see that transactions per
> > second is probably used to calculate RTT, with interim results like:
> >
> > ```
> > NETPERF_INTERIM_RESULT[0]=3033.41
> > NETPERF_UNITS[0]=Trans/s
> > NETPERF_INTERVAL[0]=0.200
> > NETPERF_ENDING[0]=1511296777.475
> > ```
> >
> > So RTT = (1 / 3033.41) ~= 330us
> >
> > And this likely takes the mean value of all transactions and
> > summarizes it at the end of the interval, then the calculated latency
> > was what was plotted in flent?
> 
> Yup, that's exactly it :)

Ok, it’ll be interesting for me to look at the differences between the two 
going forward. Naturally doing it the udp_rr way would probably result in a 
smoother line. The other impacts on the test might be fun to explore.



Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-21 Thread Toke Høiland-Jørgensen
Pete Heist  writes:

> Trying to confirm how latency was being calculated before with the
> UDP_RR test. Looking at its raw output, I see that transactions per
> second is probably used to calculate RTT, with interim results like:
>
> ```
> NETPERF_INTERIM_RESULT[0]=3033.41
> NETPERF_UNITS[0]=Trans/s
> NETPERF_INTERVAL[0]=0.200
> NETPERF_ENDING[0]=1511296777.475
> ```
>
> So RTT = (1 / 3033.41) ~= 330us
>
> And this likely takes the mean value of all transactions and
> summarizes it at the end of the interval, then the calculated latency
> was what was plotted in flent?

Yup, that's exactly it :)




Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-21 Thread Pete Heist
Trying to confirm how latency was being calculated before with the UDP_RR test. 
Looking at its raw output, I see that transactions per second is probably used 
to calculate RTT, with interim results like:

```
NETPERF_INTERIM_RESULT[0]=3033.41
NETPERF_UNITS[0]=Trans/s
NETPERF_INTERVAL[0]=0.200
NETPERF_ENDING[0]=1511296777.475
```

So RTT = (1 / 3033.41) ~= 330us

And this likely takes the mean value of all transactions and summarizes it at 
the end of the interval, then the calculated latency was what was plotted in 
flent?
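
For reference, that arithmetic is easy to check (a sketch, not flent code; the
transactions-per-second value is the interim result above):

```
# netperf UDP_RR reports transactions/s; with one request/response pair
# per transaction, mean RTT over the interval is just the reciprocal.
trans_per_sec = 3033.41        # NETPERF_INTERIM_RESULT[0]
rtt = 1.0 / trans_per_sec      # seconds per transaction
print(round(rtt * 1e6))        # -> 330 (µs), matching the estimate above
```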



Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-21 Thread Dave Täht
Pete Heist  writes:

>> On Nov 20, 2017, at 10:44 PM, flent-users  wrote:
>> 
>> A goal for me has been to be able to run Opus at 24 bit, 96kHz, with 2.7ms
>> sampling latency.
>> Actually getting 8 channels of that through a loaded box would be marvelous.
>
> Sounds like a musician. :) If it were CBR, I don’t know if this is a way to
> estimate it:
>
> 2.7ms ~= 370 packets/sec

Well, it might be 8 of those with different tuples.

> @128kbps, 56 bytes / packet (44 data + 12 RTP)
> @256kbps, 99 bytes / packet (87 data + 12 RTP)
>
> Just for fun, a ~256 kbps test between two sites, 50km apart, both using p2p
> WiFi to the Internet. For realtime audio, I guess it’s the maximums that could
> be the biggest issue.

Hah. I didn't say over wifi. That's impossible.


>
> ```
> % ./irtt client -i 2.7ms -l 99 -q -d 10s a.b.c.d
> [Connecting] connecting to a.b.c.d
> [Connected] connected to a.b.c.d:2112
>
>                        Min     Mean   Median      Max  Stddev
>                        ---     ----   ------      ---  ------
>                RTT  10.16ms  15.57ms  14.14ms  71.37ms  4.89ms
>         send delay    4.5ms   8.01ms   6.85ms   33.1ms   3.6ms
>      receive delay   4.99ms   7.56ms   6.93ms  64.86ms  3.05ms
>
>      IPDV (jitter)   1.06µs   2.52ms   2.56ms  56.16ms  2.55ms
>          send IPDV     50ns    2.1ms   1.93ms  25.94ms  2.18ms
>       receive IPDV     49ns   1.14ms    663µs  58.63ms   1.9ms
>
>     send call time   38.2µs   83.2µs           13.46ms   310µs
>        timer error      2ns   44.7µs           18.23ms   620µs
>  server proc. time   33.6µs   47.4µs             242µs  18.1µs
>
> duration: 10.2s (wait 214.1ms)
> packets sent/received: 3647/3644 (0.08% loss)
> server packets received: 3644/3647 (0.08%/0.00% loss up/down)
> bytes sent/received: 361053/360756
> send/receive rate: 288.9 Kbps / 288.7 Kbps
> packet length: 99 bytes
> timer stats: 57/3704 (1.54%) missed, 1.65% error
> ``` 




Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-21 Thread Pete Heist

> On Nov 21, 2017, at 3:53 PM, Toke Høiland-Jørgensen 
>  wrote:
> 
> > Next thing I noticed as for current tests, for rrul_be_nflows, the
> > test completed but only one irtt instance ran (also just saw one
> > connection to the server).
> >
> > % flent rrul_be_nflows --test-parameter upload_streams=8
> > --test-parameter download_streams=8 --socket-stats -l 60 -H $SERVER -p
> > all_scaled --figure-width=10 --figure-height=7.5 -t irtt -o
> > irtt_8flows.png
> 
> Well that's actually to be expected. That test only varies the number of
> TCP parameters; there's always a single ICMP and a single UDP latency
> measurement.

Aha, my bad, I must have never noticed that. I’ll plot some of my older stuff 
too and let you know…

Pete





Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-21 Thread Toke Høiland-Jørgensen
Pete Heist  writes:

>> On Nov 21, 2017, at 11:36 AM, Toke Høiland-Jørgensen 
>>  wrote:
>> 
>> Ha! Epic fail! :D
>> 
>> Well, I only just managed to finish writing the code and unbreaking the
>> CI tests; didn't actually get around to running any tests. I've fixed
>> those two errors, and am running a full test run on my testbed now…
>
> Much better now though! Both rrul_be tests ran fine for me (with and
> without --socket-stats).

Cool. Getting closer. Still a few bugs to fix with the more esoteric
runners, but I'm working on that.

> Next thing I noticed as for current tests, for rrul_be_nflows, the
> test completed but only one irtt instance ran (also just saw one
> connection to the server).
>
> % flent rrul_be_nflows --test-parameter upload_streams=8
> --test-parameter download_streams=8 --socket-stats -l 60 -H $SERVER -p
> all_scaled --figure-width=10 --figure-height=7.5 -t irtt -o
> irtt_8flows.png

Well that's actually to be expected. That test only varies the number of
TCP parameters; there's always a single ICMP and a single UDP latency
measurement.

-Toke




Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-21 Thread Pete Heist

> On Nov 21, 2017, at 11:36 AM, Toke Høiland-Jørgensen 
>  wrote:
> 
> Ha! Epic fail! :D
> 
> Well, I only just managed to finish writing the code and unbreaking the
> CI tests; didn't actually get around to running any tests. I've fixed
> those two errors, and am running a full test run on my testbed now…

Much better now though! Both rrul_be tests ran fine for me (with and without
--socket-stats).

I have a number of .flent.gz files from Jan this year I can try when I get a 
chance. I just deleted thousands of them from my newer (unreleased) tests from 
March or so as I want to re-run them all in my new test bed, but oh well...

Next thing I noticed as for current tests, for rrul_be_nflows, the test 
completed but only one irtt instance ran (also just saw one connection to the 
server).

% flent rrul_be_nflows --test-parameter upload_streams=8 --test-parameter 
download_streams=8 --socket-stats -l 60 -H $SERVER -p all_scaled 
--figure-width=10 --figure-height=7.5 -t irtt -o irtt_8flows.png





Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-21 Thread Toke Høiland-Jørgensen
Pete Heist  writes:

>> On Nov 20, 2017, at 9:58 PM, Toke Høiland-Jørgensen 
>>  wrote:
>> Okay, testable code in the runner-refactor branch.
>> 
>> Ended up doing a fairly involved refactoring of how runners work with
>> data; which is good, as the new way to structure things makes a lot more
>> sense in general; but it did mean I had to change the data format, so
>> quite a few places this can break. So testing appreciated, both for
>> running new tests, and for plotting old data files.
>> 
> Awesome, I’m sure it could take some shaking out. I tried an rrul_be
> test on the runner-refactor branch...

Ha! Epic fail! :D

Well, I only just managed to finish writing the code and unbreaking the
CI tests; didn't actually get around to running any tests. I've fixed
those two errors, and am running a full test run on my testbed now...

-Toke




Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-21 Thread Pete Heist

> On Nov 20, 2017, at 10:14 PM, flent-users  wrote:
> 
> Winstein plot of latency variance? It doesn't get denser, it gets darker.
> 
> Packet loss vs throughput?

Not sure what that is exactly. Something like from July 2014 on this page?

https://cs.stanford.edu/~keithw/ 





Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-20 Thread Pete Heist

> On Nov 20, 2017, at 9:58 PM, Toke Høiland-Jørgensen 
>  wrote:
> Okay, testable code in the runner-refactor branch.
> 
> Ended up doing a fairly involved refactoring of how runners work with
> data; which is good, as the new way to structure things makes a lot more
> sense in general; but it did mean I had to change the data format, so
> quite a few places this can break. So testing appreciated, both for
> running new tests, and for plotting old data files.
> 
Awesome, I’m sure it could take some shaking out. I tried an rrul_be test on 
the runner-refactor branch...

```
% flent rrul_be --socket-stats -l 60 -H 10.72.0.231 -p all_scaled --figure-width=10 --figure-height=7.5 -t new_runner_test -o new_runner_test.png
Started Flent 1.1.1-git-b958d01 using Python 2.7.13.
Starting rrul_be test. Expected run time: 70 seconds.
Traceback (most recent call last):
  File "/usr/local/bin/flent", line 11, in <module>
    load_entry_point('flent===1.1.1-git-b958d01', 'console_scripts', 'flent')()
  File "/usr/local/lib/python2.7/dist-packages/flent-1.1.1_git_b958d01-py2.7.egg/flent/__init__.py", line 59, in run_flent
    b.run()
  File "/usr/local/lib/python2.7/dist-packages/flent-1.1.1_git_b958d01-py2.7.egg/flent/batch.py", line 609, in run
    return self.run_test(self.settings, self.settings.DATA_DIR, True)
  File "/usr/local/lib/python2.7/dist-packages/flent-1.1.1_git_b958d01-py2.7.egg/flent/batch.py", line 508, in run_test
    res = self.agg.postprocess(self.agg.aggregate(res))
  File "/usr/local/lib/python2.7/dist-packages/flent-1.1.1_git_b958d01-py2.7.egg/flent/aggregators.py", line 232, in aggregate
    measurements, metadata, raw_values = self.collect()
  File "/usr/local/lib/python2.7/dist-packages/flent-1.1.1_git_b958d01-py2.7.egg/flent/aggregators.py", line 120, in collect
    t.check()
  File "/usr/local/lib/python2.7/dist-packages/flent-1.1.1_git_b958d01-py2.7.egg/flent/runners.py", line 964, in check
    ip_version=args['ip_version'])
  File "/usr/local/lib/python2.7/dist-packages/flent-1.1.1_git_b958d01-py2.7.egg/flent/runners.py", line 232, in add_child
    c.check()
  File "/usr/local/lib/python2.7/dist-packages/flent-1.1.1_git_b958d01-py2.7.egg/flent/runners.py", line 1652, in check
    super(SsRunner, self).check()
  File "/usr/local/lib/python2.7/dist-packages/flent-1.1.1_git_b958d01-py2.7.egg/flent/runners.py", line 393, in check
    self.metadata['UNITS'] = self.units
AttributeError: 'SsRunner' object has no attribute 'units'
```

Without --socket-stats:

```
% flent rrul_be -l 60 -H 10.72.0.231 -p all_scaled --figure-width=10 --figure-height=7.5 -t new_runner_test -o new_runner_test.png
Started Flent 1.1.1-git-b958d01 using Python 2.7.13.
Starting rrul_be test. Expected run time: 70 seconds.
Traceback (most recent call last):
  File "/usr/local/bin/flent", line 11, in <module>
    load_entry_point('flent===1.1.1-git-b958d01', 'console_scripts', 'flent')()
  File "/usr/local/lib/python2.7/dist-packages/flent-1.1.1_git_b958d01-py2.7.egg/flent/__init__.py", line 59, in run_flent
    b.run()
  File "/usr/local/lib/python2.7/dist-packages/flent-1.1.1_git_b958d01-py2.7.egg/flent/batch.py", line 609, in run
    return self.run_test(self.settings, self.settings.DATA_DIR, True)
  File "/usr/local/lib/python2.7/dist-packages/flent-1.1.1_git_b958d01-py2.7.egg/flent/batch.py", line 508, in run_test
    res = self.agg.postprocess(self.agg.aggregate(res))
  File "/usr/local/lib/python2.7/dist-packages/flent-1.1.1_git_b958d01-py2.7.egg/flent/aggregators.py", line 232, in aggregate
    measurements, metadata, raw_values = self.collect()
  File "/usr/local/lib/python2.7/dist-packages/flent-1.1.1_git_b958d01-py2.7.egg/flent/aggregators.py", line 120, in collect
    t.check()
  File "/usr/local/lib/python2.7/dist-packages/flent-1.1.1_git_b958d01-py2.7.egg/flent/runners.py", line 1458, in check
    delay=self.delay, remote_host=self.remote_host,
AttributeError: 'UdpRttRunner' object has no attribute 'delay'

```





Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-20 Thread Dave Taht
A goal for me has been to be able to run Opus at 24 bit, 96kHz, with 2.7ms
sampling latency.
Actually getting 8 channels of that through a loaded box would be mahvelous.



On Mon, Nov 20, 2017 at 1:14 PM, Dave Taht  wrote:

>
>
> On Mon, Nov 20, 2017 at 5:21 AM, Toke Høiland-Jørgensen <
> notificati...@github.com> wrote:
>
>> Pete Heist  writes:
>>
>> >> On Nov 20, 2017, at 1:11 PM, Toke Høiland-Jørgensen <
>> notificati...@github.com> wrote:
>> >>
>> >> Pete Heist  writes:
>> >>
>> >> > G.711 can be simulated today with `-i 20ms -l 172 -fill rand
>> >> > -fillall`. I do this test pretty often, and I think it would be a
>> good
>> >> > default voip test.
>> >>
>> >> The problem with this is that it also changes the sampling rate. I
>> don't
>> >> necessarily want to plot the latency every 20ms, so I'd have to
>> >> compensate for that in the Flent plotter somehow. Also, a better way to
>> >> deal with loss would be needed.
>> >
>> >
>> > I wondered if/when this would come up… Why not plot the latency every
>> > 20ms, too dense?
>>
>> For the current plot type (where data points are connected by lines),
>> certainly. It would probably be possible to plot denser data sets by a
>> point cloud type plot, but that would make denser data series harder to
>> read.
>
>
> Winstein plot of latency variance? It doesn't get denser, it gets darker.
>
> Packet loss vs throughput?
>
>
>> > I guess even if not, eventually at a low enough interval the round
>> > trip and plotting intervals would need to be decoupled, no matter what
>> > plot type is used.
>>
>> Yeah, exactly.
>>
>> > If we want to minimize flent changes, irtt could optionally produce a
>> > `round_trip_snapshots` (name TBD) array in the json with elements
>> > created at a specified interval (`-si duration` or similar) that would
>> > summarize the data from multiple round trips. For each snapshot, there
>> > would be no timestamps, but the start and end seqnos would be there
>> > (if needed), mean delays and ipdv, counts (or percentages?) of lost,
>> > lost_up or lost_down, etc. I’d need to spec this out, but would
>> > something like this help?
>>
>> Hmm, seeing as we probably want to keep all the data points in the Flent
>> data file anyway, I think we might as well do the sub-sampling in Flent.
>> Just thinning the plots is a few lines of numpy code; just need to
>> figure out a good place to apply it.
>>
>> Handling loss is another matter, but one that I need to deal with
>> anyway. Right now I'm just throwing away lost data points entirely,
>> which loses the lost_{up,down} information. Will fix that and also
>> figure out the right way to indicate losses.
>>
>
> Groovy.
>
>
>> -Toke





Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-20 Thread Dave Taht
On Mon, Nov 20, 2017 at 5:21 AM, Toke Høiland-Jørgensen <
notificati...@github.com> wrote:

> Pete Heist  writes:
>
> >> On Nov 20, 2017, at 1:11 PM, Toke Høiland-Jørgensen <
> notificati...@github.com> wrote:
> >>
> >> Pete Heist  writes:
> >>
> >> > G.711 can be simulated today with `-i 20ms -l 172 -fill rand
> >> > -fillall`. I do this test pretty often, and I think it would be a good
> >> > default voip test.
> >>
> >> The problem with this is that it also changes the sampling rate. I don't
> >> necessarily want to plot the latency every 20ms, so I'd have to
> >> compensate for that in the Flent plotter somehow. Also, a better way to
> >> deal with loss would be needed.
> >
> >
> > I wondered if/when this would come up… Why not plot the latency every
> > 20ms, too dense?
>
> For the current plot type (where data points are connected by lines),
> certainly. It would probably be possible to plot denser data sets by a
> point cloud type plot, but that would make denser data series harder to
> read.


Winstein plot of latency variance? It doesn't get denser, it gets darker.

Packet loss vs throughput?


> > I guess even if not, eventually at a low enough interval the round
> > trip and plotting intervals would need to be decoupled, no matter what
> > plot type is used.
>
> Yeah, exactly.
>
> > If we want to minimize flent changes, irtt could optionally produce a
> > `round_trip_snapshots` (name TBD) array in the json with elements
> > created at a specified interval (`-si duration` or similar) that would
> > summarize the data from multiple round trips. For each snapshot, there
> > would be no timestamps, but the start and end seqnos would be there
> > (if needed), mean delays and ipdv, counts (or percentages?) of lost,
> > lost_up or lost_down, etc. I’d need to spec this out, but would
> > something like this help?
>
> Hmm, seeing as we probably want to keep all the data points in the Flent
> data file anyway, I think we might as well do the sub-sampling in Flent.
> Just thinning the plots is a few lines of numpy code; just need to
> figure out a good place to apply it.
>
> Handling loss is another matter, but one that I need to deal with
> anyway. Right now I'm just throwing away lost data points entirely,
> which loses the lost_{up,down} information. Will fix that and also
> figure out the right way to indicate losses.
>

Groovy.


> -Toke




Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-20 Thread Toke Høiland-Jørgensen
Okay, testable code in the runner-refactor branch.

Ended up doing a fairly involved refactoring of how runners work with
data; which is good, as the new way to structure things makes a lot more
sense in general; but it did mean I had to change the data format, so
quite a few places this can break. So testing appreciated, both for
running new tests, and for plotting old data files.



Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-20 Thread Pete Heist
:) Gut laugh, I know that feeling sometimes...



Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-20 Thread Toke Høiland-Jørgensen

> Really looking forward to it!

Working on it. Turned out to need a bit of refactoring. This is me
currently: https://i.imgur.com/t0XHtgJ.gif

-Toke




Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-20 Thread Pete Heist

> On Nov 20, 2017, at 2:21 PM, Toke Høiland-Jørgensen 
>  wrote:
> 
> Pete Heist  writes:
> 
> >> On Nov 20, 2017, at 1:11 PM, Toke Høiland-Jørgensen 
> >>  wrote:
> 
> > I wondered if/when this would come up… Why not plot the latency every
> > 20ms, too dense?
> 
> For the current plot type (where data points are connected by lines),
> certainly. It would probably be possible to plot denser data sets by a
> point cloud type plot, but that would make denser data series harder to
> read.

Yeah, thought of the same, or some area fill between min and max, or 98th 
percentile values, or something.
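
(As an illustration of the point cloud idea: alpha-blended markers get darker
where samples overlap, instead of the line plot turning into a solid band. A
matplotlib sketch with made-up data, not Flent code:)

```
import numpy as np
import matplotlib.pyplot as plt

# Fake dense samples: one RTT value every 20 ms for 60 s.
t = np.arange(0, 60, 0.020)
rtt = 15 + np.random.exponential(2, t.size)

# Translucent markers: overlap shows up as darkness, not density.
plt.plot(t, rtt, ".", markersize=2, alpha=0.2, color="black")
plt.xlabel("time (s)")
plt.ylabel("RTT (ms)")
plt.show()
```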

> Hmm, seeing as we probably want to keep all the data points in the Flent
> data file anyway, I think we might as well do the sub-sampling in Flent.
> Just thinning the plots is a few lines of numpy code; just need to
> figure out a good place to apply it.

Didn’t think of that (keeping all data points anyway), but it really makes more 
sense. At first I thought numpy was a new street-talkin’ adjective (as in, 
that’s some really numpy code). I see: NumPy. :)
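
(And for the record, the "few lines of numpy" might look roughly like this:
bucket the samples by plotting interval and average each bucket. A sketch with
made-up array names, not actual Flent internals:)

```
import numpy as np

def thin(t, y, interval):
    # One mean value per plotting interval; assumes no empty buckets.
    buckets = np.floor((t - t[0]) / interval).astype(int)
    n = np.bincount(buckets)
    return (np.bincount(buckets, weights=t) / n,
            np.bincount(buckets, weights=y) / n)

# e.g. thin 20 ms samples down to one point per 200 ms:
t = np.arange(0, 10, 0.020)
t2, y2 = thin(t, np.random.uniform(10, 20, t.size), 0.200)
```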

> Handling loss is another matter, but one that I need to deal with
> anyway. Right now I'm just throwing away lost data points entirely,
> which loses the lost_{up,down} information. Will fix that and also
> figure out the right way to indicate losses.

Really looking forward to it!



Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-20 Thread Pete Heist

> On Nov 20, 2017, at 1:11 PM, Toke Høiland-Jørgensen 
>  wrote:
> 
> Pete Heist  writes:
> 
> > G.711 can be simulated today with `-i 20ms -l 172 -fill rand
> > -fillall`. I do this test pretty often, and I think it would be a good
> > default voip test.
> 
> The problem with this is that it also changes the sampling rate. I don't
> necessarily want to plot the latency every 20ms, so I'd have to
> compensate for that in the Flent plotter somehow. Also, a better way to
> deal with loss would be needed.


I wondered if/when this would come up… Why not plot the latency every 20ms, too 
dense? I guess even if not, eventually at a low enough interval the round trip 
and plotting intervals would need to be decoupled, no matter what plot type is 
used.

If we want to minimize flent changes, irtt could optionally produce a 
`round_trip_snapshots` (name TBD) array in the json with elements created at a 
specified interval (`-si duration` or similar) that would summarize the data 
from multiple round trips. For each snapshot, there would be no timestamps, but 
the start and end seqnos would be there (if needed), mean delays and ipdv, 
counts (or percentages?) of lost, lost_up or lost_down, etc. I’d need to spec 
this out, but would something like this help?
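
(Purely as a strawman of what one such snapshot element might carry; this was
never spec'd, and every field name below is made up:)

```
# Hypothetical shape of one round_trip_snapshots element:
snapshot = {
    "start_seqno": 0,
    "end_seqno": 49,                  # round trips covered by this snapshot
    "mean_rtt_ns": 15730000,
    "mean_send_delay_ns": 9230000,
    "mean_receive_delay_ns": 6500000,
    "mean_ipdv_ns": 2520000,
    "lost": 1,                        # counts, per the proposal above
    "lost_up": 1,
    "lost_down": 0,
}
```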





Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-20 Thread Toke Høiland-Jørgensen
Pete Heist  writes:

> G.711 can be simulated today with `-i 20ms -l 172 -fill rand
> -fillall`. I do this test pretty often, and I think it would be a good
> default voip test.

The problem with this is that it also changes the sampling rate. I don't
necessarily want to plot the latency every 20ms, so I'd have to
compensate for that in the Flent plotter somehow. Also, a better way to
deal with loss would be needed.

-Toke




Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-19 Thread Pete Heist
G.711 can be simulated today with `-i 20ms -l 172 -fill rand -fillall`. I do 
this test pretty often, and I think it would be a good default voip test. The 
reason for the 172 vs 160 is the addition of a 12 byte RTP header, which is 
present in the wireshark trace of a SIP G.711 call:

https://wiki.wireshark.org/SampleCaptures?action=AttachFile&do=get&target=SIP_CALL_RTP_G711

GSM is older now and I'm not sure how much it's still used over the Internet, 
but since it has a payload size of 33 bytes(?), some statistics would have to 
be sacrificed. I'd give up server received stats and dual timestamps, so `-i 
20ms -l 33 -rs none -ts midpoint` is a start. Not sure about additional headers.

It should be possible to simulate Opus in CBR mode in a similar way. But Opus also 
supports VBR, which would require varying packet sizes, which irtt can't yet do 
(plus, this would invalidate or at least pollute the IPDV calculation).
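
(The arithmetic behind those intervals and lengths, as a sketch: payload bytes
per packet come from the codec bitrate times the packet interval, plus the
12-byte RTP header discussed above. The values match the figures in this
thread:)

```
import math

def irtt_length(bitrate_bps, interval_s, rtp_header=12):
    # Bytes per packet for a CBR codec at the given packet interval.
    return math.ceil(bitrate_bps / 8 * interval_s) + rtp_header

print(irtt_length(64000, 0.020))    # G.711: 160 + 12 = 172 (-i 20ms -l 172)
print(irtt_length(128000, 0.0027))  # Opus @128kbps: 44 + 12 = 56
print(irtt_length(256000, 0.0027))  # Opus @256kbps: 87 + 12 = 99
```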



Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-18 Thread Dave Taht
Of all the codecs out there, I like opus best. (It's deeply in webrtc).

g711, and gsm, used to be the most common, but I've been out of this field
a long time.


Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-17 Thread Pete Heist
The `-n` and `-timeouts` parameters have been added to the client, and are 
documented in the usage. Quick examples:

```
tron:~/src/github.com/peteheist/irtt:% ./irtt client -timeouts 250ms,500ms,1s,2s -n 127.0.0.2
[Connecting] connecting to 127.0.0.2
Error: no reply from server
tron:~/src/github.com/peteheist/irtt:% echo $?
1
tron:~/src/github.com/peteheist/irtt:% ./irtt client -timeouts 250ms,500ms,1s,2s -n 127.0.0.1
[Connecting] connecting to 127.0.0.1
[Connected] connected to 127.0.0.1:2112
[NoTest] skipping test at user request
tron:~/src/github.com/peteheist/irtt:% echo $?
0
```

I don't know how aggressive you want to go on open packet timeouts, but I set a 
minimum at 200ms so users won't be as tempted to abuse public servers. Default 
Linux TCP syn timeout looks to be hardcoded at 3s,6s,12s, etc. I think our 
default of 1s,2s,4s,8s is fine in this day and age. In flent, I think 
250ms,500ms,1s,2s would be one way of being pretty sure whether a server is 
running or not in a reasonable amount of time (3.75s), but I don't know what 
the actual time constraints are.
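
(A quick check of those worst-case waits, summing each schedule; values are the
ones quoted above:)

```
# Worst case to conclude "no server": the sum of all handshake waits.
print(sum([0.25, 0.5, 1, 2]))   # proposed flent schedule -> 3.75 s
print(sum([1, 2, 4, 8]))        # irtt default schedule   -> 15 s
```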

I took a quick look at the irtt runner in flent. Looks good, my only comment is 
that `-fill rand` and `-fillall` aren't necessary because there's no payload 
(whose length would be specified with `-l`). Seeing that though made me add a 
micro-optimization to not call Read with an empty slice on the Filler when 
there's no payload requested. So if you want to leave those params in 
anticipation of using `-l` later, it's probably fine.

That makes me think, any reason we can't use irtt for the voip tests? There 
could be tests simulating some different codecs with different intervals and 
payload lengths, if we wanted:

https://www.cisco.com/c/en/us/support/docs/voice/voice-quality/7934-bwidth-consume.html




Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-16 Thread Pete Heist

> On Nov 16, 2017, at 6:18 PM, Dave Täht  wrote:
> 
> Pete Heist  writes:
> 
> > Measurement-wise, I see similar results with netperf vs irtt in my Gbit LAN 
> > with
> > BQL and 'cake besteffort lan' test, with irtt having a slightly higher mean 
> > and
> > maximums, as might be expected. This is something I might be able to 
> > improve.
> > I'll try playing with chrt when I have time.
> >
> > On the positive side(?), with irtt, I don't see the 'latency locking' effect
> > that I see with netperf, where for whatever reason, certain flows would stay
> > more fixed in some position relative to the mean. Also, in these runs, the
> > download throughput was somewhat less with netperf, but not with irtt.
> 
> My guess is you are seeing the difference between scheduling netperf and
> irtt in the cpus, and their effect on cache behavior, with the irtt
> version having a larger footprint and skewing the runs of netperf a tiny
> bit.

To be clear, it was the netperf UDP_RR version that showed the slightly lower 
download throughput. When irtt was running, it did not seem to affect total TCP 
throughput in either direction. In any case, after doing more runs, I see 
things can really differ from run to run with either tool in use, so I don’t 
want to read into this too much.

After more runs, I do see the general trend that the mean of netperf’s UDP_RR 
RTTs tend to be slightly less, and irtt tends to not affect the TCP flows as 
much. I’m going to have to do a _lot_ of runs and save all the results to say 
anything more statistically significant.

I did not find that "sudo chrt -r 99 ./irtt server” or locking goroutines to 
threads with “irtt server -thread” or a combination of the two had any 
immediately measurable impact.

We know that when Go makes a system call, it has a higher overhead than the 
equivalent call from native code due to its scheduler. We trade this off for 
other things. I think I’ll get in touch with the runtime team to see what if 
any optimizations can be made there, since they’re gathering input for what’s 
important to people in Go 2...



Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-16 Thread Toke Høiland-Jørgensen


On 16 November 2017 18:20:16 CET, "Dave Täht"  wrote:
>Toke Høiland-Jørgensen  writes:
>
>> Pete Heist  writes:
>>
>>> On the positive side(?), with irtt, I don't see the 'latency
>locking'
>>> effect that I see with netperf, where for whatever reason, certain
>>> flows would stay more fixed in some position relative to the mean.
>>> Also, in these runs, the download throughput was somewhat less with
>>> netperf, but not with irtt.
>>
>> Yeah, one of the issues with the netperf UDP_RR test is that it uses
>> more bandwidth the lower the latency, because it really measures
>> "transactions per second" which Flent then converts to RTT. That is
>> probably also the reason for the 'locking' behaviour...
>
>The rrul spec was for isochronous behavior. Would not mind a rrulv2
>test
>that did that using irtt.

I don't really see any reason to keep the netperf UDP_RR behaviour for anything 
other than a fallback. So once I'm done with the integration, RRUL would just 
switch to isochronous behaviour everywhere whenever irtt is available...

-Toke




Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-16 Thread Toke Høiland-Jørgensen
Pete Heist  writes:

>> On Nov 16, 2017, at 2:07 PM, Toke Høiland-Jørgensen 
>>  wrote:
>> 
>> Pete Heist  writes:
>> 
>> >> On Nov 16, 2017, at 1:48 PM, Toke Høiland-Jørgensen 
>> >>  wrote:
>> >> 
>> >> > The handshake takes up to 15 seconds to complete (delays of 1, 2, 4
>> >> > and 8 seconds waiting for a reply), so not having irtt on the server
>> >> > will mean a 15 second wait. Do you think it’s ok to keep that fixed?
>> >> 
>> >> Hmm, waiting 15 seconds before starting a test is probably a bit much.
>> >> For the test, I'd say don't retransmit and make the wait configurable?
>> >
>> > How about re-transmit but once per second with a flag for the number
>> > of times to try, plus I’d probably limit it to some maximum. If
>> > there’s no retransmit you might see fallbacks due to packet loss more
>> > than you’d want.
>> 
>> Sure, that works. Or make the interval a flag as well, maybe?
>
> Yeah, I was thinking of making the handshake waits configurable
> anyway, so I’ll probably do that and I think it will be flexible
> enough for what’s needed. Will update when it’s ready…

Cool :)

-Toke




Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-16 Thread Toke Høiland-Jørgensen
Pete Heist  writes:

>> On Nov 16, 2017, at 1:48 PM, Toke Høiland-Jørgensen 
>>  wrote:
>> 
>> > The handshake takes up to 15 seconds to complete (delays of 1, 2, 4
>> > and 8 seconds waiting for a reply), so not having irtt on the server
>> > will mean a 15 second wait. Do you think it’s ok to keep that fixed?
>> 
>> Hmm, waiting 15 seconds before starting a test is probably a bit much.
>> For the test, I'd say don't retransmit and make the wait configurable?
>
> How about re-transmit but once per second with a flag for the number
> of times to try, plus I’d probably limit it to some maximum. If
> there’s no retransmit you might see fallbacks due to packet loss more
> than you’d want.

Sure, that works. Or make the interval a flag as well, maybe?

-Toke




Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-16 Thread Pete Heist

> On Nov 16, 2017, at 1:48 PM, Toke Høiland-Jørgensen 
>  wrote:
> 
> > The handshake takes up to 15 seconds to complete (delays of 1, 2, 4
> > and 8 seconds waiting for a reply), so not having irtt on the server
> > will mean a 15 second wait. Do you think it’s ok to keep that fixed?
> 
> Hmm, waiting 15 seconds before starting a test is probably a bit much.
> For the test, I'd say don't retransmit and make the wait configurable?

How about re-transmit but once per second with a flag for the number of times 
to try, plus I’d probably limit it to some maximum. If there’s no retransmit 
you might see fallbacks due to packet loss more than you’d want.



Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-16 Thread Pete Heist

> On Nov 16, 2017, at 1:15 PM, Toke Høiland-Jørgensen 
>  wrote:
> 
> Meh, irtt's functionality is basically a superset of netperf's UDP_RR.
> So automatically picking irtt if available and a fallback is the right
> thing to do, I think. I'd just add the plots everywhere, and if OWD data
> is not available, those plots would just be empty. Same thing we do for
> TCP window stats currently. Longer term, maybe hiding the plots entirely
> when there is no data is better, but for now empty plots are fine.

I see, that does keep it simpler.

> > Personally I don't think it's necessary to check for server-side
> > support. If someone is specifying they want to use a particular tool,
> > I think they're declaring that it's available, as it is today with
> > netperf / netserver.
> 
> Yes, but if we do automatic detection with a preference we could get
> into the situation where irtt exists on the client but the test is being
> run against a server that doesn't have it. This is especially likely to
> happen before the *.netperf.bufferbloat.net servers have irtt deployed.
> 
> Could you be persuaded to add a 'check_server' action to irtt? Something
> that just does the handshake and doesn't run any more tests other than
> that. Then we could have Flent call that to verify that irtt is
> usable...

Sure, I’ll add that (or something similarly named). Should get to it today or 
tomorrow. Meanwhile you’re right that something like this works; check for a 
return code of 0 to mean success:

irtt client -i 1ms -d 1ms -wait 1ms a.b.c.d

The handshake takes up to 15 seconds to complete (delays of 1, 2, 4 and 8 
seconds waiting for a reply), so not having irtt on the server will mean a 15 
second wait. Do you think it’s ok to keep that fixed?





Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-12 Thread Pete Heist
Cool, so far your namespaces scripts seem to work fine for me on 4.9.0 on the 
APU2. I tried an irtt run once with namespaces and once straight to the local 
adapter. There doesn't seem to be anything disqualifying in these results. In 
fact they look remarkably similar (took both results from the second run).

Things will probably be different with tests under load, so I added to my list 
to try some Flent runs. Maybe I could compare rrul_be on Gbit LAN vs simulated 
Gbit LAN?

 Local adapter

```
sysadmin@apu2a:~$ ./irtt client -i 1ms -d 10s -q localhost
[Connecting] connecting to localhost
[Connected] connected to 127.0.0.1:2112

                        Min     Mean  Median     Max  Stddev
                        ---     ----  ------     ---  ------
                RTT   136µs    194µs   189µs  1.23ms    19µs
         send delay  67.3µs    104µs  99.8µs   316µs  11.5µs
      receive delay  57.7µs   89.7µs  89.6µs  1.12ms  13.5µs

      IPDV (jitter)      0s   12.5µs  8.95µs  1.03ms  17.8µs
          send IPDV     4ns   8.12µs  5.15µs   205µs  8.69µs
       receive IPDV     2ns   8.82µs  6.02µs  1.06ms  16.6µs

     send call time  35.4µs   50.4µs           240µs  3.57µs
        timer error     2ns   4.49µs          99.9µs  5.15µs
  server proc. time  12.1µs   18.8µs           132µs  3.65µs

duration: 10s (wait 3.7ms)
   packets sent/received: 1/1 (0.00% loss)
 server packets received: 1/1 (0.00%/0.00% upstream/downstream loss)
 bytes sent/received: 60/60
   send/receive rate: 480.0 Kbps / 480.1 Kbps
   packet length: 60 bytes
 timer stats: 0/1 (0.00%) missed, 0.45% error
```

 Namespaces

```
root@apu2a:/home/sysadmin/src/veth# ip netns exec client /home/sysadmin/irtt 
client -i 1ms -d 10s -q 10.10.2.2
[Connecting] connecting to 10.10.2.2
[Connected] connected to 10.10.2.2:2112

                        Min     Mean  Median     Max  Stddev
                        ---     ----  ------     ---  ------
                RTT   135µs    193µs   189µs   290µs  15.3µs
         send delay  68.3µs    104µs  99.7µs   216µs  10.9µs
      receive delay  55.7µs   89.2µs  89.8µs   185µs  9.33µs

      IPDV (jitter)     1ns   11.2µs  7.81µs   118µs  11.5µs
          send IPDV     2ns   7.43µs  4.54µs   115µs  8.63µs
       receive IPDV      0s   8.29µs   5.5µs   120µs  8.08µs

     send call time  35.9µs   51.3µs           210µs  3.79µs
        timer error      0s   8.52µs           110µs  10.2µs
  server proc. time  11.7µs   18.2µs          66.5µs  4.11µs

duration: 10s (wait 869µs)
   packets sent/received: 1/1 (0.00% loss)
 server packets received: 1/1 (0.00%/0.00% upstream/downstream loss)
 bytes sent/received: 60/60
   send/receive rate: 480.0 Kbps / 480.1 Kbps
   packet length: 60 bytes
 timer stats: 0/1 (0.00%) missed, 0.85% error
```

 Local Adapter, SCHED_RR

```
sysadmin@apu2a:~$ sudo chrt -r 99 ./irtt client -i 1ms -d 10s -q localhost
[Connecting] connecting to localhost
[Connected] connected to 127.0.0.1:2112

                        Min     Mean  Median     Max  Stddev
                        ---     ----  ------     ---  ------
                RTT   133µs    181µs   177µs   286µs  12.2µs
         send delay  63.4µs   93.6µs  90.1µs   189µs  10.1µs
      receive delay  62.1µs   87.3µs  86.3µs   145µs  5.01µs

      IPDV (jitter)     2ns   7.82µs  5.08µs   111µs  8.38µs
          send IPDV      0s   6.62µs  3.86µs   102µs  7.71µs
       receive IPDV      0s   3.94µs  2.77µs  69.3µs  4.47µs

     send call time  34.4µs   47.7µs          78.6µs  2.85µs
        timer error     1ns    3.1µs           112µs  3.71µs
  server proc. time  12.5µs     18µs          53.2µs  3.58µs

duration: 10s (wait 858µs)
   packets sent/received: 1/1 (0.00% loss)
 server packets received: 1/1 (0.00%/0.00% upstream/downstream loss)
 bytes sent/received: 60/60
   send/receive rate: 480.0 Kbps / 480.0 Kbps
   packet length: 60 bytes
 timer stats: 0/1 (0.00%) missed, 0.31% error
```

> I am running with CONFIG_HZ_1000 in the kernel.

On the APU2s, I'm currently using the default of CONFIG_HZ_250.

> An strace might be revealing as to the syscalls you are making and could
> try to optimize out. sar can show the context switches...
>
> (I am away from desk, and can do these things, too, when I get back to
> it, but enjoy teaching folk to fish and then eating the results. :))

Ok, put on my list to try...

> OSX may well make available a harder timer system than what go uses,
> since it is so often 

Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-10 Thread Dave Täht
Pete Heist  writes:

> I really like these PCEngines APU2 boards, and PTP HW timestamps. Now Peter,
> back to work... :)

What kernel is this, btw? A *lot* of useful stuff just landed in
net-next for network namespaces, which may mean I can try to validate
your results in emulation, also. My primitive jitter result (eyeballing
the packet captures of some tcp traces) in a network of 4 virtualized
namespaces was 2-6us, but that's hardly trustable.

Knowing that basic measurement noise in your setup is < 107us is quite
helpful. I wonder what the sources are.

>
> sysadmin@apu2a:~$ ./irtt client -i 1ms -d 10s -q 10.9.0.2
> [Connecting] connecting to 10.9.0.2
> [Connected] connected to 10.9.0.2:2112
>
>                        Min     Mean  Median     Max  Stddev
>                        ---     ----  ------     ---  ------
>                RTT   237µs    270µs   268µs   366µs    10µs
>         send delay   119µs    135µs   134µs   226µs  7.34µs
>      receive delay   113µs    135µs   134µs   227µs  6.25µs
>
>      IPDV (jitter)     1ns   9.47µs  6.32µs   107µs  9.86µs
>          send IPDV      0s   6.96µs  4.71µs  90.1µs  7.43µs
>       receive IPDV      0s   4.92µs  2.61µs  92.5µs  6.84µs
> ...




Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-11-10 Thread Pete Heist
I really like these PCEngines APU2 boards, and PTP HW timestamps. Now Peter, 
back to work... :)

```
sysadmin@apu2a:~$ ./irtt client -i 1ms -d 10s -q 10.9.0.2
[Connecting] connecting to 10.9.0.2
[Connected] connected to 10.9.0.2:2112

                        Min     Mean  Median     Max  Stddev
                        ---     ----  ------     ---  ------
                RTT   237µs    270µs   268µs   366µs    10µs
         send delay   119µs    135µs   134µs   226µs  7.34µs
      receive delay   113µs    135µs   134µs   227µs  6.25µs

      IPDV (jitter)     1ns   9.47µs  6.32µs   107µs  9.86µs
          send IPDV      0s   6.96µs  4.71µs  90.1µs  7.43µs
       receive IPDV      0s   4.92µs  2.61µs  92.5µs  6.84µs
...
```



Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-10-31 Thread Pete Heist
I should have another block of time next week to finish the upstream vs 
downstream packet loss stats, then after that could be a good time for Flent 
integration. Need my help for it? It would probably take me longer to get into 
the Flent code and I'm not much of a Pythonista, but I could try if it's needed.



Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-10-17 Thread Pete Heist

> On Oct 17, 2017, at 7:18 PM, Dave Täht  wrote:
> I tried rust out at about the same time esr did (in fact, sped up his go code 
> by threading it better). Didn't like it, either.
> 
> ( http://esr.ibiblio.org/?p=7294  )
> 
> Honestly, I don't know what to do about having better R/T capable code in a 
> better language. All I can do is point out things like temporarily disabling 
> garbage collection as means of getting closer, promoting cpus that could 
> context switch better, and buying old cars that don't have bluetooth support.
> 

Interesting commentary- not into bluetooth? :)

I like Go’s focus on code readability, because it makes coding more fun. A few 
things I don’t love about it: that there are multiple ways to declare 
variables, that I seem to spend more time plumbing enums than I want to, that 
some of the standard libs are a little bare bones. But for me there’s more to 
like than not. For some things though, I don’t know if anything will replace 
good ‘ole C before the machines take over.

Also, to summarize the plan for upstream/downstream packet loss differentiation 
in IRTT, in case there are any tips or better ideas:

The server will add two things to its replies, when requested in the 
negotiation:

1) A uint32 for each client connection that counts the total number of packets 
the server has received for that connection
2) A uint64 for each client connection containing a bitfield of the received 
status of the last 64 seqnos, so that:
   - The LSB always represents the latest received seqno
   - When a new packet comes in, this number gets left-shifted by the number of 
seqnos since the last seqno was received
   - The number is returned to the client in every response, and the client 
simply stores it for each round trip during the test
   - When the test is over, the client uses these numbers to re-create which 
packets the server received and which it didn’t
   - The “received” status for each round-trip in the results would then be one 
of these four values, loosely:
  - Received
  - Lost upstream
  - Lost downstream
  - Lost unknown

If more than 64 server replies are lost in succession, you lose per-packet 
differentiation of where the loss occurred, and it falls back to the total 
count and "lost unknown," but at least the total server received count is still 
there, which I don’t think is too bad. On the plus side, the client and server 
don’t have to process or store much during the test, it’s a small addition to 
the packet overall, and the client does most of the work reassembling the 
received state after the test is done.
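
A minimal sketch of that bookkeeping (all names are hypothetical rather than actual IRTT code):

```go
package main

import "fmt"

// connState holds the two per-connection values the server would return:
// a total received count and a 64-seqno receipt window.
type connState struct {
    rcvdCount  uint32 // total packets received for this connection
    rcvdWindow uint64 // bitfield, LSB = latest received seqno
    lastSeqno  uint64 // highest seqno seen so far
}

// receive updates the counters for an arriving seqno.
func (c *connState) receive(seqno uint64) {
    c.rcvdCount++
    switch {
    case c.rcvdCount == 1 || seqno > c.lastSeqno:
        shift := seqno - c.lastSeqno // seqnos since the last arrival
        if shift >= 64 {
            c.rcvdWindow = 0 // window scrolled entirely past prior state
        } else {
            c.rcvdWindow <<= shift
        }
        c.rcvdWindow |= 1 // LSB marks the latest seqno as received
        c.lastSeqno = seqno
    case c.lastSeqno-seqno < 64:
        // late (out-of-order) arrival still inside the 64-seqno window
        c.rcvdWindow |= 1 << (c.lastSeqno - seqno)
    }
}

func main() {
    var c connState
    for _, s := range []uint64{0, 1, 3, 4} { // seqno 2 never arrives
        c.receive(s)
    }
    fmt.Printf("received=%d window=%b\n", c.rcvdCount, c.rcvdWindow)
    // prints: received=4 window=11011 (the zero bit is the lost seqno 2)
}
```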

I should also probably return both these numbers in the connection close reply, 
to make sure the client ends up with the latest values at the end of the test.

Otherwise, I can’t think of a way to make it “perfect” without storing results 
on the server, which I’d like to avoid.



-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/tohojo/flent/issues/106#issuecomment-337347348___
Flent-users mailing list
Flent-users@flent.org
http://flent.org/mailman/listinfo/flent-users_flent.org


Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-10-17 Thread Dave Täht
I tried rust out at about the same time esr did (in fact, sped up his go code 
by threading it better). Didn't like it, either.

( http://esr.ibiblio.org/?p=7294 )

Honestly, I don't know what to do about having better R/T capable code in a 
better language. All I can do is point out things like temporarily disabling 
garbage collection as means of getting closer, promoting cpus that could 
context switch better, and buying old cars that don't have bluetooth support.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/tohojo/flent/issues/106#issuecomment-337291571___
Flent-users mailing list
Flent-users@flent.org
http://flent.org/mailman/listinfo/flent-users_flent.org


Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-10-16 Thread Pete Heist

> On Oct 16, 2017, at 10:16 PM, Dave Täht  wrote:
> 
> > Also, if there's a reason I should do my tests with iperf2 instead, I'm all
> > ears, as I'm a "scientist," not attached to my own work. :) I read that 
> > they're
> Cross checking is always good.

True, don’t need to find out in 2019 about a gross measurement bug from 2017...

> > setting the thread priority to realtime, which likely reduces scheduling 
> > delays,
> > but there's probably a cost to that. I experimented with Go's
> 
> So long as you can complete your work within a bound, realtime
> scheduling is a win. I also tended to like timer_fds as those let you
> know easily when you'd overrun a bound.

Timerfd is on the list.

Maybe I can increase thread prio from Go in Linux using the syscall package. If 
I understand correctly though, realtime scheduling policies may require root. 
Ok, this is on the list to play with, though I’m not at all clear on how or if 
I can do it yet. Hello there ‘man sched’…
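
For the record, a rough sketch of what that might look like on Linux (my own guess, untested in IRTT; SCHED_FIFO generally needs root or CAP_SYS_NICE):

```go
package main

import (
    "fmt"
    "runtime"
    "syscall"
    "unsafe"
)

const schedFIFO = 1 // SCHED_FIFO from <sched.h>

type schedParam struct{ priority int32 }

// trySetRealtime pins the calling goroutine to its OS thread, then asks
// the kernel for SCHED_FIFO at the given priority (pid 0 = this thread).
func trySetRealtime(prio int32) error {
    runtime.LockOSThread()
    p := schedParam{priority: prio}
    _, _, errno := syscall.Syscall(syscall.SYS_SCHED_SETSCHEDULER,
        0, schedFIFO, uintptr(unsafe.Pointer(&p)))
    if errno != 0 {
        return errno
    }
    return nil
}

func main() {
    if err := trySetRealtime(10); err != nil {
        fmt.Println("no realtime scheduling (not root?):", err)
        return
    }
    fmt.Println("running with SCHED_FIFO priority 10")
}
```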

> The *very* first thing I learned about GO was how to turn the garbage
> collector off. I'm weird that way. I'd still suggest turning the garbage
> collector off until after the end of a run.


Maybe you’d like Rust. Syntax probably too complicated for my taste...

But anyway, disabling GC is done. That cut back on some outliers, particularly 
with low interval tests.

Along with that, I’m now allowing the max capacity of the results slice to go 
up to the expected size for the whole test (I was previously limiting the max 
cap, which I see now was misguided, especially with GC disabled). So
basically it's just creating a big slice (each element a value) in one shot at 
the beginning of the test, disabling GC, filling it with results, then 
re-enabling GC for stats calculation after all the packets are collected. There 
shouldn’t be much of any other garbage created during data collection, but I’ll 
make sure later.
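
In sketch form (hypothetical names, not the actual IRTT code):

```go
package main

import (
    "fmt"
    "runtime/debug"
)

type result struct{ rtt int64 } // stand-in for the real per-packet result

func main() {
    expectedPackets := 3000 // duration / interval, known before the test

    // One big allocation up front; append never has to grow the slice.
    results := make([]result, 0, expectedPackets)

    old := debug.SetGCPercent(-1) // disable GC for the data collection phase
    for i := 0; i < expectedPackets; i++ {
        results = append(results, result{rtt: int64(i)})
    }
    debug.SetGCPercent(old) // re-enable GC before stats calculation

    fmt.Println("collected", len(results), "results")
}
```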

That said, they’ve made great strides with Go’s GC latency in the last year. 
Then this should theoretically make it into 1.10: 
https://github.com/golang/proposal/blob/master/design/17503-eliminate-rescan.md 


Also for Toke, the JSON doc is posted to README.md. Might be nice if there were 
an easier way to document JSON, but I don’t think JSON schema is that way, for 
example…



-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/tohojo/flent/issues/106#issuecomment-337080556___
Flent-users mailing list
Flent-users@flent.org
http://flent.org/mailman/listinfo/flent-users_flent.org


Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-10-16 Thread Dave Täht


Pete Heist  writes:

> Thanks! I made most of your changes (-o was particularly broken, so this is a
> better solution), except:
>
> * I'm still thinking about whether to default durations to seconds or not. I'm
>   using Go's default duration flag parsing, and I like the explicitness of
>   seeing the units.

I also like explicitness, and rejecting the arg without it is ok by me.

> * I like that the JSON is written even after ctrl-C, so that interrupting a 
> long
>   test doesn't mean you lose all your results. But maybe if you're sending
>   output to stdout, it's not a good idea (or even breaks some convention?) 
>
> I changed it to ping-like behavior (although there is now -q for no per-packet
> results and -qq for no output at all). But just to explain the thought 
> process,
> I felt that the default of five round trips in one second, which produces a
> reasonable approximation of all relevant stats in a short period of time, was
> better than waiting in anticipation for the next one second ping. In one 
> second
> you could already be reviewing stats. :) Also, since I think IRTT will be
> typically used for lower intervals than ping, not defaulting to per-packet
> output made sense to me. I don't need to do things just because of tradition.
> However, I took your advice because we're all so accustomed to ping for so 
> many
> years now, that what I like as a default might be uncomfortable or annoying to
> others, and I don't wish for people to get that feeling.
>
> If you do get some time, I'll try to turn around any changes ASAP even while 
> my
> visitor is here. I'm looking forward to using Flent with IRTT to redo my 
> second
> round of point-to-point WiFi tests. Open Mesh has also just released their 
> first
> public beta with airtime fairness, so I'd like to give that a try.

Groovy.

> Also, if there's a reason I should do my tests with iperf2 instead, I'm all
> ears, as I'm a "scientist," not attached to my own work. :) I read that 
> they're

Cross checking is always good.

> setting the thread priority to realtime, which likely reduces scheduling 
> delays,
> but there's probably a cost to that. I experimented with Go's

So long as you can complete your work within a bound, realtime
scheduling is a win. I also tended to like timer_fds as those let you
know easily when you'd overrun a bound.

> runtime.LockOSThread() (it's there as the -thread option for both client and
> server) but since during one round of testing I saw 10ms outliers with that
> enabled, it's not the default. It does seem to reduce mean RTT somewhat 
> though.

The *very* first thing I learned about GO was how to turn the garbage
collector off. I'm weird that way. I'd still suggest turning the garbage
collector off until after the end of a run.

> —
> You are receiving this because you commented.
> Reply to this email directly, view it on GitHub, or mute the thread.


-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/tohojo/flent/issues/106#issuecomment-337025131___
Flent-users mailing list
Flent-users@flent.org
http://flent.org/mailman/listinfo/flent-users_flent.org


Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-10-16 Thread Pete Heist
Thanks! I made most of your changes (-o was particularly broken, so this is a 
better solution), except:

- I'm still thinking about whether to default durations to seconds or not. I'm 
using Go's default duration flag parsing, and I like the explicitness of seeing 
the units. (A possible seconds fallback is sketched after this list.)
- I like that the JSON is written even after ctrl-C, so that interrupting a 
long test doesn't mean you lose all your results. But maybe if you're sending 
output to stdout, it's not a good idea (or even breaks some convention?)
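
If I did go the defaulting route, the fallback could be as simple as this sketch (a hypothetical helper, not what IRTT does today):

```go
package main

import (
    "fmt"
    "strconv"
    "time"
)

// parseDuration treats a bare number as seconds and otherwise defers to
// Go's time.ParseDuration, so "10" -> 10s while "10ms" stays 10ms.
func parseDuration(s string) (time.Duration, error) {
    if n, err := strconv.ParseFloat(s, 64); err == nil {
        return time.Duration(n * float64(time.Second)), nil
    }
    return time.ParseDuration(s)
}

func main() {
    for _, s := range []string{"10", "10s", "10ms"} {
        d, _ := parseDuration(s)
        fmt.Println(s, "->", d)
    }
}
```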

I changed it to ping-like behavior (although there is now -q for no per-packet 
results and -qq for no output at all). But just to explain the thought process, 
I felt that the default of five round trips in one second, which produces a 
reasonable approximation of all relevant stats in a short period of time, was 
better than waiting in anticipation for the next one second ping. In one second 
you could already be reviewing stats. :) Also, since I think IRTT will be 
typically used for lower intervals than ping, not defaulting to per-packet 
output made sense to me. I don't need to do things just because of tradition. 
**However**, I took your advice because we're all so accustomed to ping for so 
many years now, that what I like as a default might be uncomfortable or 
annoying to others, and I don't wish for people to get that feeling.

If you do get some time, I'll try to turn around any changes ASAP even while my 
visitor is here. I'm looking forward to using Flent with IRTT to redo my second 
round of point-to-point WiFi tests. Open Mesh has also just released their 
first public beta with airtime fairness, so I'd like to give that a try.

Also, if there's a reason I should do my tests with iperf2 instead, I'm all 
ears, as I'm a "scientist," not attached to my own work. :) I read that they're 
setting the thread priority to realtime, which likely reduces scheduling 
delays, but there's probably a cost to that. I experimented with Go's 
[runtime.LockOSThread()](https://golang.org/pkg/runtime/#LockOSThread) (it's 
there as the -thread option for both client and server) but since during one 
round of testing I saw 10ms outliers with that enabled, it's not the default. 
It does seem to reduce mean RTT somewhat though.

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/tohojo/flent/issues/106#issuecomment-336893137___
Flent-users mailing list
Flent-users@flent.org
http://flent.org/mailman/listinfo/flent-users_flent.org


Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-10-15 Thread Toke Høiland-Jørgensen
Pete Heist  writes:

> Thanks again both for your kind help and feedback on this. I hope it's
> useful, and if not, it sure was fun anyway!

Very nice, and definitely useful! :)

Took it for a quick spin on localhost, some oddities from my initial
fiddling:

- On first run I expected it to work similar to ping, i.e., output data
  points as packets come in. Instead I almost immediately get a summary.
  Ah, it's running for a single second, and needs -v to output
  per-packet datapoints. For one-off command-line use it might be more
  useful to default to a more ping-like output (i.e., run until killed,
  output data points)?

- Hmm, -d 10 doesn't work. Ah, -d 10s does. Maybe default to seconds if
  no unit is given?

- Let's look at the json output. Oops, -o stdout outputs garbage to the
  terminal; ah, it's gzipped and needs -nogzip. Probably better to
  default stdout to not being gzipped? And instead of adding an
  extension, do the opposite and detect whether or not to gzip from the
  supplied file name instead? If I do '-o test.json' I get
  'test.json.json.gz' which is confusing. Also, supplying a filename of
  '-' creates '-.json.gz' instead of outputting to stdout. And the
  output file is written even if I cancel with Ctrl+C.

I consider all these minor issues, though, and overall it looks good!
I'll see if I can find some time to add support to Flent and try it out
in my testbed in various scenarios. Not sure if I'll have time this
week, though, have another deadline to attend to...

-Toke


-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/tohojo/flent/issues/106#issuecomment-336726462___
Flent-users mailing list
Flent-users@flent.org
http://flent.org/mailman/listinfo/flent-users_flent.org


Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-10-13 Thread Pete Heist
Thanks for your patience...some late nights and mornings this week :) but the 
initial revision is pushed: https://github.com/peteheist/irtt

I'm not posting pre-built binaries yet, if ever. Let me know if you'd like that 
and I'll see, but so far you have to install Go 1.9.1 (https://golang.org/dl/) 
and you should be able to download and install it with:

go install github.com/peteheist/irtt/cmd/irtt

build.sh can be used for various things, to cross-compile to a few platforms, 
strip symbols, etc. (Sure, I could have a Makefile, but there's also a 
reasonable argument from some of Go's core developers that Makefiles can be 
overkill for Go's build commands, especially for small projects.)

It's totally undocumented so far, save the usage. So I'll be working on that 
this weekend along with testing / bugs / whatever else I can get done. If 
anyone gets a chance to try it before this Wednesday, I could try to nail any 
show stoppers before my old man's in town, which will slow things down for 10 
days.

I'm also trying to ascertain if this project is still needed given the iperf2 
changes, so if not, let me know, no hard feelings and I'll save some time. :)

Some recent changes:

- Added "handshake" and test parameter negotiation. The client tells the server 
what it wants, the server tells the client what it'll actually get, and they 
agree on a fixed packet format, which minimizes the packet size based on the 
selected options. (Dave was right that this is a real PITA to take too far, 
with OWAMP as the cautionary example, but at least it should keep users from 
trying to run a test that violates the server's minimum interval or maximum 
duration.)
- Added hard restrictions on the server to enforce minimum interval and test 
duration. It's a token-bucket-like thing with burst, but I want to get smarter 
about rate limiting options in general.
- Re-organized / improved JSON output per Toke's great feedback. Must document.
- Re-wrote packet buffer manipulation code. It's pretty performant, but not as 
maintainable as I'd like, so will probably end up re-writing it again.
- Added subcommands to the executable as the usage was getting out of hand. So 
you now need to do "irtt server" or "irtt client" for example to run the server 
or client.
- Added some utility subcommands: "bench" to test HMAC and fill performance, 
"clock" to test wall vs monotonic clock and "sleep" to test sleep accuracy.
- Added a basic out-of-order packet metric (late packets, meaning a seqno 
arrives with a number lower than the seqno of the previous arrival).
- Added context.Context support for proper cancellation.
- Optimized pattern fill to make it pretty much equivalent to Go's copy 
builtin. Performance can be tested with "irtt bench"; it performs particularly 
well as buffers get larger (the doubling-copy idea is sketched after this list).
- As a security measure, made "pattern fill" on the server the default. I think 
it should prevent on-path attackers from reflecting traffic to arbitrary 
addresses.
- Added a -thread option to client and server to lock packet handling 
goroutines to OS threads, although this is off by default. Need to test more 
whether it actually does anything useful.
- Squashed some bugs, did some basic load testing and nailed what hopefully 
were the only two race conditions (thank you Go's race detector).
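
The doubling-copy idea mentioned above, in sketch form (not the actual IRTT code):

```go
package main

import "fmt"

// patternFill seeds buf with one copy of pattern, then repeatedly copies
// the already-filled prefix onto the rest, doubling the filled region each
// pass; for large buffers the cost approaches a single copy call.
func patternFill(buf, pattern []byte) {
    if len(buf) == 0 || len(pattern) == 0 {
        return
    }
    n := copy(buf, pattern)
    for n < len(buf) {
        n += copy(buf[n:], buf[:n])
    }
}

func main() {
    buf := make([]byte, 10)
    patternFill(buf, []byte("abc"))
    fmt.Println(string(buf)) // abcabcabca
}
```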

There's so much in my TODO list I won't list everything, but for starters:

- Docs.
- Implement received packets feedback from the server. I'm planning a total 
count along with a 64-bit bitfield of the receipt status of the previous 64 
seqnos, to get a decent per-packet estimate of upstream / downstream loss.
- Add server bitrate limiting.
- Allow specifying two out of three of interval, bitrate and packet length to 
the client (Toke's idea; the arithmetic is sketched after this list).
- Add more server limiting and protection, like per-IP limits and new 
connection rate limits so someone doesn't just flood the server with new 
connection requests.
- Make server initiated close more robust, instead of a single packet to the 
client, which could be lost.
- Maybe redo my event logging (you'll see lines prefixed with [tag] from both 
client and server...). I've hemmed and hawed over this several times but am 
still not satisfied for various reasons.
- Add an optional auth mode to negotiate an HMAC key using (probably NaCl's) 
public / private key encryption, providing more protection against on-path 
attacks.
- Add a subcommand to produce CSV from JSON.
- Add ability for client to request random fill from server (server may deny).
- Show IPDV in client's verbose output.
- Use Go's "unsafe" package for faster packet buffer manipulation, although it 
really might be good enough, and we might rather want to feel "safe", so I'll 
see after more profiling.
- Timerfd?
- Non-isochronous send schedules? I hear you on the videoconferencing traffic 
idea, Dave; just need to figure out how to make it work.
- Zero downtime restarts.
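
On the two-out-of-three item above, the arithmetic is just bitrate = length * 8 / interval; a sketch of deriving the missing parameter (my own illustration, hypothetical names):

```go
package main

import (
    "fmt"
    "time"
)

// intervalFor derives the send interval from packet length (bytes) and
// target bitrate (bits per second).
func intervalFor(lengthBytes int, bitrate float64) time.Duration {
    return time.Duration(float64(lengthBytes*8) / bitrate * float64(time.Second))
}

// bitrateFor derives the bitrate from packet length and send interval.
func bitrateFor(lengthBytes int, interval time.Duration) float64 {
    return float64(lengthBytes*8) / interval.Seconds()
}

func main() {
    // 160-byte payloads at 128 Kbps work out to one packet every 10ms,
    // matching the G.711-like test earlier in this thread.
    fmt.Println(intervalFor(160, 128000))             // 10ms
    fmt.Println(bitrateFor(160, 10*time.Millisecond)) // 128000
}
```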

4am and time to quit. We can continue discussion in the IRTT repo...

Thanks 

Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-10-10 Thread Toke Høiland-Jørgensen
Pete Heist  writes:

> Also, I hope the time I've invested is still useful, given the iperf2
> team's post about suddenly adding isochronous support for their
> latency test. :) Anyway, I'll finish what I've started. The handshake
> is working and it's a matter of wrapping up (a number of) details...

I'm sure it will be. Looking forward to the code dump ;)

-Toke


-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/tohojo/flent/issues/106#issuecomment-335410173___
Flent-users mailing list
Flent-users@flent.org
http://flent.org/mailman/listinfo/flent-users_flent.org


Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-10-10 Thread Pete Heist
Also, I hope the time I've invested is still useful, given the iperf2 team's 
post about suddenly adding isochronous support for their latency test. :) 
Anyway, I'll finish what I've started. The handshake is working and it's a 
matter of wrapping up (a number of) details...

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/tohojo/flent/issues/106#issuecomment-335392765___
Flent-users mailing list
Flent-users@flent.org
http://flent.org/mailman/listinfo/flent-users_flent.org


Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-10-08 Thread Pete Heist
Getting there, still. :) I ended up re-writing a lot of stuff as a result of 
the "handshake". Several late nights there...

BTW Dave, I didn't mean to can the idea totally of simulating 
videoconferencing-like traffic, so thanks for that- it made me at least leave 
open a way to send packets with different sizes and on different schedules. 
This won't be a V1 thing though. I'd hoped that the server can stay pretty 
"dumb" as something that just returns packets. But if that's the case, I 
suppose it won't simulate videoconferencing traffic very well. I'm pretty sure 
you don't see the same packets echoed back right away in real-world 
videoconferencing traffic. I'd expect rather independent streams in each 
direction, but I've not looked at it.

Anyway, next time it's about posted code, promise. :) I've got a weeklong 
family visit starting later next week, so I'd love to get it done by then, but 
when it combines with work and other stuff, it's hard to say...

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/tohojo/flent/issues/106#issuecomment-335013725___
Flent-users mailing list
Flent-users@flent.org
http://flent.org/mailman/listinfo/flent-users_flent.org


Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-09-26 Thread Pete Heist

> On Sep 26, 2017, at 8:35 PM, Toke Høiland-Jørgensen 
>  wrote:
> 
> Pete Heist  writes:
> 
> > Yes, because the seqno is just the array index. I’ll add the seqno
> > explicitly to make it easier to consume.
> 
> So what happens if a packet is lost? There'll be an empty element in the
> array? That's... unusual... ;)

An element in the array with an empty receive timestamp (which I could instead 
make zero timestamps). As it stands now:

{
    "client": {
        "receive": {},
        "send": {
            "wall": 1506452529563882735,
            "monotonic": 2006485513
        }
    },
    "server": {
        "receive": {},
        "send": {}
    }
}

But yeah, seqno should be there for ease / clarity.

> Well, whether they are omitted or can be blank, it'll be important to
> know what to expect :)

Indeed. :)

> > Adding the JSON encoder was a relative “whopper” at around 250K
> > unstripped (also eyeballed). If I’m looking for somewhere to cut, I
> > could later find another encoder or just write JSON by hand. I did
> > some gyrations to avoid pulling in regexp in a couple of cases. :)
> 
> Huh, that's quite impressive (and not in a good way). But then it
> probably can't run on the tiniest of devices anyway since there's no
> MIPS support in Go (last I looked anyway)…

There is 32-bit MIPS support as of Go 1.8 (2/2017), BUT, and just found out 
this is a big but:

"Go now supports 32-bit MIPS on Linux for both big-endian (linux/mips) and 
little-endian machines (linux/mipsle) that implement the MIPS32r1 instruction 
set with FPU or kernel FPU emulation. Note that many common MIPS-based routers 
lack an FPU and have firmware that doesn't enable kernel FPU emulation; Go 
won't run on such machines."

I just tried compiling a really tough program:

package main

import "fmt"

func main() {
    fmt.Println("Hello MIPS!")
}

tron:~/src/github.com/peteheist/irtt:% GOOS=linux GOARCH=mipsle go build -ldflags="-s -w" ./cmd/hellomips

ran it on an OM2P-HS and got this:

root@Service_West:/tmp# ./hellomips
./hellomips: line 1: syntax error: unexpected "("

That’s a disappointment, as I’d been under the assumption it was going to work 
and I could use IRTT on these devices. But, I can still test _through_ them 
with other devices.

Does LEDE have FPU emulation enabled? I’ll try to find out more!



-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/tohojo/flent/issues/106#issuecomment-332308580___
Flent-users mailing list
Flent-users@flent.org
http://flent.org/mailman/listinfo/flent-users_flent.org


Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-09-26 Thread Pete Heist

> On Sep 26, 2017, at 7:17 PM, Dave Täht  wrote:
> 
> with a seqno it might also be possible to see ooo packets.

It will be there. Since they’re in seqno order in the JSON, any processing to 
measure OOO will have to look at the arrival timestamps.

I’ve got on the list to add a simple OOO metric- the number of arriving packets 
with smaller seqnos than the one that arrived before it. Ah, just noticed RFC 
4737, likely to shred me up some more.
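
That simple metric amounts to just a few lines (a sketch with hypothetical names):

```go
package main

import "fmt"

// countLate counts arrivals whose seqno is lower than the seqno of the
// packet that arrived immediately before it. (RFC 4737 defines far more
// elaborate reordering metrics; this is the simple version above.)
func countLate(arrivalOrder []uint64) int {
    late := 0
    for i := 1; i < len(arrivalOrder); i++ {
        if arrivalOrder[i] < arrivalOrder[i-1] {
            late++
        }
    }
    return late
}

func main() {
    fmt.Println(countLate([]uint64{0, 1, 3, 2, 4})) // 1: seqno 2 arrived late
}
```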

> To really complexify your (our) life at one point, I've been dying for a
> decent videoconferencing emulation. Typical behaviors are a burst on a
> keyframe, a long sparser string of updates to that keyframe, and
> bandwidth probing behavior.
> 
> And yes, there is a working group (rmcat), and RFCs and papers on it
> (the google congestion control one is pretty good, scream is also worth
> reading) BUT: Don't listen to me on this, just keep coding.

Oh boy. One of the things I toyed with but ix-nayed early on was adding a 
Scheduler interface that could schedule the send time and packet sizes, where 
the isochronous one would be the default. There were a host of questions that 
came up with that, so I put the piton in the rock, as it were.

The next questions become, can the server no longer be dumb and have 
independent send schedules from the client? Can we record measurements on both 
sides? Now you’re writing a generic traffic simulator, which I probably don’t 
want IRTT to become. Plus, IPDV, with what information I have available to me, 
is best measured with constant size packets. (Maybe even constant interval 
packets? Not sure about that though.)

I may be open to it later as a different project. My intuition is to limit the 
scope of this one and make it bulletproof first, if that is possible!



-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/tohojo/flent/issues/106#issuecomment-332301189___
Flent-users mailing list
Flent-users@flent.org
http://flent.org/mailman/listinfo/flent-users_flent.org


Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-09-26 Thread Toke Høiland-Jørgensen
Pete Heist  writes:

>> - The data points are missing sequence numbers; makes it hard to infer
>> loss, and to relate IPDV to RTT values.
>
> Yes, because the seqno is just the array index. I’ll add the seqno
> explicitly to make it easier to consume.

So what happens if a packet is lost? There'll be an empty element in the
array? That's... unusual... ;)

>> - Some of the 'params' are not terribly useful:
>> - What are the local and remote addresses of? Is this where the server
>> listens? I'm guessing the client doesn't connect to 0.0.0.0 at
>> least... Would be better to know the values that were actually used.
>
> Yes, I can get the actual local address and fix that. The remote
> address would be there but I removed it manually as I usually don’t
> post internal IPs if I remember to remove them. BTW, I may have an
> option to keep out any “internal” info, like hostnames and IPs.
>
> Come to think of it I’d rather have params be params (what was
> supplied to the API), then I can have something separate after Dial
> has occurred with resolved addresses, actual IP version, etc. Likewise
> it can be the case that the server doesn’t support the timestamp mode
> you requested, or the length of the packet, or it doesn’t support DSCP
> or DF (in the case of Windows), etc. It would be good to know both
> what you asked for and what you actually got.

Presumably the caller knows what is being asked for, so the important
thing is what will actually be used. Depends a little bit on what you
envisage the output being used for; if you plan on storing it by itself,
then the supplied parameters might be useful to include. But for Flent's
usage, the command line options are stored in the data file...

>> - Similarly, it would be more useful to know whether packets were actually 
>> sent
>> as IPv4 or IPv6, rather than what was selected.
>> - Which fields are guaranteed to be present and which can be blank?
>
> I’m omitting some that can be blank, but I see that it may be easier
> for consumption if I include everything instead of documenting what
> may or may not be there.

Well, whether they are omitted or can be blank, it'll be important to
know what to expect :)

>> - What is the send and receive rates? Are they always the same? And in
>> which direction? Do they include packet loss?
>
> They’re based on:
>
> - for send, the total UDP payload data written to the socket between
> right before the first send and right after the last send
>
> - for receive, the total UDP payload data received from the socket
> between right after the first receive and right after the last receive
> (dups not included)
>
> They may differ due to packet loss.

Right, that's what I assumed; and that seems reasonable.

> BTW, later I want to have the server return the total data received
> for a flow to distinguish between upstream and downstream packet loss
> (and maybe some bits with a window into which packets were lost).

Yup, that could be useful :)

> I do not include duplicates in received data, which raises a point. I
> consider duplicates something you shouldn’t see unless there is a
> problem or misconfiguration somewhere, so other than having a
> duplicate counter and warning about them I don’t include them in other
> stats (bytes received, bitrate, RTT/ OWD, etc). If that’s misguided,
> if seeing duplicates is an ordinary thing on the open Internet, let me
> know and I may reconsider what I do with the stats.

Meh, don't think duplication is a huge concern over the public internet.
Probably safe to ignore for now...

> I was eyeballing it before, plus that was before I added the json
> package, which may or may not have dependencies in common, now it's:
>
> Unstripped increase: 153600 bytes
> Stripped increase: 108544 bytes
>
> It’s not important enough to me now for the size.
>
> Adding the JSON encoder was a relative “whopper” at around 250K
> unstripped (also eyeballed). If I’m looking for somewhere to cut, I
> could later find another encoder or just write JSON by hand. I did
> some gyrations to avoid pulling in regexp in a couple of cases. :)

Huh, that's quite impressive (and not in a good way). But then it
probably can't run on the tiniest of devices anyway since there's no
MIPS support in Go (last I looked anyway)...

-Toke


-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/tohojo/flent/issues/106#issuecomment-332293836___
Flent-users mailing list
Flent-users@flent.org
http://flent.org/mailman/listinfo/flent-users_flent.org


Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-09-26 Thread Pete Heist

> On Sep 26, 2017, at 11:41 AM, Toke Høiland-Jørgensen 
>  wrote:
> 
> Pete Heist  writes:
> 
> > An update:
> >
> > - JSON is working, sample attached in case there are comments /
> > wishes.
> 
> Lots of data; don't think I'll parse all of it in Flent. My thought
> would be to save:
> 
> For each data point: RTT, OWD (in both directions), IPDV.
> For the whole test:
> - Min/max/mean/median RTT values.
> - Packet size and/or bit rate
> - Most of the params object, probably

I think that’s all in there (except packet size, will add it), but I’ll 
reorganize it into groupings that may make more sense.

> A few comments about the data format:
> 
> - I'm guessing all values are nanoseconds? Are the absolute times in
> UTC?

Yes, time.Time.UnixNano()- the number of nanoseconds elapsed since January 1, 
1970 UTC.

> - The data points are missing sequence numbers; makes it hard to infer
> loss, and to relate IPDV to RTT values.

Yes, because the seqno is just the array index. I’ll add the seqno explicitly 
to make it easier to consume.

> - Why are the IPDV values in a separate array?

For less memory usage / processing during the test, the IPDV array isn’t 
created until after the test. But you’re right, it would make more sense for 
consumption to have the individual round trip data in a single array. I'll make 
it work for the JSON without changing the internal representation somehow.

> - Some of the 'params' are not terribly useful:
> - What are the local and remote addresses of? Is this where the server
> listens? I'm guessing the client doesn't connect to 0.0.0.0 at
> least... Would be better to know the values that were actually used.

Yes, I can get the actual local address and fix that. The remote address would 
be there but I removed it manually as I usually don’t post internal IPs if I 
remember to remove them. BTW, I may have an option to keep out any “internal” 
info, like hostnames and IPs.

Come to think of it I’d rather have params be params (what was supplied to the 
API), then I can have something separate after Dial has occurred with resolved 
addresses, actual IP version, etc. Likewise it can be the case that the server 
doesn’t support the timestamp mode you requested, or the length of the packet, 
or it doesn’t support DSCP or DF (in the case of Windows), etc. It would be 
good to know both what you asked for and what you actually got.

> - Similarly, it would be more useful to know whether packets were actually 
> sent
> as IPv4 or IPv6, rather than what was selected.
> - Which fields are guaranteed to be present and which can be blank?

I’m omitting some that can be blank, but I see that it may be easier for 
consumption if I include everything instead of documenting what may or may not 
be there.

> - What is the send and receive rates? Are they always the same? And in
> which direction? Do they include packet loss?

They’re based on:

- for send, the total UDP payload data written to the socket between right 
before the first send and right after the last send

- for receive, the total UDP payload data received from the socket between 
right after the first receive and right after the last receive (dups not 
included)

They may differ due to packet loss.
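
In other words, roughly this calculation (a sketch with hypothetical names, using the figures from the sample run earlier in the thread):

```go
package main

import (
    "fmt"
    "time"
)

// rateKbps computes UDP payload bitrate over the measured span: for send,
// first-send start to last-send end; for receive, first to last receive.
func rateKbps(payloadBytes int, span time.Duration) float64 {
    return float64(payloadBytes*8) / span.Seconds() / 1000
}

func main() {
    // 479360 bytes over roughly 30s, as in the g711 sample output
    fmt.Printf("%.1f Kbps\n", rateKbps(479360, 30*time.Second)) // 127.8 Kbps
}
```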

BTW, later I want to have the server return the total data received for a flow 
to distinguish between upstream and downstream packet loss (and maybe some bits 
with a window into which packets were lost).

I do not include duplicates in received data, which raises a point. I consider 
duplicates something you shouldn’t see unless there is a problem or 
misconfiguration somewhere, so other than having a duplicate counter and 
warning about them I don’t include them in other stats (bytes received, 
bitrate, RTT/ OWD, etc). If that’s misguided, if seeing duplicates is an 
ordinary thing on the open Internet, let me know and I may reconsider what I do 
with the stats.

> - I sort of get why there are so many time stamps in the beginning, but
> I think 'first_send_time/first_sent_time' is bound to be confusing at
> some point; is it really necessary to include both? I'm assuming those
> are timestamps on each side of the send() call?

Not really, it was just there for completeness. I just removed everything 
except for four that I need for the bitrate calculations. I don’t even need to 
have those in the JSON if it’s not something externally useful anyway.

> > - pflag adds 160K to executable, passing for now.
> 
> Yeah, the binary size of Go apps is a PITA. Is this stripped size? Flent
> can obviously work with both styles of flags, I just personally thing
> the Go defaults are annoying; just be aware that once you release,
> changing is going to break backwards compatibility.

I was eyeballing it before, plus that was before I added the json package, 
which may or may not have dependencies in common, now it's:

Unstripped increase: 153600 bytes
Stripped increase: 108544 bytes

It’s 

Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-09-26 Thread Dave Täht
Pete Heist  writes:

> An update:
>
> *   JSON is working, sample attached in case there are comments / wishes.
>
> *   Median (where possible) and stddev are working.

While I'm obsessive, so many seem to think networks behave with gaussian
(where the concept of stdev comes from) distributions.

Pareto distributions are closer but still inadequate... Poisson is poison:

http://www.pollere.net/Pdfdocs/QrantJul06.pdf

but by all means, throw stdev in there. It's sometimes useful.

I am usually most interested in the outliers above the 98th percentile,
and find some use in seven-number summaries also.
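
For instance, a nearest-rank percentile over the collected samples is only a few lines (a sketch, nothing in irtt today; the classic seven-number summary is this at the 2/9/25/50/75/91/98 percentiles):

```go
package main

import (
    "fmt"
    "math"
    "sort"
    "time"
)

// percentile returns the nearest-rank p-th percentile of samples.
func percentile(samples []time.Duration, p float64) time.Duration {
    if len(samples) == 0 {
        return 0
    }
    s := append([]time.Duration(nil), samples...) // sort a copy
    sort.Slice(s, func(i, j int) bool { return s[i] < s[j] })
    rank := int(math.Ceil(p/100*float64(len(s)))) - 1
    if rank < 0 {
        rank = 0
    }
    return s[rank]
}

func main() {
    rtts := []time.Duration{
        12 * time.Millisecond, 14 * time.Millisecond, 15 * time.Millisecond,
        13 * time.Millisecond, 49 * time.Millisecond, // one big outlier
    }
    fmt.Println(percentile(rtts, 98)) // 49ms, the tail end discussed above
}
```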


>
> *   pflag adds 160K to executable, passing for now.
>
> *   Interval restriction on non-root users works. "No can do" in Windows so 
> far
>   (uid always -1). Tried looking at well-known Windows admin SIDs but it's
>   unclear I can really get that to work right on different Windows
>   versions/configs so...punt for now and no restriction in Windows.
>
> *   Spent more time than I should have making different combinations of
>   timestamp (none, receive, send, both, midpoint) and clock (wall, monotonic,
>   both) modes working, but at least now packets can be reduced in size by
>   sacrificing timestamps or clock modes for simulating VoIP codecs that have
>   smaller payloads, which was one of my goals.
>
> *   Read the OWAMP RFC, mostly. Good lord. At least it gave me an idea that 
> the
>   server can (later) return received packet count and/or a bitmap of recently
>   received packets for a flow so we can distinguish between upstream and
>   downstream packet loss, which is not possible right now.

Don't let that RFC freeze you up! (talk about overengineering!)

> *   Last thing to do is the handshake (saved the best for last!) Did some more
>   thinking on this to make it more robust than my previous design. Maybe 
> about a
>   week to complete, we'll see, as this mixes with life stuff too...

> *   Stats output now in columns for easier reading:
>
> sysadmin@luke:~ $ ./irtt -i 10ms -d 30s -l 160 a.b.c.d
> IRTT to a.b.c.d (a.b.c.d:2112)
>
>                       Min     Mean   Median      Max   Stddev
>                       ---     ----   ------      ---   ------
>               RTT  11.59ms  15.73ms  14.39ms  49.34ms   3.64ms
>        send delay    5.9ms   9.23ms    6.8ms  43.16ms   3.48ms
>     receive delay   5.42ms    6.5ms   7.59ms  17.88ms    937µs
>
>     IPDV (jitter)   1.25µs   2.52ms   4.15ms  29.16ms   2.75ms
>         send IPDV     36ns   2.41ms    595µs  28.84ms   2.69ms
>      receive IPDV     60ns    734µs   3.55ms   9.57ms    914µs
>
>    send call time   56.3µs   70.6µs             236µs   22.7µs
>       timer error      4ns   11.3µs            9.59ms    187µs
> server proc. time   6.93µs   7.62µs            68.1µs   2.23µs
>
>              duration: 30.2s (wait 148ms)
> packets received/sent: 2996/2996 (0.00% loss)
>   bytes received/sent: 479360/479360
>     receive/send rate: 127.9 Kbps / 127.9 Kbps
>           timer stats: 4/3000 (0.13%) missed, 0.11% error
>
> g711.json.gz
>
> —
> You are receiving this because you commented.
> Reply to this email directly, view it on GitHub, or mute the thread.


-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/tohojo/flent/issues/106#issuecomment-332043470___
Flent-users mailing list
Flent-users@flent.org
http://flent.org/mailman/listinfo/flent-users_flent.org


Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-09-26 Thread Pete Heist

> On Sep 26, 2017, at 1:35 AM, Dave Täht  wrote:
> 
> While I'm obsessive, so many seem to think networks behave with gaussian
> (where the concept of stdev comes from) distributions.
> 
> Pareto distributions are closer but still inadaquate... Poisson is poison:
> 
> http://www.pollere.net/Pdfdocs/QrantJul06.pdf
> 
> but by all means, throw stdev in there. It's sometimes useful.
> 
> I am usually most interested in the outliers above the 98th percentile,
> and fine some use in seven number summaries also.

That’s really helpful feedback. I just added stddev “because ping has it”. Not 
a great reason! Would have liked to hear Van Jacobson give that talk by the way.

Instead of trying to find some useful underlying distribution (sounds like I 
won’t), would a textual histogram (that fits within my 80 column limit) be 
useful? Summarizing outliers is also possible.

Ultimately though I think it will be more interesting to look at results in 
Flent. BTW, I’m thinking about adding a simple web interface for the client 
(after first release), but that will double the executable’s size, so it should 
be separate from the command line app. Before that though, I’ll at least add 
CSV export for spreadsheets.

I’m at around 2.2 - 2.8 megs without symbol table, depending on platform. 
That fits comfortably on my EdgeRouter X’s but needs to live on a RAM disk on 
my OM2P’s with limited flash space. upx gets it down to around 850K on raspi, 
for example, but I’ve found upx to be flakey sometimes.

> > * Read the OWAMP RFC, mostly. Good lord. At least it gave me an idea that 
> > the
> > server can (later) return received packet count and/or a bitmap of recently
> > received packets for a flow so we can distinguish between upstream and
> > downstream packet loss, which is not possible right now.
> 
> Don't let that RFC freeze you up! (talk about overengineering!)

Yes, I don’t want to be overly critical of the good thought and work put into 
it, but I was concerned in the introduction when I read about encryption and 
separation of the control and test servers before seeing anything about how 
accurate measurements should be made. I didn’t see any mention of wall vs 
monotonic clocks but I think it’s important.

I liked Russ Cox’s recent blog entry on the process for developing Go. 
Particularly the section about "Explaining Problems", and the importance of 
making sure to define the real problem that’s being solved and its 
significance- having fallen into the trap of not doing so too many times myself 
(continues to this day so something to keep watching for).

https://blog.golang.org//toward-go2 



-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/tohojo/flent/issues/106#issuecomment-332095180___
Flent-users mailing list
Flent-users@flent.org
http://flent.org/mailman/listinfo/flent-users_flent.org


Re: [Flent-users] [tohojo/flent] packet loss stats (#106)

2017-09-26 Thread Pete Heist
An update:

- JSON is working, sample attached in case there are comments / wishes.

- Median (where possible) and stddev are working.

- pflag adds 160K to executable, passing for now.

- Interval restriction on non-root users works. "No can do" in Windows so far 
(uid always -1). Tried looking at well-known Windows admin SIDs but it's 
unclear I can really get that to work right on different Windows 
versions/configs so...punt for now and no restriction in Windows.

- Spent more time than I should have making different combinations of timestamp 
(none, receive, send, both, midpoint) and clock (wall, monotonic, both) modes 
working, but at least now packets can be reduced in size by sacrificing 
timestamps or clock modes for simulating VoIP codecs that have smaller 
payloads, which was one of my goals.

- Read the OWAMP RFC, mostly. Good lord. At least it gave me an idea that the 
server can (later) return received packet count and/or a bitmap of recently 
received packets for a flow so we can distinguish between upstream and 
downstream packet loss, which is not possible right now.

- Last thing to do is the handshake (saved the best for last!) Did some more 
thinking on this to make it more robust than my previous design. Maybe about a 
week to complete, we'll see, as this mixes with life stuff too...

- Stats output now in columns for easier reading:

```
sysadmin@luke:~ $ ./irtt -i 10ms -d 30s -l 160 a.b.c.d
IRTT to a.b.c.d (a.b.c.d:2112)

                      Min     Mean   Median      Max   Stddev
                      ---     ----   ------      ---   ------
              RTT  11.59ms  15.73ms  14.39ms  49.34ms   3.64ms
       send delay    5.9ms   9.23ms    6.8ms  43.16ms   3.48ms
    receive delay   5.42ms    6.5ms   7.59ms  17.88ms    937µs

    IPDV (jitter)   1.25µs   2.52ms   4.15ms  29.16ms   2.75ms
        send IPDV     36ns   2.41ms    595µs  28.84ms   2.69ms
     receive IPDV     60ns    734µs   3.55ms   9.57ms    914µs

   send call time   56.3µs   70.6µs             236µs   22.7µs
      timer error      4ns   11.3µs            9.59ms    187µs
server proc. time   6.93µs   7.62µs            68.1µs   2.23µs

             duration: 30.2s (wait 148ms)
packets received/sent: 2996/2996 (0.00% loss)
  bytes received/sent: 479360/479360
    receive/send rate: 127.9 Kbps / 127.9 Kbps
          timer stats: 4/3000 (0.13%) missed, 0.11% error
```
[g711.json.gz](https://github.com/tohojo/flent/files/1331004/g711.json.gz)

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/tohojo/flent/issues/106#issuecomment-332024540___
Flent-users mailing list
Flent-users@flent.org
http://flent.org/mailman/listinfo/flent-users_flent.org