On 8/1/2011 12:59 PM, Thane Sherrington wrote:
At 01:53 PM 01/08/2011, Anthony Q. Martin wrote:
What do you mean? That they're the points where interference gets in?
That's where I run into connection issues. Other than the occasional
problem where I go into a spot where some idiot ran the cable and
either ran it alongside power cables or stretched it, most of the
connection failures are at the ends. I think you can use iPerf to
test data loss on Ethernet, or get one of those high-end cable
testers from Fluke.
Following this site:
http://openmaniak.com/iperf.php
They say this:
"The UDP tests with the -u argument will give invaluable information
about the jitter and the packet loss. If you don't specify the -u
argument, Iperf uses TCP. To keep a good link quality, the packet loss
should not go over 1 %. A high packet loss rate will generate a lot of
TCP segment retransmissions which will affect the bandwidth."
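For reference, the commands behind an example like that would look roughly as follows (the 10.1.1.1 address is just the one from their writeup; substitute the far end of your own link):

```shell
# On the machine at the far end of the link, run iperf as a UDP server:
iperf -s -u

# On the near end, send 10 seconds of UDP traffic at about 10 Mbit/s;
# when it finishes, the server reports jitter and datagram loss:
iperf -c 10.1.1.1 -u -b 10M -t 10
```

Without -u it falls back to a plain TCP throughput test, which won't give you the jitter/loss columns.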
In their example, they get this:
------------------------------------------------------------
Client connecting to 10.1.1.1, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 108 KByte (default)
------------------------------------------------------------
[ 3] local 10.6.2.5 port 32781 connected with 10.1.1.1 port 5001
[ 3] 0.0-10.0 sec 11.8 MBytes 9.89 Mbits/sec
[ 3] Sent 8409 datagrams
[ 3] Server Report:
[ 3] 0.0-10.0 sec 11.8 MBytes 9.86 Mbits/sec 2.617 ms 9/8409 (0.11%)
That last part is the number of datagrams that were lost: 9 out of
8409, or 0.11%. (UDP doesn't retransmit, so they're just counted as
lost.) 1% is the upper limit for a quality link.
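A quick sanity check on that loss figure, using the numbers straight out of the server report:

```python
# iperf reports loss as lost datagrams / sent datagrams.
def loss_percent(lost: int, sent: int) -> float:
    return 100.0 * lost / sent

# Their example: 9 lost out of 8409 sent.
print(round(loss_percent(9, 8409), 2))   # 0.11

# My run below: 2 lost out of 8505 sent.
print(round(loss_percent(2, 8505), 3))   # 0.024
```

Both are well under the 1% rule of thumb.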
When I run this test I get this:
[ 3] Server Report:
[ 3] 0.0-10.0 sec 11.9 MBytes 10.0 Mbits/sec 1.711 ms 2/8505 (0.024%)
So perhaps this is time-dependent and/or condition-dependent... or I'm
just barking up the wrong tree entirely.