Re: [j-nsp] Speed

2013-04-10 Thread Saku Ytti
On (2013-04-10 00:01 +0200), Benny Amorsen wrote:

 Yes, you can in theory cause microbursting of UDP if you want. I am just
 not sure which tool I would use to do that. Typical UDP tests like iperf
 attempt to do perfect timing of packets so bursts are avoided, and they
 seem to do a fairly good job of it.

Fair point. This is iperf:

if ( isUDP( mSettings ) ) {
...
delay_target = (int) ( mSettings->mBufLen * ((kSecs_to_usecs *
                       kBytes_to_Bits) / mSettings->mUDPRate) );
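
For example, with iperf's default 1470-byte UDP payload and -b 10M
(mUDPRate is in bits per second), that works out to

delay_target = 1470 * (1000000 * 8) / 10000000 = 1176 us

between datagrams, i.e. an evenly paced stream rather than bursts.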


I still think UDP is the correct way to test a network; making UDP burst (or
not burst) is easy, while forcing TCP to behave as you desire is harder.


Quickly looking at 'nuttcp 7.2.1', it seems to support bursting:
http://lcp.nrl.navy.mil/nuttcp/beta/nuttcp-7.2.1.c
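
If you want to roll your own, a minimal sketch of deliberate UDP
microbursting (my illustration, not nuttcp's code; the address, port and
sizes are made up):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst;

    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5001);                     /* example port */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr); /* example address */

    char payload[1470] = {0};
    struct timespec gap = { 0, 100 * 1000 * 1000 }; /* 100 ms between bursts */

    for (;;) {
        /* 100 datagrams back-to-back leave the NIC at line rate... */
        for (int i = 0; i < 100; i++)
            sendto(s, payload, sizeof payload, 0,
                   (struct sockaddr *)&dst, sizeof dst);
        /* ...then idle, so the average is ~12 Mbps despite the bursts */
        nanosleep(&gap, NULL);
    }
}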

-- 
  ++ytti


Re: [j-nsp] Speed

2013-04-09 Thread Saku Ytti
On (2013-04-08 23:29 +0200), Benny Amorsen wrote:

 UDP tests can be too generous on the network. A stream of perfectly
 spaced UDP packets will not show problems with microbursts. Almost all
 bulk transfer protocols are TCP, so it is important to test with TCP.

Microbursts will drop UDP as well; you'll experience this as packet loss
just the same, so you want to find the rate which has 0 packet loss. This
same number will indicate when TCP will start dropping (and reducing
window size).
I often see people not even look at the packet loss numbers in iperf's
UDP output.

-- 
  ++ytti


Re: [j-nsp] Speed

2013-04-09 Thread Benny Amorsen
Saku Ytti <s...@ytti.fi> writes:

 Microbursts will drop UDP as well; you'll experience this as packet loss
 just the same, so you want to find the rate which has 0 packet loss. This
 same number will indicate when TCP will start dropping (and reducing
 window size).

There will only be packet loss if you test while there is background
traffic on the link. If the only load is a perfect stream of UDP
packets, the buffers will not fill and no packets will be dropped.


/Benny


Re: [j-nsp] Speed

2013-04-09 Thread Saku Ytti
On (2013-04-09 15:03 +0200), Benny Amorsen wrote:

 There will only be packet loss if you test while there is background
 traffic on the link. If the only load is a perfect stream of UDP
 packets, the buffers will not fill and no packets will be dropped.

This is completely L4 agnostic though, TCP and UDP experience same rate of
packet loss due to congestion (short of specific QoS).

Obviously microbursts can (in both the TCP and UDP scenarios) happen without
any background traffic. Consider that you're connected to a 1GE port, testing
another host on a 100M port: even if you limit your rate to 100M, you still
cause the 100M port to congest, as the incoming rate is always 1GE for a
variable duration (depending on how you police the sending).
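
To put numbers on it: ten back-to-back 1500-byte frames arrive on the 1GE
side in 120 us but take 1.2 ms to drain at 100M; during those 120 us the
100M port can send out only 1,500 bytes, so roughly 13,500 bytes must sit
in the buffer even though the offered average rate never exceeds 100M.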



-- 
  ++ytti


Re: [j-nsp] Speed

2013-04-09 Thread Benny Amorsen
Saku Ytti <s...@ytti.fi> writes:

 Obviously microbursts can (in both the TCP and UDP scenarios) happen without
 any background traffic. Consider that you're connected to a 1GE port, testing
 another host on a 100M port: even if you limit your rate to 100M, you still
 cause the 100M port to congest, as the incoming rate is always 1GE for a
 variable duration (depending on how you police the sending).

Yes, you can in theory cause microbursting of UDP if you want. I am just
not sure which tool I would use to do that. Typical UDP tests like iperf
attempt to do perfect timing of packets so bursts are avoided, and they
seem to do a fairly good job of it.

In contrast, iperf TCP can get awfully bursty.


/Benny



Re: [j-nsp] Speed

2013-04-08 Thread Saku Ytti
On (2013-04-08 03:46 +0200), Johan Borch wrote:

 of a single session with an RTT of only 8ms? The performance is the same if
 I use 2 switches and the clients directly connected as if I use routers
 in between. Any idea what it could be?

bw * delay = window

so 

window / delay = bw

64k*8 / 0.008 = 64000kbps = 64Mbps

To achieve 40Mbps, you'd need

40M * 0.008 / 8 = 40kB window

Make sure with tshark what your actual window size is; don't trust iperf.
The best thing is to configure the OS TCP stack for window scaling and not
touch iperf's window settings; I don't know why, but they just seem to break
stuff.

Also, never measure the network with TCP; measure the network with UDP, and
measure the TCP stack of hosts with TCP.
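
If you want the OS stack itself to offer a big enough window, the usual
knob is the socket buffer. A minimal sketch (my illustration, not iperf
code; the helper name is made up):

#include <netinet/in.h>
#include <sys/socket.h>

/* Hypothetical helper: size the TCP socket buffers to the
 * bandwidth-delay product, window = bw * rtt / 8 bytes. */
int make_bdp_socket(long bw_bps, double rtt_sec)
{
    int window = (int)(bw_bps * rtt_sec / 8.0);
    int s = socket(AF_INET, SOCK_STREAM, 0);

    /* Set before connect()/listen() so window scaling can be negotiated. */
    setsockopt(s, SOL_SOCKET, SO_RCVBUF, &window, sizeof window);
    setsockopt(s, SOL_SOCKET, SO_SNDBUF, &window, sizeof window);
    return s;
}

/* 1Gbps at 8ms RTT: make_bdp_socket(1000000000L, 0.008) asks for ~1MB. */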

-- 
  ++ytti


Re: [j-nsp] Speed

2013-04-08 Thread Alex Arseniev
Use TCP Optimizer to increase WSCALE/RWIN on Windows hosts to achieve better
TCP performance:

http://www.speedguide.net/downloads.php
Thanks
Alex

- Original Message - 
From: Saku Ytti <s...@ytti.fi>

To: juniper-nsp@puck.nether.net
Sent: Monday, April 08, 2013 8:13 AM
Subject: Re: [j-nsp] Speed





Re: [j-nsp] Speed

2013-04-08 Thread Benny Amorsen
Saku Ytti <s...@ytti.fi> writes:

 Make sure with tshark what your actual window size is; don't trust iperf.
 The best thing is to configure the OS TCP stack for window scaling and not
 touch iperf's window settings; I don't know why, but they just seem to break
 stuff.

In my experience, you cannot trust iperf to not override the OS window
size. Explicit -w seems to be the only reliable solution.


/Benny



Re: [j-nsp] Speed

2013-04-08 Thread Saku Ytti
On (2013-04-08 13:44 +0200), Benny Amorsen wrote:

 In my experience, you cannot trust iperf to not override the OS window
 size. Explicit -w seems to be the only reliable solution.

I remember one test I had, not long ago, where any -w value fared worse
than no -w value.
I never tsharked it; I just presumed iperf was doing something stupid with a
static window size and the OS was doing something smart.

This highlights the fact that you should not test the network with TCP, always
with UDP. With TCP there are so many things that can go wrong which are not
network related; UDP is a much more reliable indication that the problem
actually may be in the network.


-- 
  ++ytti


Re: [j-nsp] Speed

2013-04-08 Thread Benny Amorsen
Saku Ytti <s...@ytti.fi> writes:

 This highlights the fact that you should not test the network with TCP, always
 with UDP. With TCP there are so many things that can go wrong which are not
 network related; UDP is a much more reliable indication that the problem
 actually may be in the network.

UDP tests can be too generous on the network. A stream of perfectly
spaced UDP packets will not show problems with microbursts. Almost all
bulk transfer protocols are TCP, so it is important to test with TCP.


/Benny



[j-nsp] Speed

2013-04-07 Thread Johan Borch
Hi

This issue is somewhat off topic :)

I have a 1Gbps wavelength from a supplier (1310), the RTT is about 8ms, and
whatever I do I can't get more than 25-40 Mbps from a single session
(iperf, 64K ws, 2 x Windows 8; same result with Linux clients, same with
HTTP and FTP tests). I can get up to 1Gbps using multiple sessions (10
sessions and a window size of 128K), but shouldn't I be able to get more out
of a single session with an RTT of only 8ms? The performance is the same if
I use 2 switches and the clients directly connected as if I use routers
in between. Any idea what it could be?

Regards
Johan


Re: [j-nsp] Speed/Duplex Issue

2010-03-24 Thread Paul Stewart
 Did you hard-code the speed/duplex setting on both the Juniper and Cisco
switches, or just the Junipers?

 We've been happy with auto-nego'ing all connections, including with
upstreams. Life has been much easier going that route. I can't remember the
last time anything good came out of hard-coding these settings, or when we
last did that, for that matter.

The Ciscos (customer equipment) were already hard-coded as per our
instructions at the time.  Coming from the Cisco world, we had a lot of
issues with auto-neg towards various makes/models of switch vendors.

Anyways, we're fixed now after understanding Juniper's approach to auto-neg
... thanks...;)
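
For anyone searching the archives later, the relevant EX knobs look like
this (a sketch; the interface name is just an example, and on EX you
generally need to disable autoneg explicitly rather than only force
speed/duplex when the far end is hard-coded):

set interfaces ge-0/0/0 ether-options no-auto-negotiation
set interfaces ge-0/0/0 ether-options speed 100m
set interfaces ge-0/0/0 ether-options link-mode full-duplex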

Paul





[j-nsp] Speed/Duplex Issue

2010-03-23 Thread Paul Stewart
Hi folks...

 

We just cut another couple of EX4200s into production overnight.  These
are the first deployments that don't have pure GigE ports - several ports are
100/full.

 

When I did the configuration I set the ether-options for 100/full ... most
of the ports are facing Cisco switches.  All the ports that were hard-coded
would not come up at all - the minute I removed the ether-options they came
up and appear to be OK.

 

Is this normal?  Also, I'm wondering how you verify what duplex the port is
running at?  Sorry for the basic question, but for the life of me I can't
find this in the output or the docs...;)

 

Paul

 

 

 



Re: [j-nsp] Speed/Duplex Issue

2010-03-23 Thread Mark Tinka
On Tuesday 23 March 2010 08:50:01 pm Paul Stewart wrote:

 When I did the configuration I set the ether-options for
  100/full ... most of the ports are facing Cisco
  switches.  All the ports that were hard-coded would not
  come up at all - the minute I removed the ether-options
  they came up and appear to be OK.

Did you hard-code the speed/duplex setting on both the Juniper 
and Cisco switches, or just the Junipers?

We've been happy with auto-nego'ing all connections, including 
with upstreams. Life has been much easier going that route. I can't 
remember the last time anything good came out of hard-coding these 
settings, or when we last did that, for that matter.

 Is this normal?  Also, I'm wondering how you verify what
  duplex the port is running at?  Sorry for the basic question,
  but for the life of me I can't find this in the output
  or the docs...;)

[edit]
t...@lab# run show interfaces ge-0/1/3 | match Duplex
  Link-level type: Ethernet, MTU: 9014, Speed: 1000mbps, Duplex: Full-Duplex,
  MAC-REWRITE Error: None, Loopback: Disabled, Source filtering: Disabled,
  Flow control: Enabled

[edit]
t...@lab#

The above is taken off an EX3200.

Cheers,

Mark.

