Hi Harris,

That looks pretty good to me : )
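
FWIW, a quick back-of-the-envelope check on the 100M run: each 1470-byte UDP
payload picks up about 28 bytes of UDP/IP headers plus 18 bytes of Ethernet
framing, so 100 Mbit/s of iperf payload is a little over 103 Mbit/s on the
wire, and your client actually pushed 101 Mbit/s of payload. Several percent
of loss at -b 100M is therefore pretty much unavoidable on a 100Mbps circuit
(exactly how much depends on where the 100Mbps is enforced and which headers
it counts). The near-zero loss at 70M and 80M suggests the path itself is
fine.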

Cheers,

Ben

On 04/10/2010, at 7:06 PM, Harris Hui wrote:

> Hi Ben,
> 
> Thanks for your suggestion.
> 
> I have run the iperf UDP test. Do you think these results are normal for a 
> 100Mbps link?
> 
> Host A (10.16.xx.58) <-----> EX 4200 Switch <------> J6350 (MTU 9018) <----- 
> Fiber circuit 100Mbps (from West to East coast ~80ms) ------> (MTU 9018) 
> J6350 <------ EX 4200 Switch ------> Host B (10.26.xx.60)
> 
> 
> [r...@xxxxxxx bin]# ./iperf -c 10.26.xx.60 -t 60 -u -b 100M
> ------------------------------------------------------------
> Client connecting to 10.26.xx.60, UDP port 5001
> Sending 1470 byte datagrams
> UDP buffer size: 126 KByte (default)
> ------------------------------------------------------------
> [ 3] local 10.16.xx.58 port 48543 connected with 10.26.xx.60 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0-60.0 sec 719 MBytes 101 Mbits/sec
> [ 3] Sent 512816 datagrams
> [ 3] Server Report:
> [ 3] 0.0-60.0 sec 656 MBytes 91.7 Mbits/sec 0.206 ms 45108/512815 (8.8%)
> [ 3] 0.0-60.0 sec 1 datagrams received out-of-order
> 
> [r...@xxxxxxx bin]# ./iperf -c 10.26.xx.60 -t 60 -u -b 70M
> ------------------------------------------------------------
> Client connecting to 10.26.xx.60, UDP port 5001
> Sending 1470 byte datagrams
> UDP buffer size: 126 KByte (default)
> ------------------------------------------------------------
> [ 3] local 10.16.xx.58 port 25968 connected with 10.26.xx.60 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0-60.0 sec 501 MBytes 70.0 Mbits/sec
> [ 3] Sent 357143 datagrams
> [ 3] Server Report:
> [ 3] 0.0-60.0 sec 501 MBytes 70.0 Mbits/sec 0.276 ms 0/357142 (0%)
> [ 3] 0.0-60.0 sec 1 datagrams received out-of-order
> 
> [r...@xxxxxxx bin]# ./iperf -c 10.26.xx.60 -t 60 -u -b 80M
> ------------------------------------------------------------
> Client connecting to 10.26.xx.60, UDP port 5001
> Sending 1470 byte datagrams
> UDP buffer size: 126 KByte (default)
> ------------------------------------------------------------
> [ 3] local 10.16.xx.58 port 31085 connected with 10.26.xx.60 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0-60.0 sec 572 MBytes 80.0 Mbits/sec
> [ 3] Sent 408164 datagrams
> [ 3] Server Report:
> [ 3] 0.0-60.0 sec 568 MBytes 79.4 Mbits/sec 0.221 ms 2961/408163 (0.73%)
> [ 3] 0.0-60.0 sec 1 datagrams received out-of-order
> 
> Thanks
> - Harris
> 
> From: Ben Dale <bd...@comlinx.com.au>
> To: Harris Hui/Hong Kong/i...@ibmhk
> Cc: juniper-nsp@puck.nether.net
> Date: 04/10/2010 07:56 AM
> Subject: Re: [j-nsp] J6350 Jumbo frame MTU and OSPF setting
> 
> 
> 
> Hi Harris,
> > However, increasing the MTU size on both J6350s may not give better TCP 
> > throughput, because the host NICs and switch ports are also using MTU 1500, 
> > right? Should I change the MTU size on the host NICs and the Juniper EX 
> > switches to 9018 as well, in order to prevent fragmentation where the path 
> > MTU is below 9018?
> There should be no drops from fragmentation if your end devices are at MTU 
> 1500 and the "core" network is at 9018. As for your throughput, that is a 
> little harder to calculate, but the figures you are quoting seem quite low 
> even with 80ms of latency.
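> 
> If you want to confirm that nothing along the path is fragmenting 1500-byte 
> packets, a quick check (assuming the hosts are Linux with iputils ping; 1472 
> = 1500 minus 28 bytes of IP/ICMP headers) would be something like:
> 
> [r...@xxxxxxx bin]# ping -M do -s 1472 -c 5 10.26.xx.60
> 
> If that comes back clean, the path is carrying full-size frames end to end 
> without fragmentation.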
> 
> Latency aside, a J6350 should easily saturate a 100Mbps pipe with 1500-byte 
> frames (in terms of PPS). I don't believe adjusting the MTU is going to make 
> much difference, but it is worth trying. I would be inclined to kick off a 
> UDP iperf test with 1500-byte frames first, to see what throughput you can 
> get out of the pipe, then start investigating TCP/MSS issues.
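> 
> On the TCP side, the 80ms round trip is probably the real limit: to fill 
> 100Mbps you need a window of roughly 100,000,000 / 8 x 0.08 = 1 MByte, 
> whereas a 64 KByte window caps a single flow at about 64 KB / 0.08 s = 6.5 
> Mbit/s. A quick way to test that theory (a sketch, assuming iperf 2 on both 
> ends and that the hosts will accept a 1 MByte socket buffer) is to force a 
> large window on both sides, server on Host B and client on Host A:
> 
> [r...@xxxxxxx bin]# ./iperf -s -w 1M
> [r...@xxxxxxx bin]# ./iperf -c 10.26.xx.60 -t 60 -w 1M
> 
> If the single-stream number jumps with the bigger window, it is a window/BDP 
> problem rather than an MSS or MTU problem.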
> 
> Cheers,
> 
> Ben
> 
> 

_______________________________________________
juniper-nsp mailing list juniper-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/juniper-nsp
