On 15 June 2013 17:58, Ron Foster <ron.fos...@baldor.abb.com> wrote:

> Rob,
>
> The traffic is between our SAP application servers running under z/VM.
>

I'm surprised the traffic would be that impressive. Normally the bandwidth
requirements I see with SAP are for moving transport data via NFS. For the
real application work, wouldn't processing in SAP and database latency on
the z/OS side be the limiting factor?


> During the latest conference call, the TCP/IP folks determined that the
> SAP traffic in question generally has packets smaller than 8K, so changing
> the frame size would not help.
>

It would be good to know whether that is by design or by observation. When
the maximum frame size is 16K, that gives you an MTU of 8K, which in itself
discourages packets larger than 8K.
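
For quick reference, the arithmetic as I understand it works out like this
(a sketch only; the 8K that HiperSockets reserves out of the frame and the
40-byte IPv4+TCP header are my assumptions, so check them against the
documentation for your box):

  # Rough HiperSockets sizing arithmetic (sketch; assumes an 8K frame
  # overhead and a 40-byte IPv4+TCP header with no options).
  def hipersockets_sizes(max_frame_kb):
      mtu = max_frame_kb * 1024 - 8 * 1024   # assumed 8K reserved per frame
      mss = mtu - 40                          # 20 bytes IPv4 + 20 bytes TCP
      return mtu, mss

  for mfs_kb in (16, 24, 40, 64):
      mtu, mss = hipersockets_sizes(mfs_kb)
      print("frame size %2dK -> MTU %5d, MSS %5d" % (mfs_kb, mtu, mss))

With a 16K frame the MSS ends up just below 8K, so packets under 8K may
simply be all the path allows rather than something SAP chooses to send.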

The scenario I referred to was one where the customer simply increased the
frame size to 64K and then found very slow FTP transfers in one direction.
Recent qeth drivers recognize the larger frame size and adjust the MTU
accordingly. However, applications that do not set their own socket buffer
size are limited by the tcp_wmem defaults in the amount of data they can
have in transit. An unhappy combination of the default tcp_wmem and a large
MTU leaves less-than-full packets in the middle of the conversation, which
triggers Nagle's algorithm. On Linux those settings live in
/proc/sys/net/ipv4; z/OS has a TCPSENDBFRSIZE parameter whose default is
not a good match for a large HiperSockets MSS.
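
If you want to eyeball that combination on the Linux side, something along
these lines shows the default send buffer next to the interface MTU (a
sketch; "hsi0" is just a placeholder for your HiperSockets interface, and
the paths are the usual /proc and /sys locations):

  # Sketch: compare the tcp_wmem default send buffer with the interface MTU.
  # "hsi0" is only a placeholder interface name; substitute your own.
  from pathlib import Path

  wmem_min, wmem_default, wmem_max = (
      int(v) for v in Path("/proc/sys/net/ipv4/tcp_wmem").read_text().split())
  mtu = int(Path("/sys/class/net/hsi0/mtu").read_text())

  print("tcp_wmem default: %d bytes (max %d)" % (wmem_default, wmem_max))
  print("MTU on hsi0:      %d bytes" % mtu)
  print("default send buffer covers ~%d full-size packets" % (wmem_default // mtu))

An application that must send partial segments back to back can also opt out
of Nagle's algorithm itself by setting TCP_NODELAY on its socket, but raising
the send buffer (or TCPSENDBFRSIZE on the z/OS end) addresses the cause
rather than the symptom.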

But this is all about throughput and latency. I have a hard time believing
that storage creep in z/OS would be fixed by using a different MSS...

Rob
