Re: MTU on Hipersocket
Bad guess: the MTU used by VSE on this Hipersocket link is not 1500 (the
default found on the www); a query shows 8192, same as in Linux. Digging
further.

2010/3/29 Kris Buelens:
> OK, thanks; I know what to look at tomorrow.
>
> 2010/3/29 David Boyes:
>
>> We just found out that the defined MTUs differ, and I always heard that
>> MTU sizes should be identical:
>>
>> Unless path discovery is active on both ends of the link, the sizes
>> should be identical.
>>
>> Might this be an explanation? How come it works fine (most of the time)
>> with apparently different MTU values?
>>
>> Nothing sent a packet big enough to cause one end to need fragmentation,
>> so you got lucky. It broke when you got a very large packet that needed
>> to be fragmented, Path MTU discovery was off, and the sending side kept
>> trying to send and got rejected. Since you're running DRDA, that would
>> be an easy scenario to do, since the results of queries would be
>> different sizes.
>
> --
> Kris Buelens,
> IBM Belgium, VM customer support

--
Kris Buelens,
IBM Belgium, VM customer support
Re: MTU on Hipersocket
OK, thanks; I know what to look at tomorrow.

2010/3/29 David Boyes:

> We just found out that the defined MTUs differ, and I always heard that
> MTU sizes should be identical:
>
> Unless path discovery is active on both ends of the link, the sizes
> should be identical.
>
> Might this be an explanation? How come it works fine (most of the time)
> with apparently different MTU values?
>
> Nothing sent a packet big enough to cause one end to need fragmentation,
> so you got lucky. It broke when you got a very large packet that needed
> to be fragmented, Path MTU discovery was off, and the sending side kept
> trying to send and got rejected. Since you're running DRDA, that would be
> an easy scenario to do, since the results of queries would be different
> sizes.

--
Kris Buelens,
IBM Belgium, VM customer support
Re: MTU on Hipersocket
> We just found out that the defined MTUs differ, and I always heard that
> MTU sizes should be identical:

Unless path discovery is active on both ends of the link, the sizes should
be identical.

> Might this be an explanation? How come it works fine (most of the time)
> with apparently different MTU values?

Nothing sent a packet big enough to cause one end to need fragmentation, so
you got lucky. It broke when you got a very large packet that needed to be
fragmented, Path MTU discovery was off, and the sending side kept trying to
send and got rejected. Since you're running DRDA, that would be an easy
scenario to do, since the results of queries would be different sizes.
Re: MTU on Hipersocket
>>> On 3/29/2010 at 12:28 PM, Kris Buelens wrote:
> Might this be an explanation? How come it works fine (most of the time)
> with apparently different MTU values?

Absolutely, it could be the problem. It will only fail if a packet
exceeding the MTU gets sent, so if your traffic is mainly packets < 1492,
then you won't see it right away. Also, the problem will only occur if it
is the Linux system that sends the packet, since it has the larger MTU.

Mark Post
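The directional nature of the failure can be sketched in a small Python
model (purely illustrative, not actual TCP/IP stack code; the MTU values
are the ones from this thread):

```python
def can_deliver(packet_size, sender_mtu, path_mtu, pmtud_enabled):
    """Toy model of the failure mode discussed above: with Path MTU
    discovery off, a packet larger than the smallest MTU on the path
    is rejected instead of being resized."""
    if packet_size <= min(sender_mtu, path_mtu):
        return True   # fits on every hop, no fragmentation needed
    if pmtud_enabled:
        return True   # sender learns the smaller MTU and resends smaller packets
    return False      # oversized packet is dropped; the connection hangs

# VSE -> Linux: packets never exceed VSE's own 1492-byte MTU, so they fit
print(can_deliver(1400, sender_mtu=1492, path_mtu=1492, pmtud_enabled=False))  # True

# Linux -> VSE: a large DRDA query result exceeds the 1492-byte side
print(can_deliver(8000, sender_mtu=8192, path_mtu=1492, pmtud_enabled=False))  # False
```

This matches the observation that only the Linux side, having the larger
MTU, can originate packets that break the link, which is why the hang is
intermittent and depends on query result sizes.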
MTU on Hipersocket
Not really VM, but z/VSE and Linux.

A customer I inherited encounters intermittent hangs of their VSE DRDA
connection to UDB on Linux on System z. When they restart TCP/IP for VSE,
all goes well again, for weeks or (lately) for a couple of days. The VSE
system runs under VM on standard engines, and Linux runs under VM on an
IFL.

We just found out that the defined MTUs differ, and I always heard that MTU
sizes should be identical:

VSE:
DEFINE LINK,ID=OSAFNET,TYPE=OSAX,DEV=D00,MTU=1492,DATAPATH=D02,-
            PORTNAME=VSEPROD

Linux:
ifcfg-hsi0 reports an MTU of 8192

Might this be an explanation? How come it works fine (most of the time)
with apparently different MTU values?

--
Kris Buelens,
IBM Belgium, VM customer support