Hello all,
We are a longtime zOS shop taking the plunge into the zVM / zLinux
(SUSE 10SP2) world. We had never implemented HiperSockets in our zOS
environment so our experience with this technology is rather limited.
HiperSockets were implemented specifically to support our desire to take
On Jan 15, 2009, at 8:40 AM, Rakoczy, Dave wrote:
I've spent the past few days scouring these archives looking for what
others have done. I found a few threads that addressed HiperSocket
throughput speeds between zOS and zLinux, but were using FTP as the
benchmark utility. I have a window
For bulk data transfer, make the MTU size as large as possible, and make
your packet sizes at least 40 bytes smaller than the maximum MTU to avoid
fragmentation of data packets. That allows space for the IP and TCP headers
on each packet.
Note that this will affect interactive response, so it's probably
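As a quick illustration of the arithmetic above (the 40 bytes being a 20-byte IP header plus a 20-byte TCP header), a shell sketch; the 57344 figure assumes the 64K-max-frame HiperSockets case discussed later in this thread:

```shell
# Largest safe application write size for a given MTU: leave 40 bytes
# for the 20-byte IP header plus the 20-byte TCP header.
MTU=57344                 # 56K, assumed here from a 64K CHPID max frame size
MSS=$((MTU - 40))
echo "$MSS"               # prints 57304
```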
On 1/15/2009 at 10:35 AM, Rakoczy, Dave dave.rako...@thermofisher.com
wrote:
-snip-
Sorry for all the questions... But I've got to learn this stuff
somewhere.
No need to apologize. The reason this mailing list exists in the first place
is to share knowledge.
Mark Post
From: Linux on 390 Port [LINUX-390@VM.MARIST.EDU] On Behalf Of
Adam Thornton
Sent: Thursday, January 15, 2009 9:56 AM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: HiperSocket Performance
Rakoczy, Dave wrote:
zLinux assigns the MTU size according to the IQD CHPID definition.
For the sake of discussion, let's say I set the CHPID to a Max Frame Size of
64K; that would give me an MTU size of 56K according to the documentation.
Where can I control the size of the packets I'll send across the
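The frame-size-to-MTU relationship Dave describes can be tabulated quickly. A sketch, assuming the documented IQD pairs (16K/24K/40K/64K frames giving 8K/16K/32K/56K MTUs), which work out to MTU = max frame size minus 8K:

```shell
# IQD CHPID maximum frame size (bytes) -> resulting HiperSockets MTU.
# Assumes the documented rule: MTU = max frame size - 8K.
for mfs in 16384 24576 40960 65536; do
  echo "$mfs -> $((mfs - 8192))"
done
```

The last line is the 64K case from above: a 57344-byte (56K) MTU.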
Dave,
With the above settings I'm seeing throughput in the area of 15-18 Mb
per second.
We also have FDR/UPSTREAM and use hipersockets to access the zOS tape
library. Our SAP systems run continuously except for a very small window
early Sunday morning. (Too small to perform all of our
Subject: Re: Hipersocket Performance Problem on SLES 9
01/19/2007 04:55 AM
Please respond to: Linux on 390 Port LINUX-390@VM.MARIST.EDU
(from 90-100 MB/sec w/ MTU=16376 down to 10-15 MB/sec for MTU=32084).
At MTU=32085, performance drops off a cliff, down to 400 KB/sec.
So, one problem appears to be related to FTP PUT performance with large MTU
sizes.
In the process of investigating this alleged hipersocket performance
problem (as reported by our Linux support person up four chains of
management), I discovered an interesting issue with scp. It shows very
limited sensitivity to interface type, MTU size, or direction of
movement
Just to bring everyone up to speed (it's been a while): I've done more
testing and so far have seen this only between two SLES9 systems. Things
run fine up to MTU=32084; at MTU=32085 and above, throughput drops to
~400 KB/sec. The problem has been reported to SuSE, but I haven't heard
back from them.
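To put that cliff in perspective, here is how long the 256 MB test file takes at the rates reported in this thread (integer seconds; pure shell arithmetic, nothing measured here):

```shell
SIZE_KB=$((256 * 1024))           # the 256 MB test file, in KB
echo $((SIZE_KB / (90 * 1024)))   # seconds at ~90 MB/sec (MTU=16376): prints 2
echo $((SIZE_KB / 400))           # seconds at ~400 KB/sec (past the cliff): prints 655
```

Roughly three seconds versus roughly eleven minutes for the same file.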
Mark, I saw something similar when we first went to SLES-9 (64-bit). We
had MTU sizes being negotiated down to an idiotic packet size of 1492.
Whether or not this has to do with how our OSA is configured, I never did
Subject: Re: Hipersocket Performance Problem on SLES 9
/2006 02:28 PM
Please respond to: Linux on 390 Port LINUX-390@VM.MARIST.EDU
To: LINUX-390@VM.MARIST.EDU
Novell can't even find the ticket # I gave you that we filed this under?
Tom Shilson [EMAIL PROTECTED]
Re: Hipersocket Performance Problem on SLES 9
Please respond to: Linux on 390 Port LINUX-390@VM.MARIST.EDU
I don't know what they are doing. I opened the incident on-line
yesterday. They came right back and requested more info. I
Greetings all,
I've been using real hipersockets for a couple years now. Recently a
significant performance problem with SLES 9 and large MTU sizes was brought
to my attention. I've been using MTU=32760. I set up a test to illustrate
the problem. I built a 256 MB file on one zLinux guest (running
SLES8A -- ZOS1 = ~56KB/sec!!
If anyone is interested, I was able to increase this to around 55MB/sec by
changing SLES8A's MTU to 20480. Anything higher and it drops back down to
K/sec. I haven't had a chance to change the ZOS side yet.
Leland
On Sun, 2003-10-19 at 20:18, Lucius, Leland wrote:
SLES8A -- ZOS1 = ~56KB/sec!!
On Sunday, 10/19/2003 at 10:01 EST, Adam Thornton
[EMAIL PROTECTED] wrote:
How much does a large MTU actually help, even when everyone supports it?
I usually leave mine at either 1492 or 1500, regardless of the allowable
interface maximum, because there's often something in the path that
On Sun, 19 Oct 2003, Adam Thornton wrote:
How much does a large MTU actually help, even when everyone supports it?
I think it has more to do with reducing processing overhead in your host
systems than wringing the last bit out of your network. Increasing the
MTU from 1500 to 2 (say) will
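A back-of-the-envelope way to see the processing-overhead argument: count the packets needed to move the 256 MB test file at two MTUs, assuming a 40-byte IP+TCP header per packet (numbers here are illustrative arithmetic, not measurements):

```shell
BYTES=$((256 * 1024 * 1024))
echo $((BYTES / (1500 - 40)))     # packets at MTU 1500
echo $((BYTES / (32768 - 40)))    # packets at MTU 32768
```

Roughly twenty times fewer packets at the large MTU, which means that many fewer interrupts and trips through the TCP/IP stack on both hosts.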
Kris may have hit on the reason for this one: the buffer size
for the particular device driver (which is completely
different from the MTU).
Yepper, I'm guessing he's right. I just can't find it.
I've done some more testing and I'm getting so many different results
without a noticeable
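For what it's worth, on the qeth driver the per-device inbound buffering is exposed through sysfs. A sketch, assuming a hypothetical device at 0.0.a000 and root access (attribute names and limits may differ by driver level, so treat this as a starting point, not gospel):

```shell
# Inspect the qeth inbound buffer count for a device (separate from the MTU).
cat /sys/bus/ccwgroup/devices/0.0.a000/buffer_count

# The device must be offline to change it; 128 is the usual upper limit.
echo 0   > /sys/bus/ccwgroup/devices/0.0.a000/online
echo 128 > /sys/bus/ccwgroup/devices/0.0.a000/buffer_count
echo 1   > /sys/bus/ccwgroup/devices/0.0.a000/online
```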
I'm having an interesting problem with hipersockets that I just can't seem
to figure out. Here's the lay of the land:
ZOS1 1.4 in LPAR 1
ZOS2 1.4 in LPAR 2
ZVM 4.3 in LPAR 3
SLES8A guest under ZVM1
SLES8B guest under ZVM1
All systems are on a 10.2.32.x/25 network. All are set to use an MTU
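When every box claims the same MTU, a quick sanity check is ping with the don't-fragment bit set, which fails loudly if anything in the path can't pass the full frame. A sketch, assuming an MTU of 32760 and a peer address from the layout above (the ICMP payload is the MTU minus 28 bytes of IP+ICMP header):

```shell
# 32760-byte MTU => largest unfragmented ICMP payload is 32760 - 28 = 32732.
ping -M do -s 32732 -c 3 10.2.32.1
```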
Also, one thing I will be trying shortly is z/VM 4.4 and I
was wondering if the new QIOASSIST is available when running
on a z/800? I suspect not.
Welp, no luck on this one. z/VM 4.4 tells me I'm out of luck with:
HCP2162I QIOAssist is not available
:-(
Leland