Dave,
With the above settings I'm seeing throughput in the area of 15-18 Mb
per second. We also have FDR/UPSTREAM and use HiperSockets to access the
zOS tape library. Our SAP systems run continuously except for a very
small window early Sunday morning. (Too small to perform all of our
backups.)
On Jan 15, 2009, at 9:35 AM, Rakoczy, Dave wrote:
> zLinux assigns the MTU size according to the IQD CHPID definition.
>
> For the sake of discussion, let's say I set the CHPID to a Max Frame
> Size of 64K; that would give me an MTU size of 56K according to the doc.
>
> Where can I control the size of the packets I'll send across the
> HiperSocket?
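For reference, the Max Frame Size to MTU relationship follows a fixed
pattern. A minimal Python sketch (the 64K/56K pair is from the note
above; the other rows are the frame-size steps IBM documents for IQD
CHPIDs, each yielding an MTU 8K below the frame size):

KIB = 1024

# IQD CHPID Maximum Frame Size -> largest MTU zLinux derives from it.
# The 64K -> 56K pair is quoted above; the rest follow the same
# frame-size-minus-8K pattern.
frame_to_mtu = {
    16 * KIB: 8 * KIB,
    24 * KIB: 16 * KIB,
    40 * KIB: 32 * KIB,
    64 * KIB: 56 * KIB,
}

for frame, mtu in sorted(frame_to_mtu.items()):
    print(f"max frame {frame // KIB:2}K -> MTU {mtu // KIB:2}K ({mtu} bytes)")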
From: Adam Thornton
Sent: Thursday, January 15, 2009 9:56 AM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: HiperSocket Performance
On Jan 15, 2009, at 8:40 AM, Rakoczy, Dave wrote:
>
> I've spent the past few days scouring these archives looking for what
> others have done.
>>> On 1/15/2009 at 10:35 AM, "Rakoczy, Dave" wrote:
-snip-
> Sorry for all the questions... But I've got to learn this stuff
> somewhere.
No need to apologize. The reason this mailing list exists in the first place
is to share knowledge.
Mark Post
--
Sent: Thursday, January 15, 2009 10:07 AM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: HiperSocket Performance
For bulk data transfer, make the MTU size as large as possible, and make
your packet sizes at least 40 bytes smaller than the maximum MTU to avoid
fragmentation of data packets. That allows space for the TCP header on each
packet.
Note that this will affect interactive response, so it's probably not
what you want on a link carrying interactive traffic.
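To make the "40 bytes smaller" rule concrete, here's a minimal Python
sketch: it computes the usable payload per packet (20-byte IPv4 header
plus 20-byte TCP header, ignoring options) and shows, purely as an
illustration, capping a socket's segment size with TCP_MAXSEG. The 56K
MTU value is an assumption:

import socket

def max_tcp_payload(mtu):
    # One packet carries the MTU minus 20 bytes of IPv4 header and
    # 20 bytes of TCP header (more if TCP options are in use).
    return mtu - 40

mtu = 57344                           # assumed: 56K HiperSockets MTU
print(max_tcp_payload(mtu))           # 57304 bytes of payload per packet

# Illustration only: cap the segment size before connecting.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, max_tcp_payload(mtu))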
On Jan 15, 2009, at 8:40 AM, Rakoczy, Dave wrote:
I've spent the past few days scouring these archives looking for what
others have done. I found a few threads that addressed HiperSocket
throughput speeds between zOS and zLinux, but they were using FTP as the
benchmark utility. I have a window this ...
Hello all,
We are a longtime zOS shop taking the plunge into the zVM / zLinux
(SUSE 10 SP2) world. We had never implemented HiperSockets in our zOS
environment, so our experience with this technology is rather limited.
HiperSockets were implemented specifically to support our desire to take
advantage of ...
Subject: Re: Hipersocket Performance Problem on SLES 9
Sent: 01/19/2007 04:55 AM
... the transfer rate (from 90-100 MB/sec w/ MTU=16376 down to 10-15
MB/sec for MTU=32084). At MTU=32085, performance drops off a cliff, down
to 400 KB/sec.
So, one problem appears to be related to FTP PUT performance with large
MTU sizes.
In the process of investigating this alleged "hipersocket performance
problem" (as reported by our Linux support person up four chains of
management), I discovered an interesting issue with scp. It shows very
limited sensitivity to interface type, MTU size, or direction of
transfer.
Just to bring everyone up to speed (it's been a while): I've done more
testing and so far have seen this only between two SLES 9 systems. Things
run fine up to MTU=32084; at MTU=32085 and above, throughput drops to
~400 KB/sec. The problem has been reported to SuSE, but we haven't heard
back from them.
What ...
Same here. Very disappointed. Intel and z.
tom
- - - - - - - - - - - -
Toto, I have a feeling we're not in the mainframe world any more.
_/) Tom Shilson
~Unix Team / IT Server Services
Aloha Tel: 651-733-7591 tshilson at mmm dot com
Subject: Re: Hipersocket Performance Problem on SLES 9
I don't know what they are doing. I opened the incident on-line
yesterday. They came right back and requested more info. I sent them
that. Haven't heard anything since.
Subject: Re: Hipersocket Performance Problem on SLES 9
Novell can't even find the ticket # I gave you that we filed this under?
Sent: 12/14/2006 08:33 AM
Subject: Hipersocket Performance Problem on SLES 9
Greetings all,
I've been using real hipersockets for a couple years now. Recently a
significant performance problem with SLES 9 and large MTU sizes was brought
to my attention. I've been using MTU=32760. I set up a test to illustrate
the problem. I built a 256 MB file on one zLinux guest (running SLES 9) ...
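The receiving side of a test like that is easy to script without FTP in
the picture. A minimal Python sketch (the port is an assumption; the
sender just connects and streams the same number of bytes):

import socket, time

SIZE = 256 * 1024 * 1024        # 256 MB, matching the test file above
PORT = 5001                     # assumed

# Accept one connection over the HiperSockets interface, read 256 MB,
# and report the transfer rate.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("", PORT))
srv.listen(1)
conn, peer = srv.accept()

received = 0
start = time.monotonic()
while received < SIZE:
    chunk = conn.recv(1024 * 1024)
    if not chunk:
        break
    received += len(chunk)
elapsed = time.monotonic() - start

print("%.0f MB in %.1fs = %.1f MB/sec"
      % (received / 2**20, elapsed, received / elapsed / 2**20))
conn.close()
srv.close()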
>
> Kris may have hit on the reason for this one: the buffer size
> for the particular device driver (which is completely
> different from the MTU).
>
Yepper, I'm guessing he's right. I just can't find it.
I've done some more testing and I'm getting so many different results
without a noticeable pattern ...
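If the qeth driver's inbound buffer count is the suspect, it can at
least be read out of sysfs to rule it in or out. A minimal Python
sketch; the device address (and, on older distros, the exact sysfs
path) are assumptions:

from pathlib import Path

DEV = "0.0.f200"   # assumed ccwgroup address of the HiperSockets device

# buffer_count is a qeth driver setting (number of inbound buffers),
# entirely separate from the interface MTU.
attr = Path("/sys/bus/ccwgroup/devices") / DEV / "buffer_count"
print(DEV, "buffer_count =", attr.read_text().strip())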
On Sun, 19 Oct 2003, Adam Thornton wrote:
> How much does a large MTU actually help, even when everyone supports it?
I think it has more to do with reducing processing overhead in your host
systems than wringing the last bit out of your network. Increasing the
MTU from 1500 to something much larger (say, 32K) will yield far fewer
packets for the hosts to process.
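The overhead point is easy to put numbers on: the packet count, and with
it the per-packet processing both hosts do, scales inversely with the
MTU. A quick sketch:

# Packets needed to move 256 MB at various MTUs, assuming 40 bytes of
# each packet go to IPv4+TCP headers. Fewer packets means less
# per-packet work in both hosts, which is the point above.
SIZE = 256 * 1024 * 1024

for mtu in (1500, 8192, 32760, 57344):
    packets = -(-SIZE // (mtu - 40))  # ceiling division
    print("MTU %5d: %8d packets" % (mtu, packets))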
Subject: Re: Hipersocket performance
On Sunday, 10/19/2003 at 10:01 EST, Adam Thornton
<[EMAIL PROTECTED]> wrote:
> How much does a large MTU actually help, even when everyone supports it?
>
> I usually leave mine at either 1492 or 1500, regardless of the allowable
> interface maximum, because there's often something in the path that ...
I may be off majorly (it's late and I've been ill all day), but since
hipersockets are implemented as an emulation, they can run at a speed
that is close to memory access speed, which is generally much faster than
your typical network. In that case, the overhead from the IP header is
not at all significant.
On Sun, 2003-10-19 at 20:18, Lucius, Leland wrote:
> >
> > SLES8A --> ZOS1 = ~56KB/sec!!
> >
> If anyone is interested, I was able to increase this to around 55MB/sec
> by changing SLES8A's MTU to 20480. Anything higher and it drops back
> down to "K/sec". I haven't had a chance to change the ZOS side yet.
> Leland
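For anyone who wants to script the kind of MTU change Leland describes,
the standard Linux ioctls work on a HiperSockets interface like any
other. A minimal Python sketch (the hsi0 interface name is an
assumption, the 20480 value is from the post above, and setting the MTU
needs root):

import fcntl, socket, struct

SIOCGIFMTU = 0x8921   # standard Linux ioctl numbers
SIOCSIFMTU = 0x8922

def get_mtu(ifname):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    res = fcntl.ioctl(s, SIOCGIFMTU, struct.pack("16si", ifname.encode(), 0))
    s.close()
    return struct.unpack("16si", res)[1]

def set_mtu(ifname, mtu):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    fcntl.ioctl(s, SIOCSIFMTU, struct.pack("16si", ifname.encode(), mtu))
    s.close()

print(get_mtu("hsi0"))   # assumed HiperSockets interface name
set_mtu("hsi0", 20480)   # the value that worked in Leland's test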
>
> Also, one thing I will be trying shortly is z/VM 4.4 and I
> was wondering if the new QIOASSIST is available when running
> on a z/800? I suspect not.
>
Welp, no luck on this one. z/VM 4.4 tells me I'm out of luck with:
HCP2162I QIOAssist is not available
:-(
Leland
I'm having an interesting problem with hipersockets that I just can't seem
to figure out. Here's the lay of the land:
ZOS1 1.4 in LPAR 1
ZOS2 1.4 in LPAR 2
ZVM 4.3 in LPAR 3
SLES8A guest under ZVM1
SLES8B guest under ZVM1
All systems are on a 10.2.32.x/25 network. All are set to use an MTU of ...