Dave,
With the above settings I'm seeing throughput in the area of 15-18 Mb
per second.

We also have FDR/UPSTREAM and use HiperSockets to access the zOS tape
library.  Our SAP systems run continuously except for a very small
window early Sunday morning (too small to perform all of our backups),
so we have to run our FDR/UPSTREAM backups with SAP up and running.

A not-so-funny thing happened when we tweaked settings to make
FDR/UPSTREAM run at maximum speed: FDR/UPSTREAM consumed so much memory
that our SAP application slowed way down.  We settled for lower
FDR/UPSTREAM performance in order to keep the SAP application
responsive during backups.

That was two or three years ago.  The FDR/Upstream folks may have done
something about this since then.  At the time, they just said "that's
how it works."

Ron




Rakoczy, Dave wrote:
Hello all,

We are a longtime zOS shop taking the plunge into the zVM / zLinux
(SUSE 10 SP2) world.  We had never implemented HiperSockets in our zOS
environment, so our experience with this technology is rather limited.
HiperSockets were implemented specifically to support our desire to
take advantage of FDR/Upstream's ability to interface with our zOS tape
management tools.

During the last week or so we've been testing the different
FDR/Upstream backup and restore processes (including the RMAN option).
The backup and restore processes work as advertised, but we are looking
for better throughput from these processes.

Currently the HiperSocket channel is defined in the I/O Gen with a
Maximum Frame Size of 16 KB, and the MTU size on both the zOS and
zLinux interfaces is set to 8192 (see below).  The backups go to our
VTS, which is FICON attached to the mainframe.

DevName: IUTIQDF4          DevType: MPCIPA
  DevStatus: Ready         CfgRouter: Non  ActRouter: Non
  LnkName: HIPR2             LnkType: IPAQIDIO    LnkStatus: Ready
    NetNum: n/a  QueSize: n/a
    IpBroadcastCapability: No
    ArpOffload: Yes                ArpOffloadInfo: Yes
    ActMtu: 8192
    ReadStorage: GLOBAL (2048K)
    SecClass: 255                  MonSysplex: No
    IQDMultiWrite: Disabled
  BSD Routing Parameters:
    MTU Size: 8192              Metric: 00
    DestAddr: 0.0.0.0           SubnetMask: 255.255.255.0
  Multicast Specific:



hsi0      Link encap:Ethernet  HWaddr 00:00:00:00:00:00
          inet addr:192.0.1.5  Bcast:192.0.1.255  Mask:255.255.255.0
          inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
          UP BROADCAST RUNNING NOARP MULTICAST  MTU:8192  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:168 (168.0 b)


With the above settings I'm seeing throughput in the area of 15-18 Mb
per second.
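
To separate the raw HiperSockets link speed from FDR/Upstream itself, I
also plan to run a rough FTP check over the HiperSockets addresses,
along these lines (the 192.0.1.1 zOS address and the dataset name are
just placeholders I made up for this note):

# on the zLinux guest: build a 1 GB test file, then push it across the
# HiperSockets link and note the rate the FTP client reports
dd if=/dev/zero of=/tmp/hsi-test.bin bs=1M count=1024
ftp 192.0.1.1                    # placeholder zOS HiperSockets address
ftp> binary
ftp> put /tmp/hsi-test.bin 'TEST.THRUPUT.BIN'
ftp> quit
# (a put this large may need SITE space parameters on the zOS side)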


I've spent the past few days scouring these archives looking for what
others have done.  I found a few threads that addressed HiperSocket
throughput between zOS and zLinux, but they were using FTP as the
benchmark utility.  I have a window this weekend where I can put in an
I/O Gen change for the Maximum Frame Size on the HiperSocket channel
definition (hopefully I'm on the right track here).
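
My working assumption for the Gen change is that the largest usable MTU
is the Maximum Frame Size minus 8 KB (16 KB frame -> 8192 MTU, 24 KB ->
16384, 40 KB -> 32768, 64 KB -> 57344), so raising the MTU alone
without the frame size change wouldn't buy anything.  If the frame size
goes to 64 KB, the zLinux side would look something like the below,
with the zOS side MTU raised to match (hsi0 is the interface shown
above; 57344 assumes the 64 KB frame):

# raise the HiperSockets interface MTU after the frame size change
ifconfig hsi0 mtu 57344
# or, with iproute2
ip link set dev hsi0 mtu 57344
# confirm the new MTU took effect
ifconfig hsi0 | grep MTU
# (to make it stick across reboots, I believe SLES takes an MTU=
#  setting in /etc/sysconfig/network/ifcfg-hsi0)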

So, the question I'm posing is: if you are using HiperSockets to
support FDR/Upstream, where, in your experience, did you find the
throughput sweet spot?

Thanks in advance for any input you may relay.



David Rakoczy
Operating Systems Programmer
Thermo Fisher Scientific

"He who laughs last probably made a backup."



----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
