Re: Channel Contention

2010-02-22 Thread Brian Nielsen
Thanks, I wasn't aware of that.  It makes the channel activity displays 
much less interesting.

Brian Nielsen


On Thu, 18 Feb 2010 20:12:57 -0500, Raymond Higgs rayhi...@us.ibm.com 
wrote:

I wouldn't rely on those System Activity Display numbers with big IOs to
determine channel capacity.  That 3-4% is how busy the powerpc processor
is in the channel.  With big IOs, it's just idling, waiting on the DMA
engines.  If you add more IOs to the channel, you'll max out some piece of
hardware way before the SAD panel shows 50%.  With one of my 8 gig FCP
channels, I see 25% at max read bandwidth.  I think ficon would yield
similar numbers.

The processor in 2 and 4 gig channels is the same.  The processor in 8 gig
channels is quite a bit faster.

Ray Higgs
System z FCP Development
Bld. 706, B24
2455 South Road
Poughkeepsie, NY 12601
(845) 435-8666,  T/L 295-8666
rayhi...@us.ibm.com


Re: Channel Contention

2010-02-18 Thread Brian Nielsen
Being that this is a one-time effort, I wouldn't worry too much about 
where the bottleneck is, because you indicate you can't do anything about it 
anyway.  Better is simply to estimate the amount of time it will take to 
some gross level of precision.  If you were going to be doing it regularly 
then it becomes a more important question and could drive configuration 
changes.

I know tape drives are not involved in your process, but I do know that 
DDR is very effective at driving the channel to them.  We have a bank of 4 
3590's shared between VM and z/OS on 2 ESCON channels.  When I had 2 DDR's 
writing to 2 of the tape drives it effectively saturated the channels 
from the VM LPAR, and the z/OS jobs trying to use the other 2 tape drives 
suffered horribly.  The reports from Velocity's ESAMAP made it easy to 
diagnose this cross-LPAR interference.  (I ended up moving my DDR's to a 
different time slot.)  I mention this to say that it will be good that 
there will not be other production workload going on to the DASD when you 
do this.

BTW, the 3-4% channel utilization I mentioned in my other post is on 2G 
FICON channels.  Obviously, 4G or 8G channels would make a big difference.

Brian Nielsen

On Wed, 17 Feb 2010 16:45:23 -0800, Schuh, Richard rsc...@visa.com wrote:

[...]


Re: Channel Contention

2010-02-18 Thread Raymond Higgs
Brian Nielsen bniel...@sco.idaho.gov wrote on 02/18/2010 10:48 AM:

 [...]

I wouldn't rely on those System Activity Display numbers with big IOs to 
determine channel capacity.  That 3-4% is how busy the powerpc processor 
is in the channel.  With big IOs, it's just idling, waiting on the DMA 
engines.  If you add more IOs to the channel, you'll max out some piece of 
hardware way before the SAD panel shows 50%.  With one of my 8 gig FCP 
channels, I see 25% at max read bandwidth.  I think ficon would yield 
similar numbers.

The processor in 2 and 4 gig channels is the same.  The processor in 8 gig 
channels is quite a bit faster.

Ray Higgs
System z FCP Development
Bld. 706, B24
2455 South Road
Poughkeepsie, NY 12601
(845) 435-8666,  T/L 295-8666
rayhi...@us.ibm.com

Channel Contention

2010-02-17 Thread Schuh, Richard
Currently, we have 3 LPARs: 2 support Linux and 1 is for TPF testing. The current 
disk configuration is:

*   a boatload of big 3390s (27-32GB) are on the Linux LPARs. These are 
connected using 4 Ficon channels. The DDRs will be done from one of the Linux 
LPARs. There will be two concurrent DDRs for this. These will be full disk 
copies. 4 channels to 210 disks.
*   Another boatload of 3330-03s where the TPF test system base disks 
reside. These are connected to the third LPAR via 8 ESCON channels to each 
array. Since the disks are not the same size, the minidisks will be copied, not 
the physical disks. The plan is to have 16 concurrent copies. There are 16 
channels serving 437 disks, roughly 15,000 minidisks.
*   The target disks are connected via 4 Ficon channels that are EMIFd to 
all LPARs. There is separation of arrays; the TPF and Linux disks are not 
intermingled; however, the same 4 channels are shared between the LPARs. There 
are only 4 channels.

The question is, will the 4 channels be a bottleneck if both the Linux and TPF 
migrations are done concurrently?



Regards,
Richard Schuh





Re: Channel Contention

2010-02-17 Thread Brian Nielsen
Here are some data points that may help you.

When I run standalone DDR to do a DASD-to-DASD copy within a single ESS 
800 connected with 4 FICON channels, each channel is about 3-4% busy as 
reported on the HMC channel activity display, and the CPU is about 1% 
busy. My quick back-of-the-envelope calculation is that about 25 
concurrent DDR's would max out my 4 channels.  If your source and targets 
are on different channels you could estimate getting twice that many 
before saturating 4 FICON channels.
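As a rough sketch of that back-of-the-envelope estimate (assuming, naively, that channel-busy scales linearly with the number of concurrent copies; as noted elsewhere in this thread, the SAD percentage reflects the channel processor, not bandwidth, so treat this as a ceiling guess only):

```python
# Rough sketch of the back-of-the-envelope estimate above. Assumes,
# naively, that channel-busy scales linearly with concurrent copies.
def ddrs_to_saturate(busy_per_ddr_pct, split_source_target=False):
    estimate = 100.0 / busy_per_ddr_pct
    # If sources and targets sit on different channel sets, each set
    # carries only half the traffic, so roughly twice as many DDR's fit.
    return estimate * 2 if split_source_target else estimate

print(ddrs_to_saturate(4.0))                            # 25.0
print(ddrs_to_saturate(4.0, split_source_target=True))  # 50.0
```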

However, my other experiences with mass I/O tell me that the cache in 
your target DASD controller is also a good candidate to be the 
bottleneck.  Initially the writes will be fast until the controller cache 
fills up, then it will slow down once it becomes necessary to wait for 
tracks to be destaged from cache to disk.  Therefore, while the amount of 
cache in the controller is a critical performance factor in normal 
workloads, in this situation however much controller cache you have will 
sooner or later get swamped.  At that point you should check your DASD 
hardware performance guide for its maximum sustained write throughput 
rates.  Compare that to your channel throughput capacity to decide which 
one is the bottleneck.  Depending on how much data you push, you may even 
find that the bottleneck starts at the channel and then shifts to the 
controller after the cache fills up.
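That bottleneck shift can be illustrated with a toy model (all rates and sizes below are hypothetical, not measurements of any real DASD):

```python
# Toy model of write throughput during a mass copy: data lands at channel
# speed until the controller cache fills, then at the sustained destage
# rate. All numbers are hypothetical, purely for illustration.
def copy_time_seconds(total_gb, channel_rate_gbs, cache_gb, destage_rate_gbs):
    if channel_rate_gbs <= destage_rate_gbs:
        # Destage keeps up; the channel is the bottleneck throughout.
        return total_gb / channel_rate_gbs
    # Cache fills once inflow has exceeded outflow by cache_gb.
    fill_time = cache_gb / (channel_rate_gbs - destage_rate_gbs)
    fast_gb = channel_rate_gbs * fill_time
    if total_gb <= fast_gb:
        return total_gb / channel_rate_gbs  # finishes before cache fills
    # After the cache fills, the remainder drains at the destage rate.
    return fill_time + (total_gb - fast_gb) / destage_rate_gbs

# 100 GB copy, 1.0 GB/s channel, 10 GB cache, 0.5 GB/s sustained destage:
print(copy_time_seconds(100, 1.0, 10, 0.5))  # 180.0 seconds
```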

Brian Nielsen


On Wed, 17 Feb 2010 11:14:27 -0800, Schuh, Richard rsc...@visa.com wrote:

[...]


Re: Channel Contention

2010-02-17 Thread Feller, Paul
I might be more concerned that the ESCON channels might be a bottleneck. 
Some of the factors that will affect the performance on the new DASD would be 
the speed of the Ficon channels (2 gig, 4 gig or 8 gig) and the internal 
performance of the new DASD compared to the old DASD.

Paul Feller
AIT Mainframe Technical Support






Re: Channel Contention

2010-02-17 Thread Schuh, Richard
Either 4 (most likely) or 8 gig; the new DASD is faster and has more cache. I think 
they got 8G for the production TPF systems and 4G for VM.


Regards,
Richard Schuh











Re: Channel Contention

2010-02-17 Thread Schuh, Richard
I will have to check with the TPF and H/W folks to get answers to most of your 
questions - VM was not included in the planning for the install. Since this is 
a one-time effort, I doubt that I can justify any new feature. I am fairly 
certain that the Ficon to the target disks is 4G. We are on a z10 (one I can 
answer authoritatively). The configuration of the switch is one that I 
absolutely cannot answer until I get an answer from the h/w folks. 
Unfortunately, they are in an earlier time zone, so I cannot get the answer 
until tomorrow.


Regards,
Richard Schuh






From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On Behalf 
Of Mike Rydberg
Sent: Wednesday, February 17, 2010 11:58 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: Channel Contention

Richard,

A couple of questions:
How is your FICON switch fabric configured? Is this a single switch or cascaded 
switches between the channels and your DASD CU's?
What is the link speed of your FICON channels: 2Gb, 4Gb or 8Gb?
If the LPAR's are running on a z10 you might benefit from zHPF (the high 
performance FICON feature on the z10), which reduces the number of 
bi-directional exchanges for each I/O.

Also, depending on the FICON switches in your shop, you should be able to 
monitor the FICON channel port utilization in real time by using the web-based 
or CLI-based switch port performance displays (e.g. the Brocade portperfshow command).

I can give you a few tips if you wish to contact me offline.

Regards
Mike Rydberg

Brocade


