z/OS Hot Topics

2009-03-04 Thread Jodi Everdon
Hi Everyone, 

The latest copy of z/OS Hot Topics is now available. There are many good
articles in this issue, including our 10th Anniversary Special, "Memories of
a mainframe." Please check it out and feel free to send us your feedback:

http://www.ibm.com/systems/z/os/zos/bkserv/hot_topics.html



z/OS Hot Topics - February 2007

2007-02-05 Thread Birger Heede

In case you have not noticed it, or are not subscribed:

http://www-03.ibm.com/servers/eserver/zseries/zos/bkserv/hot_topics.html

Birger Heede
IBM Denmark



August edition of z/OS Hot Topics

2006-08-28 Thread Jodi Everdon
Check out the latest edition of z/OS Hot Topics: 
http://www-03.ibm.com/servers/eserver/zseries/zos/bkserv/hot_topics.html

There are a lot of great topics in this issue. As always, we'd love to get 
your feedback and hear about what you would like to see published in future 
editions.



BRLM and GRS (from z/OS Hot Topics)

2007-02-05 Thread John (IBM-MAIN)
Thank you, Birger, for the link to the new z/OS Hot Topics newsletter.

I have a question concerning the byte range lock manager article.

After reading about the various evolutions of the BRLM (single environment,
shared, recovery considerations within a sysplex), I was wondering why the
locking mechanism was redeveloped. It seems to me that GRS is the perfect
server (or manager, if we can call it that) to maintain any locks that the
z/OS UNIX environment needs to serialize its resources.

Could someone maybe fill me in on what I am obviously missing?

Thanks

John



Re: BRLM and GRS (from z/OS Hot Topics)

2007-02-06 Thread Rick Fochtman

--
After reading about the various evolutions of the BRLM (single environment,
shared, recovery considerations within a sysplex), I was wondering why the
locking mechanism was redeveloped. It seems to me that GRS is the perfect
server (or manager, if we can call it that) to maintain any locks that the
z/OS UNIX environment needs to serialize its resources.

Could someone maybe fill me in on what I am obviously missing?
--

One of the overriding considerations for using a LOCK, as opposed to 
using GRS, is pure raw speed. For something that may need to be "LOCK"ed 
for a short period but perhaps thousands of times, like the storage 
management blocks within a particular address space, GRS would be 
woefully inadequate. Similarly, for something that needs to be 
serialized only within a single image, again, GRS can be very 
slooow. Consequently, developing a LOCK protocol for a well-defined 
set of single-image resources can save a great deal of time and 
contribute to overall efficiency.
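
Rick's speed argument is easy to illustrate with a toy sketch. This is
ordinary Python, nothing to do with actual GRS or BRLM internals, and the
1 ms delay is an arbitrary stand-in for the cross-system signalling a
sysplex-wide request has to pay:

    import threading, time

    local_latch = threading.Lock()

    def local_lock_pass():
        # Local serialization: acquire/release never leaves the process.
        with local_latch:
            pass

    def global_lock_pass():
        # Stand-in for sysplex-wide serialization: every request pays a
        # round-trip to an external coordinator (faked as 1 ms each way).
        time.sleep(0.001)   # obtain
        time.sleep(0.001)   # release

    for name, fn in (("local", local_lock_pass), ("global", global_lock_pass)):
        t0 = time.perf_counter()
        for _ in range(1_000):
            fn()
        print(f"{name}: {time.perf_counter() - t0:.3f}s for 1,000 lock cycles")

The local case finishes in a fraction of a millisecond total; the "global"
case takes a couple of seconds or more. That orders-of-magnitude gap is why
a short-hold, high-frequency lock wants to stay inside one image.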




Fw: z/OS Hot Topics Newsletter Issue 14 GA!

2006-02-20 Thread Jeffrey Deaver
>The latest issue of the z/OS Hot Topics Newsletter, Issue 14, has hit the
>stands! If you didn't receive it already, you can obtain a hardcopy version
>from http://www.ibm.com/shop/publications/order/ (order number GA22-7501-10)
>or download the PDF from
>http://www.ibm.com/servers/eserver/zseries/zos/bkserv/hot_topics.html.
>(You'll find past z/OS Hot Topics Newsletter issues at those URLs as well.)
>We're always open to comments and ideas for future article topics, so
>please send any our way via this email address. Enjoy!!


Got the above email, but the link is not on the webpage yet.  The pdf,
however, is there at 

  http://publibz.boulder.ibm.com/epubs/pdf/e0z2n161.pdf

Already sent them a note about it.

Jeffrey Deaver, Senior Analyst, Systems Engineering
651-665-4231



Here we go again (was z/OS Hot Topics 19)

2008-08-14 Thread Hal Merritt
Getting the 'star' treatment is the all-new 3390 Model A introduced in
z/OS 1.10. This is a logical DASD volume with a slight increase in the
architectural limit, from 65,520 cylinders to 268,434,453 cylinders. Gulp.
Did he say 268 mega cylinders? Yup. 

What is amusing is that many of us know and love the 3390A device as an
'alias' device for PAV. Wonder how they are going to reconcile that...

Kinda wished they'd called it the 3390 Model T :-)) 
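
For scale, the arithmetic behind that gulp, using the standard 3390 geometry
of 15 tracks per cylinder and 56,664 bytes per track (the cylinder counts are
from the posting above; the rest is just multiplication):

    # 3390 geometry: 15 tracks per cylinder, 56,664 bytes per track.
    BYTES_PER_CYL = 15 * 56_664

    old_limit = 65_520          # cylinders: the pre-EAV ceiling
    new_limit = 268_434_453     # cylinders: the quoted architectural limit

    print(old_limit * BYTES_PER_CYL / 10**9)    # ~55.7 (decimal) GB per volume
    print(new_limit * BYTES_PER_CYL / 10**12)   # ~228 TB per volume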



 

-Original Message-
From: RACF Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of
Ulrich Boche
Sent: Thursday, August 14, 2008 7:44 AM
To: [EMAIL PROTECTED]
Subject: z/OS Hot Topics 19

There are a few quite interesting articles on RACF topics (mostly related
to z/OS V1R10) in the z/OS Hot Topics Newsletter 19.

Here's where to find it:

http://www-03.ibm.com/systems/z/os/zos/bkserv/hot_topics.html

--
Ulrich Boche
SVA GmbH, Germany




Re: Fw: z/OS Hot Topics Newsletter Issue 14 GA!

2006-02-20 Thread John Eells

[EMAIL PROTECTED] wrote:


Got the above email, but the link is not on the webpage yet.  The pdf,
however, is there at 

  http://publibz.boulder.ibm.com/epubs/pdf/e0z2n161.pdf

Already sent them a note about it.



You and (I'm told) a LOT of other people.  The missing link 
should be added tonight.


--
John Eells
z/OS Technical Marketing
IBM Poughkeepsie
[EMAIL PROTECTED]



Re: Fw: z/OS Hot Topics Newsletter Issue 14 GA!

2006-02-20 Thread Steve Comstock

Jeffrey Deaver wrote:
>>The latest issue of the z/OS Hot Topics Newsletter, Issue 14, has hit the
>>stands! If you didn't receive it already, you can obtain a hardcopy version
>>from http://www.ibm.com/shop/publications/order/ (order number GA22-7501-10)
>>or download the PDF from
>>http://www.ibm.com/servers/eserver/zseries/zos/bkserv/hot_topics.html.
>>(You'll find past z/OS Hot Topics Newsletter issues at those URLs as well.)
>>We're always open to comments and ideas for future article topics, so
>>please send any our way via this email address. Enjoy!!
>
>Got the above email, but the link is not on the webpage yet.  The pdf,
>however, is there at
>
>  http://publibz.boulder.ibm.com/epubs/pdf/e0z2n161.pdf
>
>Already sent them a note about it.
>
>Jeffrey Deaver, Senior Analyst, Systems Engineering
>651-665-4231



Yeah, well, it cratered my Acrobat and Firefox; something about
a level of Acrobat not supported by my reader. I've got 6 (I
know 7 is out, but, yikes, I'm not that out of date, am I?)

kind regards,

-Steve Comstock



Re: Here we go again (was z/OS Hot Topics 19)

2008-08-14 Thread McKown, John
> -Original Message-
> From: IBM Mainframe Discussion List
> [mailto:[EMAIL PROTECTED] On Behalf Of Hal Merritt
> Sent: Thursday, August 14, 2008 10:18 AM
> To: IBM-MAIN@BAMA.UA.EDU
> Subject: Here we go again (was z/OS Hot Topics 19)
> 
> Getting the 'star' treatment is the all-new 3390 Model A introduced in
> z/OS 1.10. This is a logical DASD volume with a slight increase in the
> architectural limit, from 65,520 cylinders to 268,434,453 cylinders. Gulp.
> Did he say 268 mega cylinders? Yup.
> 
> What is amusing is that many of us know and love the 3390A device as an
> 'alias' device for PAV. Wonder how they are going to reconcile that...
> 
> Kinda wished they'd called it the 3390 Model T :-))

Nah, call it the 3390EL for "Excessively Large". I cannot imagine
backing that monster up. But, then, compared to my PC's 500 Gb drive???

--
John McKown
Senior Systems Programmer
HealthMarkets
Keeping the Promise of Affordable Coverage
Administrative Services Group
Information Technology




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-14 Thread Lizette Koehler
So if you go to the new EAV, I wonder how many DR sites are actually ready
for them?  At SHARE I have not heard any information on that side of the
equation.  I am sure companies with internal DR sites will be fine.  But
what about those companies that contract out for this service?  Do you think
those service providers are really ready?

The ISV vendors have indicated that they are ready to support EAVs.  That is
a good thing.

Lizette

> 
> Getting the 'star' treatment is the all-new 3390 Model A introduced in
> z/OS 1.10. This is a logical DASD volume with a slight increase in the
> architectural limit, from 65,520 cylinders to 268,434,453 cylinders. Gulp.
> Did he say 268 mega cylinders? Yup.
> 
> What is amusing is that many of us know and love the 3390A device as an
> 'alias' device for PAV. Wonder how they are going to reconcile that...
> 
> Kinda wished they'd called it the 3390 Model T :-))
> 




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-14 Thread Hal Merritt
Yup- planning is well underway at my shop. In fact, we have even
selected the volume serial for the first one: "C:" 

Just kidding. 

-Original Message-
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
Behalf Of Lizette Koehler
Sent: Thursday, August 14, 2008 10:29 AM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: Here we go again (was z/OS Hot Topics 19)

So if you go to the new EAV, I wonder how many DR sites are actually ready
for them?  At SHARE I have not heard any information on that side of the
equation.  I am sure companies with internal DR sites will be fine.  But
what about those companies that contract out for this service?  Do you think
those service providers are really ready?

The ISV vendors have indicated that they are ready to support EAVs.  That is
a good thing.

Lizette




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-14 Thread McKown, John
> -Original Message-
> From: IBM Mainframe Discussion List
> [mailto:[EMAIL PROTECTED] On Behalf Of Lizette Koehler
> Sent: Thursday, August 14, 2008 10:29 AM
> To: IBM-MAIN@BAMA.UA.EDU
> Subject: Re: Here we go again (was z/OS Hot Topics 19)
> 
> So if you go to the new EAV, I wonder how many DR sites are actually
> ready for them?  At SHARE I have not heard any information on that side
> of the equation.  I am sure companies with internal DR sites will be
> fine.  But what about those companies that contract out for this
> service?  Do you think those service providers are really ready?
> 
> The ISV vendors have indicated that they are ready to support EAVs.
> That is a good thing.
> 
> Lizette

cha-ching! I can hear the DR vendors ringing up the cash registers now!

--
John McKown
Senior Systems Programmer
HealthMarkets
Keeping the Promise of Affordable Coverage
Administrative Services Group
Information Technology




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-14 Thread Edward Jaffe

McKown, John wrote:
>> Getting the 'star' treatment is the all-new 3390 Model A introduced in
>> z/OS 1.10. This is a logical DASD volume with a slight increase in the
>> architectural limit, from 65,520 cylinders to 268,434,453 cylinders. Gulp.
>> Did he say 268 mega cylinders? Yup.
>
> Nah, call it the 3390EL for "Excessively Large". I cannot imagine
> backing that monster up. But, then, compared to my PC's 500 Gb drive???


500 GB is nothing compared to the architectural limit for EAV. We're 
talking about over two hundred terabytes per volume!

Fortunately, IBM has taken a sensible approach and merely quadrupled the 
maximum volume size for the first generation of EAV over the existing 
54GB per volume limit for shark/ESS.

Once the performance issues are identified and dealt with, much larger 
individual volume sizes should be possible. We could be looking at 
terabyte-sized volumes within just a few years!
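
The "quadrupled" first step works out as follows. 262,668 cylinders is the
commonly cited z/OS V1R10 EAV maximum (the 223GB figure Arthur quotes later
in this thread); the geometry constants are standard 3390 values:

    BYTES_PER_CYL = 15 * 56_664     # standard 3390 geometry

    pre_eav  = 65_520               # cylinders: the old "mod 54" ceiling
    gen1_eav = 262_668              # cylinders: first-generation EAV maximum

    print(pre_eav  * BYTES_PER_CYL / 10**9)   # ~55.7 GB
    print(gen1_eav * BYTES_PER_CYL / 10**9)   # ~223 GB, almost exactly 4x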


--
Edward E Jaffe
Phoenix Software International, Inc
5200 W Century Blvd, Suite 800
Los Angeles, CA 90045
310-338-0400 x318
[EMAIL PROTECTED]
http://www.phoenixsoftware.com/




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-14 Thread Chase, John
> -Original Message-
> From: IBM Mainframe Discussion List On Behalf Of Edward Jaffe
> 
> McKown, John wrote:
> > Nah, call it the 3390EL for "Excessively Large". I cannot imagine
> > backing that monster up. But, then, compared to my PC's 500 Gb drive???
> 
> 500 GB is nothing compared to the architectural limit for EAV. We're
> talking about over two hundred terabytes per volume!
> 
> Fortunately, IBM has taken a sensible approach and merely quadrupled
> the maximum volume size for the first generation of EAV over the
> existing 54GB per volume limit for shark/ESS.
> 
> Once the performance issues are identified and dealt with, much larger
> individual volume sizes should be possible. We could be looking at
> terabyte-sized volumes within just a few years!

Just think... A one-pack sysplex!

-jc-




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-14 Thread Compton, John
...and then think... a disk failure on a one-pack sysplex. Scary!

John Compton

Phone Cork: +353 (0)21 231 4641;

Phone VOIP: 214-775-3641


-Original Message-
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf
Of Chase, John
Sent: 14 August 2008 17:25
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: Here we go again (was z/OS Hot Topics 19)

Just think... A one-pack sysplex!

-jc-




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-14 Thread Ron Hawkins
Ed,

The initial size limit of the EAV is still quite small compared to common
usage on other platforms and applications.

It is quite common for Open Systems to have just one or two LUNs (volumes)
per 8-disk parity group, so volume sizes of 100 to 500GB are quite common.
Some applications, like film rendering, want as large a single LUN as
possible, so 1TB LUNs are already common in that line of business.

Backup and restore based on full volume dumps may become an issue. I think
MVS is the only OS I come across where this is a common method. Most backup
software in the Open Systems world operates off volume level clones, but
backs up at the file level. If you expect to go to Extended Volumes in a big
way, then a file-level backup strategy should be given some consideration.

Ron


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
> Behalf Of Edward Jaffe
> Sent: Thursday, August 14, 2008 8:43 AM
> To: IBM-MAIN@BAMA.UA.EDU
> Subject: Re: [IBM-MAIN] Here we go again (was z/OS Hot Topics 19)




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-14 Thread Ron Hawkins
John,

Not really. A disk failure - two disk failures actually - on an Array Group
of 3390-3 can affect over 300 volumes. I would much rather manage restoring
4-10 large volumes than 300+ small ones.

Ron
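
A back-of-envelope version of Ron's ratio, with purely illustrative hardware
numbers (146 GB drives in a 6+2 parity group; decimal GB throughout):

    # How many logical volumes share one parity group, by volume model.
    usable_gb = 6 * 146     # assumed usable space in one 8-disk parity group

    for model, size_gb in (("3390-3", 2.84), ("3390-54", 55.7), ("EAV", 223.3)):
        print(model, int(usable_gb / size_gb), "volumes to restore")

With those assumptions the group holds roughly 308 mod-3s but only a handful
of large volumes, which is the 300-versus-a-few trade Ron describes.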

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
> Behalf Of Compton, John
> Sent: Thursday, August 14, 2008 9:40 AM
> To: IBM-MAIN@BAMA.UA.EDU
> Subject: Re: [IBM-MAIN] Here we go again (was z/OS Hot Topics 19)
> 
> ...and then think... a disk failure on a one-pack sysplex. Scary!
>
> John Compton




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-14 Thread Schwarz, Barry A
I think the original point was more about logical volume failures (VVDS
deleted, VTOC or VTOCIX hosed).  The larger the volume, the greater the
impact of a failure.

Over time, I wonder what fragmentation would look like.

-Original Message-
From: Ron Hawkins [mailto:snip] 
Sent: Thursday, August 14, 2008 10:09 AM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: Here we go again (was z/OS Hot Topics 19)

John,

Not really. A disk failure - two disk failures actually - on an Array
Group of 3390-3 can affect over 300 volumes. I would much rather manage
restoring 4-10 large volumes than 300+ small ones.




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-14 Thread Hal Merritt
Some performance improvements in that process are included. 

Also, an (extra cost?) 'HyperPAV' that uses a 'UCB' from a pool only for
the duration of the I/O. Talk about instant gratification :-)  



-Original Message-
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
Behalf Of Schwarz, Barry A
Sent: Thursday, August 14, 2008 1:47 PM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: Here we go again (was z/OS Hot Topics 19)

I think the original point was more about logical volume failures (VVDS
deleted, VTOC or VTOCIX hosed).  The larger the volume, the greater the
impact of a failure.

After time, I wonder what fragmentation would look like.

-Original Message-
From: Ron Hawkins [mailto:snip] 
Sent: Thursday, August 14, 2008 10:09 AM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: Here we go again (was z/OS Hot Topics 19)

John,

Not really. A disk failure - two disk failures actually - on an Array
Group of 3390-3 can affect over 300 volumes. I would much rather manage
restoring 4-10 large volumes than 300+ small ones.

 




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-17 Thread R.S.

Edward Jaffe wrote:
> 500 GB is nothing compared to the architectural limit for EAV. We're
> talking about over two hundred terabytes per volume!
>
> Fortunately, IBM has taken a sensible approach and merely quadrupled the
> maximum volume size for the first generation of EAV over the existing
> 54GB per volume limit for shark/ESS.
>
> Once the performance issues are identified and dealt with, much larger
> individual volume sizes should be possible. We could be looking at
> terabyte-sized volumes within just a few years!




Wow!
Finally we have the size limit that was available on Windows for years!
What progress!
2TB was the architectural limit for PC hard drive partitions *for years*;
it was considered a limitation and was *relieved* several years ago!

Mainframe is trying to catch up with the rest of the industry. Big volumes,
SAN features - all the new things are introduced in the open systems world
first, and then people try to adapt them to the mainframe world.

I couldn't resist

BTW: I consider EAV a big change *in our mainframe niche*. Outside of
the niche, nobody cares.



--
Radoslaw Skorupka
Lodz, Poland


--
BRE Bank SA
ul. Senatorska 18
00-950 Warszawa
www.brebank.pl

Sd Rejonowy dla m. st. Warszawy 
XII Wydzia Gospodarczy Krajowego Rejestru Sdowego, 
nr rejestru przedsibiorców KRS 025237

NIP: 526-021-50-88
Wedug stanu na dzie 01.01.2008 r. kapita zakadowy BRE Banku SA  wynosi 
118.642.672 zote i zosta w caoci wpacony.




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-18 Thread John Eells

Hal Merritt wrote:
> Some performance improvements in that process are included.
>
> Also, an (extra cost?) 'HyperPAV' that uses a 'UCB' from a pool only for
> the duration of the I/O. Talk about instant gratification :-)



HyperPAV (yes, it costs more for the DS8000 feature) generally requires 
the use of fewer PAV aliases, which chew up fewer subchannels. For Big 
Honkin' Volumes...er, that is, EAVs, I'd expect that having the systems 
manage the PAV aliases dynamically would be a reasonably big plus, 
because I'd expect most or all EAVs to have higher peak data rates than 
their non-EAV counterparts on the average.
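
A toy model of why a shared pool goes further than pre-bound aliases. This
is illustrative Python only, not any actual IOS algorithm; the burst sizes
are invented, and it deliberately assumes bursts do not overlap in time:

    import random

    random.seed(1)
    ALIASES, VOLUMES = 8, 16
    # Each burst: some volume suddenly wants n concurrent I/Os.
    bursts = [(random.randrange(VOLUMES), random.randint(1, 6))
              for _ in range(100)]

    # Static PAV: aliases pre-bound to specific bases (here, one alias
    # each for the first 8 volumes, none for the rest).
    bound = {v: 1 if v < ALIASES else 0 for v in range(VOLUMES)}
    static_queued = sum(max(0, n - 1 - bound[v]) for v, n in bursts)

    # HyperPAV: any alias can serve any base for the life of one I/O, so
    # a burst only queues once it exceeds the base UCB plus the whole pool.
    hyper_queued = sum(max(0, n - 1 - ALIASES) for v, n in bursts)

    print("static PAV queued I/Os:", static_queued)   # usually well above zero
    print("HyperPAV queued I/Os:  ", hyper_queued)    # zero here: pool > burst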


--
John Eells
z/OS Technical Marketing
IBM Poughkeepsie
[EMAIL PROTECTED]




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-18 Thread Ron Hawkins
John,

Big Honkin' volumes in 'NIX land usually operate with Queue Depths of 8 or
16, and occasionally 32. I don't see any reason why HyperPAV would be any
different.

Ron

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
> Behalf Of John Eells
> Sent: Monday, August 18, 2008 8:12 AM
> To: IBM-MAIN@BAMA.UA.EDU
> Subject: Re: [IBM-MAIN] Here we go again (was z/OS Hot Topics 19)




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-18 Thread Blaicher, Chris
Ron,

I don't know 'NIX internals, but I would find a queue depth of 4, let
alone 16 or 32, totally unacceptable in z/OS land.  That is what PAV
is for, to reduce the queue depth.

Given enough PAV addresses, you should never see much if any queue
depth.

Chris Blaicher
Personal opinion only.

-Original Message-
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
Behalf Of Ron Hawkins
Sent: Monday, August 18, 2008 10:43 AM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: Here we go again (was z/OS Hot Topics 19)

John,

Big Honkin' volumes in 'NIX land usually operate with Queue Depths of 8
or 16, and occasionally 32. I don't see any reason why HyperPAV would be
any different.

Ron




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-18 Thread Ron Hawkins
Chris,

Yes, you obviously don't know 'NIX Internals :-)

Q-Depth is the number of concurrent IO that can be scheduled on a SCSI
resource. Typically it is a setting on an HBA. It is not a count of queued
IO in the OS.

In SCSI the TID is pretty close to being the same as a UCB, but there is no
attempt to queue on the TID in the OS. The throttle on the number of
concurrent IO from an OS to a LUN is the Q-Depth. With Q-Depth set to 8 you
can have 8 concurrent IO scheduled on a LUN. With two HBAs and multi-path
software you can have 16.

I believe the name evolved from how the target device would queue the
requests in pre-cache days. This still happens for cache-miss IO requests
on the disk drives for CKD and FCP - it's all SCSI by the time it gets to
the HDD.

Don't misunderstand the name. Q-Depth is named for what happens on the
target device. If I took the label PAV literally, I would be waiting for my
IO to arrive on a yummy piece of meringue with whipped cream, peaches, and
a dollop of passion fruit.

Ron
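
For the MVS-minded, a minimal asyncio sketch of what a Q-Depth of 8 means at
the initiator/LUN boundary. Everything here is invented for illustration
(the name submit_io, the 100-request workload, the fake device time):

    import asyncio, random

    QUEUE_DEPTH = 8     # max I/Os in flight to one LUN

    async def submit_io(sem):
        # The semaphore plays the HBA's role: request number nine waits
        # at the initiator, not inside the device.
        async with sem:
            await asyncio.sleep(random.random() / 500)   # fake device time

    async def main():
        sem = asyncio.Semaphore(QUEUE_DEPTH)
        await asyncio.gather(*(submit_io(sem) for _ in range(100)))

    asyncio.run(main())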


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
> Behalf Of Blaicher, Chris
> Sent: Monday, August 18, 2008 9:07 AM
> To: IBM-MAIN@BAMA.UA.EDU
> Subject: Re: [IBM-MAIN] Here we go again (was z/OS Hot Topics 19)
> 
> Ron,
> 
> I don't know 'NIX internals, but I would find a queue depth of 4, let
> alone 16 or 32, totally unacceptable in z/OS land.  That is what PAV
> is for, to reduce the queue depth.
> 
> Given enough PAV addresses, you should never see much if any queue
> depth.
> 
> Chris Blaicher
> Personal opinion only.




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-18 Thread Blaicher, Chris
Ron,

Education time for me- 

Given a queue depth of 8, do they have 8 actually doing data transfer
concurrently?  Given the right configuration a PAV device can get a
BUNCH of I/O's doing concurrent data transfers from the same device.

What is a HBA and a TID?  LUN, I assume, is a Logical UNit.

Just trying to broaden my understanding.

Chris

-Original Message-
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
Behalf Of Ron Hawkins
Sent: Monday, August 18, 2008 1:17 PM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: Here we go again (was z/OS Hot Topics 19)

Re: Here we go again (was z/OS Hot Topics 19)

2008-08-18 Thread Ted MacNEIL
>I don't know 'NIX internals, but I would find a queue depth of 4, let alone
>16 or 32, totally unacceptable in z/OS land.  That is what PAV is for, to
>reduce the queue depth.

Wrong answer.
What service is being delivered?
Is it acceptable?
  YES ---> Monitor
  NO  ---> Fix whatever the problem is, then Monitor

Looking at things like Queue Depth when there is no problem is an "if it
ain't broke; don't fix it" issue.

-
Too busy driving to stop for gas!




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-18 Thread Ron Hawkins
Chris,

Yes, there can be many concurrent IO transfers occurring to and from cache.
I actually find high Q-Depth numbers, and ALIAS for that matter, become
more important for cache-miss IO. A cache miss is magnitudes longer than a
cache hit and will quickly starve the volume of PAVs, or the LCU of
HyperPAV, so that concurrent IO transfer will stall and UCB queuing will
occur. Low cache-hit ratios require more aliases than high ones.

Disk drives can support processing a seek for read while concurrently
writing into the SCSI buffer - sorta like two IO at once, but not all the
time.

TID is the Target ID, and is analogous to the Device Address. HBA is the
Host Bus Adapter, and could be considered a channel.

The queue referred to in Queue Depth can also process IO out of order based
on ordering schemes and priority. The default was to process the IO in
first-to-last track order, but nowadays disk drives use a travelling
salesman technique (think DFSORT Blockset) to sort the queue and reduce
seeking. At one stage not all array vendors enabled this queue management
on the disk drives (I don't know about current models).

Ron
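
A small sketch of the reordering idea: a greedy nearest-track pass, which is
the flavor of "travelling salesman" that drive firmware approximates. The
track numbers are invented:

    # Greedy nearest-seek reordering of a queued I/O list (illustrative).
    def reorder(queue, head=0):
        queue, order = list(queue), []
        while queue:
            nearest = min(queue, key=lambda trk: abs(trk - head))
            queue.remove(nearest)
            order.append(nearest)
            head = nearest
        return order

    pending = [741, 12, 388, 740, 15, 512]
    print(reorder(pending))     # [12, 15, 388, 512, 740, 741]
    # FIFO service (741, 12, 388, 740, 15, 512) costs far more head travel.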

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
> Behalf Of Blaicher, Chris
> Sent: Monday, August 18, 2008 12:40 PM
> To: IBM-MAIN@BAMA.UA.EDU
> Subject: Re: [IBM-MAIN] Here we go again (was z/OS Hot Topics 19)
> 
> Ron,
> 
> Education time for me-
> 
> Given a queue depth of 8, do they have 8 actually doing data transfer
> concurrently?  Given the right configuration a PAV device can get a
> BUNCH of I/O's doing concurrent data transfers from the same device.
> 
> What is a HBA and a TID?  LUN, I assume, is a Logical UNit.
> 
> Just trying to broaden my understanding.
> 
> Chris
> 




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-18 Thread Ron Hawkins
Ted,

I totally disagree. Configuring Q-Depth correctly from the get-go is just as
important as configuring PAV and HYPERPAV correctly. I call it preventative
maintenance.

I guess the approach you prescribe goes with your tag line :-) 

Ron

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
> Behalf Of Ted MacNEIL
> Sent: Monday, August 18, 2008 12:58 PM
> To: IBM-MAIN@BAMA.UA.EDU
> Subject: Re: [IBM-MAIN] Here we go again (was z/OS Hot Topics 19)
> 
> >I don't know 'NIX internals, but I would find a queue depth of 4, let
> alone 16 or 32 as totally unacceptable in z/OS land.  That is what PAV
> is for, to reduce the queue depth.
> 
> Wrong answer.
> What service is being delivered?
> Is it acceptable?
>   YES ---> Monitor
>   NO  ---> Fix whatever the problem is, then Monitor
> 
> Looking at things like Queue Depth when there is no problem is an "if
> it ain't broke; don't fix it" issue.
> 
> -
> Too busy driving to stop for gas!




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-18 Thread Ted MacNEIL
>Ted,

>I totally disagree. Configuring Q-Depth correctly from the get-go is just as 
>important as configuring PAV and HYPERPAV correctly. I call it preventative
maintenance.

Well, we'll all disagree then.
If response (application) is acceptable, why waste your time?

>I guess the approach you prescribe goes with your tag line :-) 

No, it comes from too many tasks; too little time.

-
Too busy driving to stop for gas!




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-18 Thread Blaicher, Chris
Thanks for the education.

Chris

-Original Message-
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
Behalf Of Ron Hawkins
Sent: Monday, August 18, 2008 3:55 PM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: Here we go again (was z/OS Hot Topics 19)




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-18 Thread Ron Hawkins
Ted,

So if you were consolidating 2300 x 3390-3 onto 128 x 3390-54, you advocate
doing zero analysis, setting up aliases based on a rule of thumb, and only
doing something if or when the application misses a service level. I'm glad
it's your dog.

I had the impression this kind of capacity planning is the kind of practice
that separates MF from 'NIX and Windoze.

I never interpreted your tag line like that, though it makes sense. I took
it as a reference to ignoring all warnings - like your petrol gauge.

Ron



> -Original Message-
> From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
> Behalf Of Ted MacNEIL
> Sent: Monday, August 18, 2008 2:03 PM
> To: IBM-MAIN@BAMA.UA.EDU
> Subject: Re: [IBM-MAIN] Here we go again (was z/OS Hot Topics 19)
> 
> >Ted,
> 
> >I totally disagree. Configuring Q-Depth correctly from the get-go is
> just as important as configuring PAV and HYPERPAV correctly. I call it
> preventative
> maintenance.
> 
> Well, we'll all disagree then.
> If response (application) is acceptable, why waste your time?
> 
> >I guess the approach you prescribe goes with your tag line :-)
> 
> No, it comes from too many tasks; too little time.
> 
> -
> Too busy driving to stop for gas!
> 




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-18 Thread Ted MacNEIL
>So if you were consolidating 2300 x 3390-3 onto 128 x 3390-54, you advocate
>doing zero analysis, setting up aliases based on a rule of thumb, and only
>doing something if or when the application misses a service level. I'm glad
>it's your dog.

Never said that.
Of course I would do the work.
And, I don't believe in rules of thumb.
As a matter of fact, CMG Canada asked me to contribute to their ROT doc, and I 
refused.

Once the work is done (and it shall be), there is a point of diminishing 
returns.

>I had the impression this kind of capacity planning is the kind of the 
>practice that separates MF from 'NIX and Windoze

I've been a capacity analyst for over 27 years.
I just believe that, once the staff-work has been done, you have to prioritise!

I'm sorry if I was not clear.
And, that is my issue, NOT yours.

The whole purpose of communication is not to ensure you're understood; rather 
to ensure you're not misunderstood.

PS: the tag-line is stolen from the guy who wrote "7 habits".

-
Too busy driving to stop for gas!




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-19 Thread Arthur Gutowski
On Thu, 14 Aug 2008 10:18:23 -0500, Hal Merritt 
<[EMAIL PROTECTED]> wrote:

>Kinda wished they'd called it the 3390 Model T :-))

T for Terabytes?  Cute.  223GB to start, but Terabytes, and possibly
Petabytes, seem to be the goal.  At the z/OS Goody Bag pitch at SHARE, the
observation was made that the disk guys were just jealous of 16-exabyte
processor storage.  I cannot get my head around who would need a volume
this large, but then again, 20 years ago, 640K was all we would ever need.

More interesting for us was moving PPRC secondaries into MCSS 1 with PAVs.  
Since the trauma of going to 5-byte UCB is unbearable (going from 3 to 4 was 
eventful enough for some), this seems to be a reasonable compromise for UCB 
constraint relief.

Regards,
Art Gutowski
Ford Motor Company




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-19 Thread Mark Zelden
On Tue, 19 Aug 2008 08:55:26 -0500, Arthur Gutowski <[EMAIL PROTECTED]> wrote:


>Since the trauma of going to 5-byte UCB is unbearable (going from 3 to 4 was
>eventful enough for some), this seems to be a reasonable compromise for UCB
>constraint relief.
>

MSS (multiple subchannel sets) is sort of like having a 5 byte UCB. 

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:[EMAIL PROTECTED]
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-19 Thread Hal Merritt
One of my few remaining brain cells is overheating :o)

In most of my tiny known universe, a Queue Depth is a way to express
work units waiting on a resource. So, from an application point of view,
yes, I suppose a queue depth could be thought of as concurrent I/O. But
not in z land. When we say concurrent we really mean concurrent: fully
parallel. Nobody waiting on anybody.  
 
I find it odd (and confusing) that *nix would use such a name to
describe a parallel process. Or does it? Are those I/O's really
concurrent? 

 
 
-Original Message-
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
Behalf Of Ron Hawkins
Sent: Monday, August 18, 2008 1:17 PM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: Here we go again (was z/OS Hot Topics 19)

Re: Here we go again (was z/OS Hot Topics 19)

2008-08-19 Thread Ron Hawkins
Can you say "sibling pend?"

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
> Behalf Of Hal Merritt
> Sent: Tuesday, August 19, 2008 12:28 PM
> To: IBM-MAIN@BAMA.UA.EDU
> Subject: Re: [IBM-MAIN] Here we go again (was z/OS Hot Topics 19)
> 
> One of my few remaining brain cells is overheating :o)
> 
> In most of my tiny known universe, a Queue Depth is a way to express
> work units waiting on a resource. So, from an application point of
> view,
> yes, I suppose a queue depth could be thought of as concurrent I/O. But
> not in z land. When we say concurrent we really mean concurrent: fully
> parallel. Nobody waiting on anybody.
> 
> I find it odd (and confusing) that *nix would use such a name to
> describe a parallel process. Or does it? Are those I/O's really
> concurrent?
> 
> 
> 




Re: Here we go again (was z/OS Hot Topics 19)

2008-08-27 Thread Shmuel Metz (Seymour J.)
In
<[EMAIL PROTECTED]>,
on 08/18/2008
   at 09:03 PM, Ted MacNEIL <[EMAIL PROTECTED]> said:

>If response (application) is acceptable, why waste your time?

You're begging the question again.  The issue isn't whether to waste time,
but whether to monitor. Perhaps your management enjoys surprises; some
don't.

He never said that he planned to waste his time, so your question was
bogus.
 
-- 
 Shmuel (Seymour J.) Metz, SysProg and JOAT
 ISO position; see  
We don't care. We don't have to care, we're Congress.
(S877: The Shut up and Eat Your spam act of 2003)
