I agree with Sam.  PDSE data sets had some major problems during the
first 10 years after they were introduced, but IBM has made significant
improvements over the past 3 or 4 years.  We have over 32,900 PDSE data
sets in our main production sysplex, and we are heavy users of this
technology.  In the past 3 years, we have only had 2 significant issues
with PDSE data sets.  When compared to 7 significant VSAM issues we have
experienced, I would say that is pretty good (VSAM has been out for 35
years, but I do not see as much ranting about VSAM as I do about PDSE
data sets).

PDSE data sets have solved dozens of problems for us, and we are happy
with the technology.  If you have not tried them in the past 2 years,
consider giving PDSE data sets another chance.


-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
Behalf Of Knutson, Sam
Sent: Friday, March 23, 2007 6:38 AM
To: [email protected]
Subject: Re: Start of a PDSE rant was Re: OA03767 PDS/E Restriction

I can show you my scars from an unplanned IPL last June caused by an
SMSPDSE failure :-(  See APAR OA15185.  Still, I think my opinion of
PDSE is much more positive than yours.  From our experience I can
suggest a few things.

If you are at z/OS R6 or higher, consider using the restartable SMSPDSE
address space.

Install recommended service (ask IBM PDSE Level 2) and implement the
Partitioned Data Set Extended Restartable Address Space (SMSPDSE1), a
feature introduced in z/OS R6.
http://www.redbooks.ibm.com/abstracts/tips0531.html
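For reference, enabling the restartable address space is a small change
in the IGDSMSxx parmlib member.  A sketch of the relevant keywords is
below; as I recall SMSPDSE1 also requires extended PDSE sharing, but
verify both parameters against the DFSMS documentation for your z/OS
level before touching parmlib:

```
/* IGDSMSxx sketch: activate the restartable SMSPDSE1        */
/* address space.  SMSPDSE1 is only created when extended    */
/* PDSE sharing is in effect, so both keywords are shown.    */
PDSESHARING(EXTENDED)
PDSE_RESTARTABLE_AS(YES)
```

With this in place, a hung PDSE component can often be recovered with a
restart of SMSPDSE1 instead of an unplanned IPL.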

Implement private storage monitoring for SMSPDSE and other critical
address spaces.  We use CA-SYSVIEW 11.5 and RMF to do this for NETSPOOL
FSS tasks, CICS temporary storage regions, DB2 production DBM1 ASIDs,
SMSPDSE, SMSPDSE1, and a few others that push the limits of what we can
make available in PVT/EPVT or have a history of private area storage
problems.
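The monitoring idea above is simple to sketch generically: compare each
address space's private-area usage against what is available and alert
past a threshold.  The names, limits, and 80% threshold below are
illustrative assumptions, not our actual configuration; a real monitor
would pull the numbers from RMF or SYSVIEW rather than a hard-coded
table:

```python
# Sketch: flag address spaces whose private-area usage is close to
# the limit.  All figures are made-up illustrative values in MB.

WARN_PCT = 80  # warn when usage exceeds 80% of available private storage

def check_private_storage(usage_mb, limit_mb):
    """Return a list of (asid_name, pct_used) over the warning
    threshold, worst offender first."""
    alerts = []
    for name, used in usage_mb.items():
        pct = 100.0 * used / limit_mb[name]
        if pct >= WARN_PCT:
            alerts.append((name, round(pct, 1)))
    return sorted(alerts, key=lambda a: -a[1])

# Hypothetical snapshot of three monitored address spaces.
usage = {"SMSPDSE1": 950, "DBM1": 1300, "CICSTS": 400}
limits = {"SMSPDSE1": 1100, "DBM1": 1500, "CICSTS": 1000}
for name, pct in check_private_storage(usage, limits):
    print(f"WARNING: {name} at {pct}% of private area")
```

The point is not the code but the practice: trend these numbers over
time so a slow private-storage leak is caught before it forces an IPL.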

We have had some discussions with IBM but have not yet completed the
requisite virtual paperwork to open a marketing request to allow for the
existing PDSE storage monitoring to be extended to monitor its own
private storage.  When I do that, I will post the request # back into
this thread so others can reference it.

In general PDSE is a 'Good thing'!   Using PDSE has allowed us to solve
a host of performance problems and avoid extra data set management steps
with in-house and OEM software, and it has been quite reliable in its
current incarnation save the one incident last year.  I certainly
consider PDSE, handled correctly, to be no more of an integrity
problem than VSAM, IMS database, DB2 database, etc.   We have 3671 PDSE
data sets sitting in our normal DASD pools as of this morning and we
don't have problems with them.  Quite the contrary: PDSE is the default
when we reallocate some libraries in our Endevor environments.
Typically when we get an x37 problem with a PDS, the first thing someone
says is 'We should take time to reallocate all of the xyz libraries as
PDSE...'

GEICO worked with PDSE when it was first introduced (DFP 3.2 and some
flavor of MVS/ESA IIRC) and it was not ready for heavy lifting back
then.  We had issues with PDSE sharing and corruption, especially with
large PDSE data sets used by many address spaces.  IBM PDSE developers
have done a lot since then, with some good plans for the future based on
presentations at SHARE and other public forums.   My experience was that
from DFSMS 1.1 on, PDSE has matured enough to use for any critical
function you would use a PDS for, save a few minor documented
restrictions.  PDSE here is doing just fine every day.  I used it and
other native DFSMSdfp functions to save over $200K on a software
package proposed by an outside firm to solve a performance problem about
two years ago.

If anyone is avoiding PDSE based on old FUD, you are doing yourself and
your employer no favors.

A final thought is that running so lean and mean that finding 1/2 of a
CP suddenly occupied results in failure to meet business objectives is a
good argument to provision sufficient capacity for problems.  WLC
charging and capping can be used to ensure you don't use or pay for all
installed capacity.  Capacity can be available on demand with CBU or
CUoD.   There was a good article, "Always there when you need z: Top ten
best practices for near continuous availability" by Harriet Morrill.
One of the ten practices was to provide enough capacity to handle the
unexpected.

http://publibz.boulder.ibm.com/epubs/pdf/e0z2n170.pdf

Lesson 7: Regularly adjust capacity to protect peak needs

People think of capacity as a performance issue, but capacity and
performance are availability issues.  Slowdowns can be viewed as a type
of outage.
Additionally, backup systems require extra capacity to carry on the work
of a failing one. From a capacity perspective, experience teaches us
that as soon as a system is set in place, it is obsolete. Typically
utilization goes up. Best-of-breed clients monitor capacity regularly
and add it when needed to be sure that adequate failover capacity is
maintained over time. They exploit System z9 concurrent upgrade and
downgrade capabilities.
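The failover-capacity idea in that lesson reduces to a simple check:
after losing any one system, can the survivors still carry the whole
workload?  The sketch below uses made-up MSU figures and an assumed 90%
usable-capacity derating; it is an illustration of the arithmetic, not a
capacity-planning tool:

```python
# Sketch of the "Lesson 7" failover check: for each system, assume it
# fails and test whether the surviving systems' capacity (derated by
# `headroom`) still covers the total workload.  All numbers are
# hypothetical MSU values.

def failover_ok(capacity, load, headroom=0.9):
    """True if the sysplex can lose any single system and the
    remaining capacity, derated by `headroom`, still covers the
    combined workload of all systems."""
    total_load = sum(load.values())
    for failed in capacity:
        surviving = sum(c for name, c in capacity.items() if name != failed)
        if total_load > surviving * headroom:
            return False
    return True

capacity = {"SYSA": 400, "SYSB": 400, "SYSC": 300}  # installed MSU
load = {"SYSA": 250, "SYSB": 220, "SYSC": 150}      # current MSU used
print(failover_ok(capacity, load))
```

Run regularly (utilization only goes up, as the article notes), this is
the kind of check that tells you when it is time to exercise CBU/CUoD.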

Good Luck!

        Best Regards, 

                Sam Knutson, GEICO 
                Performance and Availability Management 
                mailto:[EMAIL PROTECTED] 
                (office)  301.986.3574 

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
