Preview: z/OS V1.12 - September 2010

2010-02-09 Thread Michael W. Moss
http://www-01.ibm.com/common/ssi/rep_ca/8/897/ENUS210-008/ENUS210-008.PDF

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Cap software CPU utilization

2009-08-14 Thread Michael W. Moss
All of the previous advice has been correct, but maybe there are other options.

Have you investigated the AutoSoftCapping product from zCOST 
Management?  This solution introduces a "soft capping" technique, so you 
know that you will never exceed the high-watermark setting, namely the MSU 
figure.  It is thus a way to safeguard that your software bill is never higher 
than expected.  It also dynamically allocates MSU resources to the workload 
requiring them the most (e.g. Production), based on user-customizable 
parameters, safeguarding mission-critical performance characteristics and SLA 
objectives.

Try the following links:

http://www.zcostmanagement.com/pages/products.php
http://www.value-4it.com/products/zCOST.html



Re: Convert Service Units to MIPS and calculate cost

2009-08-15 Thread Michael W. Moss
You've stumbled into a potential minefield here with the MSU to MIPS 
calculation, which depends on many factors.

Is your interest perhaps WLC based, so you can provide MSU billing options for 
software?  If so, the IBM PWD route should throw some light on what is 
happening in this area and how SCRT submissions might extend to the ISV 
community.

There are some good links out there you might want to study regarding MSU 
to MIPS conversion:

http://www.watsonwalker.com/PR060818.pdf
http://www.mxg.com/downloads/the%20myth%20of%20msu.doc

Kudos to Cheryl and Jim for this information.



Re: "Lookafter" tool

2009-08-18 Thread Michael W. Moss
I’m not sure there’s any free tool custom-designed for your exact 
requirements, but I think you have the functions and some publicly available 
resources to get you started.

If you have DFSORT then there’s an ICETOOL tips and tricks document that 
has some examples where you can start:

http://www-947.ibm.com/systems/support/storage/software/sort/mvs/tricks/index.html

The example there creates a report based on Type 14 (Read/Open) records; 
Type 15 covers update activity, which is maybe more what you’re after.  Other 
SMF records of interest might be Type 60 (Catalog Activity), Type 30 
(Job Activity) and maybe Type 80 (Security Activity).  The SMF manual 
describes the record types:

http://publibz.boulder.ibm.com/epubs/pdf/iea2g290.pdf

You could try the following links for more DFSORT/ICETOOL examples:

http://www-01.ibm.com/support/docview.wss?rs=114&uid=isg3T781#ex
http://ibmmainframeforum.com/viewforum.php?f=28

Using these links, you can follow the RACF SMF route, finding IRRADU00 
(the RACF SMF Data Unload Utility) and some example source code for using 
ICETOOL to analyze this data.

A lot of folks use MXG, MICS or something similar to analyze SMF, but I guess 
as a PWD ISV, ISIS might not have access to these 3rd party software 
products; hopefully you do have access to DFSORT and thus ICETOOL.

You have the raw data in SMF; you just need to process it somehow.  Maybe 
some folks out there have some ICETOOL code they can share with you.
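In case it helps you get started, here is a hypothetical ICETOOL sketch along those lines; the dataset names are examples only, and it reports Type 30 record counts per SMF system ID (a field that sits at a fixed position in the SMF header), since the user-level fields in Type 30 are located via triplets and their offsets should be checked in the SMF manual before hard-coding:

```jcl
//* Hypothetical sketch: count SMF Type 30 records per system ID.
//* Dataset names are examples only.
//SMFRPT   EXEC PGM=ICETOOL
//TOOLMSG  DD SYSOUT=*
//DFSMSG   DD SYSOUT=*
//SMFIN    DD DISP=SHR,DSN=YOUR.SMF.DUMP.DATA
//TYPE30   DD DSN=&&T30,DISP=(MOD,PASS),UNIT=SYSDA,SPACE=(CYL,(5,5))
//REPORT   DD SYSOUT=*
//TOOLIN   DD *
  COPY FROM(SMFIN) TO(TYPE30) USING(CTL1)
  OCCUR FROM(TYPE30) LIST(REPORT) ON(15,4,CH) ON(VALCNT)
//CTL1CNTL DD *
* SMF record type is at position 6 (1 byte, binary) when the RDW is
* present, as it is for a RECFM=VBS SMF dump dataset
  INCLUDE COND=(6,1,BI,EQ,30)
/*
```

The same INCLUDE technique can be pointed at Types 14, 15, 60 or 80 by changing the record type value.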

On Mon, 17 Aug 2009 11:23:30 +0200, Miklos Szigetvari wrote:

>Hi
>
>Searching for some kind of free tool, which can "Looafter" a  user did
>in some time period.
>I think to get from the SMF records or database, he/she has edited a
>dataset, submiited a job, started a program etc etc ..
>
>--
>Miklos Szigetvari
>
>Development Team
>ISIS Information Systems Gmbh
>tel: (+43) 2236 27551 570
>Fax: (+43) 2236 21081
>
>E-mail: miklos.szigetv...@isis-papyrus.com
>
>Info: i...@isis-papyrus.com
>Hotline: +43-2236-27551-111
>
>Visit our Website: http://www.isis-papyrus.com



Re: IBM Software Pricing

2009-09-21 Thread Michael W. Moss
This paper commissioned by IBM might be the source, as the figures are just 
about as noted:

http://www-935.ibm.com/services/us/cio/optimize/optit_wp_ibm_systemz.pdf

On Thu, 17 Sep 2009 11:03:23 -0500, Eric Bielefeld  wrote:

>Craig had a couple of other comments that I thought were good.  The first
>was that the compound annual growth rate for mainframe MIPS since 2003 
>was 20%.  The other thing was that mainframe costs tend to be broken up as
>follows:
>
>25% for Labor
>25% for Hardware
>40% for Software
>10% for all other costs
>
>Craig had gotten both of those statements from an IBM presentation he had
>attended, although I can't remember who gave that presentation.



Re: Hardware withdrawal: IBM System z9

2009-11-18 Thread Michael W. Moss
With sub-capacity and the promise of lower software costs via VWLC pricing, 
would anybody like to comment on why VWLC pricing isn’t being adopted?  
Is it ELA/ESSO (IBM contract) related, or the uncertainty of the software bill?

AFAIK, for DR situations and looping transactions, to name but two, there are 
caveats that mean IBM will not be punitive for extraordinary situations where 
the VWLC bill is higher via an SCRT submission, as documented in their 
Planning for Sub-Capacity Pricing manual.  Coupled with the ability of “soft 
capping” via a 3rd party software product (e.g. zCOST AutoSoftCapping), and 
thus guaranteed VWLC high-watermark costs, it seems somewhat of a 
paradox that customers aren’t adopting VWLC.

Mikey Moss.

On Wed, 18 Nov 2009 06:27:14 -0600, Jim Elliott, IBM wrote:

>There are a LARGE number of customers still not running with VWLC pricing.
>For those customers, a "downgrade" can save software charges. Plus of course 
>the maintenance charges drop as they are based on the "capacity marker".
>
>Jim Elliott
>Consulting Sales Specialist - System z and Linux Champion
>IBM Canada Ltd.
>



Re: Tapeless???

2009-11-18 Thread Michael W. Moss
Many of the disk vendor CAS (Content-Addressed Storage) solutions, EMC 
Centera and HDS CAP to name but two, can connect to tapeless VTLs from 
vendors such as Bus-Tech.  These solutions provide WORM but faithfully 
emulate tape, so there is no need to re-engineer or change the application; 
just implement the solution.  These tapeless VTLs also break the connection 
with physical tape, so DR for 2, 3 or even n sites is easy, whether full, 
application, or any subset, as vaulting is as per the disk subsystem deployed, 
which could be high-end or even mid-range distributed.  For the avoidance of 
doubt, the back-end is IP or FC, so NFS or LUNs, not 3390.  Even if a 
customer requires a physical tape connection for ISV software, the Bus-Tech 
solution provides connection to a physical tape drive for such low usage.

Tapeless is possible; you just need to think differently from the traditional 
physical tape model.  If you think only of the IBM or Sun ATL/VTL, you might 
miss out on these outboard tapeless VTL solutions…

Mikey Moss.
On Wed, 18 Nov 2009 10:16:59 -0500, van der Grijn, Bart (B) wrote:

>I'm also intrigued by how those of you that have gone tapeless address
>the traditional tape needs.
>- How do you store backups and archives in your environment? Do they go
>to virtual tape but never leave the cache?
>- Do you not send backups off-site?
>- For DR, do you rely only on a disk mirror? Does that mean you mirror
>your archives and backups as well?
>
>Thanks,
>Bart
>
>-Original Message-
>From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
>Behalf Of William Bishop
>Sent: Wednesday, November 18, 2009 10:06 AM
>To: IBM-MAIN@bama.ua.edu
>Subject: Re: Tapeless???
>
>Peer-to-peer to a cold site.
>
>Nowdays, you can also use tape libraries in a Grid environment.
>
>Thanks
>
>Bill Bishop
>
>Specialist
>Mainframe Support Group
>Server Development & Support
>Toyota Motor Engineering & Manufacturing North America, Inc.
>bill.bis...@tema.toyota.com
>(502) 570-6143
>
>
>
>Pat Mihalec 
>Sent by: IBM Mainframe Discussion List 
>11/18/2009 10:02 AM
>Please respond to
>IBM Mainframe Discussion List 
>
>
>To
>IBM-MAIN@bama.ua.edu
>cc
>
>Subject
>Re: Tapeless???
>
>
>
>
>
>
>I have a question for you that have gone tapeless. How do you DR your
>system?
>



Re: URGENT : DFHSM QUESTION : IEC030I B37-04

2009-11-18 Thread Michael W. Moss
I guess you should review the various HSM messages to determine why the 
PDO (Problem Determination Output) files are filling so quickly, if that is the 
case; you can live without PDO output in your critical situation.  It could 
also be that the DASD volumes containing the PDO files are full, so the PDO 
files can’t extend, hence your problem.  I would let your FREEVOL processes 
run to release your required disk space.

One unlikely thought occurs.  If you were using SVA or iterations thereof 
(RVA/Iceberg), then this has free space release considerations when it 
becomes full or near full, which is a case of running their clean-up routines.

On Wed, 18 Nov 2009 09:24:23 -0800, willie bunter wrote:

>Hallo All,
> 
>Because of critical space problems I migrated (issued command HSEND 
>FREEVOL MVOL(SL1101) AGE(0) TARGETLEVEL(ML2)) an ML1 vol to ML2.  The 
>process terminated after an hour.  Next I issued command HSEND FREEVOL 
>MVOL(SL1101) TARGETLEVEL(ML1).  Presently it is executing.  However, I 
>noticed in the STC there were various abend messages (when I issued the 
>FREEVOL for TARGETLEVEL ML2 & TARGETLEVEL ML1).  Would this cause a 
>problem?
>IEC030I B37-04,IFG0554A,DFHSMY,DFHSMY,ARCPDOX,9BEB,SMP103,HSM.HSMPDOXY
>S DFHSMPDO
>ARC0037I DFSMSHSM PROBLEM DETERMINATION OUTPUT DATA 327
>ARC0037I (CONT.) SETS SWITCHED, ARCPDOX=HSM.HSMPDOXY,
>ARC0037I (CONT.) ARCPDOY=HSM.HSMPDOYY
>
>I noticed that the dump switches from X to Y.  Can I stop the MIGRATE to 
>stop this?  Or should I let it run its course.
> 
>Thanks.


  



Re: Hardware withdrawal: IBM System z9

2009-11-22 Thread Michael W. Moss
I couldn’t agree more; different strokes for different folks, so not 
everyone is best suited to VWLC, and yes, let’s perform due diligence to 
determine the best z/OS software pricing model.

The one area where I think we need clarification is the observation that “all” 
z/OS products are VWLC eligible.  I thought it was a subset, what some might 
call “core” products, as per:

www-03.ibm.com/systems/z/resources/swprice/reference/exhibits/mlc.html

OK, I think most people here know the vagaries of NALC and the 
encouragement to get new workloads to the Mainframe.  On the other, 
legacy, hand, where Mainframe workloads started life, sit the core z/OS 
products: z/OS, CICS, DB2, IMS, COBOL, et al.  If the legacy long-term 
Mainframe user is still investing in the platform and has mission-critical apps 
with some combination of z/OS, CICS, COBOL and maybe some other VWLC-
eligible products (e.g. DB2, IMS, MQ, TWS, QMF, SA, Domino, PL/I, et al), 
why doesn’t VWLC stack up for them, as the big hitters from a price 
viewpoint are z/OS, CICS, DB2 and IMS?

Then, don’t get me started on the debate over SCRT submissions; kudos to 
IBM, they do accept them, hence VWLC, but few ISVs subscribe to SCRT 
submissions (I know there are exceptions, many of whom contribute to this 
forum), largely because they can’t include their products in the SCRT Type 89 
record creation process…

If lower or fairer software costs for all are an objective of the Mainframe 
community, I just wonder whether the ISVs and IBM are really listening, or 
whether customers are adopting the best software pricing model for 
themselves.  In 2003 the IBM Mainframe charter was all about value, 
innovation and community, so is the “user community” asking for “z/OS 
(z/VM, z/VSE, z/TPF, zLinux) value” from their software portfolios?

Hence full circle and back to my original question, which I will rephrase a 
little this time: “why aren’t qualified customers committed to the IBM 
Mainframe platform deploying VWLC pricing mechanisms?”

On Thu, 19 Nov 2009 06:18:00 -0600, Jim Elliott, IBM wrote:

>Mickey:
>
>Every customer should do a proper analysis of their total software bill to
>determine which pricing model is best for them. While VWLC may be best for
>most, I do have customers where due to usage characteristics, a combination
>of PSLC and ULC works out better. We see this often where a customer has a
>product which has ULC (Usage License Charge for DB2, CICS, IMS and MQ) which 
>has low utilization. When you go to VWLC all z/OS IBM products go to VWLC.
>
>It is very important to read all the info at
>http://www.ibm.com/systems/z/resources/swprice/.
>
>Jim



Re: BCPii and z890

2009-12-13 Thread Michael W. Moss
There was an article regarding the HMC and BCPii (page 46) in the August 
2009 issue of the z/OS Hot Topics newsletter, and it also suggests z9 and 
z10, not z890 (http://publibz.boulder.ibm.com/epubs/pdf/e0z2n1a0.pdf).

On Sun, 13 Dec 2009 22:02:59 +1000, Shane wrote:

>Haven't tried, but it doesn't look like it - the manual specifically
>mentions support on z9 and z10.
>
>Shane ...
>
>On Sun, 2009-12-13 at 13:33 +0200, Gadi Wrote:
>
>> I am in the process of installing z/OS 1.11 on our z890.
>>
>> During IPL I encountered some message related to BCPii.
>>
>> After a bit of searching I found the Section for setting up BCPii in
>> the Callable Services for HLL manual.
>>
>> The instructions say to Define the BCPii Community Name on the Support
>> Element, but when I try to follow the instructions, they don’t match
>> the options on the SE.
>>
>> Can BCPii be set up on a z890?



Re: So About those Consoles on the z10 HMC

2009-12-13 Thread Michael W. Moss
This change from OS/2 to Linux was part of the z9 and has been propagated 
to the z10.  There’s even a short description of this evolution at Wikipedia 
(http://en.wikipedia.org/wiki/IBM_System_z9).

There are some considerations for HMC Remote Operations.  Try looking at the 
following:

Appendix C of the Hardware Management Console Operations Guide (SC28-6873-00)
https://www-304.ibm.com/servers/resourcelink/lib03010.nsf/DF559B0D8BE29E568525748100603BC2/$File/SC28-6873-00.pdf

or

Remote Operation section of the IBM System z9 Enterprise Class Technical 
Guide Redbook (SG24-7124-02)
http://www.redbooks.ibm.com/redbooks/pdfs/sg247124.pdf

You should be able to achieve what you need; it’s just a different way of 
doing things.

On Sun, 13 Dec 2009 09:11:25 -0800, Ed Long wrote:

>Happy Holidays to one and all.
>
>So, we have a new z/10 BC which replaced a z890 and will also soon replace a 7060.
>
>
>Being a small ISV we have limited system programming and no operator staff. 
>We use the HMC as our console; on the old OS2 systems, PCOMM did an 
>adequate job of allowing us to see the console when the HMC was remoted.
>
>Not so on the linux x3270 based z10. The 3270 emulation sessions don't 
>appear when we remote the HMC console. Is there a configuration setting we 
>missed or is this simply a feature?
>
>Thanks for your help..
>Edward Long



Re: Do we need to apply this USERMOD (EDGUX100) for RMM when we migrate z/OS from 1.9 to 1.11

2010-06-23 Thread Michael W. Moss
This is a DFSMSrmm facility generally used for ignoring duplicate or 
undefined tape Volume Serial Numbers (VOLSERs).  So in all likelihood, yes, 
you should apply this USERMOD, and you will need to update the DFSMS 
FMID accordingly to HDZ1B10.

Historically EDGUX100 was typically used to emulate EXPDT=98000 
processing from other Tape Management Subsystems such as CA-1 
and CA-Dynam/TLMS.

Please refer to the following IBM manual for EDGUX100 information:

http://publib.boulder.ibm.com/infocenter/zos/v1r9/index.jsp?topic=/com.ibm.zos.r9.idarc00/c8259.htm

On Thu, 24 Jun 2010 11:40:06 +0800, ibmnew wrote:

>Dear all
>
> We are migrating z/OS from 1.9 to 1.11
>I found there was a usermod (FRED001) on z/OS 1.9.  Below is the information for the usermod:
>FRED001 M.C.S. ENTRIES = ++USERMOD (FRED001) REWORK
(2009183) . /* SPEFIFY USERMOD NAME */
>++VER (Z038) FMID(HDZ1190) /* insert correct FMID */
>PRE (UA44931) .
>++JCLIN .
>//EDGUX100 EXEC 
PGM=IEWL,PARM='LET,NCAL,RENT,REUS,REFR,LIST,XREF'
>//SYSLMOD DD DISP=SHR,DSN=SYS1.LINKLIB,UNIT=3390,
>// VOL=SER=BMRSC1
>//SRCLIB DD DISP=SHR,DSN=ABCRMM.PROD.HSKP.JCL,UNIT=3390,
>// VOL=SER=CMRMM1
>//AEDGMOD1 DD DISP=SHR,DSN=SYS1.AEDGMOD1,UNIT=3390,
>// VOL=SER=BMDLC1
>//SYSPRINT DD SYSOUT=*
>//SYSLIN DD *
>INCLUDE AEDGMOD1(EDGUX100)
>ENTRY EDGUX100
>NAME EDGUX100(R)
>++SRC(EDGUX100) TXLIB(SRCLIB) DISTLIB(ASAMPLIB) .
>++SAMP(EDGUX100) TXLIB(SRCLIB) DISTLIB(ASAMPLIB) .
>
> Do we need to apply this USERMOD (FRED001) for RMM on z/OS 1.11 when we migrate z/OS from 1.9 to 1.11?
>
>If we need to apply the usermod, do we just change the FMID from HDZ1190 to HDZ1B10?  Please see below:
>
> )++VER (Z038) FMID(HDZ1B10) /* insert correct FMID */
>
>Thanks a lot!
>
>Best Regards,
>
>Jason Cai



Re: BCPii Sample code - is there any?

2010-07-15 Thread Michael W. Moss
There are some pseudo code examples in Chapter 14 of the z/OS 
V1R11.0 MVS Callable Services for HLL manual (SA22-7613-05):

http://publibz.boulder.ibm.com/epubs/pdf/iea2c150.pdf

Nothing ready to run though.

On Wed, 14 Jul 2010 20:10:24 +0100, Graham Harris wrote:

>I am failing miserably to find any sample code for getting started with
>retrieving HMC information via BCPii (i.e. using HWICONN / HWIQUERY).
>
>Does anyone know if anything is 'out there'?
>
>Any pointers would be appreciated.
>
>Thanks.



Re: VTFM vs TMM

2010-11-23 Thread Michael W. Moss
Maybe a better way to phrase your question would be: what are the 
advantages and disadvantages of physical tape versus a tape-on-disk 
approach?

TMM is a methodology which temporarily stages tape data on Level 0 
disk; the TMM pool is then managed by DFSMShsm or equivalent to 
consolidate this data on physical tape.  The resulting physical tapes 
will then have data sets with varying expiration criteria, so they will 
require recycling periodically, and from a DR/BC viewpoint will require 
duplicating as and if required.

IBM Virtual Tape for Mainframe (VTFM) is essentially a virtual tape 
solution, emulating 3480/3490/3590 drives and allocating tape data to 
physical z/OS DASD.  Of course, there are many other z/OS virtual tape 
solutions with a tape-on-disk concept: CA VTape is a software example, 
requiring physical tapes for data destaging; Bus-Tech MDL/EMC DLm, 
Luminex, Universal Software and Intercom are appliance solutions that 
allocate tape data to FC/NAS disk arrays without subsequent destaging 
to physical tape; and of course IBM TS7700 (VTS), Oracle/StorageTek 
VSM and FSC CentricStor are solutions that combine a disk cache and 
physical tape.

So, thinking of Sherlock Holmes, “when you have eliminated the 
impossible, whatever remains, however improbable, must be the 
truth”, maybe you could review all of the tape-on-disk options, which 
include TMM?

Some advantages of those solutions (for the avoidance of doubt: Bus-
Tech MDL/EMC DLm, Luminex, Universal Software, Intercom, et al) are 
that the resulting ML2-type data can be easily recycled, as the “tape” 
data is on cost-efficient FC/SAN disk, and easier data replication for 
BC/DR is also possible.  Equally, ML1-type operations could be 
eliminated, with all of the resource considerations (e.g. CPU, z/OS-class 
DASD) associated with that process.  Thus, for the avoidance of doubt: 
avoid ML1 disk costs and zSeries CPU cycles by eliminating ML1 from 
the storage hierarchy and going direct to ML2, where compression is 
performed outboard of the Mainframe and tape data is allocated on less 
expensive FC/IP disk arrays, potentially with the benefits of 
deduplication.

All that said, maybe even TMM can co-exist with such a tape-on-disk 
methodology.

As with any IT solution, identify your business requirements first and 
then research which products best fit them with the best ROI and TCO 
attributes.  So maybe VTFM isn’t for you, and maybe TMM can be 
approached from a different viewpoint by utilizing other virtual tape 
technologies.


On Mon, 22 Nov 2010 11:30:08 -0500, techie well wisher wrote:

>IBM has VTFM (which is diligent/copycross, etc). Why do we need this 
>product or use this product while we can directly intercept and direct 
>allocations to a particular storage group (such as TMMGROUP) with disk 
>volumes, let's say a dedicated set-aside pool from a storage device?  With 
>the extended dataclas attribute, the datasets in this group could be really 
>huge (several gigabytes). To me, this product adds unnecessary complexity. 
>With this, we don't need PAT (parallel tape access), because all the datasets 
>in this group are disk datasets, accessible by multiple address spaces/jobs. 
>Please let me know your thoughts, or am I missing something here?
>
>TW



Re: FileAid Empty File Check

2006-02-19 Thread Michael W. Moss
Hi Steve,

A similar question was asked before on a different list:

http://mvshelp.net/vbforums/showthread.php?t=21311

Maybe a different utility such as DFSORT/SyncSort or IDCAMS might be the
answer…
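As a sketch of the DFSORT route: ICETOOL’s COUNT operator with the EMPTY keyword sets RC=12 when the input has no records and RC=0 otherwise, regardless of RECFM, so a later step can test the return code much as with the FileAid step.  The dataset name below is just a placeholder:

```jcl
//* Hypothetical empty-file check via DFSORT's ICETOOL
//CHKEMPTY EXEC PGM=ICETOOL
//TOOLMSG  DD SYSOUT=*
//DFSMSG   DD SYSOUT=*
//IN       DD DISP=SHR,DSN=YOUR.INPUT.DATASET
//TOOLIN   DD *
* RC=12 if IN is empty, RC=0 if it has records
  COUNT FROM(IN) EMPTY
/*
```

A following step can then use COND= (or IF/THEN JCL) on that return code, and this should behave the same for FB, FBA and VB datasets.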

Regards, UK Mikey.
On Sun, 19 Feb 2006 17:46:28 -0600, Steve Burks <[EMAIL PROTECTED]> wrote:

>I have been using a FileAid Step in JCL to check for an empty file and then 
>executing a subsequent block of JCL if the return code is 0. This works
>great for RECFM=FB datasets. For some reason it will not work on RECFM=FBA
>or RECFM=VB datasets. Is there a way to make these work?



Re: discrete profiles for tape protection.

2006-02-26 Thread Michael W. Moss
Hi,

I feel the DFSMSrmm observation might be worth considering, as per the 
z/OS V1R7.0 DFSMSrmm Implementation and Customization Guide (dgt2c840) 
manual.  Section 11.9.1, Recommendations for Using RACF Tape Profile 
Processing, states:

1. The maximum number of entries for data sets that a TVTOC can contain is
500.

Attention: Processing that creates large numbers of TVTOC entries and
large access lists, for example, could result in an attempt to exceed the
maximum profile size.

2. The maximum number of volumes that any data set on the tape with an
entry in the TVTOC can span is 42.

3. The maximum number of volumes that any data set on tape without a|
TVTOC can span is limited only by the maximum profile size.

Basically this all relates to TAPEVOL and TAPEDSN usage with either of the 
DFSMSrmm options TPRACF(P) or TPRACF(A).  I don’t really understand why 
there is different behaviour for the IBM VTS versus the StorageTek VSM, but 
I wonder whether there might be some legacy exit code in the form of ADSP 
or PROTECT=YES which might be VTS dependent?

Ultimately I guess you might want to review your installation’s TAPEDSN and 
TAPEVOL settings and decide on what they should be.
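One hedged way to see where the installation currently stands is to list the relevant RACF settings and profiles in batch under the TSO TMP; these are standard RACF commands, though the output will depend on your authority and setup:

```jcl
//* Hypothetical sketch: display RACF tape protection settings
//RACFCHK  EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  SETROPTS LIST
  SEARCH CLASS(TAPEVOL)
/*
```

SETROPTS LIST shows whether TAPEDSN is in effect and whether the TAPEVOL class is active, while the SEARCH output lists the TAPEVOL profiles that exist, which should help compare the VTS and VSM setups.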

Regards, UK Mikey.

On Sun, 26 Feb 2006 12:12:28 -0600, John Benik <[EMAIL PROTECTED]> wrote:

>We have recently begun a tapecopy process from IBM VTS's to STK VSM's.  We
>have run into a few files that on the IBM side exceeded the max vol count
>for discrete profiles, but when trying to copy them to STK there is an
>issue and the discrete profile only allows them to go 42 volumes.  On the
>IBM side some of these went to 90 volumes.  When I questioned our
>outsourcer about this, they said they seemed to remember something being
>setup special for the IBM Vts's.  I had our security department look at
>this and they stated that the tapevol profiles are the same.  Since we are
>an RMM shop is this something that we would have to control in RMM?  Also
>are discrete profiles something we should be using or can we eliminate
>them and still be protected in our tape environment?



Re: Tape stacking, media conversion and data archival past end of mainframe

2006-03-09 Thread Michael W. Moss
Hi Len,

I have certainly used the BrightStor CA-1 Tape Management Copycat utility 
in the past to perform media conversion; basically you just select the 
output device you require, the old data is input on a compatible device 
type, and your target output device is something of your choice.

http://www3.ca.com/solutions/ProductOption.aspx?ID=1016

Performing such a cartridge media migration to a newer format such as 3590 
might make more sense and should be easily achievable, as you have 3590 
devices at your site; it is just a case of connecting them up to the IBM 
Mainframe.

From a regulatory viewpoint, if you have to keep data then you have to be 
able to read it; so if your data is required for regulatory purposes, even 
after the IBM Mainframe has been eradicated from your site, this is 
something that you will have to address.  Generally folks do some sort of 
data conversion from one platform to another, as the data created on one 
platform (i.e. the IBM Mainframe) is generally created in a proprietary format 
and cannot be read on another platform (e.g. UNIX, Linux, Windows), 
regardless of any data creation format considerations such as ASCII and 
EBCDIC et al…

It looks like you’re from an education installation and so you might be
able to find a solution using a spirit of reciprocity with a philanthropic
or like-minded installation?  For example, several years ago I came across
a similar scenario and the education facility was able to find
some “common ground” with a local business and so an “exchange” was
arranged where the education facility “parked” their data in a separate
IBM Mainframe virtual tape pool on a minimal LPAR and so the data is
retained indefinitely and exercised by the recycling/reclamation functions
of the virtual tape solution.  A further “access” facility supplements
this should the data be required.  In return the education facility had
something to offer the business in the form of graduate and lecturer
access.  So maybe it’s a question of “if you don’t ask, you don’t get”, but I 
suspect IBM might also play a pivotal part in this and indeed have some 
other ideas that might not involve any “outsourcing” type costs!

Ultimately the challenge seems to be with the data itself and if it’s
required for regulatory requirements, then the retention and indeed access
of said data seems mandatory.  When we replace IT systems, we generally
convert the data, and of course we might go from one RDBMS (E.g. DB2) to
another (E.g. Oracle) when we decommission the IBM Mainframe, but if there
is supplemental data on tape, we need to do something with this as well.
When we use data movement utilities such as DSS and HSM then data is
stored in a proprietary format, but generally it’s a “readable” extract of
database and application related data (E.g. Sequential File, PDS Library,
Etc.).  In these cases generally one would imagine that some sort of data
conversion should take place, and so this would go right back to the
pivotal question:

“If the data is required for regulatory reasons, why isn’t it being
converted from the IBM Mainframe to the other platform?”

You certainly have options, but in today’s increasingly regulatory bound
world, Production and Business data deletion is on the decrease, and I
guess you’ve decided to retain the data; so you just now need to find a
solution that will safeguard data retention and access…

Regards, UK Mikey.

On Thu, 9 Mar 2006 18:16:01 -0600, Len Rugen <[EMAIL PROTECTED]> wrote:

>I'm in the process of shrinking our pool of 3490 carts by about 30%.  We 
>have CA-1, so I'm using COPYCAT.  It works OK to stack to a like device, 
>but as best as I can tell, it doesn't do media conversion.  We also have a 
>3494 and 3590's.
>
>I also have users requesting that data be retained for 7 years, 10 years 
>or even "forever".  The problem is that the mainframe could well be gone 
>before forever comes, it will probably be gone in 5 years, but we have 
>been saying that for awhile.  Today, the only 3490 drives are attached to 
>the mainframe.  We also have 3590 drives on AIX for ADSM (Whatever Tivoli 
>calls it now).
>...
>Any thoughts?
>
>Thanks



Re: Data conversion from 3380 to 3390 - H E L P

2006-03-18 Thread Michael W. Moss
Hi Willie,

Clearly the biggest issue is the geometry change from 47K to 56K; so you
can’t use any “physical” type utilities (E.g. Full Dump & Restore) to
perform the data move unless you configure the 3390 devices in 3380
emulation mode.  The major issue is whether any programs/JCL/CLIST/REXX
et al have hard-coded block sizes that are device dependent.  This should
be easily identifiable by scanning the entire source portfolio, presuming
you know where it all is.  If your shop
has stood still for a long-time and there really has been nothing but
running the workload that never fails, then configuring the 3390 devices
in 3380 emulation mode might be the way forward.

Most of us old-timers will remember going from 3330-3350-3380 and at the
3380 stage we probably decided enough was enough so we removed device
dependencies; but then along came DFSMS and 3390, the use of FBA RAID DASD
subsystems emulating CKD/ECKD and so we’ve never had another DASD geometry
change since then…
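The scan of the source portfolio can be sketched off-platform.  This Python fragment is an illustrative sketch only, not a real JCL scanner: the member format and the list of block-size literals are assumptions (the values shown are the classic half-track and full-track figures for 3380/3390):

```python
import re

# Half-track blocking differs by device: these literals in JCL/source
# usually indicate a device-dependent BLKSIZE.  The list is illustrative,
# not exhaustive.
DEVICE_BLKSIZES = {
    "23476": "3380 half-track",
    "27998": "3390 half-track",
    "47476": "3380 full-track",
    "56664": "3390 full-track",
}

PATTERN = re.compile(r"BLKSIZE=(\d+)")

def scan_member(name, text):
    """Return (member, line-number, blksize, note) for each suspect literal."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for m in PATTERN.finditer(line.upper()):
            size = m.group(1)
            if size in DEVICE_BLKSIZES:
                hits.append((name, lineno, size, DEVICE_BLKSIZES[size]))
    return hits

# Hypothetical JCL member used only to exercise the scan.
jcl = """//STEP1 EXEC PGM=IEBGENER
//SYSUT2 DD DSN=PROD.FILE,DCB=(RECFM=FB,LRECL=80,BLKSIZE=23476)
"""
for hit in scan_member("PRODJOB1", jcl):
    print(hit)
```

Anything the scan flags is a candidate for re-blocking before the move; anything it misses (block sizes built at run time) still needs a manual review.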

Best regards, UK Mikey.

On Sat, 18 Mar 2006 04:56:12 -0800, willie bunter <[EMAIL PROTECTED]>
wrote:

>Hi,
>
>  I am seeking help regarding the conversion from 3380 to 3390.  I
performed this in 1992. I scanned the archives but there was a negligible
amount of information.  Can anybody please help me?  Any advice will be
greatfully appreciated.
>
>  Thanks.



Re: How to identify what is on a duplexed HSM migration tape marked as Scratch

2006-04-18 Thread Michael W. Moss
Hi Fred,

There’s a DFSMShsm manual that outlines one scenario that might help:

http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DGT2M710/2.6?DT=20020116123849

Equally you could try some reverse logic; so instead of using the tape as
the be all and end all, use the DFSMShsm MCDS as the focus, and list every
single ML2 MCDS (MCR) entry and cross-reference said report for any obvious
errors.  Of course there are lots of ways for achieving this, MXG, DFSMShsm
commands, et al.
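Once the MCDS ML2 entries and the tape-management scratch list have been extracted to flat VOLSER lists (via DFSMShsm commands, MXG, et al), the reverse-logic cross-reference is a simple set intersection.  A hedged Python sketch of just that comparison step, with made-up VOLSERs:

```python
def cross_check(mcds_ml2_volsers, tms_scratch_volsers):
    """Flag volumes DFSMShsm still references as ML2 but the TMS calls scratch.

    Both inputs are plain sets of VOLSERs -- the extraction itself is
    assumed to have been done already, outside this sketch.
    """
    return sorted(set(mcds_ml2_volsers) & set(tms_scratch_volsers))

# Hypothetical inventories: A00002 appears in both, so it needs investigating.
ml2 = {"A00001", "A00002", "A00003"}
scratch = {"A00002", "B00001"}
print(cross_check(ml2, scratch))
```

Any VOLSER in the intersection is the obvious error case: DFSMShsm thinks the data is still migrated there, while the TMS is about to hand the tape out as scratch.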

Then of course if the tape hasn’t been overwritten we can list the TTOC
record itself, which as we all know is a map of what’s on tape which only
has the one data set!  Basically you need to refer to the relevant Chapter
(TTOC – Chapter 50) of the DFSMShsm Diagnosis Guide and Reference
(LY35-0115-nn : DGT2R430)

http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DGT2R430/CCONTENTS?DT=20050203115120

Of course the link above only provides a text reference to the actual
documentation required, as diagnosis resources are licensed, so you will
need to enter your IBM Resource Link USERID, Password and Key Code, which
will be associated with your site.  Once in the manual, this will give you a
mapping macro for the TTOC and if memory serves me correctly you should very
easily be able to identify all data sets on the tape from the TTOC contents.

With regard to why tapes might have been scratched, DFSMShsm will not
scratch them without some instruction, implicit or explicit, to do so; one
example might be recycling the original volume.  From a CA-1 viewpoint,
changing the OUTCODE is just doing that, and has nothing to do with the
expiration date and therefore the Scratch and Clean process; but of course
if we return a vaulted tape to the default Data Centre (DC) location, it’s
easy to inadvertently change the expiration date to force such a move.

Best Regards, 

UK Mikey. 

www.value-4it.com 
Maximizing The Business Value-4IT 
Resources Within Your Organization 

On Wed, 19 Apr 2006 15:02:29 +0930, Fred Schmidt <[EMAIL PROTECTED]> wrote:

>Hi fellow listers,
>
>We have suddenly seen a large number of tapes appear on our daily scratch
>volume list. We are a bit nervous about releasing them for scratch, since
>they are HSM migration tape duplex copies and we don't see the original
>tape of the duplex pair in this list. We expected that both tapes of a
>duplex pair would be marked scratch at the same time.
>
>We would like to identify some datasets on these tapes and check that they
>really do no longer exist as catalog entries with VOLSER of MIGRAT* on the
>system.

>Regards,
>Fred Schmidt
>Senior Systems Programmer
>Department of Corporate and Information Services (DCIS)
>Data Centre Services (DCS)
>Northern Territory Government
>
>Email  [EMAIL PROTECTED]
>Phone(08) 89 99 6891
>Fax (08) 89 99 7493



Re: Our Datacenter Status

2006-04-18 Thread Michael W. Moss
Hi Eric,

May I be the first of many to wish you all the very best in your future 
endeavours.  Seemingly, for as many stories as we read regarding the 
Mainframe resurgence, M & A, consolidation, infrastructure and other 
events inevitably generate data centre shutdowns, with consequences for 
the associated personnel.  On the plus side, you have amazing skills; it 
would seem that you, like many of us, have worked in a smaller shop and so 
have had experience of all Systems Management disciplines, including OLTP, 
RDBMS, Scheduler, Networks, et al; so the IBM Mainframe Systems Programmer 
generally has both technical and process skills that could be readily 
available for a modicum of cross-training into other technologies; should 
you not be able to find an IBM Mainframe Systems Programmer role in your 
area.  Maybe this is something you have probably already considered?

As a limey I have no real idea of IBM Mainframe installations in your 
location, but I speculate that the truly enlightened employer would not be 
overpowered by receiving a copy of your résumé for their consideration, 
while maybe you could express a desire to learn new technologies, as and 
when required, which may include programming and/or Systems Administration 
for UNIX/Linux/Wintel environments.  In my experience, the genuinely 
enlightened and caring employer, therefore the ones that we would like to 
work with, will value your experience and background and if you have the 
drive and interpersonal skills to engage such an organization, then 
hopefully you will get what you deserve very, very soon.

Keep smiling, keep trying and keep being you and allow the realm of 
possibility to exist.

Best Regards,

UK Mikey.

www.value-4it.com
Maximizing The Business Value-4IT
Resources Within Your Organization

On Tue, 18 Apr 2006 16:27:29 -0500, Eric Bielefeld  wrote:

>I just thought I'd give you all a status on our datacenter.  The time
>is short.  My last day, unless I hear otherwise soon, is 4-28.
>Yesterday at noon we shut down all of the CICSs, which were still
>running in read only mode.  I was amazed at how many messages were
>coming from VTAM saying that people were trying to log on to
>applications that are down.  I counted 128 attempts to log into CICSs
>this morning from 6:00 to maybe 11:00.
>

>Eric Bielefeld
>Sr. Systems Programmer
>P&H Mining Equipment
>414-671-7849
>Milwaukee, Wisconsin



Re: Monitoring tool for STK 9310

2006-04-23 Thread Michael W. Moss
Hi Roland,

The 9310 PowderHorn LSM can support a multitude of tape allocations, 
including native MVS via base HSC, virtual tape via VSM and Open Systems 
via CSC/NCS/ACSLS and so has a good inbuilt repository of information as 
these functions generate their own SMF records and monitoring events (E.g. 
WTO), which can be reported on using any tool such as DFSORT/ICETOOL, 
MXG/SAS, Rexx, et al.  I know that those sites not wishing to pay for a 
software product use the MXG mapping macros for after-the-event report 
production or the mapping macros provided by StorageTek as a base.

Software products that I know of are listed as:

ExLM (Expert Library Manager) @ 
http://www.storagetek.com/products/product_page63.html
ExLM software provides content management for mainframe automated tape 
environments. It works with StorageTek Host Software Component software, 
StorageTek Virtual Storage Manager system and other tape management 
software to fully manage tape operations.

ExPR (Expert Performance Reporter) @ 
http://www.storagetek.com/products/product_page64.html
ExPR software allows storage administrators to track tape subsystem 
activity. Tracking enables planning for capacity and resource allocation 
strategies. Information is accessed in real time. Two years of historical 
data can be displayed.

There used to be a product called Beta 54 
(http://www.clay.com/beta/beta54nowavailable.html) that was quite nifty, 
but it doesn’t seem to be listed now.  Back in 2000 ASG and Beta Systems 
did some deal with regard to some of the Beta Systems products, and maybe 
Beta 54 got dropped around then.  I think at one time there was talk of 
StorageTek integrating Beta 54 into ExPR, but I’m not sure if this 
happened.

Bottom Line: Sometimes products can just complicate matters if you really 
don’t take the time to understand what your solution is and how it should 
be performing.  For somebody who understands HSC and VSM, then analyzing 
the SMF records and the subsystem control data sets tells you all you need 
to know about 9310 capacity and performance, and arguably this is the way 
to go.  ExPR and ExLM will automate Performance Reporting and Library 
Management, but rely on the premise that you know what you’re doing, and 
so a little information might be a dangerous thing.  The simple rules of 
thumb for Capacity and Performance management should be applied, as per 
tape subsystem operations, for example:

* Mounts (Private, Scratch, Logical, Physical – Mapped to drive/robotic 
performance)
* Mount Time (Robotic Search + Robotic Load + Label Find)
* Spare Slots (LSM capacity – Slots Used)
* Scratch Count (LSM slots used – LSM private slots or LSM scratch slots)
* Cleaning Cartridges (Number, Times Used)
* Virtual Storage Cache Use (High Watermark, Low Watermark, Migration 
Efficiency)
* Virtual Storage Resource Used (Disk Space, MVC Counts, Logical Volumes, 
Etc.)

This list is by no means exhaustive, but knowing what resources to measure 
and how to measure them is of fundamental importance.  As due diligence, 
reporting on these metrics should be performed, so if you deploy a 
software product such as ExPR, then you can have confidence in the 
accuracy of the reports being produced.  If a job’s worth doing it’s worth 
doing well and so just buying a software product might not solve the 
problem…

Best Regards, UK Mikey.

On Sat, 22 Apr 2006 10:48:59 -0400, Roland Chung <[EMAIL PROTECTED]> 
wrote:

>Hi Listers, my client is looking for a software package for monitorring
>the STK tape libraries. They are at z/OS 1.5.
>
>Any suggestion will be appreciated.
>
>Thanks in advance.
>
>--
>With best regards,
>
>...Roland Chung



Re: RMM Restore CDS

2006-04-25 Thread Michael W. Moss
Try http://publibz.boulder.ibm.com/cgi-
bin/bookmgr_OS390/BOOKS/DGT2C840/17.4?DT=20050714162634

As you're using DSS, try http://publibz.boulder.ibm.com/cgi-
bin/bookmgr_OS390/BOOKS/DGT2C840/17.4.6

On Tue, 25 Apr 2006 13:47:14 -0700, Hjelm, Norm <[EMAIL PROTECTED]> 
wrote:

>In compliance with our current Disaster Recovery procedures, I've been
>able to use the following JCL to backup the RMM CDS and Journal files
>directly to tape as the last step in a larger JCL stream that backups up
>all of our essential catalog files and such.  Now I'm in need of JCL to
>restore the files from the afore mentioned tape.  I'd like to test this
>by restoring the CDS and Journal files to a new name so that I don't
>overwrite the existing PROD files.  I've looked over the manuals but
>have yet to find exactly what I'm looking for.  Any help or suggestions
>would be greatly appreciated.



Re: RMM scratch tape management

2006-04-26 Thread Michael W. Moss
Hi Kees and Brian,

Hopefully the following dialog might help clarify matters for you, which I 
feel is more of a generic “process” with TMS and VTL/ATL scratch 
synchronization, as opposed to a DFSMSrmm and Bus-Tech MAS issue:

1) Both the MAS and DFSMSrmm keep scratch inventories and over time a 
scratch request might get issued by a TMS, maybe DFSMSrmm that is 
satisfied by the MAS, but the job gets cancelled and so the MAS thinks a 
tape is non-scratch (Private), but DFSMSrmm still knows it’s scratch; over 
time the MAS and TMS inventories, in this case the DFSMSrmm CDS will be 
out-of-line.

2) So periodically, generally at the end of the batch processing when the 
installation runs their TMS house-keeping, in this case the DFSMSrmm 
EDGHSKP (EXPROC, VRSEL, DSTORE, et al) utility to produce scratch and 
vault lists, they synchronize their TMS, in this case the DFSMSrmm CDS 
with their VTL, in this case MAS scratch inventories.  Because this 
process is performed at a point of low tape activity (E.g. Ideally the 
installation wouldn’t want to omit the picking of vaulted tapes because 
they were still being created) then the chances of synchronization issues 
are minimized; but there is a minor possibility that scratch mounts are 
still being performed while the scratch synchronization process runs.  Of 
course this “challenge” applies to all VTL (E.g. Bus-Tech MAS, IBM VTS, 
StorageTek VSM, Etc.) and indeed ATL (E.g. IBM 3494, StorageTek 9310, Etc) 
implementations.  The user of a VTL or ATL should be mindful of this, but…

3) The TMS inventory (E.g. DFSMSrmm CDS, CA-1 TMC, CA-Dynam/TLMS VMF, 
Etc.) is the “Master” inventory and if the tape is marked “Private” non-
scratch in the TMS, then even if the VTL or ATL performs a scratch mount 
for this tape volume, whether logical (E.g. VTL) or physical (E.g. ATL), 
then the mount process will not be satisfied because a “Private” tape has 
been used for a “Scratch” mount.  Therefore the tape will be dismounted 
and another scratch mount issued, which will only be satisfied when 
a “Valid” scratch mount is performed.  Potentially an IEC507D message 
might be issued because the Expiry Date on the tape had not been met; but 
not if the tape was “Private” from a TMS viewpoint.  For the avoidance of 
doubt, a “Valid” scratch mount is one where the TMS and VTL/ATL scratch 
flags are both valid.  Conversely if the VTL/ATL scratch flag is a 
negative, so the tape is “Private”, then the logical or physical tape 
volume will never be mounted.

4) In conclusion, TMS and VTL/ATL synchronization has been forever thus.  
Generally we run scratch synchronization processes at the end of batch 
processing, as this is when we generally run TMS house-keeping, as the 
completion of batch signals the “right time” to vault tapes and replenish 
scratch levels.  However, if we get low on scratch volumes outside of this 
once a day period, we can always perform TMS “Scratch and Clean” processes 
with an associated TMS and VTL/ATL scratch synchronization process to 
release scratch volumes for further processing, while we ponder whether to 
increase our logical (E.g. VTL) and/or physical (E.g. ATL) scratch 
inventories to avoid such an issue in the future.
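The rules in points 1-3 amount to a reconciliation pass in which the TMS inventory is always the master.  A Python sketch of that pass, with an assumed two-state volume status; real MAS/TMS interfaces differ, so this only illustrates the decision logic:

```python
def synchronize(tms_status, vtl_status):
    """Return the VTL updates needed so the VTL agrees with the TMS master.

    Both inputs map VOLSER -> "SCRATCH" or "PRIVATE".  Only volumes the
    TMS marks scratch may be scratched in the VTL; a volume the TMS marks
    private must never be left scratch in the VTL.
    """
    to_scratch, to_unscratch = [], []
    for volser, tms in tms_status.items():
        vtl = vtl_status.get(volser)
        if tms == "SCRATCH" and vtl == "PRIVATE":
            to_scratch.append(volser)       # e.g. a cancelled job left it private
        elif tms == "PRIVATE" and vtl == "SCRATCH":
            to_unscratch.append(volser)     # protect the data: TMS is the master
    return sorted(to_scratch), sorted(to_unscratch)

# Hypothetical inventories that have drifted apart.
tms = {"V00001": "SCRATCH", "V00002": "PRIVATE", "V00003": "SCRATCH"}
vtl = {"V00001": "PRIVATE", "V00002": "SCRATCH", "V00003": "SCRATCH"}
print(synchronize(tms, vtl))  # (['V00001'], ['V00002'])
```

Running such a pass at the quiet point after batch house-keeping minimizes the window in which a scratch mount can race the synchronization, exactly as described in point 2.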

Best Regards, UK Mikey.

On Wed, 26 Apr 2006 09:30:56 +0200, Vernooy, C.P. - SPLXM 
<[EMAIL PROTECTED]> wrote:

>"Kees Vernooy" <[EMAIL PROTECTED]> wrote in message news:...
>> On Fri, 11 Jun 2004 09:18:54 +0100, Perryman, Brian 
<[EMAIL PROTECTED]> > wrote:
>>
>> >Hi folks
>> >
>> >Hopefully Mike Wood will jump in here..
>> >
>> >Is it possible to get RMM to specify what volume is used when a 
scratch > tape is requested? We have a virtual tape device 
(BusTech's 'Mainframe > Appliance for Storage'). It pretends to be a bank 
of sixteen 3490 tape > drives. When a tape mount for a scratch volume is 
requested, this device > intercepts the 'PRIVAT' message and picks a 
volume which it thinks is a > scratch volume, and mounts it.
>> >
>> >This gives me a problem in that I have to keep the MAS scratch list in 
> step with what RMM thinks are scratch (or more importantly, NOT scratch).
>> There is a BusTech utility batch program to do this, running off an RMM 
> scratch report. For
>> >each tape in the RMM scratch list, it tells the MAS to scratch the 
volume.
>> This is working ok most of the time.
>> >
>> >However there is a slight window of risk. If between the time the 
report > was generated and the time the scratch requests go in, a scratch 
tape is > used to satisfy a mount request, it can get scratched inside the 
MAS even
>> though the tape is now in use. RMM of course won't know about this.
>> >
>>
>> Brian,
>>
>> I found this thread in the archives, we are looking at the BusTech MAS
>> device at the moment. You know the device by now, I saw in a later 
thread > that you were satisfied by the box. Are you still?
>>
>> I am interested in the potential window of risk you mentioned. Did you
>> expirience any problems with this and if so,

Re: Problem with RMM and VRSes

2005-11-05 Thread Michael W. Moss
Hi,

There could be many potential situations to explain this, but two that
spring readily to mind are documented in the DFSMSrmm manuals, as per:

http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DGT2R320/6.3?
SHELF=DGT2BK31&DT=20031211125737&

If you use the parmlib member EDGRMMxx OPTION VRSEL(OLD) operand, you can
chain vital record specifications as follows:

6.3.1 What You Can Chain When You Specify VRSEL(OLD)

* Data set vital record specifications contain the retention information
for the data set.
* Any name vital record specification chained to the data set vital record
specification can only contain movement information.
* Both data set vital record specification and name vital record
specifications can contain movement information.
* Vital record specifications are chained using NEXT.
* Release options are not supported but can be defined.

Setting the parmlib member EDGRMMxx OPTION VRSEL(NEW) operand, you can
specify vital record specifications as follows:

6.3.2 What You Can Chain When You Specify VRSEL(NEW)

* Both data set vital record specifications and name vital record
specifications can contain retention information.
* The name vital record specification can use any retention type.
* Both data set vital record specification and name vital record
specification can contain movement information.
* Vital record specification chains are made by using the NEXTVRS operand
or the ANDVRS operand.
* Release options are fully supported.

Basically this is all about Boolean logic and you need to consider
what your overall or individual VRS policy should be, for each and every
rule.  As with most things, generally the 80/20 notion applies; hopefully
the 80 is the “norm” and the 20 is the “exception”. 

Secondly you may want to consider the “chaining” commentary contained here:

http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DGT2R320/6.6.4?
SHELF=DGT2BK31&DT=20031211125737

Retain 3 cycles of the data set that matches the data set name mask.
Retain each additional cycle of the data set for at least 3 days. Retain
the latest cycle in the home location, the next cycle in storage location
REMOTE, and the remaining cycles in the home location.




RMM ADDVRS DSN('WK.**') –
 CYCLES -
 COUNT(1) -
 LOCATION(HOME) –
 NEXTVRS(REMC1)
RMM ADDVRS NAME(REMC1) –
 CYCLES -
 COUNT(1) -
 LOCATION(REMOTE) –
 NEXTVRS(HOMC1)
RMM ADDVRS NAME(HOMC1) –
 CYCLES -
 COUNT(1) -
 LOCATION(HOME) –
 NEXTVRS(DAYS3)
RMM ADDVRS NAME(DAYS3) –
 DAYS -
 COUNT(3) -
 LOCATION(HOME)

With DFSMSrmm it is prudent to be explicit and define the cycle you
require without any presumptions.  Ultimately I feel this is a good thing,
as this translates to “Plain English”, which ultimately is what policies
should be: defining a business requirement explicitly.

Disclaimer: Of course if you have recently converted from another Tape
Management Subsystem (TMS), then part of the DFSMSrmm conversion process
will assign the tape volume (VOLSER) from the existing TMS location
(vault) to the DFSMSrmm location (LOCDEF) and only do this once; so only
newly created or updated volumes will benefit from movement actions via
the DFSMSrmm EDGHSKP VRSEL, DSTORE and EXPROC processing.

HTH, MWM.



Re: CA-1 to RMM Conversion

2005-11-27 Thread Michael W. Moss
Hi James,

I have sent you an Email with several documents including a project plan,
so this should get you going.  Major observations:

1) Garbage In = Garbage Out
Cleaning up all TMC volume (VOL) and data set (DSNB) chaining errors is
mandatory, but also perform a housekeeping exercise to identify and delete
any other obsolete or erroneous TMC resources (E.g. Locations, Volumes, et
al).

2) DFSMSrmm runs in parallel (Warning Mode) with any other TMS
Use this feature wisely.  Perform the actual conversion as many times as
you need to, but run in parallel for at least a week, ideally a month,
anything longer might be overkill.  This allows all personnel (E.g.
Operations, Storage Administration, et al) an opportunity to acclimatise.
There will be differences in the way that DFSMSrmm performs expiration
(E.g. Scratch via EXPROC) and vaulting (E.g. Move via DSTORE).

3) Decide on your reporting strategy
CA-1 has its own built in reports and we have probably used these for
years and take them for granted.  DFSMSrmm has a plethora of reporting
options but doesn’t have a Generalised Report Writer (E.g. TMSGRW) and so
your options are numerous (E.g. DFSMSrmm House-Keeping Reports, Sort,
Rexx, SAS, et al).  You just need to decide which methodology is best for
your installation.  Generally I would recommend creating a flat file from
the DFSMSrmm CDS via the EDGHSKP REPTEXT function and this file has
numerous mapping macros (E.g. MXG, ASM DESCT, et al) and so the world is
your oyster so to speak…
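Once EDGHSKP REPTEXT has produced the flat file, any fixed-width parser will do.  A minimal Python sketch of the idea; the offsets and field names below are purely illustrative assumptions and must be replaced with the real record layout from the mapping macros (MXG, ASM DESCT, et al) for your release:

```python
# Illustrative layout only: (start, end) byte offsets per field.
# Take the real offsets from your release's mapping macro.
FIELDS = {
    "volser":   (0, 6),
    "status":   (6, 14),
    "location": (14, 22),
}

def parse_record(line):
    """Slice one fixed-width extract record into a dict of stripped fields."""
    return {name: line[a:b].strip() for name, (a, b) in FIELDS.items()}

# Hypothetical 22-byte record matching the layout above.
sample = "A00001MASTER  REMOTE  "
print(parse_record(sample))
```

From there, filtering and report formatting is ordinary scripting, which is really the point: the extract file turns reporting into a commodity task in whatever tool the installation prefers.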

Regards, UK Mikey.

PS.  I have performed nn DFSMSrmm conversions since the early 1990’s and
if I can do them, it should be a walk in the park for you!

On Mon, 28 Nov 2005 10:50:43 +0800, James Smith <[EMAIL PROTECTED]>
wrote:

>Folks
>
>Does anyone out there have a CA-1 to RMM conversion plan that they would
be willing to share?
>
>Lots of great information in the conversion Redbook but it would be nice
to see a 'real, live' plan.
>
>Jim S



Re: Optimising use of 3592 Drives

2005-11-30 Thread Michael W. Moss
Hi Mike,

Seemingly filling IBM Mainframe tape volumes has always been a problem
since the release of the 3480 cartridges, and without compression (ICRC,
IDRC, et al) this was only ~208 MB capacity and of course now with the
higher capacity cartridge media (E.g. 3592, 9n40, Etc.) we have capacities
of ~900GB to utilize.

Of course some functions will fill media such as back-end VTL (E.g. VTS,
VSM, Etc.), HSM type functions (E.g. DFSMShsm, FDR/ABR, ASM2, Etc) and
RDMS (E.g. Beta 93, CONTROL-D, CA-View/Deliver, Output Manager, Etc.), but
you’re exactly right, when you move away from these tape fillers you’re
left with the challenge of filling your cartridge media.

Some might still deploy the traditional post-processor routes of Tape
Mount Management (TMM) and Tape Stacking (E.g. Tape/Copy, CARTS TS,
Copycat, Etc.), but of course this means processing the data twice, during
creation and then during consolidation, but then of course we do this to
some extent with recycle/reclamation type functions.

As tape capacities have increased, so have disk capacities and the cost of
disk is far less now, and the use of advanced T0 functions (E.g.
FlashCopy, SnapShot, TimeFinder, ShadowImage, Etc.) is now far more
pervasive.  So yes, I would advocate using these facilities to create a
SWPOC (System Wide Point of Consistency) for backup/dump operations that
might be readily available for DR type recoveries (E.g. Recover to a point
in time – End of the On-Line Day and/or End of Batch Processing).  This
way the elapsed time of the data movement (E.g. DFSMSdss, FDR, Etc.) job
is not an issue, while you might introduce Cold Site DR simplification
benefits.

There are some other techniques that may or may not be suitable for the
typical IBM Mainframe Date Centre such as the Extended High Performance
Data Mover (ExHPDM) software from StorageTek.  Quite simply this software
is designed to allow a single tape cartridge to accept output from
multiple disks simultaneously, which can dramatically reduce backup time
and use less hardware and fewer cartridges.  Of course software is not
free, but a short-term ROI might be feasible for simplified DR and the
optimisation of higher capacity cartridges.

In conclusion, the perpetual IT conundrum and challenge where higher
capacities just move the resource management challenges elsewhere.  We can
now store and retain far more data in this information era, but now we
need to safeguard that we never lose ~900 GB of data on one cartridge,
as this is ~4,500 times the capacity of a ~208 MB 3480!  However, we also need
to optimize the ~900 GB single media capacities to get more bang for our
buck.
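The back-of-envelope arithmetic behind that capacity ratio, assuming binary-GB units (with decimal units the figure drops to roughly 4,300):

```python
# Uncompressed 3480 cartridge vs. a high-capacity 3592-era cartridge.
cart_3480_mb = 208
cart_3592_gb = 900

# Binary convention: 1 GB = 1024 MB.
ratio = (cart_3592_gb * 1024) / cart_3480_mb
print(round(ratio))  # 4431 -- roughly the ~4,500x figure cited above
```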

So as with anything in life, you can’t please all of the people all of the
time, and you need to counterbalance risk and reward, and in this case if
you do fill your ~900 GB cartridge, make sure all of your business
critical files are duplexed.  Why?  Because ATL’s can still damage
cartridges with their “hands/grippers”, and more likely, humans might
damage them if they’re moved from the data centre to a remote vault…

Regards, UK Mikey.

On Wed, 30 Nov 2005 15:34:30 -0600, Michael Pratt <[EMAIL PROTECTED]>
wrote:

>Hi,
>
>I have been searching for some details on the best ways to take advantage
>of 3592 drives.  We have a number of these installed inside a 3494
>however - and this is more procedural than a criticism of the hardware -
>really not been able to take full advantage of the capacity (or
>performance of the drives).  If anything, they have made our lives more
>difficult.
>
>As a back-end solution for the VTS, and as a destination for HSM
>migration/auto-backup processing, no problem.  They perform this task well
>and we do see a benefit.  But, when it comes to full volume dump
>processing, it is another story.
>
>We have tried specifying STACK(99)which, whilst it sounds like a good idea
>caused us a number of problems due to the length of time it took for the
>jobs to complete.  Now we are limiting things to approx 25 volumes but the
>dump jobs still take hours to complete - and in an environment with nearly
>10TB of dataanyway, like I have said I believe we need to review our
>processes.  This is an FDR environment I am referring to, and whilst we
>did attempt to use the VTS as a staging location for the full vols, found
>that the migration to the 3592's was too slow - apparently there is a
>later version of FDRTSEL that will help here (software is quite back-level
>still - migration is pending) but depending on this without sufficient
>data to back up the claim places a few too many eggs in that basket for my
>liking.
>
>Is anyone experiencing similar issues?  How are you getting around it?  Is
>using a snap-shot utility followed by backing up the offline volumes via
>the FDR or other utility the best way to go?  Does anyone else have other
>suggestions?
>
>Mike.


Re: DFHSM question - VTOC backups

2005-12-02 Thread Michael W. Moss
Hi John,

Yes, you can delete these files, which are generated by the AUTODUMP
facility of DFSMShsm and are controlled by the DUMPCLASS(VTOCCOPIES)
setting.  In the old days before DFSMS then defining a volume as Primary
to be managed by DFHSM would dictate whether the volume would have Backup,
Migration and/or Dump functionality associated.  Of course with DFSMS we
now do this at the DFSMS construct (E.g. STORGRP) level.  So it would seem
that the Level 0 (Primary) DASD volume no longer exists, but the VTOCCOPY
files do.

The DFSMShsm design of this process is extracted verbatim:

“When a volume dump of a level 0 volume is directed to a dump class that
has a VTOCCOPIES value greater than zero, a VTOC copy data set is created.
Only one VTOC copy data set is created for each dump generation. If more
dump copies of a given volume associated with the same dump class are kept
than the specified limit of VTOCCOPIES, the VTOC copies created for the
older dumps are excess. If the VTOC copy is in excess for all other dump
copies in the same generation, the VTOC copy is deleted and uncataloged
during this phase of automatic dump.”

You might also want to check if you have any legacy tape volumes from
dumps of these non-defined DASD volumes and determine whether you need
these or not.  Hopefully they will have previously cycled out naturally as
well.
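Identifying the leftover VTOC copy data sets for volumes that no longer exist can be sketched as a name-pattern match against the list of defined volumes.  The HSMBAK prefix and name format below follow John's example; the regex is an illustrative assumption, not a DFSMShsm-defined format:

```python
import re

# Dataset name pattern from the post: HSMBAK.VTOC.Tnn.Vvolser.Dn
# (the high-level qualifier is installation-defined).
VTOC_COPY = re.compile(r"^HSMBAK\.VTOC\.T\d+\.V(?P<volser>[A-Z0-9]{1,6})\.D\d+$")

def orphaned_vtoc_copies(dataset_names, defined_volsers):
    """VTOC copy datasets whose source volume is no longer HSM-defined."""
    orphans = []
    for dsn in dataset_names:
        m = VTOC_COPY.match(dsn)
        if m and m.group("volser") not in defined_volsers:
            orphans.append(dsn)
    return orphans

# Hypothetical catalog listing: OLD001 is no longer a defined volume.
dsns = ["HSMBAK.VTOC.T01.VPROD01.D1",
        "HSMBAK.VTOC.T02.VOLD001.D1",
        "SYS1.PARMLIB"]
print(orphaned_vtoc_copies(dsns, {"PROD01"}))  # ['HSMBAK.VTOC.T02.VOLD001.D1']
```

Cross-checking the candidate list against a DELVOL "volume not defined" response, as John did, is the safety net before any deletes.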

Regards, UK Mikey.

On Thu, 1 Dec 2005 15:06:17 -0600, McKown, John
<[EMAIL PROTECTED]> wrote:

>I have a lot of datasets of the form:
>
>HSMBAK.VTOC.Tnn.Vvolser.Dn
>
>The "volser" above belongs to volumes which no longer exist. For these
>volumes, I have sent the command:
>
>HSEND DELVOL volser PRIMARY
>
>I 100% get back "volume not defined". Can I just delete these datasets?
>
>--
>John McKown
>Senior Systems Programmer
>UICI Insurance Center
>Information Technology



Re: 3270 Session Manager survey

2005-12-02 Thread Michael W. Moss
Hi Bruce,

It looks like you have a good list there.  One thing you might want to
consider is whether the product fully supports SNA (E.g. 3270) and IP
(E.g. TN3270 and TN3270E) protocols.  Of course the majority of the
products on your list have been in use for quite some time, but some may
have not benefited from the R & D that others may have.  From my travels I
see a lot of TPX and SuperSession, but then I’m largely UK and EMEA based.

There’s a report on the web (http://www.north-ridge.com/network.php) that
is a little old but might be worth reviewing.  There of course will be
some information with the Industry Analysts (E.g. Gartner, META, Etc.) and
less expensive information from folks such as Xephon and z/Journal.

If it were me, I would look at the customer base numbers, the maturity and
support structure of the company, and the R & D invested in the product over
the last 5 years or so, and make sure that it supports all of today’s requirements,
which would seem to include SNA, IP, GUI (E.g. SOA, Web Portal), Single
Sign-On (E.g. Security (Authorization, Authentication, Pass-Ticket, Etc.),
Scripting, and Multiple Language Support (E.g. Globalisation), to name but a 
few.  Ultimately I suspect this might reduce your list down to several
players and then ultimately as with any decision, cost will then be an
influencing factor.

Regards, UK Mikey.

On Fri, 2 Dec 2005 12:08:17 +1100, Bruce Jefferies
<[EMAIL PROTECTED]> wrote:

>Hi Listers,
>I've been asked to find a list of 3270 Session Managers and have come up
>with the following.
>
>I'd really appreciate any feedback regarding the listed products (and any I
>may have missed).
>Our use is simply to be able to jump between 3270 sessions and to be able
>to modify the list
>of sessions displayed depending on who the user is.
>
>I'm a little concerned some of these may be 'functionally stablised' and we
>don't want to get
>lumbered with a dud.
>All war stories appreciated - gotcha's etc, please.
>
>If I get enough responses, I'll post a summary!
>
>SuperSession
>BIM/Window
>Netview Access Services
>Session Manager for z/OS
>VTAM/Switch
>Intersession
>Multsess
>Net-Pass
>TPX
>VTAM/Windows
>Solve:Access
>
>Thanks in advance.
>Bruce



Re: STK9840A/B tape data restored from 9840C tape drives

2005-12-04 Thread Michael W. Moss
Hi Bob,

Good for you that you built a replacement MVS system at your DR location.
I agree with Mark’s comments and absolutely there is backwards
compatibility for 9840x type drives.  Generally this is the case for IBM
Mainframe tape drives and only when the drive format changes is backwards
compatibility not maintained (E.g. 3420 – 3480, 3480/3490 – 3590, RedWood –
9x40, Etc.), but then we know this when we commission the new technology.

There’s some information regarding the StorageTek 9840 VolSafe feature
being 9840x drive/media specific, but I guess this doesn’t apply to you:

http://www.imation.com/support/products/data_center_tape.html#06
http://www.storagetek.com/products/product_page2441.html#applications

For sure, tape media needs to acclimatise when it moves from one place to
another.  As per the above StorageTek link, the relative humidity figure
(for storage of up to four weeks) is 5% to 80%, which hopefully was OK for
your situation?

The IOS000I message is pretty generic, and the rest of the codes in the
message would be useful.

I would be surprised if this was an FDR/ABR software issue, and I suspect
that if it’s happening for many tapes then it’s probably an HCD (E.g.
IOCP/MVSCP) configuration issue.  Seemingly VolSafe has implications, as
will replicating the tape I/O path for your scenario, which would seem to
include HCD & HSC/VTCS for defining the 9840C drive in emulation mode.
Stating the obvious, I think your best bet is to engage StorageTek and ask
for their support, but by all means send us an example IOS000I message,
and I’m sure the IBM-MAIN folks will do their best to assist you.

Keep up the good work…

Regards, UK Mikey.

On Sun, 4 Dec 2005 18:54:22 -0500, [EMAIL PROTECTED] <[EMAIL PROTECTED]>
wrote:

>I work for a federal agency that ran like hell from Katrina and her ugly
>sister Rita; we are currently processing out of a Sungard site where we built
>an entire MVS computer center in 5 weeks.
>
>We have 9840 tapes that were created on STK 9840 A/B tape architecture (Gen'ed
>as IBM 3590's tape devices) and we are unable to read the data off some of
>these tapes utilizing STK 9840C tape drives.
>
>My question is this?
>Has anyone experienced any problems reading backup tape data (using FDRABR)
>that was created on STK 9840A/B tape architecture then try to read the data
>from STK 9840C tape architecture??
>
>
>The problems we are experiencing are many but I do not understand why if I am
>attempting to restore a DSN from the same tape and it is mounted on a
>different 9840C drive each time; it may fail 4 times
>getting an IOS000I error message
>then on the 5th time the DSN will restore successfully.
>
>Please understand these tape were created in August 05 and set in New Orleans
>in STK silos for a few weeks with the air conditioning turned off before we
>were able to get them.
>
>STK/SUN and IBM have been very helpful in this Regional Disaster: in addition,
>there are many other companies that have assisted our agency in this tragedy.
>
>Bob Cosby
>SEB
>504-426-2460
>[EMAIL PROTECTED]

We routinely do this at DR drills without any problems. Using the 9840Cs
gives us extra drives since the A/Bs are limited. As a matter of fact I
just got back from a drill this morning where we did this.  This was
for native drives.   At our data center this had to work because we
converted our back end VSM drives (RTDs) from 9840B to 9840C earlier
this year.   There are still a lot of MVCs that are in 9840B format
that are read on the 9840C drives thoughout the day every day.

If you are working with the vendor then this shouldn't be news to you.
I don't know why you are having problems, but it's not because you're doing
something that isn't supported.

Regards,

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America and Farmers Insurance Group



Re: STK9840A/B tape data restored from 9840C tape drives

2005-12-05 Thread Michael W. Moss
Hi Bob,

Thanks for the update, and something more to look at…

Hmmm, we have an IOE (I/O Error) and Read Error Detected, which I suppose
is not that surprising; the possible scenarios would seem to be listed
here:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/IEA2M960/SPTM012917

And the IEC147I (613-04) tells us this is a tape positioning error:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/IEA2M760/SPTEE147I

I couldn’t find anything exactly the same on the IBM support site and I
don’t have access to the StorageTek CRC support function, but there was
something “in the area” for devices running in emulation mode, which is
similar to your 9840C 3590 emulation requirements:

http://www-1.ibm.com/support/docview.wss?rs=0&dc=DB550&uid=isg1OW55479

It looks like you’re getting past tape label processing (E.g. VOL1, HDR1,
HDR2, Etc.), but then failing on the file “open”, which suggests that
maybe this isn’t a media error, especially as sometimes you’re finally
reading the tape, but on the 5th try…

Best I can guess is that the 9840C is not driving the required hardware
commands for processing this media, which from your commentary seems to
require 3590 emulation mode.  Do you have mixed use of native, 9840B, 3590
and 3490E for the 9840C and so sometimes you swap to a compatible drive,
but other times you just keep swapping to non-compatible drives and
eventually give up?  I’m presuming you’re using an ATL (E.g. 9310) and if
so, is HSC driving allocations to the correct drives, which for you would
seem to be a 9840C drive in 3590 emulation mode?

On Mon, 5 Dec 2005 05:49:50 -0500, [EMAIL PROTECTED] <[EMAIL PROTECTED]>
wrote:

>Below are the error messages I got each time I swapped the tape to a
>different 9840C drive.
>
>I finally gave up.  I am sending the tape to STK to be repaired as I have
>done 4 other tapes with the same/similar problem.

>Any insight would be very helpful.
>And yes we use Vol-Safe; also which is another problem.
>I will review the below info you sent.
>Thanks
>
>ERROR
>IOS000I 0447,62,IOE,27,0E00,,**,R00760,NFLDT10 180
> 084010D050105350 0127FF00 0311003CD5355B90 4104230230741010
> READ ERROR DETECTED
>IEC147I 613-04,IFG0195B,NFLDT10,STEP0020,TAPE#,0447,R00760,FDRABR.VDEV
>A31.B105192A
>SWAP TO 442
>0442  3590  A  SYSA  NF692LDT  009A  R00760   PRIV/RSERV
>  SYSB, SYSC
>IOS000I 0442,54,IOE,4F,0600,,**,R00760,*MASTER* 312
> 084010D050105350 014FFF00 03110037F4355B90 4104230227281010
> READ ERROR DETECTED
>0442  3590  A  SYSA  NF692LDT  009A  R00760   PRIV/RSERV

>Got the below message each time I swapped the tape to a new drive

>IEF196I IGF503I  ERROR ON 0445, SELECT NEW DEVICE
> IGF503I  ERROR ON 0445, SELECT NEW DEVICE
> SMC0108 NO COMPATIBLE DRIVE FOUND FOR SWAP PROCESSING
> IEF196I IGF509I  SWAP 0445 - I/O ERROR
>*IGF509I  SWAP 0445 - I/O ERROR
> IEF196I 3 IGF509D  REPLY DEVICE, OR 'NO'
>*0143 IGF509D  REPLY DEVICE, OR 'NO'

>Job canceled
>CANCEL   NF692LDT,A=009A
> IEE301I NF692LDT  CANCEL COMMAND ACCEPTED



Re: STK9840A/B tape data restored from 9840C tape drives

2005-12-06 Thread Michael W. Moss
Hi Bob,

Whenever I’ve done DR (I have tested with a lot of customers, and invoked
it in anger on two previous occasions), the tape hardware at SunGard (E.g.
9310 ATL and 9840C Drives) has never been the same as at the Primary
Data Centre location, but it’s been good enough for DR.  So generally we
have to do an HSC LIBGEN or use what SunGard give us, and this sometimes
dictates we need to do some device mapping to meet our requirements.  I
would think this applies to you as well, and so this might mean some “Tape
Allocation” considerations.
with only one device, then simple, we map our one device (E.g. 3590)
accordingly.  I suspect that maybe you need to map the SunGard 9840C
drives to 3590, and maybe varying 3590 device types (E.g. Standard
Length/Media3/HPCT & Extended Length/Media 4/EHPCT).  You might also need
to cater for 3490 as well with the varying types (E.g. Standard
Length/Media1/CST & Extended Length/Media 2/ECCST), and for your scenario
all with 9840C device emulation.

I really wish I could pin down your exact problem, and if I were doing
problem diagnosis then I would follow this trail:

 Allocation Logic 

1) What tape device type do I require for input – Catalog Look-Up
http://publibz.boulder.ibm.com/cgi-bin/bookmgr/BOOKS/dgt2i240/B.2.10
The above link will help you identify which device type is required from
an AMS LISTCAT of the file requiring input.

2) Am I allocating the correct device – MVS Device Query
http://publibz.boulder.ibm.com/cgi-bin/bookmgr/BOOKS/DGT2J101/4.1
The above link will help you identify what device types and associated
media are compatible with your HCD defined devices.

3) Double-Check what devices are required for Input & Output - DFSMS
http://www.redbooks.ibm.com/redbooks/SG242229/ (Table 2-1)
The above link describes all possible tape characteristic settings,
including the often overlooked Media ID (I.E. 7th Character of the Tri-
Optic TAPE VOLSER Label), which sometimes, especially with HSC can drive
allocation.

4) Double-Check how associated software deploys tape drives
There are no links here, but you have said you’re using FDR/ABR.  For
input I can’t think of any initialisation/installation settings that are
required, but for output then for sure you need to define what devices are
going to be used, but generally with FDR/ABR then we create a file (E.g.
DSN=) to a pre-defined output device.

Presuming all of the above are seemingly OK, then allocation can’t be the
issue.

 Media Integrity 

You state that your tapes have not been in the ideal conditions for a few
weeks, so:

A) Is the media contaminating the 9840C tape drives?
Potentially the media might be slightly humid and so is causing some
temporary issues with the 9840C drive heads.  Maybe the HSC cleaning
frequency could be increased to a low value such as cleaning the drives
after every 5 or so uses.  Optionally, perform a manual clean after every
use of a drive that has processed a suspect wet/damaged tape, as the next
use of that drive might propagate the problem.

B) Is the media usable/readable?
http://www.innovationdp.fdr.com/products/fatsfatar/index.cfm
As a previous IBM-MAIN lister suggested, maybe you could consider using the
Innovation FATS/FATAR products, which are an efficient method of
determining the readability of data on existing tapes and allow users to
quickly evaluate and correct tape-related problems, as per the above link.

Ultimately as with any problem I think this is a process of elimination,
and it would seem to me that there can only be allocation or media issues.

Regards, UK Mikey.

On Mon, 5 Dec 2005 12:02:48 -0500, [EMAIL PROTECTED] <[EMAIL PROTECTED]>
wrote:

>The 9840's are gen'ed as 3590's see below
>UNIT TYPE STATUS
>0440 3590 F-NRD
>and we access them thru an esoteric UNIT=TSTK9840
>
>to access regular 3490 tape or virtual 3490 tape we use the esoteric
>UNIT=TAPEP and let SMS drive the allocation.
>Real tape gen'ed as
>UNIT TYPE STATUS
>03A0 3490 O-NRD
>
>Vitural tape
>UNIT TYPE STATUS
>1200 349L O-NRD
>
>HSC VERSION SOS6100
>Bob Cosby
>SEB
>504-426-2460
>[EMAIL PROTECTED]



Re: Need some RVA T82 GEN ADVICE

2005-12-28 Thread Michael W. Moss
Hi Dave,

Seasons greetings to you.  Wow an RVA, now there’s a blast from the past,
but don’t worry I remember this box from 1992 when it was an Iceberg…

The complete set of manuals for an RVA of this time period is:

* IBM RAMAC Virtual Array Storage Planning, Implementation and Usage
Guide, GC26-7170
* IBM RAMAC Virtual Array Storage Physical Planning Guide, GC26-7169
* IBM RAMAC Virtual Array Storage General Information, GC26-7167
* IBM RAMAC Virtual Array Storage Introduction, GC26-7168
* IBM RAMAC Virtual Array Storage Operation and Recovery, GC26-7171

I had a quick look at the IBM web site and couldn’t easily find them.

You might want to look at this sites archives and just look for T82:

http://bama.ua.edu/cgi-bin/wa?A1=ind0011&L=ibm-main

Best place you can look is
http://www.redbooks.ibm.com/redbooks/pdfs/sg244951.pdf, which has a
plethora of references for the T82.

From an I/O viewpoint the manual states:

“Up to four 3990 Model 3 Storage Controllers can be defined, each with up
to 64 3390 and/or 3380 volumes, for a total of 256 devices.”

“An eight-path RVA subsystem can process eight concurrent data transfer
operations, and an additional eight I/O operations. This would require
that 16 host channels are attached. It is less likely that an RVA T82 will
be channel constrained, and customers may be more likely to attach eight
channels only to the subsystem. Additional channels will provide
additional channel processing capacity, but it is most likely that
customers will consider a maximum of 10 channel connections to a single
RVA T82.”

“IOCDS, or the HCDGEN, must have a logical control unit (LCU) defined for
each group of 64 functional devices. Each LCU should have two CNTLUNIT
macros, one for each cluster. See “RVA IOCDS Definition Example” on page
438, and the RVA Planning, Implementation and Usage Guide for further
information about
IOCDS for the RVA.”

So, based on the above, maybe you need to give a little thought as to your
physical versus logical configuration.  If I remember rightly, but this
was a long time ago, RVA 2 was all about “Turbo” and performance gains via
SnapShot and throughput, but from a subsystem viewpoint with faster HDA’s,
bigger caches and more physical connections.



Re: DFHSM connected set

2005-12-28 Thread Michael W. Moss
Hi,

Maybe the FIXCDS function would work:

http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DGT2R440/1.8.2

For example:

FIXCDS Y VOLSER DISPLAY - Displays the type Y dump volume record
FIXCDS X VOLSER DELETE - Deletes a type X backup volume record from the BCDS
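
As a purely illustrative aid, a small script can generate one FIXCDS
command per suspect volume from a list.  The record type, volsers and
actions below are made-up examples, and each record should be checked with
DISPLAY before any DELETE is issued.

```python
# Hypothetical helper that emits one DFSMShsm FIXCDS command per tape
# volume.  The volsers are invented; verify every record with DISPLAY
# before issuing DELETE in anger.

def fixcds_commands(volsers, record_type="X", action="DISPLAY"):
    """Build one FIXCDS command per volume (type X = BCDS backup volume record)."""
    return ["FIXCDS {} {} {}".format(record_type, v, action) for v in volsers]

cmds = fixcds_commands(["T00123", "T00456"], record_type="X", action="DISPLAY")
```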

On Tue, 27 Dec 2005 14:38:16 -0600, Tom Brannon <[EMAIL PROTECTED]> wrote:

>Anyone remember seing a PATCH that was for removing a tape from
>a "connected set"?  I know I have it someplace but just can't find it.
>
>I've got a tape that RECYCLE thinks is a member of a connected set but when
>I do a LIST TTOC on it there is no previous or subsequent tape indicated.
>
>Tom



Re: Abend S414

2005-12-28 Thread Michael W. Moss
On Wed, 28 Dec 2005 19:38:39 +0100, Jon Renton <[EMAIL PROTECTED]> wrote:

>Hello,
>
>I am getting the follwoing abend:
>
>IEC212I
>414-08,IGG0201B,DBM05310,DFDSS,XRSDLOG,687C,SYSS45,SSUXRSP.XRS999.XRSLOG01
>
>The IBM documentation is not very descriptive. Can anyone help me to
>understand/solve the problem?
>
>TIA
>
>Regards
>Jon
Hi Jon,

I think this is just a fairly straightforward challenge regarding the
processing of your sequential data set, which seemingly is
SSUXRSP.XRS999.XRSLOG01.

http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/IEA2M751/SPTEE212I

The message diagnosis states:

Explanation: The error occurred during processing of a CLOSE macro
instruction for a data set on a direct access device or tape.

Return Code Explanation 08: For a QSAM data set either an I/O error
occurred while flushing the buffers during close processing or a close was
issued in the caller's SYNAD routine.

There are quite a few similar hits at IBM searching for IEC212I 414-08
IGG0201B, for example:

http://www-1.ibm.com/support/docview.wss?uid=swg21221112
http://www-1.ibm.com/support/docview.wss?uid=isg1OA04013

I think you need to concentrate on what state the offending data set is in
when you’re trying to process it.  Maybe it has not benefited from total
pre-allocation (E.g. Allocated, Primed, EOF, Buffers Closed, et al) or
maybe it’s in an “open” state, which requires some action (E.g. Deallocation,
Buffers Closing, et al).  Arguably the program performing the
operation could do the “clean up” processing, but then maybe the program
performing the operation isn’t the problem.  It’s hard to tell from the
information what operation is being performed, but if it’s a DFDSS
operation, then maybe look into this a little further.



Re: CPU Contention Issue

2006-01-02 Thread Michael W. Moss
Hi Jacky,

The information you ask for regarding Peter Enrico might be available here
in the form of Technical Presentations and many other links:

http://www.schunk-associates.com/

There is also a whole lot of information at the IBM web site, so just
search for Peter’s name and/or WLM…

With regard to a CPU running at 100%, potentially this is exactly what one
might want to do: run the box at capacity to get “maximum bang for your
buck”.  However, generally we will find that our “Workload” suffers when
this is the case.  I’m not so sure that upgrading the CPU is a technology
issue; it’s certainly a business issue.  So at the really basic level this
comes down to your business meeting their Service Level Agreements (SLA’s),
for example:

* 99.5% of CICS transactions complete in less than 1 second
* Critical Path Batch Processing completes from 00:00 to 06:00
* DR vaulting completes by 06:00 every day

These are really simplistic and pretty crude, but ultimately some business
applications might be more critical than others, and WLM goals should meet
such SLA’s accordingly.  Maybe you have some rogue processes that are
consuming resource, or maybe there might be performance advantages in
upgrading to 64-bit if you haven’t already done so, and the latest
versions of software in general.

Bottom Line: Ultimately your business should “tell you” when it’s time for
a CPU upgrade: when SLA’s aren’t being delivered.  Optionally a CPU
upgrade might be performed to lower the TCO; upgrading to a latest
generation box might actually reduce TCO over a several year period,
generating tangible cost ROI, while delivering those tangible
business benefits, which can only be described in “Plain English” via an
SLA and translated into “IBM Mainframe Speak” by WLM…
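
To make the MVS Busy versus LPAR Busy comparison concrete, here is a rough
sketch of flagging intervals where MVS Busy runs well above LPAR Busy,
which can indicate latent demand.  The 10% threshold is an invented
placeholder, not Peter Enrico’s actual methodology.

```python
# Rough sketch of an "MVS busy vs LPAR busy" sanity check over RMF-style
# interval data.  Threshold and sample figures are illustrative only.

def latent_demand(mvs_busy, lpar_busy):
    """MVS busy well above LPAR busy suggests work queued for CPU
    that the partition is not being dispatched to run."""
    return mvs_busy - lpar_busy

def contention_flag(mvs_busy, lpar_busy, threshold=10.0):
    """Flag an interval when the busy gap exceeds the chosen threshold."""
    return latent_demand(mvs_busy, lpar_busy) > threshold

# Example interval data: (MVS busy %, LPAR busy %)
intervals = [(100.0, 85.0), (92.0, 90.0), (100.0, 100.0)]
flags = [contention_flag(m, l) for m, l in intervals]
```

Only the first interval would be flagged here; the last one (both at 100%)
is the “maximum bang for your buck” case, not necessarily contention.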

Regards, MWM

On Sun, 1 Jan 2006 16:18:01 +0530, Jacky Bright <[EMAIL PROTECTED]>
wrote:

>This is with reference to RMF Report analysis regarding MVS Busy and LPAR
>Busy parameter ...
>
>Attached is the RMF Report.
>
>Almost everyday we are facing CPU Contetion issues.
>
>From the report we can infer tht for most of the intervals there were 100%
>CPU Contention however is it possible to get exact reason for the CPU
>Contenction from RMF Report ??
>
>Wht could be the reasons for CPU contention ?
>
>Are there any performance related parameters which need to consider while
>analysing these RMF reports ??
>
>I heard some MVS performance parameters like ratio of LPAR to MVS Busy (by
>Peter Enrico) can anyone send me any reference material (by Peter Enrico) in
>tht regard ??
>
>For some intervals ther is MVS Busy is 100 % where as LPAR Busy is < 100%.
>Can we say tht thr is requirement of upgradation of CPU on the basis of
>these RMF Reports ??
>
>We have 2 CICS regions out of which one CICS region uses C++ programmin
>while other uses COBOL programmin recently it has been observed tht COBOL
>CICS region is hoggin the CPU and which in turn affects C++ CICS region.
>
>Under what conditions we can say tht there is requirement of CPU Upgradation
>??
>
>what should be the policy of organisation for S/390 CPU Upgradation ?
>
>
>
>Jacky



SNA 3270 to IP TN3270 Conversion – Data Stream Encryption

2006-01-14 Thread Michael W. Moss
Hi Team,

I wondered if anybody had any thoughts on the above.  I don’t want the
names of products to help do this; this question specifically concerns
whether the 3270 data stream, in this instance from CICS, should be
encrypted.  So, for SNA 3270 the data path is somewhat of a “Private”
network, where pseudo encryption exists because the X.25 data frames are
proprietary and notoriously hard, if not impossible, to decipher.  However,
when using an IP data path, by definition there is a “Private to Public”
data path evolution, and of course TN3270 does not encrypt the data path
by default.  OK, so the IP Network will have a Firewall and therefore a
good level of Security, but we’re homing in on the CICS data path…

Thus, hypothetically speaking, for a business critical call centre type
workload, required ~24*7*365, with data that is sensitive, including Name,
Address, Financial and Government details, would you encrypt the TN3270
data path or not; so please answer:

1 – Yes
2 – No

Thank you; I will then publish the results after a week or so.

It would seem to me that SOX, HIPAA, FSA, UK Data Protection Act, et al
might have some influence here!

So, why am I asking this question?  I’m working with a client on something
totally different, but I heard they were performing this conversion.  I
asked them on their thoughts regarding data stream encryption, and by them
I mean the highest levels of management, including the Security & Risk
Manager; their reply was, hmmm, didn’t realise that, but hey, we don’t
care, we’ve got Firewall protection.  So if it were your business would
you consider encrypting business critical and sensitive data, which could
be done with a very simple implementation, for example:

TN3270 protocols can easily use SSL (Secure Sockets Layer) functionality,
which means that all data is encrypted before it is sent to the client.
Encrypted data received from the client is then decrypted before the data
is sent to VTAM, but the flows between Telnet and VTAM are unchanged.

To use SSL, TN3270 protocols must have a private key and access to an
associated server certificate. The TN3270 server can obtain the server
certificate and the keys from three different places:

* A KDB file stored in an MVS data set
* A KDB file stored in an HFS file
* The RACF database

SSL client authentication provides additional authentication and access
control by checking client certificates at the server. This support
prevents a client from obtaining a connection without an installation-
approved certificate.

There are three ascending levels of client authentication that can be
defined for TN3270 protocols:

Level 1 is the simplest level. To pass authentication, the Certificate
Authority (CA) that signed the client certificate must be considered
trusted by the server (that is, a certificate for the CA that issued the
client certificate is listed as trusted in the server's keyring). If the
client is using a self-signed certificate, then the client certificate
needs to be in the server’s keyring as a trusted CA certificate. You
implement this level of authentication by coding CLIENTAUTH SSLCERT in the
TCP/IP profile.

Level 2 authentication includes level 1 authentication. In addition, level
2 checking verifies that the client certificate has been registered with
RACF (or other SAF-compliant security product that supports certificate
registration). To implement level 2, the client certificate has to be
added to the RACF database and associated with a RACF user ID. The TCP/IP
configuration file must have a CLIENTAUTH SAFCERT statement.

Level 3 authentication provides, in addition to level 1 and level 2
support, the capability to map access to TN3270 ports with a SERVAUTH
class profile in RACF.
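
As a sketch of how simple the server side can be, here is an illustrative
PROFILE.TCPIP fragment.  The keyring name and port are assumptions, and the
exact statement syntax should be checked against your z/OS Communications
Server release; for level 2 you would code CLIENTAUTH SAFCERT instead.

```
; Illustrative TN3270 TELNETPARMS fragment (names and values are examples)
TELNETPARMS
  SECUREPORT 992           ; SSL-secured Telnet port
  KEYRING SAF TN3270RING   ; server certificate and keys from a RACF keyring
  CLIENTAUTH SSLCERT       ; level 1: client cert signed by a trusted CA
ENDTELNETPARMS
```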

Many thanks in anticipation, Michael.



Re: Does anyone use or remember DRMS?

2006-01-21 Thread Michael W. Moss
Hi Bil,

DRMS; that’s a blast from the past…

There are a few products out there performing the same type of function,
basically working on the premise of identifying critical files for backup
and vaulting with the utilities deployed by your installation.  Take a
look at:

DR/VFI - http://www.drvfi.com/products.html
ASAP & ALL/Star - http://www.mainstar.com/products/dr/index.asp
Softek DR Manager - http://www.softek.com/en/products/drmanager/

There are others out there, but I would say that these are the main three,
all offering pretty similar capabilities.

Regards, UK Mikey.

On Sat, 21 Jan 2006 21:49:55 -0500, William McKinley <[EMAIL PROTECTED]>
wrote:

>Does any remember DRMS? or use it.
>
>A requesting jobstep puts a filename into a queue, then ends immediately.
>The master part of DRMS runs periodically to backup all of the data, and
>vault the tapes.
>
>I am looking for a replacement method of backing up data without
>elongating the jobstep.
>
>What other methods are available?
>
>Regards,
>Bil McKinley
>SLF Consulting Services, inc.



Re: Recall evrything from an ML2 tape

2006-05-29 Thread Michael W. Moss
Yes is the simple answer.

You will need to create some sort of job to do this.  You can use HSEND
LIST or interrogate the MCDS (MCR) records to find out everything on ML2.
Sort this list by VOLSER and then create some job streams to issue RECALL
commands for all of the files on a tape.  HSM will then know that there
are numerous files waiting for the RECALL and so will keep the tape
mounted.

Optionally: you say you’re converting to ML2 DASD; if so, the easiest
option would be just to RECYCLE the entire ML2 tape volume, and presumably
your new HSM allocation logic will create the recycled files on ML2 DASD.
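
The approach above can be sketched as follows; the data set names and
volsers are made-up examples, and in practice the (dsname, volser) pairs
would come from HSEND LIST output or the MCDS.

```python
# Sketch of generating per-volume HSEND RECALL streams from an ML2
# inventory, so each ML2 tape is mounted once for all of its recalls.

from collections import defaultdict

def recall_streams(ml2_files):
    """Group data sets by ML2 tape volser; one command stream per tape."""
    by_volser = defaultdict(list)
    for dsname, volser in ml2_files:
        by_volser[volser].append("HSEND RECALL '{}'".format(dsname))
    return dict(by_volser)

# Invented inventory for illustration.
inventory = [
    ("PROD.PAYROLL.G0001V00", "ML2001"),
    ("PROD.LEDGER.HIST",      "ML2002"),
    ("PROD.PAYROLL.G0002V00", "ML2001"),
]
streams = recall_streams(inventory)
```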

On Mon, 29 May 2006 17:24:11 +0200, Miklos Szigetvari 
<[EMAIL PROTECTED]> wrote:

>Hi
>
>Can I recall evrything from an ML2 tape with one mount ?
>
>(We are converting Ml2 tapes to ML2 dasd, and we need to recall
>everything from a few houndred Ml2 tapes,but  we have 2 tape units,
>  and a collegue as roboter )
>
>--
>Mit freundlichen Grüßen / Best regards
>
>Miklos Szigetvari
>
>ISIS Information Systems GmbH
>Alter Wienerweg 12
>2344 Maria Enzersdorf
>Austria



Re: Migration to RMM from Zara

2005-08-23 Thread Michael W. Moss
You're correct that EDGTVEXT replaces ARCTVEXT and that is the way forward
for the target DFSMSrmm environment.

To maintain DFSMShsm consistency when running DFSMSrmm and Zara in
parallel you will need to call the respective exits in turn.  Therefore, as
Zara is in control, the Zara ARCTVEXT exit will need to run as per normal to
maintain DFSMShsm and Zara consistency, but a straightforward "stub/link"
will be required to then call the EDGTVEXT exit, passing the same
parameters to DFSMSrmm and maintaining DFSMShsm and DFSMSrmm consistency.
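
Conceptually (modelled in Python purely for illustration; the real exits
are assembler load modules driven by DFSMShsm's parameter list) the router
stub behaves like this:

```python
# Conceptual model of the ARCTVEXT "router" stub: Zara's exit runs first,
# then the same parameters are passed on to EDGTVEXT so DFSMSrmm stays in
# step during the parallel run.  Function names and values are illustrative.

calls = []

def zara_arctvext(volser, function_code):
    calls.append(("ZARA", volser, function_code))

def edgtvext(volser, function_code):
    calls.append(("RMM", volser, function_code))

def arctvext_router(volser, function_code):
    zara_arctvext(volser, function_code)   # keep Zara consistent, as today
    edgtvext(volser, function_code)        # then mirror the call to DFSMSrmm

arctvext_router("T10001", "EXPIRE")
```

Once the cutover to DFSMSrmm completes, the router and Zara exit go away
and DFSMShsm uses EDGTVEXT directly.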

BTW, this has worked for me on previous DFSMSrmm migrations, as most Tape
Management Subsystems have their own iterations of ARCTVEXT to maintain
DFSMShsm consistency.

Regards, MWM.

On Tue, 23 Aug 2005 08:33:47 +0100, Mark Wilson <[EMAIL PROTECTED]> wrote:

>Hi,
>
>I am in the process of migrating a system from Zara to DFSMSrmm on a zOS 1.4
>system.
>
>I have a question re HSM and ARCTVEXT.
>
>We are currently running the Zara version of ARCTVEXT and understand that
>when we go to RMM in Production I will no longer need the exit as the
>interface is now handled by the EDGTVEXT interface.
>
>What I need to understand is what do I do whilst I am running Zara & RMM in
>parallel?
>
>Do I need a:
>
>* ARCTVEXT router exit to call the Zara ARCTVEXT and then RMM
>ARCTEVEXT; if required I would only need the RMM version whilst in parallel?
>* Just the Zara ARCTVEXT and RMM will be updated via the EDGTVEXT
>interface?
>* Or something completely different?
>
>Any help gratefully received.
>
>Kind Regards
>Mark Wilson
>
>Mobile: +44 (0) 7768 617006
>
>This year's annual GSE conference will take place on
>Tuesday 4th& Wednesday 5th October 2005.
>At the Stratford upon Avon Moat House Hotel
>
>Details can be found at:  
>http://www.gse.org.uk/tyc/
>
>
