Re: Xerox LCDS to Postscript/PDF converters

2012-05-08 Thread Gilbert Cardenas
Thank you Martin and Timothy, I'll look into your suggestions.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Xerox LCDS to Postscript/PDF converters

2012-05-07 Thread Gilbert Cardenas
Hello all, we are in the process of moving an application from a content
management server to an in-house written application, and we will need to
send Xerox mainframe reports to this new application.

I am looking for any suggestions for a low-cost, easy-to-use z/OS or Windows
converter that can accept a mainframe LCDS stream and convert it to PostScript
or, preferably, PDF with indexing or bookmarks.

This is not a huge application (hence the low-cost initiative), with only
about 150 separate report entries; however, there will be a few thousand
iterations of these created weekly.

As previously mentioned, we don't really care where the conversion takes
place, z/OS or Win2K, as long as we achieve the end results mentioned above.

If you have any recommendations or ideas, I would appreciate the feedback.

Best regards,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: SMS/ISMF Pool Storage Group Screen(s)

2012-02-07 Thread Gilbert Cardenas
Sorry, I was in too much of a rush to catch the van pool and did not get my 
point across clearly. 

The setting I was trying to point out is the Alloc/Migr Threshold
Track-Managed setting.

It would be so much easier if I could add a screen print, but I haven't
figured out how to do that in this forum just yet.

Since we don't have a z/OS 1.9 LPAR anymore I can't verify it, but as far as
I'm aware, this setting was not in the ISMF pool storage group define/alter
screens when we were on 1.9.

Someone was kind enough to respond offline and point out to me that:
I believe you're referring to the EAV support in z/OS 1.10.  I see the fields
you are referring to in 1.10 and 1.12.  From the 1.10 manual:
| In addition to the break-point value setting, the storage group also
| lets you specify a setting for track-managed free space thresholds
| (high and low). These thresholds are factored into the SMS volume
| selection algorithms and DFSMShsm space management.
Since we skipped from 1.9 to 1.11, I was just caught off guard because I did
not see anything in the migration guide.  Thanks Bobbie and others who replied!

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


SMS/ISMF Pool Storage Group Screen(s)

2012-02-06 Thread Gilbert Cardenas
Can someone please clarify something for me?  Apparently, some parameters
were introduced to SMS/ISMF that were not present in the 1.9 release but
are there now in the 1.11 release.

In particular I am talking about the Pool Storage Group Define or Alter
screen(s).  In z/OS 1.9, I do not remember having the following settings:
Allocation/migration Threshold : High..85 (1-99) Low . . 1 (0-99)
Alloc/Migr Threshold Track-Managed: High..85 (1-99) Low . . 1 (0-99)

Were these parameters introduced in 1.11, and if so, were there some PTFs
required to implement them?

We have been on z/OS 1.11 since around Aug/Sept of last year, and we were
cruising along just fine until recently.  All of a sudden we have a pool
filling up and not migrating datasets like it used to.

I have not been able to locate any documentation warning of this change,
either in any release guides or migration notes.  It just suddenly appears in
the SMS implementation guide, but perhaps I was not looking in the right place.

I would appreciate any info on this topic if you have some.

Thanks.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Spool data set browse (SDSB) question

2011-10-13 Thread Gilbert Cardenas
If you used the REXX/SDSF API, you could possibly use the SYSLOG prefix,
have SDSF allocate the DD, and then do an EXECIO using LIFO.
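
Roughly, the flow would be something like the sketch below.  This is not the
poster's code; the SDSF special variables and the SA action are the documented
ones, but the prefix value and the record handling are only illustrative.

/* REXX - rough sketch of the approach above (not the poster's code). */
rc = isfcalls('ON')                   /* enable the SDSF host command env */
isfprefix = 'SYSLOG*'                 /* placeholder prefix filter        */
Address SDSF "ISFEXEC ST"             /* build the filtered job list      */
if isfrows > 0 then do
  /* SA action: have SDSF allocate the spool data sets to DD names       */
  Address SDSF "ISFACT ST TOKEN('"token.1"') PARM(NP SA)"
  "EXECIO * DISKR" isfddname.1 "(LIFO FINIS"   /* stack, newest on top    */
  do while queued() > 0
    parse pull rec                    /* process records, most recent 1st */
  end
end
rc = isfcalls('OFF')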

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: spool to spool output transfer

2010-12-01 Thread Gilbert Cardenas
I had a similar requirement a few months back because for some reason, they 
have NJE locked up real tight around here.

I created a REXX routine that basically FTPs the spool file to the intended
LPAR by IP address.

I initiate the process by performing an SE on the spool entry and then typing
the edit macro/REXX name at the command line.  For lack of creativity, I called
my script LPR, and I follow it with the destination name of the LPAR, such as
PROD, DEV, QA, etc.  The esoteric name then gets converted by the REXX to the
IP address of the LPAR, and I can FTP to the same LPAR where the command was
initiated if needed.

I place a JCL skeleton (IEBGENER) and the spool data into an MVS file
(RECFM=VBA,LRECL=300,BLKSIZE=27900)
and then FTP the file to the desired LPAR to create the new spool entry.

QUOTE SITE FILETYPE=JES
MODE B
TYPE E
QUOTE SITE JESLRECL=254
PUT '||'FIL2FTP'
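
For reference, a batch FTP step along these lines could drive those
subcommands; the IP address, login lines, and data set name below are
placeholders, not values from the original setup.

//*  Sketch only - host, credentials, and data set name are placeholders
//FTPSTEP  EXEC PGM=FTP,PARM='10.1.1.1 (EXIT'
//SYSPRINT DD SYSOUT=*
//OUTPUT   DD SYSOUT=*
//INPUT    DD *
myuser
mypass
QUOTE SITE FILETYPE=JES
MODE B
TYPE E
QUOTE SITE JESLRECL=254
PUT 'HLQ.FIL2FTP.SPOOL'
QUIT
/*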

Although it is not fully automated and the original characteristics of the
spool entry are not kept, it has worked fine for all intents and purposes.
I'm positive there is much room for improvement, but my only requirement was
to be able to print a report/sysout for programmers from LPARs that do not
have printers set up, so it works just fine for me.  Offloading to a spool
offload dataset and then reloading was too cumbersome, so this was much
easier.



On Tue, 30 Nov 2010 13:09:03 -0500, Jousma, David 
david.jou...@53.com wrote:

All,

Looking for ideas for doing spool to spool transfer of output NOT using
NJE.Issue is transferring output between two different MAS-plex of
the same node-name.  NJE would work if multi-hopped, NODE-A connected to
NODE-B, NODE-B connected to NODE-C, and finally NODE-C connects to the
other NODE-A, but that is too many hops in my opinion.  

Looking for other creative, supportable methods to solve this.   Already
thinking about:

-  Automated spool offload to dataset, FTP to remote site, spool
reload
-  ??

Assumptions:

-  Maintain print characteristics
-  both spools have the same node name; that's why it's not the first
option
-  cannot change node name due to external customer connections
-  existing external connections are using Enterprise Extender, and
the IP's of the separate hosts ARE different, so no conflict externally.





_
Dave Jousma
Assistant Vice President, Mainframe Services
david.jou...@53.com
1830 East Paris, Grand Rapids, MI  49546 MD RSCB1G
p 616.653.8429
f 616.653.8497


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFHSM QUESTION - FINDING LIST OF DSNS

2010-08-19 Thread Gilbert Cardenas
Willie, you should be able to at least look for some of the datasets that were
moved successfully by using:
LIST LEVEL(YOUR.DSN.HLQ) MCDS SYSOUT(X)
This might give you enough peace of mind to know that they were moved.

HTH,
Gil.

On Wed, 18 Aug 2010 10:49:20 -0700, willie bunter 
williebun...@yahoo.com wrote:

Dave,
 
It is set to ACTLOGMSGLVL(EXCEPTIONONLY) which would explain the
reason.  If I set it to FULL would I be able to retrieve the list of dsns
or is it lost?


--- On Wed, 8/18/10, Gibney, Dave gib...@wsu.edu wrote:


From: Gibney, Dave gib...@wsu.edu
Subject: Re: DFHSM QUESTION - FINDING LIST OF DSNS
To: IBM-MAIN@bama.ua.edu
Received: Wednesday, August 18, 2010, 10:12 AM


 -Original Message-
 From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
 Behalf Of willie bunter
 Sent: Wednesday, August 18, 2010 8:25 AM
 To: IBM-MAIN@bama.ua.edu
 Subject: DFHSM QUESTION - FINDING LIST OF DSNS
 
 Good Day All Readers,
 
 I migrated (SMS) managed volume because there was ZERO DSCBS 
available
 for a ML0 volume -  HSEND MIGRATE VOLUME SMC001 DAYS(0).  I did this 
as
 a quick fix.  I plan to expand the VTOC of the volume on the weekend.
 My question is after the volume was migrated :
 ARC0523I SPACE MANAGEMENT ENDED ON VOLUME SBP203, 0635 
DATA SET(S)
 MIGRATED/DELETED, 007694 TRACK(S) FREED,MIGRATED
 I was trying to find the dsns that were either migrated or deleted.  I
 checked the MIGRATION LOG but I was unsuccessful.  Is there some place
 else where I can find them?

What's your SETSYS ACTLOGMSGLVL(FULL | EXCEPTIONONLY | REDUCED) 
set at? If not FULL, then no messages for successful migrations.

Dave Gibney
Information Technology Services
Washington State University


 
 Thanks.
 
 
 
 
--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


VTOC Copy Data Set Entry for a Volume on an ML1

2010-05-11 Thread Gilbert Cardenas
What is the proper procedure for moving a VTOC Copy Data Set entry
for a volume off an ML1 volume?   FREEVOL and MIGRATE do not appear to work.

We are migrating from a Shark to a DS8700 and I need to migrate all 
the ML1 datasets to the new units.

I've seen the question asked in a previous post, but the question was
never really answered, since the solution there was to delete the old VTOC
copies.

I have cleared out all the old VTOC copies but these are recent:

DFHSMP.VTOC.T364101.VPS4A04.D10112
DFHSMP.VTOC.T433501.VPS4A04.D10113

I appreciate any feedback,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


LPR from multiple mainframe regions to single host

2009-12-04 Thread Gilbert Cardenas
Does anyone know if performing an LPR from multiple mainframe hosts 
to a single lpr host queue definition would cause contention issues?

I have 3 VPS mainframe printers that are lpr'ing to the same remote 
host queue and there appears to be some contention going on.

If lpr'ing from multiple locations to a single remote host queue is a 
problem, how do you get around it?  Define multiple queues...one for 
each mainframe region?

Your feedback would be appreciated,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: LPR from multiple mainframe regions to single host

2009-12-04 Thread Gilbert Cardenas
Sure, the problem (I think) is that I have 3 LPARs that are LPRing
to a common server (same IP address, port, and queue).

The catch is that the server lpr destination (HOUSTON) is not really a 
printer but software that is emulating a printer.  

So the mainframe VPS definition for each lpar is for example:
COMMTYPE=(TCPIP,LPD)
TCPPRTR=HOUSTON

What I am trying to determine is whether LPRing from 3 disparate
sources to 1 destination is common practice.  If it is, then I can focus on
the printer-emulating software.

Hope this clears it up a bit.
Gil.


On Fri, 4 Dec 2009 09:15:53 -0600, Pat Mihalec 
pat_miha...@rush.edu wrote:

I have VPS and I can print to any printer from both of my LPARs. Can you
be clearer on how you have the definitions?


Pat Mihalec
Rush University Medical Center
Senior System Programmer
(312) 942-8386
pat_miha...@rush.edu
Please consider the environment before printing this email.



From:
Gilbert Cardenas gilbertcarde...@grocerybiz.com
To:
IBM-MAIN@bama.ua.edu
Date:
12/04/2009 09:12 AM
Subject:
LPR from multiple mainframe regions to single host
Sent by:
IBM Mainframe Discussion List IBM-MAIN@bama.ua.edu



Does anyone know if performing an LPR from multiple mainframe hosts
to a single lpr host queue definition would cause contention issues?

I have 3 VPS mainframe printers that are lpr'ing to the same remote
host queue and there appears to be some contention going on.

If lpr'ing from multiple locations to a single remote host queue is a
problem, how do you get around it?  Define multiple queues...one for
each mainframe region?

Your feedback would be appreciated,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Monitor lpr printer on Windows from Mainframe

2009-11-25 Thread Gilbert Cardenas
Thanks Linda and Jim, I have asked the VPS admin to take a look at
exit08.  We are on VPS V2 R10, and I also saw a note in the manual
that stated:

The printer keyword ERTABMEM and corresponding error table can replace
similar functions performed by VPS User Exit 08.

So he is going to check this out as well.  While I was waiting, I was
also able to come up with a JCL routine that issues an LPQ to the printer
and, with some REXX code, extracts the status of the printer.  I then used
our mainframe Control-M scheduler to execute the JCL and either stop or
start the printer via VPS if the printer was not available.  I used a
Control-M condition to keep tabs on the status of the printer.  So far it
seems to be working, and it will do until the exit or ERTABMEM can be
implemented.

Thanks for all comments,
Gil.


On Wed, 25 Nov 2009 06:37:40 -0600, Jim Marshall 
jim.marsh...@opm.gov wrote:

We use VPS to route print from the mainframe to the windows server
and when the server goes down for maintenance etc, the vps printer
goes into an error status cause it can't communicate with the printer.
I can't rely on the network folks to communicate what they are doing to
me so I really need this to be automated.

VPS indeed has an EXIT08 which can redrive the request for connection at
some interval.  In the VPS LPR/LPD definition you just code, say, TCPMRD=15
(min) and the printer will indeed time out.  Without coding it, the printer
will never time out.  This is because VPS makes the initial request and
waits for a response.  Eventually the printer may become available but,
unlike SNA where the 3X74 controller would notify VPS, nothing is sent to
VPS saying it is now available.  I always code some timeout.  True, if the
whole thing is not available again, then you go into a loop and eventually
(we hope) the printer becomes available.

In VPS 1.8 EXIT08 was implemented in exit code.  Oh yes, the exit code
needs to be told the TCP/IP error code so it knows to retry this type of
error.  But in VPS 2.0, the strategy is available in parameters, although
I have not examined them yet to see how easy it is.  Hey, this is what one
doing printing suffers from in the IP world of printing.  As a side bar,
if indeed the printer is set for a 15 minute timeout and there is a very
long print actually printing, since the response does not come until the
very end, then even though it is printing, the printer will TIMEOUT and
when it restarts, it starts over (thank you, LPD protocol).  I try to stay
away from large printouts and the LPR/LPD protocol unless there is no other
way.  This is why we always TRY to use SOCKET printing, with a timeout
coded too, but doing checkpointing just like JES2 does (JES3 too).

Send me an e-mail offlist and I'll be glad to send you the one I have run
for almost 20 years as a guide.  I threw in all kinds of extra IP error
codes as I tripped over them.

jim

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Monitor lpr printer on Windows from Mainframe

2009-11-24 Thread Gilbert Cardenas
I have the need to monitor the status of an lpr printer running on a 
windows server from the mainframe.  

We use VPS to route print from the mainframe to the Windows server,
and when the server goes down for maintenance, etc., the VPS printer
goes into an error status because it can't communicate with the printer.
I can't rely on the network folks to communicate what they are doing to
me, so I really need this to be automated.

I have developed JCL that uses IKJEFT01 to issue an LPQ command
against the printer and some REXX code to extract the status.  I can
schedule the job to run at 15-minute intervals and return a code of zero
if all is well or one if the printer is inaccessible.
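
Something along these lines, run under IKJEFT01, is what such a check could
look like.  This is purely illustrative: the LPQ operands and the strings
being tested are placeholders, not the actual values used.

/* REXX - illustrative only: trap LPQ output and set a return code   */
/* the scheduler can test.  Operands and messages are placeholders.  */
x = Outtrap('out.')
"LPQ (PRINTER myqueue HOST 10.1.1.2"     /* placeholder queue and host */
lpqrc = rc
x = Outtrap('OFF')
status = 1                               /* assume printer unreachable */
if lpqrc = 0 then
  do i = 1 to out.0
    /* treat either of these (hypothetical) responses as "printer OK" */
    if Pos('READY', Translate(out.i)) > 0 |,
       Pos('NO ENTRIES', Translate(out.i)) > 0 then status = 0
  end
exit status                              /* 0 = reachable, 1 = not     */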

I'm stuck on how to keep track of the status of the printer.  Is there
some global variable I could set somewhere that would maintain the
status of the printer(s), such as up or down?

Is there a better way to do this (without purchasing new software, of
course)?  Can the Health Checker be used for this application?

Any Ideas?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Monitor lpr printer on Windows from Mainframe

2009-11-24 Thread Gilbert Cardenas
We don't have SA or DRS/TCPIP, but if there is a VPS exit that can do
this automatically, then I will definitely have the VPS administrator look
into it to see if we can use that instead.

Thanks for all the suggestions,
Gil.


On Tue, 24 Nov 2009 11:45:28 -0500, David Andrews 
d...@lists.duda.com wrote:

On Tue, 2009-11-24 at 11:09 -0500, Gilbert Cardenas wrote:
 We use VPS to route print from the mainframe to the windows server
 and when the server goes down for maintenance etc, the vps printer
 goes into an error status cause it can't communicate with the 
printer.
 I can't rely on the network folks to communicate what they are 
doing to
 me so I really need this to be automated.

What do you need automated?  VPS provides an exit point (#8) that 
you
can use to selectively retry failed connections.

--
David Andrews
A. Duda and Sons, Inc.
david.andr...@duda.com

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Extracting GDG Base

2009-11-24 Thread Gilbert Cardenas
You could also use IGGCSIRX in SYS1.SAMPLIB with option H or B to
extract all GDSs/GDGs.  I think it is documented in DFSMS Managing
Catalogs.



On Tue, 24 Nov 2009 15:08:23 -0600, McKown, John 
john.mck...@healthmarkets.com wrote:

 -Original Message-
 From: IBM Mainframe Discussion List
 [mailto:ibm-m...@bama.ua.edu] On Behalf Of Jacky Bright
 Sent: Tuesday, November 24, 2009 2:49 PM
 To: IBM-MAIN@bama.ua.edu
 Subject: Extracting GDG Base

 Business Operation Department has requested to extract all GDG base
 dataset names from the system. Is there any utility by which I can have
 these details from the catalog?

 JAcky

I don't know of a way to do it system wide, but you can get all GDG 
bases from a specific catalog via:

//IDCAMS EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
 LISTC GDG CAT(catalog.name)
/*
//

You need one such LISTC for each catalog. If you leave off the
CAT(catalog.name), the LISTC looks only in the master catalog.

--
John McKown
Systems Engineer IV
IT

Administrative Services Group

HealthMarkets(r)

9151 Boulevard 26 * N. Richland Hills * TX 76010
(817) 255-3225 phone * (817)-961-6183 cell
john.mck...@healthmarkets.com * www.HealthMarkets.com



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Expand Dataset Allocation Using DFDSS

2009-10-30 Thread Gilbert Cardenas
I was asked if I could expand the allocation of a Partition Sequential
dataset, and I said sure, no problem, but then I was given the caveat of
only using DFDSS.

It has something to do with DB2, which I know nothing about, so I said I
would look into it.

If I use the COPY dataset option, I can't find a way to pass the new
space parameters.  I found TGTALLOC but nothing to pass the new
cylinder assignments.

I then tried using the DUMP dataset option and was successful by
pointing the OUTDDNAME to a new dataset with the new attributes; however,
the dataset being copied was changed from VB to U.  Is there
another parameter that I missed?
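
For what it's worth, one pattern that is sometimes used for this (not
something confirmed in this thread) is to preallocate a larger target data
set first and then let a DFDSS logical COPY with rename replace into it, so
the target keeps its own, larger allocation.  The data set names below are
placeholders.

//*  Sketch only - OLD.DSNAME / NEW.LARGER.DSNAME are placeholders;
//*  NEW.LARGER.DSNAME is preallocated with the bigger space beforehand
//EXPAND   EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  COPY DATASET(INCLUDE(OLD.DSNAME)) -
    RENAMEUNCONDITIONAL((OLD.DSNAME,NEW.LARGER.DSNAME)) -
    REPLACEUNCONDITIONAL
/*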

Thanks,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


MCDS could not be backed up (HSM Autobackup)

2009-10-16 Thread Gilbert Cardenas
Hello all, I recently switched the CDS backups from tape to disk, and all
had been running smoothly until this morning, when I got the following
error message:

ARC0744E MCDS COULD NOT BE BACKED UP, RC=0020, 132
ARC0744E (CONT.) REAS=0016. MIGRATION, BACKUP, FRBACKUP, DUMP, AND
ARC0744E (CONT.) RECYCLE HELD.

I believe I have found and corrected the problem and have run a
BACKVOL CDS command to back up the CDSs; however, what bugs me is
that when I look up the ARC0744E message via QuickRef or LookAt, it
tells me what RC=0020 means but gives no indication of where to
find the reason code.

I just want to confirm that what I thought was the problem was really 
the problem.

Thanks,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: MCDS could not be backed up (HSM Autobackup)

2009-10-16 Thread Gilbert Cardenas
That was my original thought as well but that wasn't the problem.  

When I first converted the CDS backups to disk, I wondered if I had to make
the pre-allocated datasets large enough to hold the maximum size they
would reach, but it appears that when the new disk backup is created, the
allocation is dynamically sized to the current size of the file.
Someone please correct me if I'm wrong.

The actual problem was that the MCDS backup files had been migrated.
I didn't want to post that, so as not to skew the feedback I got, but
what I really want is to be able to confirm that REAS=0016 tells me
exactly that, and to know where I can find that documented.

Thanks,
Gil.

On Fri, 16 Oct 2009 07:40:09 -0500, Staller, Allan 
allan.stal...@kbm1.com wrote:

The CDS backup files on disk must be pre-allocated and large enough 
to
hold the contents of the CDS either in unloaded format or repro
format. Most likely the backup file did not exist or ran out of space.

There are other messages in the log that will clarify exactly what
happened.

HTH,

snip
Hello all, I recently switched the CDS backups from tape to disk and all

had been running smooth until this morning when I got the following
error message:

ARC0744E MCDS COULD NOT BE BACKED UP, RC=0020, 132
ARC0744E (CONT.) REAS=0016. MIGRATION, BACKUP, FRBACKUP,
DUMP, AND
ARC0744E (CONT.) RECYCLE HELD.

I believe I have found and corrected the problem and have run a
BACKVOL CDS command to backup the cds's however, what bugs me is
that when I look up the ARC0744I message via Quickref or Lookat, it
tells me what RC=0020 means but it gives no indication of where to
find the reason code.
/snip

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: MCDS could not be backed up (HSM Autobackup)

2009-10-16 Thread Gilbert Cardenas
Thanks, I did check the HSM backup logs and the syslogs and could not
find either the ARC0500I or ARC0503I messages anywhere, so I don't
think this was the issue.



On Fri, 16 Oct 2009 08:21:29 -0500, Staller, Allan 
allan.stal...@kbm1.com wrote:

From
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/IEA2M291/SPTM001945
(msg ARC0744E, Application programmer response):

The following list shows the return codes issued for this message and
the appropriate actions to be taken for each:

Retcode  Meaning
irrelevant info snipped
16, 20, and 24   See message ARC0500I or ARC0503I issued before this
message was issued. The appropriate message indicates the dynamic
allocation return and reason codes.

HTH,

snip

When I first converted the CDS to disk, I wondered if I had to make 
the
pre-allocated datasets large enough to hold the maximum size they
would get but it appears that when the new disk backup is created, the
allocation is dynamically created to the current size of the file.
Someone please correct me if I'm wrong.

The actual problem was that the MCDS backup files had been migrated.
I didn't want to post it so that I wouldn't impact the feedback I got
but
what I really want is to be able to confirm that REAS=0016 tells me
exactly that and where I can find that documented.
/snip

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFHSM QUESTION - LISTING ML2 ENTRIES (MCDS) FOR A SPECIFIC HLQ

2009-10-08 Thread Gilbert Cardenas
Hi Willie, I usually use the following format, and it brings back only the
datasets that match the HLQ:

HSEND LIST LEVEL(MYHLQ) MCDS ODS(MYUSERID.DATASETS)

Regards,
Gil.


On Wed, 7 Oct 2009 09:07:26 -0700, willie bunter 
williebun...@yahoo.com wrote:

Good Day To All,
 
Could anybody tell me how  I can obtain  a list of MCDS ML2 dsns for 
a specific user.  I tried the following command but I got all of the MCDS.
 
HSENDCMD LIST LEVEL(CICS003) MCDS SELECT(ML2)
 
Thanks.


  


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: PRODUCING A LIST OF ROLLED-OFF GDG DSNS

2009-09-16 Thread Gilbert Cardenas
On Tue, 15 Sep 2009 11:38:07 -0700, John Dawes 
jhn_da...@yahoo.com.au wrote:

I have a problem trying to compile a list of ROLLED-OFF gdg 
dsns which were created over the past 3 years.  We noticed that the 
dsns were not being deleted because of a NOSCRATCH option used 
when the gdg base was created by the user.  I ran a LISTCAT of the 
catalog and did a FIND command for ROLLED-OFF and to my dismay 
there are over 40,000 of them just for 1 particular catalog.  Is there an 
easier way of getting the list instead of executing a LISTCAT.  We don't 
have FDR products on this LPAR only DFDSS.  I checked the DFDSS doc 
but found nothing yet - maybe I missed it.  Could someone suggest 
something that I could try?  DFDSS would be great because I could do a 
backup and delete of the dsns but I am not sure as to how to select 
only the ROLLED-OFF gdg dsns
 
Thanks in advance..

Hey John, I had almost the exact same requirement about a year ago
and, at Mark Z's suggestion, I tweaked IGGCSIRX to include a DTYPE
of N for deferred datasets and then only reported on those.  Now I just
run the report weekly, JIC.
It's been a while since I did it, but if you need examples I'll be glad to
send you some offline.

Gil.
  


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Backing up CDS's to DASD

2009-08-12 Thread Gilbert Cardenas
Hello all, I am trying to move the CDS backups currently from tape 
using HSM as the datamover and noparallel to disk using DSS and 
parallel.  

The manual states that I have to preallocate the new DASD backup
datasets, but that's all, so do I use the same DCB characteristics as the
current tape datasets (RECFM=VBS LRECL=32760 BLKSIZE=32760)?

I am also confused by the manual.  It makes a case for SMS-managing
the backup datasets so that you can use concurrent copy, but
then later on it says that you should allocate the different versions of
the backup data sets on different volumes for recoverability purposes.

That doesn't really make sense, because if they are SMS-managed I
have no control over where the allocation goes.  And I do not really want to
create 4 different storage groups/classes just to make sure the
datasets do not go to the same volume.

Can someone shed some light on these questions?

Thanks,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Backing up CDS's to DASD

2009-08-12 Thread Gilbert Cardenas
Thanks for the info, Dave.  The section I was reading was in the z/OS
1.9 Admin Guide, Chapter 12, page 474, which says:

If you decide to back up the control data sets and journal data set to
DASD, you must preallocate the data sets.  Also, allocate the different
versions of the backup data sets on different volumes.  This better
ensures that you can recover your backup versions.

What you say makes more sense than the above when using SMS.

Thanks again,
Gil.


On Wed, 12 Aug 2009 11:13:53 -0400, O'Brien, David W. (NIH/CIT) [C] 
obrie...@mail.nih.gov wrote:

Gil,

The manual says to allocate the backups on volumes other than the 
ones containing the CDSs being backed up.

Thank You,
Dave O'Brien
NIH Contractor

From: IBM Mainframe Discussion List [ibm-m...@bama.ua.edu] On 
Behalf Of O'Brien, David W. (NIH/CIT) [C]
Sent: Wednesday, August 12, 2009 10:55 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Backing up CDS's to DASD

Gil,

The requirement to separate by volume refers to the BCDS, MCDS and 
OCDS not their Backups.

The JCL to allocate the backup datasets is in SYS1.SAMPLIB(ARCSTRST);
look for ALLOCBK1.

Thank You,
Dave O'Brien
NIH Contractor

From: IBM Mainframe Discussion List [ibm-m...@bama.ua.edu] On 
Behalf Of Gilbert Cardenas [gilbertcarde...@grocerybiz.com]
Sent: Wednesday, August 12, 2009 10:51 AM
To: IBM-MAIN@bama.ua.edu
Subject: Backing up CDS's to DASD

Hello all, I am trying to move the CDS backups currently from tape
using HSM as the datamover and noparallel to disk using DSS and
parallel.

The manual states that I have to preallocate the new dasd backup
datasets but thats all, so do I use the same dcb characteristics as the
current tape datasets...RECFM=VBS LRECL=32760 BLKSIZE=32760?

I am also again confused by the manual.  It makes a case for SMS
managing the backup datasets so that you can use concurrent copy but
then later on it says that you should allocate the different versions of
the backup data sets on different volumes for recoverability purposes.

That doesn't really make sense because if they are SMS managed I
have no control where the allocation goes.  And I do not really want to
create 4 different storage groups/classes just to make sure the
datasets do not go to the same volume.

Can someone shed some light on these questions?

Thanks,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


HSM duplexing for ML2 Tapes

2009-08-11 Thread Gilbert Cardenas
Hello all, we are on z/OS 1.9 using CA/TLMS, and I want to turn on ML2
duplexing to reduce the amount of time and effort needed to recover an
ML2 volume (not really for offsite backups).

I have read most of the HSM duplex posts and the manuals to try to
work out how this function works, and from what I've read, if I want to
turn on HSM duplexing for existing ML2 tapes (not backups),
it's as easy as adding SETSYS DUPLEX(MIGRATION(Y)) to the
ARCCMD00 parmlib member.

The part that I (and apparently others) were confused about is the 
partialtape setting.

Question 1 (Is this correct?)
If you specify PARTIALTAPE(MIGRATION(REUSE)), an ML2 tape is not selected
for duplexing.  If you want an ML2 tape to be duplexed, you need to either
specify SETSYS PARTIALTAPE(MIGRATION(MARKFULL)) or generate a list
of partial tapes with
LIST TTOC SELECT(ML2 NOTFULL) ODS(OUTPUT.DATASET) and mark
them full so that TAPECOPY will automatically duplex them.
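
If that reading is right, the parmlib change would amount to something like
the following (just a sketch collecting the statements named above, not a
verified configuration):

SETSYS DUPLEX(MIGRATION(Y))
SETSYS PARTIALTAPE(MIGRATION(MARKFULL))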

Question 2 (Is this also correct?)
If the above is correct, then marking an ML2 volume full before it is
actually full will probably triple the number of scratch tapes required:
an extra tape for each duplex copy, and, since all of each tape will not be
used, more tapes will be needed.

Can someone please help clarify this area for me.


Many thanks,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: HSM duplexing for ML2 Tapes

2009-08-11 Thread Gilbert Cardenas
That's great news.  That means that my scratch pool will only double 
for the extra duplex copy.

Thanks,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DFHSM QUESTION - HIGHLEVEL QUALIFIERS

2009-08-11 Thread Gilbert Cardenas
Hi Willie, I believe this is covered in Chapter 2 of the Admin
guide.


The MIGRATEPREFIX parameter of the SETSYS command specifies the 
prefix (high-level qualifier) of the generated name. If you do not specify 
a migrate prefix, DFSMShsm uses the UID that you specified in the 
startup procedure.


SETSYS MIGRATEPREFIX(??)
SETSYS BACKUPPREFIX(??)
etc.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: HSM duplexing for ML2 Tapes

2009-08-11 Thread Gilbert Cardenas
On Tue, 11 Aug 2009 08:04:13 -0500, Staller, Allan 
allan.stal...@kbm1.com wrote:

Question 1 (Is this correct?)
If you specify MIGRATION(REUSE) an ML2 tape is not selected for
duplexing.  If you want an ML2 to be duplexed, you need to either
specify SETSYS PARTIALTAPE(MIGRATION(MARKFULL)) or generate a 
list
of partialtapes
LIST TTOC SELECT(ML2 NOTFULL) ODS(OUTPUT.DATASET) and mark
them full so that TAPECOPY will automatically duplex them.

Answered in another post.

Question 2 (Is this also correct?)
If the above is correct, then by marking an ML2 volume full before it
is
actually full will probably triple the number of scratch tapes
required.
An extra tape for each duplex copy and since all of a tape will not be
used, more tapes will be needed.

Recycle is your friend here!

This is the way it really works:
Data is written concurrently to both the primary and duplex copy of ML2.
When either the primary or duplex becomes full (1st to EOV wins), both
are so marked and another duplex pair is created. The partialtape
parameter enables the recycle to be performed immediately. Of course you
may not want to do this since it will foul up Fast Subsequent Migration.

HTH,


Thanks for the info, every tidbit helps.
We currently defer the recycling of tapes to the weekends, when
processing is slower.

Thanks again,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


FTP log messages emanating from TN3270E

2008-10-10 Thread Gilbert Cardenas
We have recently split out TN3270 from TCPIP in preparation for the z/OS 1.9
upgrade.
We are noticing what appear to be additional FTP messages in the system
log.  At first I thought it was because some FTPs use the debug parameter and
this was causing the additional messages, but after several tests I cannot tell
if this is really the culprit.
Does anyone recognize the following type of messages, and can you point us to
where we can look to see if we have a trace or parameter setting that would
generate these:

BPXF024I (OMVSKERN) Oct 10 05:33:43 ftpd 16777231 : SD0421 926
accept_client: accept()   
BPXF024I (OMVSKERN) Oct 10 05:33:43 ftpd 16777231 : SD0488 927
accept_client: accepted client on socket 8
BPXF024I (OMVSKERN) Oct 10 05:33:43 ftpd 16777231 : SD1780 928
handle_client_socket: entered for socket 8
BPXF024I (OMVSKERN) Oct 10 05:33:43 ftpd 16777231 : SD2002 929
handle_client_socket: new session for 150.150.3.17 port 2448  
BPXF024I (OMVSKERN) Oct 10 05:33:43 ftpd 16777231 : SD1141 930
BPXF024I (OMVSKERN) Oct 10 05:33:43 ftpd 50331665 : SD1161 935
spawn_ftps: my pid is 50331665 and my parent's is 16777231


BPXF024I Explanation:
The text is the contents of the user's write buffer at the
time of the write request.  Messages written to /dev/console
by z/OS UNIX applications appear on the MVS console in this message.

Thanks,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Retract: FTP log messages emanating from TN3270E

2008-10-10 Thread Gilbert Cardenas
Please retract this question.  Apparently I didn't perform my due diligence and
search the archives first.  I found several posts with information on the
message BPXF024I, so I will investigate the findings.

Thanks and sorry for the fast fingers,
Gil.

On Fri, 10 Oct 2008 07:58:47 -0500, Gilbert Cardenas 
[EMAIL PROTECTED] wrote:

We have recently split out TN3270 from TCPIP in preparation for the z/OS 1.9
upgrade.
We are noticing what appear to be additional FTP messages in the system
log.  At first I thought it was because some ftps use the debug parameter 
and
this was causing the additional messages but after several tests I cannot tell
if this is really the culprit.
Does anyone recognize the following type of messages and can point us to
where we can look to see if we have a trace or parameter setting that would
generate these :

BPXF024I (OMVSKERN) Oct 10 05:33:43 ftpd 16777231 : SD0421 926
accept_client: accept()
BPXF024I (OMVSKERN) Oct 10 05:33:43 ftpd 16777231 : SD0488 927
accept_client: accepted client on socket 8
BPXF024I (OMVSKERN) Oct 10 05:33:43 ftpd 16777231 : SD1780 928
handle_client_socket: entered for socket 8
BPXF024I (OMVSKERN) Oct 10 05:33:43 ftpd 16777231 : SD2002 929
handle_client_socket: new session for 150.150.3.17 port 2448
BPXF024I (OMVSKERN) Oct 10 05:33:43 ftpd 16777231 : SD1141 930
BPXF024I (OMVSKERN) Oct 10 05:33:43 ftpd 50331665 : SD1161 935
spawn_ftps: my pid is 50331665 and my parent's is 16777231


BPXF024I Explanation:
The text is the contents of the user's write buffer at the
time of the write request is displayed. Messages written to /dev/console
by z/OS UNIX applications appear on the MVS console in this message.

Thanks,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Curiousity question

2008-10-01 Thread Gilbert Cardenas
Hi Linda, how do you drill down to a volume's VTOC from QW S=vol*?
All I get are the volume stats/summary, but I see no way of drilling
down into the detail.  We are on version 6.9.

Regards,
Gil.

On Tue, 30 Sep 2008 19:54:49 +, Linda Mooney 
[EMAIL PROTECTED] wrote:

I love that feature too!  And I really like being able to drill down into
the VTOC list for the volume from Option S.  I use the fast path for it -
QW S=vol* - most of the time.  I also like that it supports a mix of SMS
and non-SMS vols, and that the user can arrange the columns in the
table.

Linda

-- Original message --
From: Cebell, David [EMAIL PROTECTED]

 And I find the DASD Space Management tool quite handy. Option S

 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-
[EMAIL PROTECTED] On
 Behalf Of Grine, Janet [GCG-PFS]
 Sent: Tuesday, September 30, 2008 2:19 PM
 To: IBM-MAIN@BAMA.UA.EDU
 Subject: Re: Curiousity question

 And QuickRef does have the advantage of supporting messages for 
ISVs.

 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-
[EMAIL PROTECTED] On
 Behalf Of Don Leahy
 Sent: Tuesday, September 30, 2008 3:15 PM
 To: IBM-MAIN@BAMA.UA.EDU
 Subject: Re: Curiousity question

 I think that IBM has indeed started to muscle in on QuickRef.

 We just installed a new version of IBM's Fault Analyzer, and it has a
 pretty good message lookup facility built into it. AFAIK, it only works
 from within Fault Analyzer, so QuickRef doesn't have anything to 
worry
 about. Yet.

 On Tue, Sep 30, 2008 at 3:09 PM, Howard Brazee
 wrote:
  On 30 Sep 2008 07:16:10 -0700, [EMAIL PROTECTED] 
(Grine, Janet
  [GCG-PFS]) wrote:
 
 I have always thought of QuickRef as an automatic sure let's get 
it
 kind of product. Has anyone heard of other similar products? What
 kind of experience can we expect in the area of cost increases for
 this product over the long term?
 
  What I never figured out is how a product like this could have 
more
  than a small window of usefulness, before the main product's 
company
  (IBM) incorporates its advantages.
 
  Sometimes the window is bigger than we expect - it took time 
before
  IBM decided it wanted to keep the sort business.
 
  Sometimes the window is naturally small - by the time Microsoft 
got
  disk compression, that model had run its course.
 
--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Curiousity question

2008-10-01 Thread Gilbert Cardenas
Bingo, your memory serves you well Liz.  Thanks for the clarification.

Gil.



On Wed, 1 Oct 2008 08:41:49 -0400, Lizette Koehler 
[EMAIL PROTECTED] wrote:

If you are using QuickRef 6.6 and above (iirc)
then you need to physically move the cursor to the volser and hit enter.  It
will pop up.

You cannot tab to the volume, you have to place the cursor on the volume.

Lizette



 Hi Linda, how do you drill down to a volume's vtoc from QW s=vol* ?
 All I get are the volumes stats/summary but I see no way of drilling
 down into the detail.  We are on version 6.9.


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Scratching Expired Datasets

2008-09-26 Thread Gilbert Cardenas
On Mon, 22 Sep 2008 16:39:52 -0500, Eric Bielefeld eric-
[EMAIL PROTECTED] wrote:

Is there a way to scratch expired datasets without DFHSM or some 
other storage manager?  We create SMS datasets in DB2 that have an 
expiration date automatically set.  When the datasets expire, we want 
to scratch them.

Eric
--
Eric Bielefeld
Systems Programmer
Washington University
St Louis, Missouri
314-935-3418

--
Hey Eric, I was curious about your request because I've been asked the
same thing by our DB2 guys on an LPAR that does not have HSM installed
on it.
I tinkered around with DCOLLECT and SORT (SyncSort in our case) and
was able to come up with something that might be workable.
Here is an example you could use to generate a list of dataset names.
You could then use SORT to INCLUDE or OMIT what you do or do not
want.
Although I did not go as far as formatting the output to generate
IDCAMS DELETE control cards, it can be done very easily.  I just wanted
you to see the output to determine what the results looked like.

//STEP01   EXEC  PGM=IDCAMS  
//SYSPRINT DD SYSOUT=*   
//OUTDS    DD DSN=DISK.DATASET.DCOLLECT(+1),
// UNIT=DISK,
// DSORG=PS, 
// RECFM=VB,LRECL=932,   
// SPACE=(1,(5,25000),RLSE),AVGREC=K,
// DISP=(NEW,CATLG,DELETE)   
//BCDS DD DSN=YOUR.BCDS,DISP=SHR   
//MCDS DD DSN=YOUR.MCDS,DISP=SHR   
//SYSIN    DD *
   DCOLLECT -
   OFILE(OUTDS) -
   ERRORLIMIT(1000)  -   
   VOLUMES(*)
//
//STEP04 EXEC  PGM=SORT 
//SYSOUT   DD  SYSOUT=* 
//SYSPRINT DD  SYSOUT=* 
//SYSUDUMP DD  SYSOUT=* 
//SORTDIAG DD  DUMMY
//SORTIN   DD DSN=DISK.DATASET.DCOLLECT(+0),DISP=SHR
//SORTOUT  DD DSN=WORK.DATASET.WITH.OMITTED.RECORDS,
// UNIT=DISK,   
// DSORG=PS,
// RECFM=VB,LRECL=932,  
// SPACE=(1,(5,25000),RLSE),AVGREC=K,   
// DISP=(NEW,PASS,DELETE)   
//SYSIN    DD *
 OPTION DYNALLOC=(3390,2),MAINSIZE=300K 
 SORT FIELDS=COPY   
*EXCLUDE RECORDS THAT ARE NOT DATASET RECORDS   
*AND RECORDS THAT HAVE A DATE VALUE OF 1900/000 
 OMIT COND=(9,1,CH,NE,C'D', 
 OR,113,4,CH,EQ,X'000F')
 RECORD TYPE=V  
 END
//**
//STEP05 EXEC  PGM=SORT 
//SYSOUT   DD  SYSOUT=* 
//SYSPRINT DD  SYSOUT=* 
//SYSUDUMP DD  SYSOUT=* 
//SORTDIAG DD  DUMMY
//SORTIN   DD DSN=WORK.DATASET.WITH.OMITTED.RECORDS,
// UNIT=DISK,   
// DISP=(OLD,PASS,DELETE)   
//SORTOUT  DD SYSOUT=J  
//SYSIN    DD *
 OPTION DYNALLOC=(3390,2),MAINSIZE=300K 
 SORT FIELDS=(29,44,CH,A)   
 RECORD TYPE=V  
 OUTFIL FNAMES=SORTOUT,CONVERT, 
*COPY DATASET NAME FROM DCOLLECT RECORDS
*AND REFORMAT EXPIRATION DATE TO YYYY/DDD
 OUTREC=(1:29,44,C' ',113,4,DT3,EDIT=(TTTT/TTT))
/*  
//  
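
(Editorial follow-on, not part of the original job: the same kind of OUTFIL
edit could build the IDCAMS control cards directly.  DELCARDS below is a
made-up DD name.)

*BUILD ' DELETE dsname' CARDS IN AN EXTRA OUTFIL (ILLUSTRATIVE ONLY)
 OUTFIL FNAMES=DELCARDS,CONVERT,
  OUTREC=(1:C' DELETE ',9:29,44,80:X)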


HTH,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: VOLID starting with ##

2008-09-25 Thread Gilbert Cardenas
On Wed, 24 Sep 2008 15:37:56 -0500, Ward, Mike S [EMAIL PROTECTED] 
wrote:

Hello all, I tried searching the archives for VOLID starting with ##,
and volume id ##, but didn't get any hits. We have from time to time had
cataloged datasets that have the volid as ## Where the ## are really
##'s and the ? are some assortment of characters. This has been going on
for years and we can't seem to figure out where they come from.  Do any
of you have any ideas as to where we might look?

Thanks.

==

Hello Mike, I have run across this (or something similar), and the root cause
was that someone was trying to use FDRCOPY to copy a VSAM file and add a
new high-level qualifier.  The problem was that the new HLQ pushed the
dataset name over the maximum number of characters (I think 44), and so the
volser appeared with the ## as you mentioned.
Perhaps you are experiencing something similar?

Regards,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Mark HSM migrat2 tape as full

2008-09-24 Thread Gilbert Cardenas
The operators have an HSM tape that has gone missing.  I want to mark the
tape full so nothing new gets added to it, because HSM is still calling for the
tape, probably to add to it.
I found the following notes in the HSM reference:

Marking a Migration Level 2 Tape Volume Full Example: In this example, a 
migration level 2 tape volume is marked full. The MARKFULL parameter does 
not delete the volume. DELVOL MIG003 MIGRATION(MARKFULL)

The DELVOL makes me nervous.  Has anyone done this before and can you vouch
that the volume is not deleted but only marked as full?

Thanks,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Mark HSM migrat2 tape as full

2008-09-24 Thread Gilbert Cardenas
Thanks for the confirmation Len and Allen.  Worked like a charm.

Best regards,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Mark HSM migrat2 tape as full

2008-09-24 Thread Gilbert Cardenas
On Wed, 24 Sep 2008 12:02:55 -0400, O'Brien, David W. (NIH/CIT) [C] 
[EMAIL PROTECTED] wrote:

Delvol Markfull will only mark the tape full so that HSM stops appending to 
it. 
(I know this has already been answered. I include only for completeness.)

Now to the rest of your problem - Are your tapes duplexed? If yes, you issue
a TAPEREPL command against the missing tape. Now all pointers to the missing
tape will be directed to the duplexed tape and you may resume recalling data
as necessary.

If the tape is not duplexed, you need to do the following (unless you're sure 
you will never need this data)

List the contents of the tape using the following:
HSEND LIST DSN MCDS SELECT(VOLUME(nn)) ODS('tsouid.nn')

Edit the ODS to HDELETE all data sets on the tape and execute.
Then edit the ODS again to HRECOVER all of the datasets and execute.
Then either edit the ODS again to migrate what was just recovered, or let
nature take its course.



The datasets on this tape are just TMM point-in-time backups, so we don't
back them up or duplex them since they are really expendable.  I don't
believe HSM was trying to recall any dataset; it was simply appending to the
tape, which is why I wanted to mark it full, to avoid any more problems trying
to locate it.
I appreciate the thoroughness of your answer though.

Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Discussion on Mod 27 usage

2008-09-03 Thread Gilbert Cardenas
On Wed, 3 Sep 2008 02:25:10 -0700, Ron Hawkins 
[EMAIL PROTECTED] wrote:

Kees, Ted, et al,

As far back as I can remember, IMS and DB2 logs have been hand placed to
reduce the chance of contention between the logging process (mainly write)
and the archive process (read).  The most common practice I have seen is a
round-robin arrangement where the next used log is always on a different
volume, thus keeping the archive process one volume behind.

SMS does not take away the need to look after your loved ones.  While 80%
of datasets can happily exist anywhere in the Mosh Pit, there are still a
few precious hotties that need special treatment.  They need to be hand
placed, which is one of the reasons SMS STORCLAS has the Guaranteed Space
attribute: to allow a VOLSER to be specified and honoured.  Rather than
calling it micro-managing, I would suggest it is good practice.  Allocating
the DB2 logs is hardly a daily occurrence, and neither is allocating the
WADS, catalogs, RACF datasets and other files that require special
treatment.

Critical files that must be kept contention free need to be hand placed.
Files that need special treatment, like solid state performance for the
WADS in FlashAccess, also need to be hand placed.  System Managed Storage
is not about chaos, it is about ensuring order.  Giving the owner of DB2
performance the tools to deliver reliable performance will make everything
run smoother.  It doesn't matter if they are a DBA, Storage Admin, or the
Capacity Planner, as long as they can hand place the things that need TLC.
Kees' experience is a good example of this.

Ron



 Right, until some extent: yes, the DBA's should not be able to 
request
 those storage management details.

 Another point that you (the storage manager) should be aware of 
when
 concerning DB2 Archive Logs is their performance. When we moved 
from
 Hitachi disk to ESS, we had severe performance problems with the 
Log
 Archiving. DB2 was logging faster than archiving could empty the 
logs
 causing several DB2 halts. We had to spread the DB2 Logs over the 
ESS
 in
 such a way that logging did not interfere with archiving, thus 
ensuring
 sufficient archive performance.

 I think this is an ESS problem and not a DS8000 problem, but you 
might
 want to take a look into this, to prevent the DBA's from seeing
 justification in their storage management demands.

 Kees.

--


Thanks for the feedback Ron et al.  Some can call it micro-managing and 
they're entitled to their opinion, but unless someone knows all the details 
surrounding an environment, that is all it is...an opinion.  

I definitely agree that the need for little islands of storage doesn't 
really make sense from a storage management perspective but 
unfortunately, I am not a DBA and they get a lot more respect than my 
meager position holds so my two cents counts for very little.

I did want to pass on some more information just to hopefully help 
someone else who might run into this situation.

Apparently, the DBAs were using a template definition for several jobs 
where they were specifying a generic allocation of 100 primary and 100 
secondary cylinders and a volume count of 5.

This would work for some jobs but not for ones using larger tables.  
According to the DBAs, this was a necessary evil when they upgraded 
from version 7 to 8.  After questioning this, they were able to remove 
the generic space allocation from the template which has solved most of 
the issues.

I've gotten a lot of good feedback so I thank you all.

Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Discussion on Mod 27 usage

2008-09-02 Thread Gilbert Cardenas
I know there has been some recent discussions on the usage of Mod 27s 
and my question is somewhat related so I'm looking for some insight.

Background:
We recently added more storage to our Shark and we initialized all of 
the new volumes as Mod 27s.
I migrated several storage pool volumes to the new mod 27s so some of 
the storage pools went from, say, 10 volumes to 2 volumes.

Observations:
We ran into an issue this weekend where one of our DB2 jobs abended 
with a IEC028I 837-08 error code.  From all indications there was plenty 
of space available but after looking at the control cards, the jcl was 
specifying a volume count of 5 but there were only 2 volumes in the 
pool available.
It was my understanding that specifying a volume count would only 
allow the job to utilize that many volumes if they were available.  Not 
that that many volumes would be a requirement.

Questions:
Would jcl that is specifying a volume count more than what is available 
cause this type of abend?
Are there other gotchas I need to look out for?


Can anyone shed some light on this?

Thanks,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Discussion on Mod 27 usage

2008-09-02 Thread Gilbert Cardenas
On Tue, 2 Sep 2008 09:50:43 -0500, Wayne Driscoll 
[EMAIL PROTECTED] wrote:

Gilbert,
What type of dataset?  PS or VSAM?  If the primary and secondary 
quantity of
the dataset was defined such that (primary + (n*secondary) * 3 or 
higher) is
required for the dataset, (where n = 16 for PS and 121 for DB2 VSAM) 
then
the allocation amounts will have to be modified to ensure that the 
dataset
can be contained on only 2 volumes.

Wayne Driscoll
Product Developer
NOTE:  All opinions are strictly my own.





The dataset was PS...a GDG.  I see your point about containing the 
dataset within two volumes.

Thanks,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Discussion on Mod 27 usage

2008-09-02 Thread Gilbert Cardenas
On Tue, 2 Sep 2008 16:50:04 +0200, Vernooy, C.P. - SPLXM 
[EMAIL PROTECTED] wrote:

A possible cause here could be that the total storage required was 
based
on the space parameter in the JCL *and* the volume count of 5. E.g.
SPACE=(CYL,100),UNIT=(3390,5) and the dataset requires 400 
cylinders. If
you only provide 2 volumes, the required space will not be available.
You could correct this situation by enlarging the space parameter (if
possible).

Kees.


I definitely see a pattern forming here, which worries me that there will 
have to be a lot of jcl revisions to accommodate the mod 27s, i.e. 
adjusting primary/secondary allocations and/or volume counts.  I 
definitely didn't see this coming.

Thanks,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Discussion on Mod 27 usage

2008-09-02 Thread Gilbert Cardenas
On Tue, 2 Sep 2008 11:30:23 -0500, Greg Shirey 
[EMAIL PROTECTED] wrote:

From: IBM Mainframe Discussion List On Behalf Of Ted MacNEIL

I definitely see a pattern forming here which worries me that there
will have to be a lot of jcl revisions to accommodate the mod 27s i.e.
adjusting primary/secondary allocations and/or volume counts.  I
definitely didn't see this coming.

You may be able to handle in within your ACS routines, under SMS.
That could save you some work.

How?  Allocation parameters are DATACLAS attributes which are always
overridden by JCL.  (except UNIT, VOLSER, and dynamic volcount)

Greg Shirey
Ben E. Keith Company

--
That is exactly my point.  If previously a user coded, say, a primary of 
50 cyl and a secondary of 5 cyl with a volume count of 5, they might be 
able to acquire the space that step needed, whereas if the volume count 
is diminished to 2, then the allocation is more likely to fall short of its 
previous capacity, which would require adjusting the allocation requests.

I agree with Tom Merchant who said "Too many small storage puddles makes it 
difficult to manage.  It sounds to me as if you are micro-managing your SMS."

Micro managing was not my intention at all.  If the DBAs request that 
they want their archive logs separate from their bootstraps and so on, 
then I am simply trying to accommodate the users' requests.  

Plus, there is definitely a culture change when working with larger 
volumes such as a mod 27, because what seemed like an acceptable 
storage pool of 10 volumes is quickly diminished when it is reduced to 
2 volumes, so it may be necessary to merge some pools together.

I certainly appreciate everyone's feedback which is what I was looking 
for because this is new territory for me.

Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



FTP from windows to z/os error code 451

2008-07-24 Thread Gilbert Cardenas
Hello all, I am trying to FTP an LCDS type file from a windows server to a z/OS 
1.7 mainframe and then submit jcl to print that dataset using iebgener; 
however, I keep getting the following error:

451
Error error closing the data set. File is catalogued.
Explanation:
The MVS data set did not close successfully. The data set was catalogued.
error is not meaningful.

System action:
FTP continues processing.

User response:
Change the CONDDISP setting with a SITE subcommand if you do not want 
data sets to be catalogued when file transfers fail. See the z/OS 
Communications Server: IP User's Guide and Commands for information about 
the SITE subcommand.

System programmer response:
None.


Here are the commands I am passing :

quote mode b
quote type e
quote site recfm=vbm
put c:\tmp\META2113 'META2113.LCDS.FILE'
quote mode s
quote type a
quote site recfm=fb lrecl=80 blksize=27920
quote site filetype=jes
site jeslrecl=80
put c:\tmp\FTP.JCL.2113
bye
quit
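
As an aside, if I end up not wanting a failed transfer to leave a catalogued 
dataset behind, I'm assuming (untested) that the SITE form of the CONDDISP 
setting mentioned in the user response would go in before the first put, 
something like:

quote site conddisp=delete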


I am verifying that the mainframe file is not cataloged prior to the ftp, so 
the message is somewhat misleading.
Any ideas/suggestions would be appreciated.

Best regards,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: FTP from windows to z/os error code 451

2008-07-24 Thread Gilbert Cardenas
Hi Earl, I don't get an error message when I issue the quote mode b 
from windows, and I don't have a problem with smaller datasets, so I 
think I may be having a space-related issue.

Thanks for the reply,
Gil.

On Thu, 24 Jul 2008 19:49:29 -0500, Earl Buhrmester 
[EMAIL PROTECTED] wrote:

Your error is caused by quote mode b.

Is block mode supported between windows and z/OS ?


Earl



Gilbert Cardenas [EMAIL PROTECTED]
Sent by: IBM Mainframe Discussion List IBM-MAIN@BAMA.UA.EDU
07/24/2008 06:02 PM
Please respond to
IBM Mainframe Discussion List IBM-MAIN@BAMA.UA.EDU


To
IBM-MAIN@BAMA.UA.EDU
cc

Subject
FTP from windows to z/os error code 451






Hello all, I am trying to FTP an LCDS type file from a windows server 
to a
z/OS
1.7 mainframe and then submit jcl to print that dataset using iebgener
however, I keep getting the following error :

451
Error error closing the data set. File is catalogued.
Explanation:
The MVS data set did not close successfully. The data set was 
catalogued.
error is not meaningful.

System action:
FTP continues processing.

User response:
Change the CONDDISP setting with a SITE subcommand if you do not 
want
data sets to be catalogued when file transfers fail. See the z/OS
Communications Server: IP User's Guide and Commands for 
information about
the SITE subcommand.

System programmer response:
None.


Here are the commands I am passing :

quote mode b
quote type e
quote site recfm=vbm
put c:\tmp\META2113 'META2113.LCDS.FILE'
quote mode s
quote type a
quote site recfm=fb lrecl=80 blksize=27920
quote site filetype=jes
site jeslrecl=80
put c:\tmp\FTP.JCL.2113
bye
quit


I am verifying that the mainframe file is not cataloged prior to the ftp
so the
message is somewhat  misleading.
Any ideas/suggestions would be appreciated.

Best regards,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-
MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html






--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-
MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Controlling the execution sequence of dependant jobs in JES2

2008-05-27 Thread Gilbert Cardenas
On Mon, 26 May 2008 11:24:27 -0500, Paul Gilmartin 
[EMAIL PROTECTED] wrote:

Is it possible that this worked for you because you also had
CNVTNUM=1?


As far as I'm aware, we have been using CNVTNUM=2 for the longest time:

$DPCEDEF
$HASP849 PCEDEF 160 
$HASP849 PCEDEF  CNVTNUM=2,PURGENUM=2,PSONUM=2,OUTNUM=2,
$HASP849 STACNUM=2,SPINNUM=3


And some questions:

If CNVTNUM > 1, do the multiple converters operate asynchronously, so
that even hundreds of jcls within the one deck could run out of
order?

If the jobs are submitted with TYPRUN=HOLD and released later, will
they run

o In order of release?

o In order of conversion?

o Other (specify)?


In our case, the jobs were submitted with TYPRUN=HOLD and then re-prioritized, 
and they were always released and run by job number in sequential order.  This 
never posed a problem as long as they were in the same unique initiator.

snip


Best Regards,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Controlling the execution sequence of dependant jobs in JES2

2008-05-26 Thread Gilbert Cardenas
On Fri, 23 May 2008 04:25:10 -0400, David Cole [EMAIL PROTECTED] 
wrote:

Hi,

I have a process that submits up to a couple of hundred jobs for
execution. I require that these jobs execute in the same order in
which they were submitted.

For decades I have accomplished this by assigning all of the jobs to
a specific job class and then insuring that there was never more that
one initiator that had that job class assigned.

I am now running at a new data center. (Guess where...) And I have
just discovered that my jobstream is running out of sequence. For
some reason, my single-threading initiator is selecting jobs from the
input queue out of sequence.

Is there an official way to enforce job execution sequencing?

TIA

Dave Cole  REPLY TO: [EMAIL PROTECTED]
Cole Software  WEB PAGE: http://www.colesoft.com
736 Fox Hollow RoadVOICE:540-456-8536
Afton, VA 22920FAX:  540-456-6658

--
For the longest time before we moved the application to a scheduler we had a 
similar situation where we had one jcl deck that contained hundreds of jcls 
within the one deck.
We also had the need to run these in the order they were submitted and we 
got around this by simply setting the priority of all input jobs to be the same 
priority ($TJ1-,P=10), which meant that all the jobs ran in the order we 
needed.  It could be further qualified to specify a job mask, $TJ1-,JM=MYJOBS*, 
if needed.
It wasn't ideal but it worked for us for several years.
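
For anyone curious, a rough sketch of the kind of commands involved (the job 
number range here is made up, and this is from memory, so check the JES2 
Commands manual before trying it):

$TJ1000-1999,JM=MYJOBS*,P=10     set the whole batch to one priority
$AJ1000-1999                     release the held jobs; they then come off the
                                 queue in job-number order through the single
                                 initiator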

Best regards,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: SMS PUZZLE

2008-04-29 Thread Gilbert Cardenas
On Mon, 28 Apr 2008 11:25:31 -0700, willie bunter 
[EMAIL PROTECTED] wrote:

Hallo Everybody,

  I am trying to track down why some dsns with the HLQ of  TEST.** are 
being directed to a non-SMS volume. I noticed that a majority of the dsns are 
being allocated correctly by SMS.  I ran a check on the ACS routines and I 
found that the SC ACS has a storage class set for TEST.**
  I also noticed that there is another storage class for unit types.  Could this be 
the cause?  I executed some tests where I used the unit=3390, the dsn 
(TEST.AAA.BB) was then allocated with the proper storage class and storage 
group.  However, when I used the unit=3390 with the volser of a non-SMS 
pack, the dsn (TEST.AAA.CC)  was allocated on the pack  Below is a sample 
of the SC ACS.
  FILTLIST VALUNIT INCLUDE 
('3380','SYSDA','SYSALLDA','3390','PUBLIC',VVIO','PROD','TEST''' )

  FILTLIST GDVX_DSN INCLUDE
  (LIBR.**, SCPX.SYST.*.VSAMEXT.**,
  TEST.**,TEST1.**,ALQTOPR.**,
  .
  .))

  WHEN (UNIT NE VALUNIT)
  SET STORCLAS = ''

  WHEN (HLQ = POOL_GDVX OR DSN = GDVX_DSN)
  SET STORCLAS = 'SCTEST'

  Am I on the right track in thinking that the SC VALUNIT is the cause of the 
problem?   If so, how can I fix it?  I would appreciate your suggestion and 
comments.

  Thanks in advance.


Hey Willie, just a couple of thoughts :

Is it possible that perhaps there are other esoteric names being used for 
VALUNIT such as DISK... that aren't being tested?

I noticed that the string has quotes around the commas as does the VVIO 
designation.  I assume this was done on purpose?

HTH,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



SMS allocation in Cylinders using AMS

2008-04-16 Thread Gilbert Cardenas
Hello everyone, I have an unknown that I have not been able to figure out or 
at least I don't know where to look.
I have a user who is trying to define a vsam file using IDCAMS as follows:

DEFINE CLUSTER -
   ( NAME(BLAH.BLAH.BLAH) - 
LINEAR -
REUSE - 
CYL(30) -   
SHAREOPTIONS(3 3) ) -   
   DATA -   
   ( NAME(BLAH.BLAH.BLAH.DATA) -
  ) -   
  CATALOG(BLAH) 

The problem is, when I look at the dataset (which is SMS managed) it usually 
ends up being around 3 times larger than what the user requested.

Of course the first place I looked was in the dataclas that this dataset gets 
assigned and the only thing that the dataclas specifies is a volume count of 1.
There are no overrides anywhere else for primary or secondary space.

Looking at the Access Method Services documentation, it states that you  
should not use the TRACKS or CYLINDERS parameters. If you use them for an 
SMS-managed data set, space is allocated on the volumes selected by SMS in 
units equivalent to the device default
geometry.

Could the reason that this dataset is being allocated larger than requested 
have to do with the fact that the sms volumes are mod 9 volumes?

Btw, I tried allocating the dataset in Megabytes (30) and I got 43 cyl with 1 
extent.  I don't know what the conversion of megabytes to cylinders is but 
this still doesn't seem right.
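
For reference, my back-of-the-envelope math, assuming standard 3390 geometry 
(56,664 bytes/track, 15 tracks/cylinder, so roughly 849,960 bytes/cylinder): 
30 MB is 31,457,280 bytes, which works out to about 37 cylinders -- so 43 
still looks inflated to me.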

Regards,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: SMS allocation in Cylinders using AMS

2008-04-16 Thread Gilbert Cardenas
Additional info,
I found out where the default allocation is coming from (CDS base 
configuration).  Apparently, when they setup the SMS default configuration, 
they setup the default unit to be 3390 and the default bytes per track to be 
56665 :

Default Management Class :   Default Device Geometry :  
Default Unit . . . . . . : 3390Bytes/track . . . . . : 56665
Tracks/cylinder . . . : 15   

The bytes per track is the same for a 3390 mod 3 and 9 (56,664) but is the 
allocation tripled when a mod 9 is used?


On Wed, 16 Apr 2008 12:28:13 -0500, Gilbert Cardenas 
[EMAIL PROTECTED] wrote:

Hello everyone, I have an unknown that I have not been able to figure out or
at least I don't know where to look.
I have a user who is trying to define a vsam file using IDCAMS as follows:

DEFINE CLUSTER -
   ( NAME(BLAH.BLAH.BLAH) -
LINEAR -
REUSE -
CYL(30) -
SHAREOPTIONS(3 3) ) -
   DATA -
   ( NAME(BLAH.BLAH.BLAH.DATA) -
  ) -
  CATALOG(BLAH)

The problem is, when I look at the dataset (which is SMS managed) it usually
ends up being around 3 times larger than what the user requested.

Of course the first place I looked was in the dataclas that this dataset gets
assigned and the only thing that the dataclas specifies is a volume count of 
1.
There are no overrides anywhere else for primary or secondary space.

Looking at the Access Method Services documentation, it states that you
should not use the TRACKS or CYLINDERS parameters. If you use them for an
SMS-managed data set, space is allocated on the volumes selected by SMS 
in
units equivalent to the device default
geometry.

Could the reason that this dataset is being allocated larger than requested
have to do with the fact that the sms volumes are mod 9 volumes?

Btw, I tried allocating the dataset in Megabytes (30) and I got 43 cyl with 1
extent.  I don't know what the conversion of megabytes to cylinders is but
this still doesn't seem right.

Regards,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: SMS allocation in Cylinders using AMS

2008-04-16 Thread Gilbert Cardenas
On Wed, 16 Apr 2008 13:52:52 -0400, O'Brien, David W. (NIH/CIT) [C] 
[EMAIL PROTECTED] wrote:

snip
Where in the manual did it tell you not to use Cyls? I've been using Cyls or 
Tracks for years with no problem.
 


I was looking at the Access Method Services for Catalogs Chapter 14 page 
142.  
To maintain device independence, do not use the TRACKS or CYLINDERS
parameters...

Regards,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: SMS allocation in Cylinders using AMS

2008-04-16 Thread Gilbert Cardenas
On Wed, 16 Apr 2008 13:57:05 -0400, David Betten [EMAIL PROTECTED] 
wrote:

I'd also ask if when you see the dataset using 3 times what the user
requested, are you looking immediately after it's been defined or after
data has been loaded into it?  It might be extending once it's populated.

Have a nice day,
Dave Betten
DFSORT Development, Performance Lead
IBM Corporation
email:  [EMAIL PROTECTED]
DFSORT/MVSontheweb at http://www.ibm.com/storage/dfsort/


It is already bigger after allocation and before the dataset is even populated.

Thanks,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: SMS allocation in Cylinders using AMS

2008-04-16 Thread Gilbert Cardenas
On Wed, 16 Apr 2008 14:02:33 -0400, O'Brien, David W. (NIH/CIT) [C] 
[EMAIL PROTECTED] wrote:

And when was the last time you used non-3390 geometry?
 
The device independence was important back in the early days of VSAM, 
circa 1975 when you had 3330s, 3350, 3340s on the floor, which were then 
followed by 3380s and 3390s. 
 
Assuming that 3390 geometry is here to stay at least from a logical Zos 
perspective, I don't see any downside to using Tracks or Cyls. But then I'm 
sure someone will disagree.



Actually, I don't disagree either.  We have always used tracks/cylinders for 
allocation and have not had the urge or need to convert to other formats.  
But considering the current problem I'm having, I wondered if that was the 
correct decision?

Regards,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: SMS allocation in Cylinders using AMS

2008-04-16 Thread Gilbert Cardenas
On Wed, 16 Apr 2008 14:27:07 -0400, O'Brien, David W. (NIH/CIT) [C] 
[EMAIL PROTECTED] wrote:

Gil,
 
Is it only the one file that you have this problem with?





Actually, I don't disagree either.  We have always used tracks/cylinders for
allocation and have not had the urge or need to convert to other formats. 
But considering the current problem I'm having, I wondered if that was the
correct decision?

Regards,
Gil.

--

I found the problem y'all, the default sms configuration bytes/track was off by 
one byte (56665 instead of 56664):

Default Management Class :   Default Device Geometry :  
Default Unit . . . . . . : 3390Bytes/track . . . . . : 56665
Tracks/cylinder . . . : 15   

I saw this earlier but I didn't think it would make such a drastic difference 
until one of the guys here said that might be the cause.
I changed it and reran the jcl and now its working as designed.

Thanks for the quick responses,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Question On Space Allocation

2008-04-12 Thread Gilbert Cardenas
We also resolved large allocation requests from DB2, as well as TMM tape 
requests which can get fairly high, by filtering those datasets and assigning 
them a special dataclass as David suggested.
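
The shape of the filtering was roughly as follows -- a fragment only, and the 
masks and data class name here are made up rather than our real ones:

FILTLIST BIGALLOC INCLUDE(DB2P.ARCHLOG.**,TMM.BACKUP.**)

WHEN (&DSN = &BIGALLOC)
  SET &DATACLAS = 'DCBIG'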

As a sidebar, does anyone use anything higher than a mod 9 volume to allow 
for these higher allocation or large dataset requests?

Regards,
Gil.

On Fri, 11 Apr 2008 10:20:06 -0400, O'Brien, David W. (NIH/CIT) [C] 
[EMAIL PROTECTED] wrote:

You could also direct his file via the ACS routines to a Data Class with the 
following:
 
 CDS Name  . . . . . : ACTIVE  
 Data Class Name . . : VSCOMP  
   
 Data Set Name Type  . . . . . : EXTENDED  
   If Extended . . . . . . . . : REQUIRED 
   Extended Addressability . . : YES   
   Record Access Bias  . . . . : USER  
 Space Constraint Relief . . . : YES   
   Reduce Space Up To (%)  . . : 50
   Dynamic Volume Count  . . . : 9 
 Compaction  . . . . . . . . . : YES  
 
This should be done with adequate testing. Extended addressing can provide 
up to 50% savings in space allocation and works for both Sequential and 
VSAM.
And if 1000 compacted cyls aren't enough, you now have a volume count of 
9.
 
Caveat: Make sure only normal APIs are used to access the data. 




On Fri, 11 Apr 2008 06:31:10 -0700, willie bunter [EMAIL PROTECTED] 
wrote:

Good Day To All,

  My question is regarding the allocation of space on a given storage group 
which has 218 volumes. The job failed, due to a IGD17279I 218 VOLUMES 
WERE REJECTED BECAUSE THEY DID NOT HAVE SUFFICIENT SPACE 
(041A041D)

  To resolve the problem I added 2 disks (3390-9) as a quick fix.  I asked the 
user to correct the jcl so as to ask for a smaller primary and a larger 
secondary.  He refuses to do so because in his opinion the system will search 
for a volume that has that much free space in four chunks or less. If you ask 
for less space in the PRIMARY allocation, the system may choose a volume 
that has that much free space, but not enough to overflow into the 
SECONDARY allocation, if needed.  I was not aware of the fact that the 
system allocates the space in chunks.  Could anybody confirm this?  Also, in 
the case of the allocation being done in 4 chunks, does the system allocate 
the dsn on the same volume or on other volumes within that particular storage 
group?

  In the case of not heeding my recommendation to reduce the primary 
allocation would there be another alternative that I could suggest besides 
putting the dsn to tape?

  Below is the user's code for the output dsn:

  //OUTERR   DD DSN=DE01.DE0PS$02.XDS1.EXPAND.ERR(+1),
  //            DISP=(,CATLG,DELETE),
  //            SPACE=(CYL,(2000,250),RLSE),
  //            DCB=(MODELDCB,RECFM=FB,LRECL=14126)

  Thanks for your comments and suggestions in advance.



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: z/OS 1.4-1.7 gotchas

2008-04-11 Thread Gilbert Cardenas
I also remember some issues with the consoles.  We use BMC's Control-O to 
perform system commands and retrieve messages but because of the console 
changes, the software would not work properly.
I believe we had to tweak the settings in the application's parms in order to 
get it to work.

Although it is documented, there was the switch from the HSM ARCCMD00 
parmlib settings:
AUTH USERXXX  DATABASEAUTHORITY(CONTROL)
to use RACF authority as well.
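
If memory serves, the RACF side is done with FACILITY class profiles of the 
form STGADMIN.ARC.command -- roughly along these lines (the profile and userid 
are only examples; the exact names are in the DFSMShsm documentation):

RDEFINE FACILITY STGADMIN.ARC.RECYCLE UACC(NONE)
PERMIT STGADMIN.ARC.RECYCLE CLASS(FACILITY) ID(USERXXX) ACCESS(READ)
SETROPTS RACLIST(FACILITY) REFRESH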

There was at least one other thing but since it was several years ago, it 
escapes my memory.  If I remember I'll re-post.

Regards,
Gil.





On Thu, 10 Apr 2008 15:02:31 -0700, Gibney, Dave [EMAIL PROTECTED] 
wrote:

JES exits. And CONSOLE restructure (There is(was) a FMID or two that you
can put on 1.4 and do the CONSOLE ahead of time)

-Original Message-
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
Behalf Of Shmuel Metz (Seymour J.)
Sent: Wednesday, April 09, 2008 7:11 AM
To: IBM-MAIN@BAMA.UA.EDU
Subject: z/OS 1.4-1.7 gotchas

I may be drafted to do a 1.4 to 1.7 migration. I'm concerned both about
any gotchas in the migration itself and about anything that might impede
a
later migration to a supported[1] release. There are two LPAR's in a
sysplex and a third LPAR in a monplex.

Is there anything critical that the documentation and PSP don't tell
you,
or that they get wrong?

[1] 1.7 is still supported, but not for long.

--
 Shmuel (Seymour J.) Metz, SysProg and JOAT
--

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Identifying datasets in POOL with Dcollect

2008-04-04 Thread Gilbert Cardenas
On Fri, 4 Apr 2008 12:03:16 -0400, Lizette Koehler 
[EMAIL PROTECTED] wrote:

DCOLLECT runs once per day.  The time frame for this error is anytime 
between 11pm and 3am.  No specific time, just around Midnight.

I have a file from last night that ran during the event.  That is why I was 
thinking of running it a little more often.  To try and get closer to the 
actual 
event.

I am also running SMF 30, 14 15 17 18 60-69 Records to see if I can get to 
the problem that way as well.

Lizette


---
Hi Lizette, we had a similar situation where some DB2 datasets were 
constantly flowing into our SPILL volumes, but when we would check the pools 
in the morning, they would be fine.
Come to find out that the DB2 pool would get flooded with large datasets 
during the night, but then primary/secondary space management would run and 
offload the datasets to tape, leaving the pools squeaky clean except for the 
SPILL volumes, from which we would sweep the datasets back to the proper pool.
Perhaps something like this is happening in your instance.

Anyway, its a thought.
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: FTP from z/VM to z/OS JES Spool

2008-04-04 Thread Gilbert Cardenas
On Fri, 4 Apr 2008 11:48:04 -0700, Raymond Noal [EMAIL PROTECTED] 
wrote:

Lionel,

Have you tried -

SITE JESLrecl=xxx  where xxx can be 1 - 254. This specifically sets the 
LRECL for the JES internal reader.

HITACHI
 DATA SYSTEMS
Raymond E. Noal
Senior Technical Engineer
Office: (408) 970 - 7978



I was looking to do something similar several weeks back but couldn't make 
this work either.

I could be wrong but I don't believe that changing the JESLrecl to greater 
than 80 would make any difference because the internal reader is still looking 
for jcl records and not spool print records.

The only compromise I found, as someone had previously suggested, was to ftp 
the dataset to the region and then submit an IEBGENER jcl to print it.
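
The IEBGENER piece is nothing fancy -- roughly this shape (the dataset name 
and sysout class are made up):

//PRINTRPT JOB (ACCT),'PRINT REPORT',CLASS=A,MSGCLASS=X
//GENER    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DISP=SHR,DSN=MYID.REPORT.DATA
//SYSUT2   DD SYSOUT=A
//SYSIN    DD DUMMY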

Regards,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: FTP from z/VM to z/OS JES Spool

2008-04-04 Thread Gilbert Cardenas
On Fri, 4 Apr 2008 12:52:31 -0700, Lionel B Dyck [EMAIL PROTECTED] 
wrote:

Raymond wrote:

Lionel,

Have you tried -

SITE JESLrecl=xxx  where xxx can be 1 - 254. This specifically sets the
LRECL for the JES internal reader.

I didn't know that option existed but it does and it WORKS ! ! ! !

Thanks

snip

So Lionel, did you do as Ed Finnell suggested and wrap the jcl around the 
spool data inline?
Would you mind sharing cause I would sure like to give this a try since they 
continue to jinx my nje options.


Thanks,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



z/OS 1.7 with toleration PTFs for 1.9

2008-04-02 Thread Gilbert Cardenas
Hello everyone, I'm looking for some feedback for some problems we're having 
with the operating system either locking up (ipl locked up) or slowing down to 
a crawl for a period of around 30 minutes.  

There are always changes going in but the most recent and suspect is that 
1.9 tolerations ptfs were applied recently to the o/s. 

The only symptom that something is happening is that any TSO users logged 
on get disconnected and an error message is issued:
IKT116I USERID  RECEIVE ERROR,RPLRTNCD=14 RPLFDB2=13
 SENSE= WAITING FOR RECONNECTION
IKT122I IPADDR..PORT ...

I couldn't find the RTNCD=14 with FDB2=13 in z/OS V1R7.0 Comm Svr: SNA 
Messages or the z/OS Communications Server: IP and SNA Codes and SNA 
Programming Guide.

I've googled trying several variations, but so far all I could come up with is 
that there are possibly some problems with auxiliary storage or 
getmain/freemain virtual storage.

If anyone has run across these codes I would appreciate any information you 
could share with me.

This is only happening once a week at the same general time so I have 
scheduled several display commands such as D ASM,ALL and $DA to display 
what is going on at the time.

There are several jobs that are processing across 3 lpars (not sysplexed) and 
it's hard to isolate it to one job (if this is really the culprit).

Are there any other commands I can issue to detect what might be going on 
with the system at intervals leading up to the slowdown?  We have very little 
information to act upon except for the TSO users being forced off.

Thanks in advance,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: z/OS 1.7 with toleration PTFs for 1.9

2008-04-02 Thread Gilbert Cardenas
On Wed, 2 Apr 2008 11:20:23 -0500, Mark Zelden 
[EMAIL PROTECTED] wrote:

On Wed, 2 Apr 2008 12:13:31 -0400, Daniel McLaughlin
[EMAIL PROTECTED] wrote:

Might this be related to the splitout of the TN3270 stack?

I am just starting my 1.9 build and haven't got the coexistence stuff in
on 1.7 yet.


Good point.  The OP did say the system was locking up or slowing down,
but perhaps it is just TSO and the perception is that the entire system is
having a problem.

So check to see that TN3270 (or whatever your STC is) is running in 
SYSSTC.

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:[EMAIL PROTECTED]
z/OS Systems Programming expert at 
http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html

--

Additional info :

The VTAM/TCPIP/TSO are all running at the SYSSTC level.  The TSO stc does 
not come down but only the users are kicked off.

Regards,
Gil.
 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: z/OS 1.7 with toleration PTFs for 1.9

2008-04-02 Thread Gilbert Cardenas
On Wed, 2 Apr 2008 13:56:23 -0500, Mark Zelden 
[EMAIL PROTECTED] wrote:

Additional info :

The VTAM/TCPIP/TSO are all running at the SYSSTC level.  The TSO stc 
does
not come down but only the users are kicked off.


What about TN3270?

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:[EMAIL PROTECTED]
z/OS Systems Programming expert at 
http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html

--


As far as I can tell, we don't run a separate TN3270 stc.  But looking at the 
TCPIP started task for the TCPIP stack functions, I did notice there was an 
EZZ4328I ERROR E010 SETTING ROUTING FOR DEVICE OSAFD10
error, so I checked and noticed that there had been some changes to the 
TCPIP ROUTE tables to accommodate a wireless change.  Looking into this, but 
the once-a-week thing is really perturbing.

Thanks,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Changing the format of the Archives web display?

2008-04-02 Thread Gilbert Cardenas
On Wed, 2 Apr 2008 14:38:59 -0500, Patrick O'Keefe 
[EMAIL PROTECTED] wrote:

I access IBM-Mail only through the archives web site.

A few minutes ago something was posted to IBM-Main that radically
changed the way this month's archives are displayed on my browser.
Scott Rowe posted something with a very long subject.  The subject
is wrapped, but only after taking up about two thirds of the page
width.   This has shoved the From and Date fields way off to the
right, and has squished the Date field so it wraps - taking 4 lines.

I just went back to prior month's archives and the subject on
them wraps after only about 1/3 of the page width.

Is this format presentation hard coded, or is there a way to limit
the width?  I can't find any options for setting this.

Pat O'Keefe

--


Ditto here.  All of April's postings are skewed now.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: z/OS 1.7 with toleration PTFs for 1.9

2008-04-02 Thread Gilbert Cardenas
On Wed, 2 Apr 2008 17:18:42 -0400, Jon Brock [EMAIL PROTECTED] wrote:

But he's still on 1.7, isn't he?  I don't think TN3270 was required to
be split out until 1.8, was it?  Or was it 1.7?

Jon



snip

I don't know what you are calling it, but you must be if you are using
TN3270.  It is required as of z/OS 1.9. If it is falling into some
low
priority service class, then it could be that it is only TN3270 having
a problem when the system is very heavily loaded and not the entire
system as was previously mentioned.
/snip

--

That is correct.  We are still on z/OS 1.7 with the 1.9 toleration PTFs applied 
so that was my understanding as well.
Is there something we are missing?

Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: z/OS 1.7 with toleration PTFs for 1.9

2008-04-02 Thread Gilbert Cardenas
On Wed, 2 Apr 2008 16:27:51 -0500, Matthew Stitt 
[EMAIL PROTECTED] wrote:

I would go looking into the network side of the house.  Since it appears to
be happening at a certain time, for a certain duration, that points to the
network.

And the TN3270 server is not required at 1.8.  I'll find out soon on 1.9.

And yes, I know

MainframeGuilty until proven innocent.  g

/snip
--

That's what the sys prog was thinking as well.  We'll probably bounce it off 
the network admin tomorrow.

Other ideas still welcomed.

Thanks,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: LISTCAT QUESTION

2008-04-01 Thread Gilbert Cardenas
On Mon, 31 Mar 2008 06:30:19 -0700, willie bunter 
[EMAIL PROTECTED] wrote:

Is there a way of finding out the DEVTYP of indirectly-cataloged datasets?

  Could the good folks at SUN/STORAGETEK provide an answer if possible?

Chase, John [EMAIL PROTECTED] wrote:
   -Original Message-
 From: IBM Mainframe Discussion List On Behalf Of willie bunter

 Good Morning To All,

 I am stuck trying to decipher the DEVTYPE X'' from
 the following LISTCAT (posted below). The dsn is cataloged,
 and the VOLSERS are controlled by a product ExHPDM. I have
 checked the manual Z/OS V1R7.0.DFSMS Access Method Services
 for Catalogs, but it is not indicated in the LISTCAT Code
 column. Can anybody suggest where else I can look it up?

 NONVSAM --- SYS2.VPROM02.C1018300

 IN-CAT --- SYS2.ICFCAT.BACKUP

 HISTORY

 DATASET-OWNER-(NULL) CREATION2008.082

 RELEASE2 EXPIRATION--.000

 VOLUMES

 VOLSER**SOV* DEVTYPE--X''
 FSEQN--4
 ASSOCIATIONS(NULL)

 ATTRIBUTES


 Thanks.

We see that on all indirectly-cataloged datasets.

-jc-



Hi Willie, as John and Shmuel have pointed out, data sets that contain an 
indirect volser (C'**') have a DEVTYPE of X''.  I believe these 
data sets have their DEVTYPE resolved to that of the IPL volume.
I found some information related to indirectly cataloged datasets in the Tivoli 
Advanced Catalog Management for z/OS and it lists the devtype definitions:
HTTP://PUBLIB.BOULDER.IBM.COM/TIVIDD/TD/ITACMZOS/SC23-7973-00/EN_US/PDF/SC23-7973-00.PDF

And I also found some information on some apars that deal with indirect 
datasets for z/OS 1.7

ftp://ftp.software.ibm.com/software/websphere/awdtools/filemanager/fmv7apa
r.pdf

I don't know if any of this applies to your situation but hopefully you can get 
some related info out of this.

Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Displaying multiple volumes allocated from DSLIST

2008-03-26 Thread Gilbert Cardenas
When a dataset has multiple volumes allocated for example :

Command - Enter / to select action  Message   Volume 
---
 TAPEX.SMFCICS.DATA ?? 
 TAPEX.SMFCICS.DATA.G1506V00   TMM829+

And I select the dataset to view the attributes, there is a message that says:
To display multiple volumes press Enter or enter Cancel to Exit.
If I press enter all I get is the following:

All allocated volumes:  
More: + 
 Number of volumes allocated: 3 

 TMM829  *   *  

Is there anyway to tell what the other volumes are?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Displaying multiple volumes allocated from DSLIST

2008-03-26 Thread Gilbert Cardenas
On Wed, 26 Mar 2008 11:37:04 -0400, Cochran, Bill 
[EMAIL PROTECTED] wrote:

It's just showing that you have 2 additional candidate volumes
allocated. They have not yet been assigned.


Thanks,
Bill Cochran
AIT Mainframe Storage
502-560-3025


That's kind of what I was thinking, but I thought it was kind of weird because 
the file has been closed, so why would it hold on to the additional volume 
allocations, especially if they weren't used?
Also, in the dataclass for this dataset, I only specify a volume count of 2, 
yet this shows 3 potential volumes?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Displaying multiple volumes allocated from DSLIST

2008-03-26 Thread Gilbert Cardenas
On Wed, 26 Mar 2008 16:14:21 -0500, Mark Zelden 
[EMAIL PROTECTED] wrote:

On Wed, 26 Mar 2008 21:38:09 +, Ted MacNEIL 
[EMAIL PROTECTED] wrote:

It's just showing that you have 2 additional candidate volumes allocated.
They have not yet been assigned.

It's a control block thing.
It doesn't cost anything; it can protect you from future growth problems.

The last two shops I worked at had 20 volumes allocated in most 
dataclasses.
I would recommend the 59 vol max.
It doesn't cost; it can save.
Especially, since only the secondary allocation size is 'remembered' on all
future volumes.


It does cost.  It takes space in the TIOT.  IIRC, 4 bytes for each candidate.
Depending on how many DDs are in a step and the TIOT size defined in
ALLOCxx this could cause a problem.  It also can have an effect on catalog
space requirements.In z/OS 1.3 IBM came up with  DVC (Dynamic
Volume Count) which only stores the volumes specified / used in the
catalog.  I'm not clear if it still takes up the same amount of space in
the TIOT - I would have to research that.

But now I am a little confused... the z/OS 1.6 manual that I opened up just
said that DVC is  a feature supported only for multi-striped VSAM data sets
and had the changed bar (|) next to it.

I am leaving for the day so someone perhaps can shed some light on
this and I'll follow up tomorrow.

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:[EMAIL PROTECTED]
z/OS Systems Programming expert at 
http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html

--


Thank you all for all the good information.  BTW, we are on z/OS 1.7 going to 
1.9 soon.   I had forgotten that I was also using the following attributes for 
this dataclas:

Space Constraint Relief . . . Y 
  Reduce Space Up To (%)  . . 50
  Dynamic Volume Count  . . . 5 

I am not really having any issues with any of the allocations, I was just 
curious why some datasets listed as multi-volume datasets did not show the 
additional volumes but now it kind of makes sense.

Thanks again,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Need MASS recall advice

2008-03-25 Thread Gilbert Cardenas
On Fri, 21 Mar 2008 10:25:01 -0500, Rugen, Len [EMAIL PROTECTED] 
wrote:

I'm doing some planning for a end-of-life for the mainframe conversion
project, as in the 9th year of our 5 year plan to get off this system.  

 

We have View Direct / Mobius report viewer.  It has 100,000's of HSM
migrated files in our 3494 library.  If they just issue recalls for
these in some order they choose, the physical tape and file order will
be random and slow.  On a good day, 500 recalls would probably be the
max we could allow before we would need the drives for overnight
production.  I think when I did the math, it was 3-4 years worth of
recalls.  (That's OK, I need this job :-) )

 

Is there a faster way?  If I had a list of files they wanted, could I
send the recalls in tape - file sequence somehow to batch them?  

 

Thanks

 

Len Rugen

 


--


Hi Len, your scenario practically describes what we've been going through as 
well.

We have been trying to get off of Mobius Viewdirect for about 5 years now 
but there are some stubborn users who have done everything they can do to 
prevent this from happening.

I have gotten the report archives to around 60,000 records and I basically did 
the same thing that someone already mentioned which was the 
TSO HLIST LEVEL(whatever.level.qualifier) OUTDATASET(output.dataset.name)
recall routine.  
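
Roughly, the flow was (the names here are placeholders, not the real ones):

HLIST LEVEL(MOBIUS.ARCHIVE) OUTDATASET(MYID.MIGLIST)
   ...edit the listing into one recall per dataset...
HRECALL 'MOBIUS.ARCHIVE.RPT00001'
HRECALL 'MOBIUS.ARCHIVE.RPT00002'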

I changed the management class to not migrate the archives anymore since 
there were very few of them so now they just stay on L0.

Just out of curiosity, what did you replace VD with?

Regards,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DCOLLECT and HSM backup indication

2008-03-24 Thread Gilbert Cardenas
On Tue, 18 Mar 2008 15:34:21 -0400, Jack Kelly 
[EMAIL PROTECTED] wrote:

I tried to send this issue to IBM but they think that it's a question
rather than a problem, so I'll try here.
Trying to use DCOLLECT to ensure that all ML2 data has a backup. In
general the UMLBKDT field (DCOLLECT ML2 backup date)indicates if a backup
has been done (as long as it's not empty, been opened, is SMS, etc). I do
have a couple of dsn that have zero UMLBKDT but a valid backup exists
(from HLIST and the B record in DCOLLECT).
Has anyone else seen this or know what I have missed or does IBM just need
the money (we don't have QA support)?
Thanks...Jack



Jack Kelly
202-502-2390 (Office)

--

Is it possible that someone might have done a manual HBACKDS and then a 
HMIG on a non-sms dataset?  There are some very resourceful 
programmers/operators that do some odd things every once in a while in our 
shop.
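
By manual I mean something issued by hand from TSO along these lines (the 
dataset name is just an example):

HBACKDS 'PROD.SOME.FILE'
HMIGRATE 'PROD.SOME.FILE' ML2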

The DCOLLECT documentation for UMLBKDT says that this field only applies to 
SMS-managed datasets.

The UBBDATE does not make that assertion.

Just a thought,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Transfer reports from lpar to lpar

2008-03-11 Thread Gilbert Cardenas
On Tue, 11 Mar 2008 10:02:28 -0400, Farley, Peter x23353 
[EMAIL PROTECTED] wrote:

Asked and answered.  The OP's local sysprogs don't like NJE for some
reason, have not implemented it, and (according to the OP) will not do
so.  Politics was suggested as the underlying reason.


The reason I think they don't want to open the NJE capability is that they 
don't want people transferring reports that were created on the 
development/qa regions to the production region where the reports could get 
out to our customers by mistake.

Unfortunately, all the high speed printers are on the production lpar, so 
occasionally a test report does need to be transferred to the production 
region for printing.  Don't laugh, but we are still bus and tag attached.

They would rather me do it on an as-needed basis, however, the requests are 
getting too frequent and taking up too much of my time.

I would think that there are security measures that would allow only certain 
individuals or groups the ability to use the nje feature and not make it 
generally available to everyone.

I am hoping that with all the information I received (thanks to Brian 
Westerman for sending me the share document) I can convince them to take 
another look at setting up NJE.  It definitely looks like a time saver.

Regards,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Transfer reports from lpar to lpar

2008-03-11 Thread Gilbert Cardenas
On Tue, 11 Mar 2008 16:40:26 -0700, Ulrich Krueger [EMAIL PROTECTED] 
wrote:

Gil,
Now that you explained why you would like to use NJE, have you thought 
about
alternatives?

For example:
Do you have TCP/IP-attached laser printers in the office, HP laser printers
or high-speed copiers with IP-interface?
If so, do you have VPS or any other mainframe software that allows printing
of mainframe reports on IP printers?
Assuming, of course, that the reports are not millions of lines or
AFP-format, requiring 3900-class printers and/or special size paper, how
about this: Use XMITIP[1] to convert the report to a PDF file (with optional
green-bar page background) and email it to the user. The user can then
decide if on-line viewing on the PC is sufficient and/or print all or
selected pages to a network / PC - printer. Might save a tree or two, and a
lot of time waiting for report delivery ...

Until you do find a workable solution, how about charging the users a fee
for each special report handling request? $5 per request, perhaps? (Just
kidding)

Regards,
Ulrich Krueger

[1] Shameless plug: XMITIP and TXT2PDF by Lionel Dyck is software to send
Email from the mainframe, with or without attached files in a variety of
file formats. And the price is unbeatable: Free.
See http://www.lbdsoftware.com/tcpip.html



Actually, alternatives were what I was looking for, but the alternatives were 
really not as seamless and intervention-less as I hoped.

I looked at XMITIP and SDSFEXT and although viable, I was hoping I could 
just use something like the IBM IASXWR00 external writer program to offload 
the reports and point the IEFRDER ddname to the desired IP destination.  It 
sounded viable but unreachable.

I then thought all I would have to do is point the IEFRDER to a temp dataset 
and follow it with an FTP to send it to the desired region, but I could not 
find a way to send the temp dataset directly to the JES2 spool.  

Each solution started to get messier and in reality, NJE just sounds like the 
way to go.  I just have to find a way to be politically astute and grease the 
right palms to get the thing in.

Thanks,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SPAM: Re: Transfer reports from lpar to lpar

2008-03-08 Thread Gilbert Cardenas
On Sat, 8 Mar 2008 09:21:53 -0800, Edward Jaffe 
The OP indicates that he already has TCP/IP connectivity between all
LPARs and FTP servers running on each. Therefore, the JES FTP interface
should be available without any additional configuration. (Though I
would highly recommend specifying JESINTERFACELEVEL 2 for the FTP
server configurations.)

--
Edward E Jaffe
Phoenix Software International, Inc
5200 W Century Blvd, Suite 800
Los Angeles, CA 90045
310-338-0400 x318
[EMAIL PROTECTED]
http://www.phoenixsoftware.com/


Thank you all for all the great ideas.  Sounds like I am restricted to using 
FTP to transfer the report files.

I looked up info on the JESINTERFACELEVEL 2 option and it appears this will 
allow me to pull reports/sysouts that do not match my userid or owner among 
other things.  Does this parameter have to be coded in the TCPIP parameter 
library or can it be specified as a parameter in a batch jcl?

Also, can you ftp a report directly from one jes spool to another jes spool, 
or do I have to put the report in an intermediary dataset and then ftp that 
file to the desired jes spool output?
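
For what it's worth, here is the pull side of what I'm picturing -- a rough, 
untested sketch (the job name, jobid, and dataset are made up, and I'm going 
from the IP User's Guide description of the JES interface):

quote site filetype=jes jesowner=* jesjobname=RPT*
dir                              (list the matching jobs on the spool)
get JOB12345 'MYID.RPT.OUTPUT'   (pull that job's output into a dataset)
quit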

Regards,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SPAM: Re: Transfer reports from lpar to lpar

2008-03-08 Thread Gilbert Cardenas
On Sat, 8 Mar 2008 12:45:28 EST, Ed Finnell [EMAIL PROTECTED] wrote:

Guess if all you got is PFCSKs  DEST='IP:ipaddr' might suffice w/o NJE

quote JCL  Reference
   DEST=destination
The destination subparameter for JES2 is one of the following:
LOCAL|ANYLOCAL
'IP:ipaddr'
name
|  Nn
|  NnRm
NnnR
NnnnRmmm
NRmm
|  NnRm
|  (node,remote)
nodename.userid
'nodename.IP:ipaddr'

end quote


I would like to use this option; however, if I use the 'IP:ipaddr' form, the 
jcl reference states that a functional subsystem that can process 
IP-distributed data sets sends the data to the specified host system.
Could you please translate what that means?
I see the report in the output queue sitting with a dest of IP, but how does 
the report get transmitted across?

Regards,
Gil. 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Transfer reports from lpar to lpar

2008-03-07 Thread Gilbert Cardenas
Hello all, I am looking for ideas on ways to transfer reports/sysouts from one 
JES2 spool on one lpar to another lpar.

The current method I am using is to use a JES offload dataset to offload the 
report(s) and then reload them on the other lpar.  This works okay but I'm 
looking for an automated way to do this so I don't have to get involved in the 
process.

We also have a VPS printer defined that takes any report in a certain dest and 
then LPRs them to another lpar, but this doesn't work very well because 
oftentimes the format of the original report is not correct, and this method 
also breaks up the print files into separate print streams, so DJDE print 
records get separated from the original report and printing is incorrect.

I don't know much about external writers or even SAPI, but as long as it is not 
too complicated and is freeware, I'm willing to look into something like that or 
other methods.

Any feedback appreciated,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Transfer reports from lpar to lpar

2008-03-07 Thread Gilbert Cardenas
Sorry, forgot to mention that I have been trying to get the system progs to 
set up NJE, but for some reason they are reluctant to do so, and shared spool 
is definitely out of the question.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Transfer reports from lpar to lpar

2008-03-07 Thread Gilbert Cardenas
On Fri, 7 Mar 2008 20:33:38 +, Ted MacNEIL [EMAIL PROTECTED] 
wrote:

Sorry, forgot to mention that I have been trying to get the system progs to 
setup NJE but for some reason, they are reluctant to do so

Why?
and shared spool is definitely out of the question.

And, why?

If you have a true business need, sysprogs should not be able to stand in 
the way!

-
Too busy driving to stop for gas!

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html



Every time I bring it up, they hem and haw around.  Perhaps it is because they do 
not know how to set it up?
If it doesn't require too much system admin intervention, I don't mind taking a 
stab at it.
Any docs or websites that can help get me started?

Regards,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Transfer reports from lpar to lpar

2008-03-07 Thread Gilbert Cardenas
On Fri, 7 Mar 2008 15:00:07 -0600, McKown, John 
[EMAIL PROTECTED] wrote:

 -Original Message-
It definitely would take administrator intervention. There are JES2
changes and maybe even VTAM changes.

--
John McKown
Senior Systems Programmer
HealthMarkets
Keeping the Promise of Affordable Coverage
Administrative Services Group
Information Technology


That was what I was afraid of.  Any other options available?

BTW, I have not been part of this list long, so is it prohibited to post 
information regarding a job opening?

Regards,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Transfer reports from lpar to lpar

2008-03-07 Thread Gilbert Cardenas
On Fri, 7 Mar 2008 13:37:30 -0800, Jon Nolting 
[EMAIL PROTECTED] wrote:

If you have all LPARs on the network and have FTP servers on each, would it 
be possible to use the FTP JES interface to GET/PUT the JES content over 
that interface?

Jon Nolting
EPG Compete - CATM
Enterprise Technology Architect
(425) 707-9334 (O)
(925) 381-2375 (M)
(425) 222-7969 (H)



We do have all lpars on the network with FTP servers on each, so FTP'ing 
between them is no big deal.
I was thinking I could set up some kind of external writer routine that would FTP 
the JES report/sysout, but I don't know how this would work or where to begin?

Regards,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SMS QUESTION - DATACLAS FILTERLIST

2008-02-21 Thread Gilbert Cardenas
Willie, I throw this out just as another option, but what if you defined a 
FILTLIST in your dataclas ACS member that matches an esoteric unit name such as 
TMMTAPE:

FILTLIST TAPE_UNITS INCLUDE('TMMTAPE')

Then you could refer to that esoteric name in any jcl or started task etc.

//SYSUT2   DD  DSN=TAPE.BLAH(+1),
// DISP=(NEW,CATLG,DELETE),  
// UNIT=TMMTAPE, 
// DCB=(TAPE.MODEL,BLKSIZE=27600,LRECL=400,RECFM=FB),
// LABEL=(1,SL), 
// VOLUME=(,,,4) 
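
The dataclas ACS routine itself could then key off that filtlist along these 
lines (just a fragment of the routine, and TMMDC is a made-up data class name):

   FILTLIST TAPE_UNITS INCLUDE('TMMTAPE')

   SELECT
     WHEN (&UNIT = &TAPE_UNITS)        /* esoteric coded in the JCL    */
       SET &DATACLAS = 'TMMDC'         /* hypothetical TMM data class  */
     OTHERWISE
       SET &DATACLAS = ''
   END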


There are some inherent problems in using this design in that you may lose 
control of what actually gets put in the TMM pool.  Unfortunately, anyone who 
knows about the esoteric can use it in their allocations, but perhaps that is not 
a consideration for you, or perhaps you can protect yourself with RACF or 
something.
Just a thought.

Regards,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: SMS managed volume

2008-02-21 Thread Gilbert Cardenas
On Wed, 20 Feb 2008 10:05:06 -0500, Mark Pace [EMAIL PROTECTED] 
wrote:

Is there an easy way to tell if a DASD volume is SMS managed or not?

--
Mark Pace
Mainline Information Systems

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Mark, if you are looking to run something in batch, I use DCOLLECT to 
generate a list of all volumes:

//DCOLLECT EXEC  PGM=IDCAMS  
//SYSPRINT DD SYSOUT=*   
//OUTDS    DD DSN=TEMP.VOLLIST,  
// UNIT=DISK,
// DSORG=PS, 
// RECFM=VB,LRECL=932,   
// SPACE=(1,(5,25000),RLSE),AVGREC=K,
// DISP=(NEW,CATLG,DELETE)   
//BCDS DD DSN=HSM.BCDS,DISP=SHR  
//MCDS DD DSN=HSM.MCDS,DISP=SHR  
//SYSIN    DD * 
   DCOLLECT -
   OFILE(OUTDS) -
   ERRORLIMIT(1)  -  
   NODATAINFO -  
   VOLUMES(*)

I then use a SAS routine (you can use whatever you prefer) to convert the 
DCVFLAG1 bitstring (Offset 30 length=1) to hex and test for the following 
designations:
CC = STORAGE
D4 = PUBLIC
E4 = PRIVATE
E7 = SMS MANAGED
So far this has worked for me, and hopefully my thought process is correct, 
because I have been using this to create a SAS report of the status of our Shark 
allocations.
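
If SAS isn't handy, a plain sort step can pull the same thing straight out of the 
DCOLLECT output.  This is only a sketch: the column used for DCVFLAG1 assumes the 
offset 30 above is counted from the start of the record including the 4-byte RDW 
(so position 31 to the sort), and may need adjusting if your mapping counts 
differently.

//SMSVOLS EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DISP=SHR,DSN=TEMP.VOLLIST
//SORTOUT  DD SYSOUT=*
//SYSIN    DD *
* Keep only the 'V ' (volume) records whose DCVFLAG1 byte is X'E7',
* the value listed above for SMS MANAGED.
  SORT FIELDS=COPY
  INCLUDE COND=(5,2,CH,EQ,C'V ',AND,31,1,BI,EQ,X'E7')
/*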

Regards,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


HSM ARC0923I message

2008-02-06 Thread Gilbert Cardenas
Good morning all, for some time now, I have been running the following 
command:
LIST TTOC SELECT(BOTH FAILEDRECYCLE) DSI
and I have recently been getting several tapes that are listed as failed recycle.
When I manually recycle them, I get most of the datasets off of the tape except 
for a few, mostly DB2, datasets.

I was able to locate several error messages in my HSM logs; however, I can't 
make heads or tails of what they are trying to tell me, and the ARC and ADR 
messages aren't very helpful at all.

  
PAGE 0001     5695-DF175  DFSMSDSS V1R07.0 DATA SET SERVICES     2008.037 02:17
ADR035I (SCH)-PRIME(06), INSTALLATION EXIT ALTERED BYPASS FAC CLASS CHK DEFAULT TO YES
 DUMP DATASET(INCLUDE(DISSDB2.DSNU.BMCCM74F.BMCBUCC.P000.G0939V00)) -
  OUTDDNAME(SYS01526) CANCELERROR OPTIMIZE(2)
ADR101I (R/I)-RI01 (01), TASKID 001 HAS BEEN ASSIGNED TO COMMAND 'DUMP '
ADR109I (R/I)-RI01 (01), 2008.037 02:17:42 INITIAL SCAN OF USER CONTROL STATEMENTS COMPLETED.
ADR050I (001)-PRIME(01), DFSMSDSS INVOKED VIA APPLICATION INTERFACE
ADR035I (001)-PRIME(50), INSTALLATION EXIT ALTERED TAPE BLOCK SIZE DEFAULT TO 32 K-BYTES
ADR016I (001)-PRIME(01), RACF LOGGING OPTION IN EFFECT FOR THIS TASK
ADR006I (001)-STEND(01), 2008.037 02:17:42 EXECUTION BEGINS
ADR049E (001)-STEND(01), 2008.037 02:17:47 DFSMSDSS FUNCTION TASK ABEND RECOVERY ROUTINE WAS ENTERED. SYSTEM ABEND CODE=0001
ADR415W (001)-DTDSC(04), NO DATA SETS WERE COPIED, DUMPED, OR RESTORED FROM ANY VOLUME
ADR324E (001)-DTDSC(01), THE VOLUME/DATA SET SPECIFIED BY DDNAME SYS01526 HAS BECOME UNUSABLE
ADR006I (001)-STEND(02), 2008.037 02:17:47 EXECUTION ENDS
ADR013I (001)-CLTSK(01), 2008.037 02:17:47 TASK COMPLETED WITH RETURN CODE 0016
ADR012I (SCH)-DSSU (01), 2008.037 02:17:47 DFSMSDSS PROCESSING COMPLETE. HIGHEST RETURN CODE IS 0016 FROM: TASK 001
ARC0421I MIGRATION VOLUME TL0173 IS NOW MARKED FULL
ARC0923I ERROR CLOSING TAPE DATA SET DFHSM.HMIGTAPE.DATASET, RC=0

I see several ARC0923I and ARC0734I with rc=969 reason=0 throughout the 
log.

When I look up the ARC0923I message, it says that there are only six possible 
return codes and rc=0 is not one of them.

I don't know whether the problem is hardware (the tape drive or the tape itself), 
whether the dataset was in use, or perhaps a RACF security issue.

Has anyone dealt with something similar to this?  Any feedback would be 
appreciated.

Best regards,
Gil.


  
  

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: HSM ARC0923I message (additional info)

2008-02-06 Thread Gilbert Cardenas
I was reviewing some of the changes in the ARCCMD00 parmlib member, and I do 
remember making a change several weeks back because there were too many tapes 
showing up in the EXCESSIVEVOLUMES list.  After reviewing some IBM documentation 
in the HSM storage admin reference (chapter 36) that said:  

DFSMShsm recommends a value of 4000MB for all IBM 3490 and 3590 tape 
cartridges.

I set the SETSYS TAPESPANSIZE(4000) as recommended.

Could this possibly be the culprit?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: HSM ARC0923I message

2008-02-06 Thread Gilbert Cardenas
Rick, this does sound very much like what we are experiencing.  The APAR is only 
listed for 1.8 and 1.9 and we are on 1.7, so I will call IBM to see if this might 
apply to our situation as well.

Thanks for the info,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


More ICETOOL questions

2008-01-30 Thread Gilbert Cardenas
Since there's a buzz going on with ICETOOL, I am using ICETOOL to create a 
report from the SMF30 records, and I am using the TOTAL= parameter to provide 
section totals.
I don't have a problem totaling individual fields such as SMF30CPT or 
SMF30CPS; however, I am deriving the duration of a job step by subtracting 
the SMF30TME field from the SMF30SIT field, and I'm having problems getting 
the total amount of time for all steps in a job.
I'm looking for something like this but the syntax is all wrong:

 SECTIONS=(SMF30JNM,SMF30JBN,
*BEGIN OF SECTION TOTAL LINE:
 TRAILER3=(2:110C'-',1/,31:C'TOTALS FOR',
 42:SMF30JNM,
 51:SMF30JBN,
 78:TOT=(7,4,FI,SUB,SMF30SIT,FI,EDIT=(TT:TT:TT)),   ***This line fails***
 87:TOT=(SMF30CPT,FI,EDIT=(IIIT.TT)),
 TOT=(SMF30CPS,FI,EDIT=(IIIT.TT)),   
 102:TOT=(SMF30TEP,FI,EDIT=('I''III''IIT')),1/)),
*END OF SECTION TOTAL LINE:  

Note:
SMF30JNM, JBN, CPT, CPS, SIT, and TEP are all symbolics passed to the sort with 
the appropriate locations of the fields.
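
From the doc, it looks like TOT= can't subtract one field from another, so the 
arithmetic probably has to happen before the trailer is built.  Something like 
the following is what I have in mind (untested sketch; column 201 and the 8-byte 
length are arbitrary picks for an unused spot in the record, and the edit mask is 
just a placeholder):

* Derive the step duration once per record with INREC arithmetic,
* then total the derived field in the existing TRAILER3 in place of
* the failing line, e.g.  78:TOT=(201,8,FI,EDIT=(IIIIIIT)),
  INREC OVERLAY=(201:SMF30SIT,SUB,SMF30TME,TO=FI,LENGTH=8)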

Thanks for any assistance,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: Identifying GDG datasets in a deferred status

2008-01-17 Thread Gilbert Cardenas
Thanks for pointing me in the right direction Mark.  I could only work on it 
part 
time and I'm no REXX expert so it took me a while to modify the REXX routine 
to accept the dstype I was looking for and only report on those entries .
The only problem I have now is that when I invoke it from TSO the routine 
runs fine, however, when I invoke it from batch using IKJEFT01, the output 
dataset always returns empty.
I am queuing the output records that match the deferred status and then 
writing them out to a dataset at the end of the REXX then freeing the 
dataset.  Any suggestions?
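
In case it helps, the batch-friendly pattern I have seen for writing the queued 
lines looks something like this (a generic sketch, not my exact code; the 
dataset name is a made-up placeholder, and under IKJEFT01 the DD could just as 
easily come from the batch JCL instead of the ALLOC):

/* REXX - generic sketch: flush the queued report lines to a dataset */
"ALLOC FI(OUTDD) DA('HLQ.GDG.DEFERRED.LIST') OLD REUSE"
"EXECIO" queued() "DISKW OUTDD (FINIS"
"FREE FI(OUTDD)"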

Regards,
Gil.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Identifying GDG datasets in a deferred status

2008-01-11 Thread Gilbert Cardenas
I have created a JCL job to list all the GDG datasets that are in a deferred 
status.
Unfortunately, the JCL is lengthy and runs quite a while because I had to 
perform the following steps to filter out only the datasets I wanted:

STEP1.  Perform an IDCAMS LISTCAT UCAT ALL to a dataset to identify all the 
user catalogs.

STEP2.  SORT INCLUDE only the catalog names and reformat the output to build 
the IDCAMS control cards: LISTC NVSAM HIST CAT [the catalog name].

STEP3.  Perform the IDCAMS LISTC from the dataset created in STEP2.

STEP4.  Perform another SORT against the output of STEP3, include only the 
records that contain GDG BASE, and reformat the output to build the IDCAMS 
control cards: LISTC NVSAM HIST GDG ENT [gdg base name].

STEP5.  Perform the actual IDCAMS LISTC from the dataset created in STEP4.

STEP6.  SORT INCLUDE only the records that contain NONVSAM and STATUS, 
which gives me all the GDG entries and their status.  I then perform a search 
on deferred to find whether there are any GDGs in a deferred roll-in status.

This seems like a lot of rigmarole just to be able to find out which GDGs are in 
a deferred status.

Does anyone have a better/faster/reliable way to do this?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html