Disk I/O problem

2018-10-08 Thread Tommy Tsui
hi all,
During online hours, one of our DWDM links was suspended for about 50 ms, and
the switch port went offline and then back online within one second. The OS
issued IOS080I alerts (I/O exceeded the timeout value; our MIH setting is
IOTDASD=00:07), most of our CICS transactions timed out, and some DB2 log
copies hung. How can a one-second interruption cause all those transaction
timeouts? Is there any parameter we can tune? Has any shop hit the same
problem? Many thanks

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Cross-memory POST ERRET and return codes

2018-10-08 Thread Jim Mulder
  POSTERR will get control when an abend occurs under the XMPOST 
SRB in the target address space.  Since you specified MEMREL=NO, 
POSTERR will get control under an SRB running in ASID 1.

  The return codes 4 and 8 are only for LINKAGE=SYSTEM, so 
they are not relevant to your LINKAGE=BRANCH request.  They
are return codes in R15 from POST.  They are not passed to the ERRET.

  For LINKAGE=SYSTEM, return code 8 indicates that the POST is 
being done asynchronously (under an SRB in the target address space),
and that ERRET will not be used if an abend occurs under the SRB.
The book doesn't say when you would get return code 8 instead of 4,
but I see in the code that 4 is used when the POST is issued in PSW key
0-7, and 8 is used when the POST is issued in PSW key 8-15.
So it seems that effectively, ERRET is ignored for LINKAGE=SYSTEM
cross-memory POSTs issued in PSW key 8-15. 
I don't know why that was done, but it has been that way since the 
introduction of LINKAGE=SYSTEM in MVS/ESA SP3.1.0. 
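To make the two LINKAGE=SYSTEM cases concrete, a minimal, untested sketch of reacting to the R15 return code after the POST might look like the following; the operand list, labels, and register usage are illustrative assumptions, not a verified parameter list:

```hlasm
* Hypothetical sketch only: react to the R15 return code from a
* LINKAGE=SYSTEM cross-memory POST (R2 = target ASCB, R3 = ECB).
         POST  (R3),LINKAGE=SYSTEM,ASCB=(R2),ERRET=POSTERR
         LTR   R15,R15          RC 0: post completed
         BZ    POSTOK
         CH    R15,=H'4'        RC 4: async SRB scheduled (POST was
         BE    POSTOK                 issued in key 0-7); ERRET covers
*                                     an abend under the SRB
         CH    R15,=H'8'        RC 8: async SRB scheduled (key 8-15);
         BE    POSTNOE                ERRET will NOT be used
```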
 
Jim Mulder z/OS Diagnosis, Design, Development, Test  IBM Corp. 
Poughkeepsie NY

"Charles Mills"  wrote on 10/08/2018 06:40:40 PM:

> From: "Charles Mills" 
> To: ibm-main@listserv.ua.edu
> Date: 10/08/2018 10:01 PM
> Subject: Cross-memory POST ERRET and return codes
> 
> Pursuant to a recent thread here I am converting a cross-memory POST to use
> IEAMSXMP instead. However ... I still need to support older systems without
> IEAMSXMP support, so I will be dual-pathing the existing POST. I got to
> looking at code that I have not examined in several years, and I am trying
> to determine exactly what is or should be going on. It runs without apparent
> errors, so this is kind of a theoretical question, not "please help me with
> this error." Here's the POST
> 
> POST  (R3),LINKAGE=BRANCH,ASCB=(R2),ERRET=POSTERR,MEMREL=NO
> 
> The questions are these
> - Given that code, will POSTERR indeed get control on an error?
> - The POST documentation documents two error codes, 4 and 8. Will they get
> passed to POSTERR? Where?
> 
> Yes, I have RTFM but the FM is showing the effects of years of somewhat
> piecemeal revisions.
> 
> Thanks,
> 
> Charles





Re: Any reason to still use SWA=BELOW?

2018-10-08 Thread Mark Jacobs - Listserv
If something/some program executing in that job class (I assume it's a batch 
initiator) is still mapping SWA control blocks directly and not using SWAREQ, 
then yes, it might be needed.

Mark Jacobs

Tom Conley wrote on 10/8/18 8:37 PM:
Just wondering if there is any reason to still use SWA=BELOW.  I'm
seeing this in a JES2 parm member and I'm surprised, since I changed to
SWA=ABOVE ages ago.

Regards,
Tom Conley







Re: Question on DISP=MOD and GDGs

2018-10-08 Thread CM Poncelet
Yes, a DISP=(MOD,CATLG) defaults to DISP=(NEW,CATLG) if the DSN does not
exist.

The following *might* be invalid, as it would be referring to a GDG entry DSN 
that already exists (but it might also just issue a 'NOT CATLG 2' or similar 
JES2 log message):
"//SMFOUT DD DSN=SMFHLQ.DAILY.SMFDSN(0),  "
"// DISP=(MOD,CATLG,DELETE)   "  <--- not 'CATLG' again
It would 'correctly' have to be:
"//SMFOUT DD DSN=SMFHLQ.DAILY.SMFDSN(0),  "
"// DISP=(MOD,KEEP,KEEP)  "
 
Why not try it using a test GDG hlq DSN and/or on a test LPAR to see what 
happens, regardless of JCL checks?  
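In that spirit, a throwaway test job might look something like this (the HLQ, UNIT, and SPACE values are placeholders; run it only against a scratch GDG, then read the job log for the CATLG/NOT CATLGD messages):

```jcl
//* Hypothetical scratch test only -- DSN/UNIT/SPACE are placeholders.
//TRYMOD  EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD *
TEST RECORD
/*
//SYSUT2   DD DSN=TESTHLQ.GDGTEST(0),
//            DISP=(MOD,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(TRK,(1,1),RLSE)
```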
 
My ha'penny. CP


On 08/10/2018 18:38, Lizette Koehler wrote:
> The question was not about the JCL Check process returning a 4 or an 8 
>
>
> It was more about whether, when using DISP=MOD, any of the samples provided
> would be against JCL rules.
>
> My understanding is that if DISP=MOD is used, then if the dataset does not
> exist it is created, and if it does exist it is appended to.
>
> Just trying to understand why some might think these coding samples are invalid.
>
> Thanks
>
> Lizette
>
>
>> -Original Message-
>> From: IBM Mainframe Discussion List  On Behalf Of
>> Lizette Koehler
>> Sent: Friday, October 05, 2018 3:50 PM
>> To: IBM-MAIN@LISTSERV.UA.EDU
>> Subject: Question on DISP=MOD and GDGs
>>
>> List -
>>
>> I am having a discussion on how a GDG is handled based on the DISP.  I was
>> always working from the position that the use of MOD changes JCL behavior
>> slightly with a dataset.  My understanding is:  If using MOD then if the
>> dataset does not exist, it is treated as NEW and if it exists then treated as
>> OLD.  That seems reasonable, however, I have some users coding this for GDGs
>> and I am not sure why they should work.
>>
>>
>> Now the following are samples and I am sure there are other coding that I
>> have not included.
>>
>>
>>
>> So based on the following, which should be considered incorrect coding
>>
>> 1) First time creating the new Daily dataset.  There is also a concern on the
>> second coding
>>
>>
>> //SMFOUT DD DSN=SMFHLQ.DAILY.SMFDSN(+1),
>> // DISP=(MOD,CATLG,DELETE),
>> // STORCLAS=NONSMS,EXPDT=99000,
>> // RECFM=VBS,BLKSIZE=32000,LRECL=32760,BUFNO=10,
>> // UNIT=TAPE
>> *** WARN 04: DISP FOR NEW GDG DATASET IS NOT (NEW,CATLG)
>>
>> Appending SMF data daily dataset
>>
>> //SMFOUT DD DSN=SMFHLQ.DAILY.SMFDSN(0),
>> // DISP=(MOD,CATLG,DELETE
>> *** WARN 04: DISP FOR NEW GDG DATASET IS NOT (NEW,CATLG)
>>
>>
>>
>> 2)  Using BR14 with MOD DELETE for a GEN that has NOT been created
>>
>> //S1 EXEC PGM=IEFBR14
>> //GDGBASE DD
>> DISP=(MOD,DELETE,DELETE),DSN=TSOHLQ.GDGTEST(0),SPACE=(TRK,(1,1)),UNIT=SYSDA
>> !!!ERROR 04: GDG(0) NOT PERMITTED WITH DISP=NEW
>>
>>
>> GDG does not have any GENs yet.  So get the following
>>
>> //COPYIT1 EXEC PGM=IEBGENER
>> //SYSPRINT DD SYSOUT=*
>> //*
>> //*
>> //SYSIN DD DUMMY
>> //SYSUT2 DD DISP=(MOD,CATLG,DELETE),UNIT=SYSDA,
>> // SPACE=(CYL,(1,1),RLSE),
>> // DSN=TSOHLQ.GDGTEST(+1)
>> *** WARN 04: DISP FOR NEW GDG DATASET IS NOT (NEW,CATLG)
>> //SYSUT1 DD *
>>  TEST RECORD
>>
>>
>> The GDG has one generation in the base, so I can see this should work:
>>
>>//
>>//*
>>//
>>//S1 EXEC PGM=IEFBR14
>>//GDGBASE DD DISP=(MOD,DELETE,DELETE),DSN=TSOHLQ.GDGTEST(0)
>>
>>
>> I am not saying these are great ways to code DISP=MOD, just that I have seen
>> this coding work but have been told that they should not work or they would
>> cause weird/crazy results
>>
>>
>> Any and all opinions welcome.
>>
>>
>>
>> Thanks
>>
>>
>> Lizette Koehler
>> statistics: A precise and logical method for stating a half-truth
>> inaccurately
>>



Any reason to still use SWA=BELOW?

2018-10-08 Thread Tom Conley
Just wondering if there is any reason to still use SWA=BELOW.  I'm 
seeing this in a JES2 parm member and I'm surprised, since I changed to 
SWA=ABOVE ages ago.


Regards,
Tom Conley



CAVMEN Revival Meeting October 18 - SECOND NOTICE

2018-10-08 Thread Jack H. Slavick
The fourth quarter revival meeting of the Chicago Area VM (and Linux)
Enthusiasts will be held on Thursday, October 18. Please be encouraged to
attend.

Meeting Location: 
This quarter's meeting will once again be held at Alight (formerly AON
Hewitt) West Campus, located at 4 Overlook Point in Lincolnshire, IL (same
location as previously).
We will be entering on the south side of the complex (opposite of where we
have entered previously) and there should be an 'Alight' sign to guide you.
Security at that entrance should be able to guide you to our meeting room.
Parking should otherwise remain the same.
Please be attentive to where you park and where you attempt to enter the
building. Security has been instructed to send you to the correct entrance.
If you have not attended a meeting at this location before and do not know
the location of this entrance, please go to www.cavmen.org. Check the links
out!!! There is a lot of information on this site for your benefit!!!

Attendance: 
We would like to request a count of expected attendees by the Monday before
the meeting, so that we may plan appropriately for arranging the
facilities. If you are planning to attend, PLEASE send an E-Mail by that
date to jslavi...@comcast.net with a subject line of "Meeting Attendance".

This is meant to be a facilities planning aid and should not be interpreted
as a registration requirement. If you suddenly become available at the last
minute, please feel free to attend even if you have not responded. 

Thank you in advance for your cooperation in this matter.  It is only
necessary to reply once concerning attendance.   


Meeting Agenda:

It will include Bruce Hayden of IBM, Rich Smrcina of Velocity Software, Jay
Zelnick of CA and myself.
Please see the website for times.



Re: EDIT Macro

2018-10-08 Thread Paul Gilmartin
On Mon, 8 Oct 2018 22:16:16 +, Steely.Mark  wrote:

>I have a JCL Check product that uses an EDIT Macro to check the JCL. This EDIT 
>Macro has been copied under different member names through the years. The 
>customers use whatever process was provided when they were hired.
> 
What language?  Assembler, PL/I, CLIST, Rexx, other (specify)?

>The question is: can I call one edit macro from within another edit macro?
>
>Ex.   JJ1 JJ2 JJ3 are all exactly the same.
>How can I leave JJ1 alone and have JJ2 & JJ3 call  JJ1.
>
>The manuals were not very informative.
>
>I tried EXEC 'aaa.aaa.aaa(JJ1)', but you can't exec an edit macro that way.
> 
If Rexx, you can use CALL or a function reference, but JJ1 must use
PARSE ARG X rather than ADDRESS XEDIT MACRO (X).

>I really don't want to use PDS ALIAS technique.
> 
You will need either to change JJ1 to use PARSE or use aliases.  I have written
eclectic code that tries MACRO, then falls back to PARSE on failure.

Hmmm.   If JJ2 and JJ3 address no EDIT commands before CALL JJ1, the
MACRO command may work as desired.
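For the Rexx case, the wrapper arrangement described above might be sketched as follows; this is an untested sketch, and the host command environment (XEDIT here; ISREDIT or another in your shop) is an assumption to adapt:

```rexx
/* JJ2 (and JJ3): hypothetical thin wrapper, untested sketch.     */
/* Substitute your editor's environment (XEDIT, ISREDIT, ...).    */
address xedit "MACRO (args)"    /* the wrapper claims the arguments */
call JJ1 args                   /* JJ1 does the real work           */
exit rc
```

with JJ1 changed to begin with `parse arg args` instead of issuing the MACRO command itself.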

-- gil



Re: EDIT Macro

2018-10-08 Thread Steely.Mark
I really don't want to do that, but it may be the only option.

Thanks

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Lizette Koehler
Sent: Monday, October 08, 2018 5:41 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: EDIT Macro

A simpler solution is to create an ALIAS for the main entry:

JJ
Then create an ALIAS JJ1, JJ2, JJ3 all pointing to JJ


Lizette


> -Original Message-
> From: IBM Mainframe Discussion List  On Behalf Of
> Steely.Mark
> Sent: Monday, October 08, 2018 3:16 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: EDIT Macro
>
> I have a JCL Check product that uses an EDIT Macro to check the JCL. This
> EDIT Macro has been copied under different member names through the years.
> The customers use whatever process was provided when they were hired.
>
> The question is: can I call one edit macro from within another edit macro?
>
> Ex.   JJ1 JJ2 JJ3 are all exactly the same.
> How can I leave JJ1 alone and have JJ2 & JJ3 call  JJ1.
>
> The manuals were not very informative.
>
> I tried EXEC 'aaa.aaa.aaa(JJ1)', but you can't exec an edit macro that way.
>
> I really don't want to use PDS ALIAS technique.
>
> Any help would be appreciated.
>
> Thank You
>
>
> *** Disclaimer ***
> This communication (including all attachments) is solely for the use of the
> person to whom it is addressed and is a confidential AAA communication. If
> you are not the intended recipient, any use, distribution, printing, or
> copying is prohibited. If you received this email in error, please
> immediately delete it and notify the sender.
>




Cross-memory POST ERRET and return codes

2018-10-08 Thread Charles Mills
Pursuant to a recent thread here I am converting a cross-memory POST to use
IEAMSXMP instead. However ... I still need to support older systems without
IEAMSXMP support, so I will be dual-pathing the existing POST. I got to
looking at code that I have not examined in several years, and I am trying
to determine exactly what is or should be going on. It runs without apparent
errors, so this is kind of a theoretical question, not "please help me with
this error." Here's the POST

POST  (R3),LINKAGE=BRANCH,ASCB=(R2),ERRET=POSTERR,MEMREL=NO

The questions are these
- Given that code, will POSTERR indeed get control on an error?
- The POST documentation documents two error codes, 4 and 8. Will they get
passed to POSTERR? Where?

Yes, I have RTFM but the FM is showing the effects of years of somewhat
piecemeal revisions.

Thanks,

Charles



Re: EDIT Macro

2018-10-08 Thread Lizette Koehler
A simpler solution is to create an ALIAS for the main entry:

JJ
Then create an ALIAS JJ1, JJ2, JJ3 all pointing to JJ


Lizette


> -Original Message-
> From: IBM Mainframe Discussion List  On Behalf Of
> Steely.Mark
> Sent: Monday, October 08, 2018 3:16 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: EDIT Macro
> 
> I have a JCL Check product that uses an EDIT Macro to check the JCL. This
> EDIT Macro has been copied under different member names through the years.
> The customers use whatever process was provided when they were hired.
> 
> The question is: can I call one edit macro from within another edit macro?
> 
> Ex.   JJ1 JJ2 JJ3 are all exactly the same.
> How can I leave JJ1 alone and have JJ2 & JJ3 call  JJ1.
> 
> The manuals were not very informative.
> 
> I tried EXEC 'aaa.aaa.aaa(JJ1)', but you can't exec an edit macro that way.
> 
> I really don't want to use PDS ALIAS technique.
> 
> Any help would be appreciated.
> 
> Thank You
> 
> 



EDIT Macro

2018-10-08 Thread Steely.Mark
I have a JCL Check product that uses an EDIT macro to check the JCL. This EDIT
macro has been copied under different member names through the years. The
customers use whatever process was provided when they were hired.

The question is: can I call one edit macro from within another edit macro?

Ex.   JJ1, JJ2, and JJ3 are all exactly the same.
How can I leave JJ1 alone and have JJ2 & JJ3 call JJ1?

The manuals were not very informative.

I tried EXEC 'aaa.aaa.aaa(JJ1)', but you can't exec an edit macro that way.

I really don't want to use the PDS ALIAS technique.

Any help would be appreciated.

Thank You




Re: IBMLink down since at least yesterday; Granular APAR search also dead

2018-10-08 Thread Edward Finnell
Seems like there ought to be a contractual obligation

In a message dated 10/8/2018 4:02:25 PM Central Standard Time, 
pinnc...@rochester.rr.com writes:
I wanted to personally thank IBM for their commitment to 24/7 downtime for 
electronic support.  Somehow SR escaped the 24/7 down requirements,



IBMLink down since at least yesterday; Granular APAR search also dead

2018-10-08 Thread Tom Conley
I wanted to personally thank IBM for their commitment to 24/7 downtime 
for electronic support.  Somehow SR escaped the 24/7 down requirements, 
so I can still create PMRs.  Searching, not so much.  Well done, KUDOS!!!


Regards,
Tom Conley



Re: S106 abends after copying into LINKLIST

2018-10-08 Thread John Eells
I do not know whether LLA keeps a pointer to the first text record 
(though it might), but it would certainly need the preceding associated 
control and ESD records to be cached as well if the first read done is 
for a text record.  I would expect that, since the ESD and control 
records encode their own length, they are read with the SLI bit on in 
the CCW, so that incorrect length does not cause any sort of I/O error, 
logical or otherwise.  The same goes for RLD records, and it might also 
apply to others.


Based on some research I did a long time ago, here is how I believe 
things work:


The control record contains a CCW fragment to be used in constructing 
the Read CCW for the next text record, unless it's the last.  PCI 
processing is used to chain onto the channel program to get the entire 
module in one shot unless the system is so busy the PCI can't be 
serviced in time to add to the chain and the I/O operation terminates. 
In that case, I believe it's restarted where it left off.


The read CCW for the text record should be constructed using the 
specific length stored in the control record, and I would not expect the 
SLI bit to be used for that CCW.  On that basis, I would agree that if 
the first "text" record you read does not have the expected length that 
the unexpected status back from the device would likely result in a 
"logical I/O error."  However, it's possible that SLI is used for the 
read (I have not read the code), and that would make other reasons 
(empty track, no record at that location on a track, additional extents, 
etc.) more likely culprits for ABEND106-F RC40.  For performance 
reasons, though, I would expect that SLI is not set.  This code was 
originally written before control unit cache existed and was designed to 
be really good at avoiding unnecessary disk latency.  And, of course, we 
might change details in the code at any time (though why we would ever 
want to is a good question!).


The text records themselves are of variable length.  They have a minimum 
length of 1024 bytes, and a maximum length of the track length or block 
size, whichever is smaller.  The Binder (and COPYMOD) try to write the 
minimum possible number based on those limits.  They issue TRKBAL to 
find out how much space is left on the track, and write records on 
following tracks as needed to finish writing a load module.  (This is 
why 32760 is the best block size for load libraries.)


Because the directory pointers to PDS members are TTR pointers, every 
load module does not generally happen to start on a new track.  This 
means that large block sizes rarely if ever result in uniform text 
record lengths.  They do result in fewer text records if the modules' 
lengths exceed a lower block size, though.


All the above applies to load modules.  I have no idea how this works 
under the covers for program objects, but Program Management Advanced 
Facilities documents load module records.


Just some random additional info to reinforce the "except under narrow 
circumstances, with sufficient advance reflection, and malice--er, risk 
acceptance-aforethought, don't update running systems' data sets" others 
have already expressed.


Michael Stein wrote:



It's been a while but from what I remember about program fetch
here's a guess.

Looking up S106 RC 0F reason code 40:

either an uncorrectable I/O error occurred or an error in the
load module caused the channel program to fail.

Well, let's assume the hardware is working, so this isn't a "real" I/O
error caused by some hardware problem.  And there are no dataset
extent changes, only the overwriting of the dataset to empty it
out and then copying in the new modules.

Well, the EOF pointer for the dataset got moved toward the front, to just
after the directory.  This caused the new modules to be written starting at
the new EOF, over the old modules.

And LLA still has the directory entries for the old modules, not the new
ones.  These now point into the new modules.  LLA's information includes
specific information on the first block of text of each old module:

   - the TTR of the first block of text
   - the length of the first block of text
   - the linkage editor assigned origin of first block of text

This allows program fetch to start with reading first text block,
rather than having to start at the beginning of the module.   Fetch can
build a CCW to directly read the first block since it knows the TTR of
the block and it's length and also the storage address (storage area +
block origin).

Since the old modules were overwritten, it's certain that the block at
the old location isn't the expected one.  There might not be a block there
giving no record found, there might be an EOF or there might be different
length block causing fetch's channel program to end with incorrect length.

This would explain the S106 RC 0F reason code 40.

This isn't that bad.  The length of the wrong block/module might
have matched.  I wonder if program fetch could 

Re: Destination z article: Ensuring Data Storage Longevity

2018-10-08 Thread Mike Schwab
Or archive a detailed printout of the data to answer any questions.
On Mon, Oct 8, 2018 at 12:22 PM Steve Thompson  wrote:
>
> You have hit upon a checklist item when I am involved in a migration 
> (especially from one O/S environment to another).
>
> A sign off ensuring that management understands that should any archives be 
> needed from the “old database”, the new system will not be able to read or 
> process that data to produce any needed reports.
>
> If they need that data it must be migrated to the new system’s format and 
> tested to ensure the reports give the same answers as found on a prior 
> printed/captured report (prior to the time the migration was started).
>
> Steve Thompson
>
> Sent from my iPhone — small keyboarf, fat fungrs, stupd spell manglr. Expct 
> mistaks
>
>
> > On Oct 8, 2018, at 7:47 PM, Jesse 1 Robinson  
> > wrote:
> >
> > The distinction between backup and archive is useful. I'm not sure that '90 
> > day usage' is the definitive boundary--we have scheduled year-end jobs that 
> > run only annually--but the categories make sense. However, it's not only 
> > about the data itself. Data is a structured mass of zeroes and ones that 
> > makes no sense at all without a means to render it intelligible.
> >
> > Some time ago, we (IT) were asked to restore very old data needed by the 
> > Finance department. The data was years old, but as responsible corporate 
> > caretakers we found the tapes that contained the information requested. The 
> > kicker: it was IMS data, and IMS had been decommissioned here years 
> > earlier. Even if we could somehow wangle a temporary copy of IMS, the 
> > process of installing it was daunting as none of us had relevant 
> > experience. Moreover, there was no guarantee that a 'modern' version of IMS 
> > would be able to untangle ancient data formats. Finance eventually let us 
> > off the hook, but it was a lesson that still haunts us.
> >
> > .
> > .
> > J.O.Skip Robinson
> > Southern California Edison Company
> > Electric Dragon Team Paddler
> > SHARE MVS Program Co-Manager
> > 323-715-0595 Mobile
> > 626-543-6132 Office ⇐=== NEW
> > robin...@sce.com
> >
> >
> > -Original Message-
> > From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On 
> > Behalf Of Gabe Goldberg
> > Sent: Sunday, October 07, 2018 6:11 PM
> > To: IBM-MAIN@LISTSERV.UA.EDU
> > Subject: (External):Destination z article: Ensuring Data Storage Longevity
> >
> > Ensuring Data Storage Longevity
> >
> > Backup and Archival Data
> >
> > Data comes in many varieties, related to why it exists and how it's
> > stored: active, warehouse, transactional, backup, archival and more.
> > I'll skip over the first three forms and focus on backup data (briefly) and 
> > archival data (primarily).
> >
> > Because backup data recovers from human error, equipment failures and 
> > external catastrophes, its only reason for existing is restoring data to a 
> > recent image. Archival data may be needed for legal or industry compliance, 
> > historical recordkeeping, merger and acquisition due diligence, 
> > unanticipated queries/searches, or reconstructing operational environments. 
> > Backup data can be stored piecemeal as long as it can be completely 
> > restored. Archival data is holistic, a complete/consistent image. For
>



-- 
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?



Re: S106 abends after copying into LINKLIST

2018-10-08 Thread Michael Stein
> Last friday morning we copied new CICS LINKLIST/LPA modules into the
> existing LINKLI in use (a rather new scenario in use here - we used to
> use alternative datasets), in be done sunday morning.
> 
> anyway, around 6pm friday evening, an I/O error occured in linklist
> and other jobs s the linklist library was not allocated with secondary
> extents and there was no LLA r the day. I cannot find anything like this
> situation occurring on IBMLINK and we have
> 
> Does anyone have any idea of what could have caused the I/O error.
> both the input and output datasets have a max blksize of 32760.
> 
> IEW4009I FETCH FAILED FOR MODULE DFHXCPRX FROM DDNAME -LNKLST- BECAUSE
> OF AN I/O ERROR.
> IEW4005I FETCH FOR MODULE DFHXCPRX FROM DDNAME -LNKLST- FAILED BECAUSE
> IEWFETCH ISSUED RC 0F AND REASON 40
> CSV031I LIBRARY ACCESS FAILED FOR MODULE DFHXCPRX, RETURN CODE 24,
> REASON CODE 26080021, DDNAME *LNKLST*

It's been a while but from what I remember about program fetch
here's a guess.

Looking up S106 RC 0F reason code 40:

   either an uncorrectable I/O error occurred or an error in the 
   load module caused the channel program to fail.

Well, let's assume the hardware is working, so this isn't a "real" I/O
error caused by some hardware problem.  And there are no dataset
extent changes, only the overwriting of the dataset to empty it
out and then copying in the new modules.

Well, the EOF pointer for the dataset got moved toward the front, to just
after the directory.  This caused the new modules to be written starting at
the new EOF, over the old modules.

And LLA still has the directory entries for the old modules, not the new
ones.  These now point into the new modules.  LLA's information includes
specific information on the first block of text of each old module:

  - the TTR of the first block of text
  - the length of the first block of text
  - the linkage editor assigned origin of first block of text

This allows program fetch to start with reading first text block,
rather than having to start at the beginning of the module.   Fetch can
build a CCW to directly read the first block since it knows the TTR of
the block and it's length and also the storage address (storage area +
block origin).

Since the old modules were overwritten, it's certain that the block at
the old location isn't the expected one.  There might not be a block there
giving no record found, there might be an EOF or there might be different
length block causing fetch's channel program to end with incorrect length.

This would explain the S106 RC 0F reason code 40.

This isn't that bad.  The length of the wrong block/module might
have matched.  I wonder if program fetch could successfully load the
wrong module.

Now with a blocksize of 32760, possibly each module will fit in one block
and they likely have different sizes so this wrong module case might
be unlikely.  Or something else might prevent loading the wrong module
(what?)  Or it may be possible to have a successful program fetch with
the wrong module.  And then attempt to execute it with the parameters
and environment of the old module.

What would that cause?  Program checks?  Mangled data?



Re: Question on DISP=MOD and GDGs

2018-10-08 Thread Lizette Koehler
The question was not about the JCL Check process returning a 4 or an 8 


It was more on when using DISP=MOD, would any of the sample provided be against
JCL rules?

My understanding is if DISP=MOD is used, then if the dataset does not exist,
create it, if it does exist append to it.

I'm just trying to understand why some might think these coding samples are invalid.

Thanks

Lizette


> -Original Message-
> From: IBM Mainframe Discussion List  On Behalf Of
> Lizette Koehler
> Sent: Friday, October 05, 2018 3:50 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Question on DISP=MOD and GDGs
> 
> List -
> 
> I am having a discussion on how a GDG is handled based on the DISP.  I was
> always working from the position that the use of MOD changes JCL behavior
> slightly with a dataset.  My understanding is:  If using MOD then if the
> dataset does not exist, it is treated as NEW and if it exists then treated as
> OLD.  That seems reasonable, however, I have some users coding this for GDGs
> and I am not sure why they should work.
> 
> 
> Now the following are samples and I am sure there are other coding that I
> have not included.
> 
> 
> 
> So based on the following, which should be considered incorrect coding
> 
> 1) First time creating the new Daily dataset.  There is also a concern on the
> second coding
> 
> 
> //SMFOUT DD DSN=SMFHLQ.DAILY.SMFDSN(+1),
> // DISP=(MOD,CATLG,DELETE),
> // STORCLAS=NONSMS,EXPDT=99000,
> // RECFM=VBS,BLKSIZE=32000,LRECL=32760,BUFNO=10,
> // UNIT=TAPE
> *** WARN 04: DISP FOR NEW GDG DATASET IS NOT (NEW,CATLG)
> 
> Appending SMF data daily dataset
> 
> //SMFOUT DD DSN=SMFHLQ.DAILY.SMFDSN(0),
> // DISP=(MOD,CATLG,DELETE
> *** WARN 04: DISP FOR NEW GDG DATASET IS NOT (NEW,CATLG)
> 
> 
> 
> 2)  Using BR14 with MOD DELETE for a GEN that has NOT been created
> 
> //S1 EXEC PGM=IEFBR14
> //GDGBASE DD
> DISP=(MOD,DELETE,DELETE),DSN=TSOHLQ.GDGTEST(0),SPACE=(TRK,(1,1)),UNIT=SYSDA
> !!!ERROR 04: GDG(0) NOT PERMITTED WITH DISP=NEW
> 
> 
> GDG does not have any GENs yet.  So get the following
> 
> //COPYIT1 EXEC PGM=IEBGENER
> //SYSPRINT DD SYSOUT=*
> //*
> //*
> //SYSIN DD DUMMY
> //SYSUT2 DD DISP=(MOD,CATLG,DELETE),UNIT=SYSDA,
> // SPACE=(CYL,(1,1),RLSE),
> // DSN=TSOHLQ.GDGTEST(+1)
> *** WARN 04: DISP FOR NEW GDG DATASET IS NOT (NEW,CATLG)
> //SYSUT1 DD *
>  TEST RECORD
> 
> 
> The GDG has one generation in the base I am able to see this should work
> 
>//
>//*
>//
>//S1 EXEC PGM=IEFBR14
>//GDGBASE DD DISP=(MOD,DELETE,DELETE),DSN=TSOHLQ.GDGTEST(0)
> 
> 
> I am not saying these are great ways to code DISP=MOD, just that I have seen
> this coding work but have been told that they should not work or they would
> cause weird/crazy results
> 
> 
> Any and all opinions welcome.
> 
> 
> 
> Thanks
> 
> 
> Lizette Koehler
> statistics: A precise and logical method for stating a half-truth
> inaccurately
> 


Re: Destination z article: Ensuring Data Storage Longevity

2018-10-08 Thread Jerry Whitteridge
We also have been informed by our records retention team that we are NOT to
retain data past the legal retention period, even if the application
teams/business teams would like to do so. If the data exists, you would be
legally obliged to produce it in the case of a legal dispute; if it's
purged, no issue arises.
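A purge decision like that reduces to simple date arithmetic; a minimal sketch (the retention period and function names are made up for illustration):

```python
from datetime import date, timedelta

# Illustrative only: decide whether a dataset is past its legal
# retention period and must therefore be purged.
def must_purge(created: date, retention_days: int, today: date) -> bool:
    return today > created + timedelta(days=retention_days)

# Example: a 7-year (2555-day) retention period.
print(must_purge(date(2010, 1, 1), 2555, date(2018, 10, 8)))  # True
```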

Jerry Whitteridge
Delivery Manager / Mainframe Architect
GTS - Safeway Account
602 527 4871 Mobile
jerry.whitteri...@ibm.com

IBM Services

IBM Mainframe Discussion List  wrote on
10/08/2018 09:47:37 AM:

> From: Jesse 1 Robinson 
> To: IBM-MAIN@LISTSERV.UA.EDU
> Date: 10/08/2018 09:48 AM
> Subject: Re: Destination z article: Ensuring Data Storage Longevity
> Sent by: IBM Mainframe Discussion List 
>
> The distinction between backup and archive is useful. I'm not sure
> that '90 day usage' is the definitive boundary--we have scheduled
> year-end jobs that run only annually--but the categories make sense.
> However, it's not only about the data itself. Data is a structured
> mass of zeroes and ones that makes no sense at all without a means
> to render it intelligible.
>
> Some time ago, we (IT) was asked to restore very old data needed by
> the Finance department. The data was years old, but as responsible
> corporate caretakers we found the tapes that contained the
> information requested. The kicker: it was IMS data, and IMS had been
> decommissioned here years earlier. Even if we could somehow wangle a
> temporary copy of IMS, the process of installing it was daunting as
> none of us had relevant experience. Moreover, there was no guarantee
> that a 'modern' version of IMS would be able to untangle ancient
> data formats. Finance eventually let us off the hook, but it was a
> lesson that still haunts us.
>
> .
> .
> J.O.Skip Robinson
> Southern California Edison Company
> Electric Dragon Team Paddler
> SHARE MVS Program Co-Manager
> 323-715-0595 Mobile
> 626-543-6132 Office ⇐=== NEW
> robin...@sce.com
>
>
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU
> ] On Behalf Of Gabe Goldberg
> Sent: Sunday, October 07, 2018 6:11 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: (External):Destination z article: Ensuring Data Storage
Longevity
>
> Ensuring Data Storage Longevity
>
> Backup and Archival Data
>
> Data comes in many varieties, related to why it exists and how it's
> stored: active, warehouse, transactional, backup, archival and more.
> I'll skip over the first three forms and focus on backup data
> (briefly) and archival data (primarily).
>
> Because backup data recovers from human error, equipment failures
> and external catastrophes, its only reason for existing is restoring
> data to a recent image. Archival data may be needed for legal or
> industry compliance, historical recordkeeping, merger and
> acquisition due diligence, unanticipated queries/searches, or
> reconstructing operational environments. Backup data can be stored
> piecemeal as long as it can be completely restored. Archival data is
> holistic, a complete/consistent image. For a detailed explanation of
> why multiple backup copies—even cloud storage—don't constitute
> archived data, see this Storage Switzerland blog: https://
> urldefense.proofpoint.com/v2/url?
> u=https-3A__bit.ly_2DzoJrR=DwIGaQ=jf_iaSHvJObTbx-
>
siA1ZOg=0avyVTgpzBFlo1QAgHxCtqKtRE6Ldl_1M9tI2p7Kc8E=n5vEqYyQJdgVV6l7F06AIBuPQxWwDhb8wjgqk1BME4Q=4GZQNL_blbDWP7_a94lx8obHHyNp4tOypPAes6dodQo=

>
> INVALID URI REMOVED
>
u=http-3A__destinationz.org_Mainframe-2DSolution_Trends_Ensuring-2DData-2DStorage-2DLongevity=DwIGaQ=jf_iaSHvJObTbx-

>
siA1ZOg=0avyVTgpzBFlo1QAgHxCtqKtRE6Ldl_1M9tI2p7Kc8E=n5vEqYyQJdgVV6l7F06AIBuPQxWwDhb8wjgqk1BME4Q=96iPf4sjy2kN0Twtd8ec7tTUcm9cGXm3IHdp8n8leTM=

> INVALID URI REMOVED
> u=https-3A__bit.ly_2NiJVSS=DwIGaQ=jf_iaSHvJObTbx-
>
siA1ZOg=0avyVTgpzBFlo1QAgHxCtqKtRE6Ldl_1M9tI2p7Kc8E=n5vEqYyQJdgVV6l7F06AIBuPQxWwDhb8wjgqk1BME4Q=zl7GuQO8ctdbaBqQtx93EJOERBaBsHGgBEU2aksMinY=

>
> ...for non-technical folk reading this (it's going to diverse lists)
> -- your data needs backup and archiving too. And backup still isn't
> archive.
>
> --
> Gabriel Goldberg, Computers and Publishing, Inc.   g...@gabegold.com
> 3401 Silver Maple Place, Falls Church, VA 22042   (703) 204-0433
> LinkedIn: http://www.linkedin.com/in/gabegold
> Twitter: GabeG0
>
>


Re: Destination z article: Ensuring Data Storage Longevity

2018-10-08 Thread Steve Thompson
You have hit upon a checklist item I use when I am involved in a migration 
(especially from one O/S environment to another): a sign-off ensuring that 
management understands that, should any archives be needed from the “old 
database”, the new system will not be able to read or process that data to 
produce any needed reports. 

If they need that data, it must be migrated to the new system’s format and 
tested to ensure the reports give the same answers as found on a prior 
printed/captured report (from before the time the migration was started). 

Steve Thompson 

Sent from my iPhone — small keyboarf, fat fungrs, stupd spell manglr. Expct 
mistaks 


> On Oct 8, 2018, at 7:47 PM, Jesse 1 Robinson  wrote:
> 
> The distinction between backup and archive is useful. I'm not sure that '90 
> day usage' is the definitive boundary--we have scheduled year-end jobs that 
> run only annually--but the categories make sense. However, it's not only 
> about the data itself. Data is a structured mass of zeroes and ones that 
> makes no sense at all without a means to render it intelligible. 
> 
> Some time ago, we (IT) was asked to restore very old data needed by the 
> Finance department. The data was years old, but as responsible corporate 
> caretakers we found the tapes that contained the information requested. The 
> kicker: it was IMS data, and IMS had been decommissioned here years earlier. 
> Even if we could somehow wangle a temporary copy of IMS, the process of 
> installing it was daunting as none of us had relevant experience. Moreover, 
> there was no guarantee that a 'modern' version of IMS would be able to 
> untangle ancient data formats. Finance eventually let us off the hook, but it 
> was a lesson that still haunts us. 
> 
> .
> .
> J.O.Skip Robinson
> Southern California Edison Company
> Electric Dragon Team Paddler 
> SHARE MVS Program Co-Manager
> 323-715-0595 Mobile
> 626-543-6132 Office ⇐=== NEW
> robin...@sce.com
> 
> 
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On 
> Behalf Of Gabe Goldberg
> Sent: Sunday, October 07, 2018 6:11 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: (External):Destination z article: Ensuring Data Storage Longevity
> 
> Ensuring Data Storage Longevity
> 
> Backup and Archival Data
> 
> Data comes in many varieties, related to why it exists and how it's
> stored: active, warehouse, transactional, backup, archival and more. 
> I'll skip over the first three forms and focus on backup data (briefly) and 
> archival data (primarily).
> 
> Because backup data recovers from human error, equipment failures and 
> external catastrophes, its only reason for existing is restoring data to a 
> recent image. Archival data may be needed for legal or industry compliance, 
> historical recordkeeping, merger and acquisition due diligence, unanticipated 
> queries/searches, or reconstructing operational environments. Backup data can 
> be stored piecemeal as long as it can be completely restored. Archival data 
> is holistic, a complete/consistent image. For



Re: Destination z article: Ensuring Data Storage Longevity

2018-10-08 Thread Jesse 1 Robinson
The distinction between backup and archive is useful. I'm not sure that '90 day 
usage' is the definitive boundary--we have scheduled year-end jobs that run 
only annually--but the categories make sense. However, it's not only about the 
data itself. Data is a structured mass of zeroes and ones that makes no sense 
at all without a means to render it intelligible. 

Some time ago, we (IT) were asked to restore very old data needed by the Finance 
department. The data was years old, but as responsible corporate caretakers we 
found the tapes that contained the information requested. The kicker: it was 
IMS data, and IMS had been decommissioned here years earlier. Even if we could 
somehow wangle a temporary copy of IMS, the process of installing it was 
daunting as none of us had relevant experience. Moreover, there was no 
guarantee that a 'modern' version of IMS would be able to untangle ancient data 
formats. Finance eventually let us off the hook, but it was a lesson that still 
haunts us. 

.
.
J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office ⇐=== NEW
robin...@sce.com


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Gabe Goldberg
Sent: Sunday, October 07, 2018 6:11 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: (External):Destination z article: Ensuring Data Storage Longevity

Ensuring Data Storage Longevity

Backup and Archival Data

Data comes in many varieties, related to why it exists and how it's
stored: active, warehouse, transactional, backup, archival and more. 
I'll skip over the first three forms and focus on backup data (briefly) and 
archival data (primarily).

Because backup data recovers from human error, equipment failures and external 
catastrophes, its only reason for existing is restoring data to a recent image. 
Archival data may be needed for legal or industry compliance, historical 
recordkeeping, merger and acquisition due diligence, unanticipated 
queries/searches, or reconstructing operational environments. Backup data can 
be stored piecemeal as long as it can be completely restored. Archival data is 
holistic, a complete/consistent image. For a detailed explanation of why 
multiple backup copies—even cloud storage—don't constitute archived data, see 
this Storage Switzerland blog: https://bit.ly/2DzoJrR

http://destinationz.org/Mainframe-Solution/Trends/Ensuring-Data-Storage-Longevity
https://bit.ly/2NiJVSS

...for non-technical folk reading this (it's going to diverse lists) -- your 
data needs backup and archiving too. And backup still isn't archive.

-- 
Gabriel Goldberg, Computers and Publishing, Inc.   g...@gabegold.com
3401 Silver Maple Place, Falls Church, VA 22042   (703) 204-0433
LinkedIn: http://www.linkedin.com/in/gabegoldTwitter: GabeG0




Re: using REXX to spawn a Java program

2018-10-08 Thread Steve Austin
Thanks for the link; I found it useful and interesting. As to why I used 
syscalls('off'), the doc I worked from said it was not usually necessary, rather 
than not recommended, so it seemed tidier to do so. I'll certainly remove it. 
Incidentally, the problem only manifested when I switched from _BPX_SHAREAS=NO 
to _BPX_SHAREAS=MUST/REUSE/YES.  From the description it sounds like REUSE is 
what I should use. 

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Paul Gilmartin
Sent: Monday, October 8, 2018 2:57 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: using REXX to spawn a Java program

On Mon, 8 Oct 2018 08:11:51 +, Steve Austin wrote:

>I have some questions regarding the above.
>
>I'm specifying an environment variable of "_BPX_SHAREAS=MUST". Before each 
>spawn of the Java program I have a "syscalls('on') and after a 
>"syscalls("off"). I'm noticing a significant delay at the "syscalls('off'); it 
>can be as much as 2 minutes. If I remove the "syscalls('off')" I don't get the 
>delay.
>
"Doctor, it hurts when I do this.   "

Why do you use SYSCALLS("OFF")?  It's not recommended:

http://www2.marist.edu/htbin/wlvtype?MVS-OE.35369

Date: Mon, 2 Jun 2003 20:08:28 -0400
From: William Schoen 
Subject:  Re: SYSCALLS() Query?
 
I do not recommend using syscalls 'OFF'.  It may clean up more than you want
yet not everything from syscalls 'ON'. 

-- gil



Re: ISPF issue - new system

2018-10-08 Thread zos reader
Thanks, all, for your valuable inputs.

I did some analysis and could see the catalog is pointing to the other
system from which we cloned (CPUX), and that's the reason we couldn't get
the ISPF profiles. We also need to change the REXX exec procs so they don't
point to SMS.
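The PUBLIC/STORAGE use-attribute rule that Paul and Anthony describe in the replies below can be sketched as a toy model (my paraphrase of the classic non-SMS allocation rules, not actual MVS code; volume serials are made up):

```python
# Toy model: which mounted volumes can satisfy a non-specific
# (no VOL=SER) new-dataset allocation in a non-SMS environment.

def eligible_volumes(volumes, temp=False):
    """volumes: list of (volser, use_attr) with use_attr in
    {'PRIVATE', 'PUBLIC', 'STORAGE'}."""
    if temp:
        wanted = {"PUBLIC", "STORAGE"}   # temporary datasets
    else:
        wanted = {"STORAGE"}             # non-temporary non-specific requests
    return [v for v, use in volumes if use in wanted]

vols = [("PRV001", "PRIVATE"), ("PUB001", "PUBLIC"), ("STG001", "STORAGE")]
print(eligible_volumes(vols))             # ['STG001']
print(eligible_volumes(vols, temp=True))  # ['PUB001', 'STG001']
```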



On Monday, October 8, 2018, Anthony Thompson 
wrote:

> As Paul said, you will need volumes mounted as PUBLIC in your VATLSTxx to
> be able to satisfy non-specific volume allocation requests in a non-SMS
> environment (which is likely what your ISPF initial EXEC is doing to
> pre-allocate new ISPF datasets).
>
> Ant.
>
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Feller, Paul
> Sent: Saturday, 6 October 2018 6:59 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: ISPF issue - new system
>
> Without SMS you will need to have DASD volumes mounted as
> PRIVATE/PUBLIC/STORAGE.  I believe without the PUBLIC or STORAGE volumes
> you will have a hard time allocating new or temporary datasets.  I would
> think any datasets that are cataloged should get found through the proper
> catalog search.
>
> Thanks..
>
> Paul Feller
> AGT Mainframe Technical Support
> paul.fel...@transamerica.com
>
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Vinoth M
> Sent: Friday, October 05, 2018 12:18 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: ISPF issue - new system
>
> Hi All,
>
> We have built a new system and are bringing it up without an SMS
> configuration.
> We got VTAM, TSO and TCPIP up and are ready to log in to TSO. When we
> log in to TSO, the ISPF profiles are not getting allocated, since the
> logon is pointing to an SMS dataset but we have disabled SMS.
>
> May I know where to change the SMS setup to a NON-SMS volume? Do I need
> to change parmlibs or any other procs? Please let me know.
>
> Thanks
>


Re: using REXX to spawn a Java program

2018-10-08 Thread Paul Gilmartin
On Mon, 8 Oct 2018 08:11:51 +, Steve Austin wrote:

>I have some questions regarding the above.
>
>I'm specifying an environment variable of "_BPX_SHAREAS=MUST". Before each 
>spawn of the Java program I have a "syscalls('on') and after a 
>"syscalls("off"). I'm noticing a significant delay at the "syscalls('off'); it 
>can be as much as 2 minutes. If I remove the "syscalls('off')" I don't get the 
>delay.
>
"Doctor, it hurts when I do this.   "

Why do you use SYSCALLS("OFF")?  It's not recommended:

http://www2.marist.edu/htbin/wlvtype?MVS-OE.35369

Date: Mon, 2 Jun 2003 20:08:28 -0400
From: William Schoen 
Subject:  Re: SYSCALLS() Query?
 
I do not recommend using syscalls 'OFF'.  It may clean up more than you want
yet not everything from syscalls 'ON'. 

-- gil



Re: Changing the way how ISPF recovery datasets are allocated

2018-10-08 Thread Joel C. Ewing
On 10/08/2018 08:28 AM, Tom Conley wrote:
> On 10/8/2018 4:29 AM, Elardus Engelbrecht wrote:
>> Good day to all,
>>
>> I see these restrictions for Edit Recovery Datasets from "z/OS ISPF
>> Planning and Customizing" (z/OS v2.2):
>>
>> 
>>
>> These restrictions apply to edit recovery data sets:
>>
>>  They must be allocated as sequential data sets of record format U.
>>  They cannot be striped, or striped and compressed data sets.
>>  They cannot be multivolume data sets.
>>
>> 
>>
>> One of my Storage Admin guys changed (to resolve an unrelated
>> problem) the default SMS Data class to have 'DATA SET NAME TYPE' as
>> EXTENDED and other new attributes resulting in that the ISPF Recovery
>> datasets are 'striped'.
>>
>> Which means that I see 'Recovery suspended-error' in an ISPF Edit
>> session and a long description what was wrong.
>>
>> I also see this: IGD17070I DATASET ?.ISP7195.BACKUP ALLOCATED
>> SUCCESSFULLY WITH 1 STRIPE(S).
>>   ... then a IGD101I followed by IGD105I where it was deleted
>> immediately.
>>
>> That change was reversed eventually because of the allocation error,
>> but it may be needed to re-apply the new changes in the Data class.
>>
>> Question:
>>
>> Where can I change the allocation behaviour of ISPF to force another
>> Dataclass to be selected? I already looked at ISPCCONFIG, but there
>> is nothing about selecting right SMS class(es).
>>
>> If that is not possible, is it Ok to rather create a Data Class rule
>> where .ISP*.BACKUP datasets are assigned to another dataclass? Or
>> are there better solutions?
>>
>> Many thanks in advance.
>>
>
> If you really want to default everything to striped, then you need to
> create a DATACLAS and the appropriate ACS routine code to ensure that
> the recovery datasets are correct.
>
> Regards,
> Tom Conley
> ...
>
Obviously an implicit "else" after "default everything".

The Storage Admins need to be aware about all restrictions on these data
sets, not just the striping restriction.  The more inclusive statement
is that SMS definitions can override data set characteristics coming
from any other source (like ISPF).  So, once Storage Admins have
demonstrated unawareness of the special requirements for Edit
Recovery data sets, there is a clear need for a separate documented
DATACLAS and ACS routine so they are [hopefully] unlikely to make a
similar mistake in the future.  The DATACLAS/ACS code documentation
should include all the restrictions and also a reference to where these
restrictions may be found, should they change at some point in the future.
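A hedged sketch of the separate DATACLAS/ACS approach Tom and Joel describe (the DATACLAS name 'NOSTRIPE', the filter mask, and the exact statement layout are illustrative assumptions — verify against your shop's ACS source and the DFSMSdfp storage administration documentation):

```
PROC DATACLAS
  /* Edit recovery data sets: userid.ISPnnnn.BACKUP (mask illustrative) */
  FILTLIST ISPFBKUP INCLUDE(**.ISP*.BACKUP)
  IF &DSN = &ISPFBKUP THEN
    DO
      /* Non-extended, non-striped, non-compressed DATACLAS, since edit */
      /* recovery data sets must be RECFM=U sequential and cannot be    */
      /* striped, compressed, or multivolume.                           */
      SET &DATACLAS = 'NOSTRIPE'
      EXIT
    END
END
```

Documenting the restrictions in comments next to the rule, as Joel suggests, is what keeps the exception from being "optimized away" later.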
    Joel C. Ewing

-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 



Re: Changing the way how ISPF recovery datasets are allocated

2018-10-08 Thread Michael Oujesky
ACS rule dataset masking.  My rule of thumb (ROT) is to have each special 
case in its own DATACLAS, identifying why it is an exception.


BTW, we also had this happen to us while attempting to get to 
extended-format as the shop standard.


At 03:29 AM 10/8/2018, you wrote:

Good day to all,

I see these restrictions for Edit Recovery Datasets from "z/OS ISPF 
Planning and Customizing" (z/OS v2.2):




These restrictions apply to edit recovery data sets:

They must be allocated as sequential data sets of record format U.
They cannot be striped, or striped and compressed data sets.
They cannot be multivolume data sets.



One of my Storage Admin guys changed (to resolve an unrelated 
problem) the default SMS Data class to have 'DATA SET NAME TYPE' as 
EXTENDED and other new attributes resulting in that the ISPF 
Recovery datasets are 'striped'.


Which means that I see 'Recovery suspended-error' in an ISPF Edit 
session and a long description what was wrong.


I also see this: IGD17070I DATASET ?.ISP7195.BACKUP ALLOCATED 
SUCCESSFULLY WITH 1 STRIPE(S).

 ... then a IGD101I followed by IGD105I where it was deleted immediately.

That change was reversed eventually because of the allocation error, 
but it may be needed to re-apply the new changes in the Data class.


Question:

Where can I change the allocation behaviour of ISPF to force another 
Dataclass to be selected? I already looked at ISPCCONFIG, but there 
is nothing about selecting right SMS class(es).


If that is not possible, is it Ok to rather create a Data Class rule 
where .ISP*.BACKUP datasets are assigned to another dataclass? Or 
are there better solutions?


Many thanks in advance.

Groete / Greetings
Elardus Engelbrecht



Re: ZZSA question

2018-10-08 Thread R.S.

On 2018-10-07 at 11:19, Brian Westerman wrote:

It IPLs on z13, z13s and z14-ZR1.  (also zBC12, zEC12, z114, z10BC, z9 and 
z890).  I have not tried anything older lately, but one of our clients still 
has a z/800, I expect it will probably work there.  I could give it a shot if 
there is some need to know, but I can't think of any reason it would not work.


The reason for asking is that the z14 changed things: it is no longer 
possible to IPL in ESA/390 mode.


--
Radoslaw Skorupka
Lodz, Poland






Re: Changing the way how ISPF recovery datasets are allocated

2018-10-08 Thread Tom Conley

On 10/8/2018 4:29 AM, Elardus Engelbrecht wrote:

Good day to all,

I see these restrictions for Edit Recovery Datasets from "z/OS ISPF Planning and 
Customizing" (z/OS v2.2):



These restrictions apply to edit recovery data sets:

 They must be allocated as sequential data sets of record format U.
 They cannot be striped, or striped and compressed data sets.
 They cannot be multivolume data sets.



One of my Storage Admin guys changed (to resolve an unrelated problem) the 
default SMS Data class to have 'DATA SET NAME TYPE' as EXTENDED and other new 
attributes resulting in that the ISPF Recovery datasets are 'striped'.

Which means that I see 'Recovery suspended-error' in an ISPF Edit session and a 
long description what was wrong.

I also see this: IGD17070I DATASET ?.ISP7195.BACKUP ALLOCATED SUCCESSFULLY WITH 
1 STRIPE(S).
  ... then a IGD101I followed by IGD105I where it was deleted immediately.

That change was reversed eventually because of the allocation error, but it may 
be needed to re-apply the new changes in the Data class.

Question:

Where can I change the allocation behaviour of ISPF to force another Dataclass 
to be selected? I already looked at ISPCCONFIG, but there is nothing about 
selecting right SMS class(es).

If that is not possible, is it Ok to rather create a Data Class rule where 
.ISP*.BACKUP datasets are assigned to another dataclass? Or are there better 
solutions?

Many thanks in advance.



If you really want to default everything to striped, then you need to 
create a DATACLAS and the appropriate ACS routine code to ensure that 
the recovery datasets are correct.


Regards,
Tom Conley



Re: using REXX to spawn a Java program

2018-10-08 Thread Joe Monk
This kinda gives you a clue:

"syscalls('ON') ensures that the SYSCALL host command environment is
available in your REXX environment. If the call detects that SYSCALL is not
available in your environment, it dynamically adds it."

"Performance characteristics for dynamically added host commands are not as
good as for host commands that are included in the initial environment:
Every time a command is directed to the SYSCALL host command environment,
the TSO/E REXX support loads the module for the SYSCALL host command."

Page 13

https://www-01.ibm.com/servers/resourcelink/svc00100.nsf/pages/zOSV2R3SA232283/$file/bpxb600_v2r3.pdf

So every time you turn off syscalls and subsequently turn it back on, you
are dynamically re-adding the host command environment... this is probably
the cause of your performance issue.
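The cost difference the manual describes can be caricatured as a load-once-versus-reload-per-command model (a toy sketch, nothing to do with real TSO/E internals):

```python
# Toy model: count module loads for a pre-registered host command
# environment vs. a dynamically added one (illustrative only).
class HostEnv:
    def __init__(self, preregistered: bool):
        self.preregistered = preregistered
        self.loads = 1 if preregistered else 0  # loaded once at init if pre-reg

    def run_command(self):
        if not self.preregistered:
            self.loads += 1                     # reloaded on every command

static_env, dynamic_env = HostEnv(True), HostEnv(False)
for _ in range(100):
    static_env.run_command()
    dynamic_env.run_command()
print(static_env.loads, dynamic_env.loads)  # 1 100
```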

Joe

On Mon, Oct 8, 2018 at 3:12 AM Steve Austin  wrote:

> I have some questions regarding the above.
>
> I'm specifying an environment variable of "_BPX_SHAREAS=MUST". Before each
> spawn of the Java program I have a "syscalls('on') and after a
> "syscalls("off"). I'm noticing a significant delay at the "syscalls('off');
> it can be as much as 2 minutes. If I remove the "syscalls('off')" I don't
> get the delay.
>
>
> 1)  syscalls('off') is removing the unix environment, but why the
> delay?
>
> 2)  By removing the "syscalls('off')" I'm retaining the unix
> environment, but is the Java JVM also retained for reuse? I'm guessing not,
> but it would be nice if it were.
>
> 3)  The documentation I've found so far is pretty general. Is there
> documentation somewhere that describes the mechanics in detail?
>
> Thanks
>
> Steve
>


Re: PMAR (was: Where in SMDE ...)

2018-10-08 Thread Peter Relson

PMARL_DATE: X'YYYYDDDF'  Packed decimal, four-digit year, 3-digit 
day-of-year and X'F' sign nibble
PMARL_TIME: X'0HHMMSSF'  Packed decimal, zero digit, 2-digit hour, 
2-digit minute, 2-digit second and X'F' sign nibble
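Taking the prose descriptions above as authoritative (4-digit year plus 3-digit day-of-year and an F sign nibble for the date; a leading zero digit then hh/mm/ss and an F sign nibble for the time), the encodings can be sketched as illustrative helpers (not IBM code):

```python
# Illustrative encoders for the PMARL date/time layouts described above:
#   date -> X'YYYYDDDF' (4-digit year, 3-digit day-of-year, F sign nibble)
#   time -> X'0HHMMSSF' (leading zero digit, hh, mm, ss, F sign nibble)

def pmarl_date(year: int, day_of_year: int) -> bytes:
    return bytes.fromhex(f"{year:04d}{day_of_year:03d}F")

def pmarl_time(hh: int, mm: int, ss: int) -> bytes:
    return bytes.fromhex(f"0{hh:02d}{mm:02d}{ss:02d}F")

print(pmarl_date(2018, 281).hex().upper())   # 2018281F
print(pmarl_time(14, 30, 59).hex().upper())  # 0143059F
```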


Whatever proves to be the correct answer (What Peter F posted seems likely 
to be correct, but I'm checking) will be placed into the commentary of the 
field and will make its way into the data area books in a forthcoming 
release.

Peter Relson
z/OS Core Technology Design




Changing the way how ISPF recovery datasets are allocated

2018-10-08 Thread Elardus Engelbrecht
Good day to all,

I see these restrictions for Edit Recovery Datasets from "z/OS ISPF Planning 
and Customizing" (z/OS v2.2):



These restrictions apply to edit recovery data sets:

They must be allocated as sequential data sets of record format U.
They cannot be striped, or striped and compressed data sets.
They cannot be multivolume data sets.



One of my Storage Admin guys changed the default SMS Data Class (to resolve an 
unrelated problem) so that 'DATA SET NAME TYPE' is EXTENDED, along with other new 
attributes, with the result that the ISPF recovery datasets are now 'striped'.

Which means that I see 'Recovery suspended-error' in an ISPF Edit session, along 
with a long description of what went wrong.

I also see this: IGD17070I DATASET ?.ISP7195.BACKUP ALLOCATED SUCCESSFULLY WITH 
1 STRIPE(S).
 ... then an IGD101I, followed by an IGD105I showing that the dataset was deleted 
immediately.

That change was eventually reversed because of the allocation error, but it may 
become necessary to re-apply the new Data Class changes.

Question:

Where can I change the allocation behaviour of ISPF to force another Data Class 
to be selected? I have already looked at ISPCCONFIG, but there is nothing there 
about selecting the right SMS class(es).

If that is not possible, is it OK instead to create a Data Class rule that 
assigns the .ISP*.BACKUP datasets to another Data Class? Or are there better 
solutions?
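If the Data Class rule route is workable, a minimal sketch of an ACS DATACLAS 
filter might look like the following. Everything here is an assumption to be 
adapted: 'NOEXTDC' is a hypothetical non-extended Data Class name, and the mask 
would need to match the real high-level qualifier of the recovery datasets:

```
PROC DATACLAS
 /* Sketch only: route ISPF edit-recovery backups away from the */
 /* extended-format default class. 'NOEXTDC' and the dataset    */
 /* mask are placeholders; substitute your own.                 */
 FILTLIST ISPFBACK INCLUDE(*.ISP*.BACKUP)
 SELECT
   WHEN (&DSN = &ISPFBACK)
     SET &DATACLAS = 'NOEXTDC'
 END
END
```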

Many thanks in advance.

Groete / Greetings
Elardus Engelbrecht

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


using REXX to spawn a Java program

2018-10-08 Thread Steve Austin
I have some questions regarding the above.

I'm specifying an environment variable of "_BPX_SHAREAS=MUST". Before each spawn 
of the Java program I issue a syscalls('ON'), and afterwards a syscalls('OFF'). 
I'm noticing a significant delay at the syscalls('OFF'); it can be as much as 2 
minutes. If I remove the syscalls('OFF') I don't get the delay.


1)  syscalls('OFF') removes the UNIX environment, but why the delay?

2)  By removing the syscalls('OFF') I'm retaining the UNIX environment, but is 
the Java JVM also retained for reuse? I'm guessing not, but it would be nice if 
it were.

3)  The documentation I've found so far is pretty general. Is there 
documentation somewhere that describes the mechanics in detail?
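For reference, the setup being described might be sketched in REXX roughly as 
below: establish the SYSCALL host environment once, spawn repeatedly, and tear 
it down (if at all) only at the very end rather than around every spawn. The 
Java path and jar name are hypothetical placeholders, and the stem layouts 
follow the SYSCALL 'spawn' service:

```rexx
/* Sketch only: one SYSCALL environment shared across many spawns */
call syscalls 'ON'

env.0  = 1
env.1  = '_BPX_SHAREAS=MUST'     /* run the child in this address space */

parm.0 = 3
parm.1 = 'java'                  /* conventional argv[0]                */
parm.2 = '-jar'
parm.3 = '/u/steve/app.jar'      /* hypothetical jar path               */

fd.0 = 0                         /* child inherits stdin/stdout/stderr  */
fd.1 = 1
fd.2 = 2

do i = 1 to 10                   /* many spawns, one UNIX environment   */
   address syscall 'spawn /usr/lpp/java/bin/java 3 fd. parm. env.'
   if retval = -1 then
      say 'spawn failed, errno' errno
   /* ... waitpid for the child here if needed ... */
end

call syscalls 'OFF'              /* single teardown, not one per spawn  */
```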

Thanks

Steve

-- 
This e-mail message has been scanned and cleared by Google Message Security 
and the UNICOM Global security systems. This message is for the named 
person's use only. If you receive this message in error, please delete it 
and notify the sender. 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN