Re: Spool file

2017-03-23 Thread Joseph Reichman
Thanks, you have really helped.

> On Mar 23, 2017, at 9:10 PM, Lizette Koehler  wrote:
> 
> When you cold start, you need to include a $S command to get JES2 working 
> again.
> 
> Lizette
> 
> 
>> -Original Message-
>> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
>> Behalf Of Joseph Reichman
>> Sent: Thursday, March 23, 2017 6:06 PM
>> To: IBM-MAIN@LISTSERV.UA.EDU
>> Subject: Re: Spool file
>> 
>> That is correct
>> 
>> I did have to do a cold start to get JES to use the spool file from my new
>> packs
>> 
>> However, after the cold start none of the other jobs took off
>> 
>> I am thinking of using 2 different iplparm members to get around the problem
>> 
>> 
>> 
>>> On Mar 23, 2017, at 8:53 PM, Lizette Koehler 
>> wrote:
>>> 
>>> Joe
>>> 
>>> Are you using zPDT on Linux?  Is that why you have not posted the
>>> messages?  Or why you have not opened a case with IBM for assistance?
>>> 
>>> 
>>> 
>>> That could change the discussion
>>> 
>>> Lizette
>>> 
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN



Re: Spool file

2017-03-23 Thread Lizette Koehler
When you cold start, you need to include a $S command to get JES2 working again.
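Put as console commands, the sequence is roughly the following (a hedged sketch from memory, not taken from the thread; option names and operator prompts vary by JES2 release):

```
S JES2,PARM=(COLD,NOREQ)    start JES2, requesting a cold start
                            (JES2 initializes; prior spool content is discarded)
$S                          start JES2 job processing
```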

Lizette


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Joseph Reichman
> Sent: Thursday, March 23, 2017 6:06 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Spool file
> 
> That is correct
> 
> I did have to do a cold start to get JES to use the spool file from my new
> packs
> 
> However, after the cold start none of the other jobs took off
> 
> I am thinking of using 2 different iplparm members to get around the problem
> 
> 
> 
> > On Mar 23, 2017, at 8:53 PM, Lizette Koehler 
> wrote:
> >
> > Joe
> >
> > Are you using zPDT on Linux?  Is that why you have not posted the
> > messages?  Or why you have not opened a case with IBM for assistance?
> >
> >
> >
> > That could change the discussion
> >
> > Lizette
> >



Re: Spool file

2017-03-23 Thread Joseph Reichman
That is correct 

I did have to do a cold start to get JES to use the spool file from my new 
packs 

However, after the cold start none of the other jobs took off

I am thinking of using 2 different iplparm members to get around the problem



> On Mar 23, 2017, at 8:53 PM, Lizette Koehler  wrote:
> 
> Joe 
> 
> Are you using zPDT on Linux?  Is that why you have not posted the messages?  
> Or
> why you have not opened a case with IBM for assistance?
> 
> 
> 
> That could change the discussion
> 
> Lizette
> 
> 
>> -Original Message-
>> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
>> Behalf Of Joseph Reichman
>> Sent: Thursday, March 23, 2017 8:12 AM
>> To: IBM-MAIN@LISTSERV.UA.EDU
>> Subject: Re: Spool file
>> 
>> Thanks
>> 
>>> On Mar 23, 2017, at 11:05 AM, Lizette Koehler 
>> wrote:
>>> 
>>> Generally -
>>> 
>>> You can walk the CKPT anytime with $TCKPTDEF,RECONFIG=YES
>>> 
>>> It is long, but fairly safe.  You can cancel the process at any time,
>>> and JES2 will not impact much until you end it or allow it to proceed
>>> with the reconfiguration.
>>> 
>>> You only need to cold start jes2 if you want to change the name of the
>>> SPOOL Volume.  Otherwise you can init other volumes with the mask
>>> selected and allow volumes to drain, and once drained, remove from the
>> system.
>>> 
>>> One of the suggestions in best practice for JES2 CKPT is to have CKPT1
>>> in CF Structure and CKPT2 on DASD.
>>> 
>>> 
>>> 
>>> 
>>> Lizette
>>> 
 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
 On Behalf Of Joseph Reichman
 Sent: Thursday, March 23, 2017 2:36 AM
 To: IBM-MAIN@LISTSERV.UA.EDU
 Subject: Re: Spool file
 
 Sorry, just got up; that was it.
 
 
 But I ran into some other problems.
 
 Can I move the checkpoint file to another pack as well?
 
 Don't know why IBM set it up that way, putting the spool and checkpoint
 file on the same pack as the parmlibs.
 
 
> On Mar 23, 2017, at 2:41 AM, Anthony Thompson
> 
 wrote:
> 
> Of course, you know that implementing a change to the SPOOLDEF
> VOLUME
 parameter requires a cold start of JES2? And, unless you're not
 interested in keeping any of the old spool data, a spool offload/reload.
> 
> Ant.
> 
> -Original Message-
> From: IBM Mainframe Discussion List
> [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
 Behalf Of Joseph Reichman
> Sent: Thursday, 23 March 2017 8:53 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Spool file
> 
> My parmlib member points to SPOOL as the volume but everything else
> shows
 B2SYS.
> 
>  SDSF SPOOL DISPLAY S0W1   2% ACT  14095 FRE  13683 LINE 1-2 (2)
> 
> COMMAND INPUT ===>SCROLL ===>
> CSR
> NP   NAME   Status   TGPct TGNum TGUse Command  SAff  Ext LoCylLoTrk
> 
>B2SYS1 ACTIVE   5  3000   164  ANY   00  031B
> 
>B2SYS2 ACTIVE   2 11095   248  ANY   01  0006
> 
> 
> 3.4 Display
> DSLIST - Data Sets on volume SPOOL*                      Row 1 of 6
> Command ===>                                    Scroll ===> PAGE
> 
> 
> Command - Enter "/" to select action    Message    Volume
> ---------------------------------------------------------
>   SYS1.HASPACE                                     SPOOL1
>   SYS1.HASPACE                                     SPOOL2
>   SYS1.HASPACE                                     SPOOL3
>   SYS1.VTOCIX.SPOOL1                               SPOOL1
>   SYS1.VTOCIX.SPOOL2                               SPOOL2
>   SYS1.VTOCIX.SPOOL3                               SPOOL3
> * End of Data Set list
> 
> 
> 
> 
> 
> 
> 
>$dspool,long
>$HASP893 VOLUME(B2SYS1)
> VOLUME(B2SYS1)  STATUS=ACTIVE,DSNAME=SYS1.HASPACE,
>  SYSAFF=(ANY),TGNUM=3000,TGINUSE=164,
>  TRKPERTGB=3,PERCENT=5,RESERVED=NO,
>  MAPTARGET=NO
>$HASP893 VOLUME(B2SYS2)
> VOLUME(B2SYS2)  STATUS=ACTIVE,DSNAME=SYS1DisplKCELL=3,VOLUME=B2SYS
> 
>$D SPOOLDEF
> 
> JES parmlib member
> 
> SPOOLDEF BUFSIZE=3856,   /* MAXIMUM BUFFER SIZE */
>   DSNAME=SYS1.HASPACE,
>   FENCE=NO,   /* Don't Force to Min. Vol. */
>   SPOOLNUM=32,/* Max. Num. Spool Vols */
>   TGBPERVL=5, /* Track Groups per volume in BLOB */
>   TGSIZE=33,  /* 30 BUFFERS/TRACK GROUP */
>   TGSPACE=(MAX=26288, /* Fits TGMs into 4K Page */
>WARN=80),  /*

Re: Spool file

2017-03-23 Thread Lizette Koehler
Joe 

Are you using zPDT on Linux?  Is that why you have not posted the messages?  Or
why you have not opened a case with IBM for assistance?



That could change the discussion

Lizette


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Joseph Reichman
> Sent: Thursday, March 23, 2017 8:12 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Spool file
> 
> Thanks
> 
> > On Mar 23, 2017, at 11:05 AM, Lizette Koehler 
> wrote:
> >
> > Generally -
> >
> > You can walk the CKPT anytime with $TCKPTDEF,RECONFIG=YES
> >
> > It is long, but fairly safe.  You can cancel the process at any time,
> > and JES2 will not impact much until you end it or allow it to proceed
> > with the reconfiguration.
> >
> > You only need to cold start jes2 if you want to change the name of the
> > SPOOL Volume.  Otherwise you can init other volumes with the mask
> > selected and allow volumes to drain, and once drained, remove from the
> system.
> >
> > One of the suggestions in best practice for JES2 CKPT is to have CKPT1
> > in CF Structure and CKPT2 on DASD.
> >
> >
> >
> >
> > Lizette
> >
> >> -Original Message-
> >> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> >> On Behalf Of Joseph Reichman
> >> Sent: Thursday, March 23, 2017 2:36 AM
> >> To: IBM-MAIN@LISTSERV.UA.EDU
> >> Subject: Re: Spool file
> >>
> >> Sorry, just got up; that was it.
> >>
> >>
> >> But I ran into some other problems.
> >>
> >> Can I move the checkpoint file to another pack as well?
> >>
> >> Don't know why IBM set it up that way, putting the spool and
> >> checkpoint file on the same pack as the parmlibs.
> >>
> >>
> >>> On Mar 23, 2017, at 2:41 AM, Anthony Thompson
> >>> 
> >> wrote:
> >>>
> >>> Of course, you know that implementing a change to the SPOOLDEF
> >>> VOLUME
> >> parameter requires a cold start of JES2? And, unless you're not
> >> interested in keeping any of the old spool data, a spool offload/reload.
> >>>
> >>> Ant.
> >>>
> >>> -Original Message-
> >>> From: IBM Mainframe Discussion List
> >>> [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> >> Behalf Of Joseph Reichman
> >>> Sent: Thursday, 23 March 2017 8:53 AM
> >>> To: IBM-MAIN@LISTSERV.UA.EDU
> >>> Subject: Re: Spool file
> >>>
> >>> My parmlib member points to SPOOL as the volume but everything else
> >>> shows
> >> B2SYS.
> >>>
> >>>   SDSF SPOOL DISPLAY S0W1   2% ACT  14095 FRE  13683 LINE 1-2 (2)
> >>>
> >>> COMMAND INPUT ===>SCROLL ===>
> >>> CSR
> >>> NP   NAME   Status   TGPct TGNum TGUse Command  SAff  Ext LoCylLoTrk
> >>>
> >>> B2SYS1 ACTIVE   5  3000   164  ANY   00  031B
> >>> 
> >>> B2SYS2 ACTIVE   2 11095   248  ANY   01  0006
> >>> 
> >>>
> >>> 3.4 Display
> >>> DSLIST - Data Sets on volume SPOOL*                  Row 1 of 6
> >>> Command ===>                                Scroll ===> PAGE
> >>>
> >>>
> >>> Command - Enter "/" to select action    Message    Volume
> >>> ---------------------------------------------------------
> >>>   SYS1.HASPACE                                     SPOOL1
> >>>   SYS1.HASPACE                                     SPOOL2
> >>>   SYS1.HASPACE                                     SPOOL3
> >>>   SYS1.VTOCIX.SPOOL1                               SPOOL1
> >>>   SYS1.VTOCIX.SPOOL2                               SPOOL2
> >>>   SYS1.VTOCIX.SPOOL3                               SPOOL3
> >>> * End of Data Set list
> >>> 
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> $dspool,long
> >>> $HASP893 VOLUME(B2SYS1)
> >>> VOLUME(B2SYS1)  STATUS=ACTIVE,DSNAME=SYS1.HASPACE,
> >>>   SYSAFF=(ANY),TGNUM=3000,TGINUSE=164,
> >>>   TRKPERTGB=3,PERCENT=5,RESERVED=NO,
> >>>   MAPTARGET=NO
> >>> $HASP893 VOLUME(B2SYS2)
> >>> VOLUME(B2SYS2)  STATUS=ACTIVE,DSNAME=SYS1DisplKCELL=3,VOLUME=B2SYS
> >>>
> >>> $D SPOOLDEF
> >>>
> >>> JES parmlib member
> >>>
> >>> SPOOLDEF BUFSIZE=3856,   /* MAXIMUM BUFFER SIZE */
> >>>    DSNAME=SYS1.HASPACE,
> >>>    FENCE=NO,   /* Don't Force to Min. Vol. */
> >>>    SPOOLNUM=32,/* Max. Num. Spool Vols */
> >>>    TGBPERVL=5, /* Track Groups per volume in BLOB */
> >>>    TGSIZE=33,  /* 30 BUFFERS/TRACK GROUP */
> >>>    TGSPACE=(MAX=26288, /* Fits TGMs into 4K Page */
> >>>     WARN=80),  /* Warning at 80% in use */
> >>>    TRKCELL=3,  /* 3 Buffers/Track-cell */
> >>>    VOLUME=SPOOL/* SPOOL VOLUME SERIAL */
> >>> <---
> >>> -Original Message-
> >>> From: IBM Mainframe Discussion List
> >>> [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> >> Behalf Of Lizette Koehler
> >>> Sent: Wednesday, March 22, 2017 12:23 AM
> >>> To: 

Request for IBM ICSF support for X9 TR-34

2017-03-23 Thread Frank Swarbrick
Currently ICSF has support for ASC X9 TR-31 2010 (Interoperable Secure Key 
Exchange Key Block Specification for Symmetric Algorithms).  However, ICSF does 
not currently support ASC X9 TR-34 2012 (Interoperable Method for 
Distribution of Symmetric Keys using Asymmetric Techniques: Part 1 - Using 
Factoring-Based Public Key Cryptography Unilateral Key Transport), which is an 
"interoperable method" to distribute TR-31 Key Block Protection Keys (among 
other things, I'm sure).

NCR ATM hardware (specifically their "Encrypting PIN Pads") and software (Aptra 
Edge V7) support both TR-31 for symmetric exchange of "bundles" of keys 
(TR-31 key blocks) and TR-34 for distribution of symmetric keys (the TR-31 Key 
Block Protection Key) using RSA asymmetric encryption with SHA-256 digital 
signatures.  TR-34 is the only method supported by NCR ATMs that complies with 
PCI 3 (Payment Card Industry) requirements that SHA-1 no longer be used.

If you are an NCR ATM customer, or even just someone who wants IBM to support 
the latest cryptographic standards, please vote for my RFE requesting TR-34 
support:  
http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe_ID=102736.

Thanks,

Frank



Re: thoughts on z/OS ftp server enhancement.

2017-03-23 Thread Paul Gilmartin
On Thu, 23 Mar 2017 18:42:04 -0500, Kirk Wolf wrote:
>>
>> One of my favorites is:
>> ssh User@zOS "cd source && pax -w -x pax ." | ( cd target && pax -rv )
>>
>> ... where either "pax" or both could be *some-transfer-program*.  XML?
>> CSV?  I expect Co:Z enhances this, at least by supplying "fromdsn".
>>
> Nice if both ends are z/OS, but have you tried this from a non-z/OS system
>to a z/OS system (using a compatible tar format with pax?)
>
Yes, but it (usually) needs to be:
ssh User@zOS "cd source && pax -w -x pax . |
iconv -f ISO8859-1 -t IBM-1047" | ( cd target && pax -rv )

-- gil



Re: thoughts on z/OS ftp server enhancement.

2017-03-23 Thread Kirk Wolf
On Thu, Mar 23, 2017 at 5:29 PM, Paul Gilmartin <
000433f07816-dmarc-requ...@listserv.ua.edu> wrote:

>
> One of my favorites is:
> ssh User@zOS "cd source && pax -w -x pax ." | ( cd target && pax -rv )
>
> ... where either "pax" or both could be *some-transfer-program*.  XML?
> CSV?  I expect Co:Z enhances this, at least by supplying "fromdsn".
>
> Nice if both ends are z/OS, but have you tried this from a non-z/OS system
to a z/OS system (using a compatible tar format with pax?)



Re: thoughts on z/OS ftp server enhancement.

2017-03-23 Thread Paul Gilmartin
On Thu, 23 Mar 2017 16:54:37 -0500, Kirk Wolf wrote:

>You could use the Co:Z Launcher, which works over a secure SSH connection
>to distribute work between z/OS batch jobs and processing on a remote
>server to this kind of ETL work.
>
>For example:
>
>//COZCB2  JOB (),'COZ'
>//STEP1   EXEC PROC=COZPROC,
>//ARGS='my...@linux1.myco.com'
>//INPUT   DD DISP=SHR,DSN=MY.DSN
>//STDIN   DD *
># this is a shell script that runs on the remote server
>fromdsn //DD:INPUT | *some-transform-program*  >  result-file
>//
>
One of my favorites is:
ssh User@zOS "cd source && pax -w -x pax ." | ( cd target && pax -rv )

... where either "pax" or both could be *some-transfer-program*.  XML?
CSV?  I expect Co:Z enhances this, at least by supplying "fromdsn".

>Of course, it is easy with Unix pipes to run the data through several
>transforms before (or never) hitting a target file.
>
>Kirk Wolf
>Dovetailed Technologies
>http://dovetail.com
>
>PS> The Co:Z Toolkit is available free under our Community License.
>Commercial license and support agreements are also available.

-- gil



Re: Spool file

2017-03-23 Thread Edward Finnell
http://www-01.ibm.com/support/docview.wss?uid=isg1PM24416
 
It was useful to allocate a HASPINDX ddname and look at inactive  SPOOL.
 
 
In a message dated 3/23/2017 6:41:37 A.M. Central Daylight Time,  
rpin...@firsttennessee.com writes:

I don't think HASPINDX is used anymore



Re: track submitted jobs

2017-03-23 Thread Paul Gilmartin
On Thu, 23 Mar 2017 08:03:48 -0500, Elardus Engelbrecht wrote:
>
>Sources of jobs:
>
>- Programs spitting something out in INTRDR (This is, for example, what I do 
>for every day. I use LISTC and CSI to build up DDs as concatenated input 
>before submit)
>- SJ in SDSF or similar products
>
That, and many others below, use ISPF Edit/Browse, which uses TSO SUBMIT after
copying to a temp DS, making tracing harder.

>- FTP
>- Scheduler ( ... and Control-O which can submit something based on activated 
>rules)
>- ISPF panels for other products (DB2 utilities, zSecure, etc.) can build up 
>jobs for you.
>- "Launcher job" - A job which spits something in INTRDR - Note, "launcher 
>job" is not my invention, it was mentioned last year. [1] 
>- Submit it yourself from a PS dataset, OMVS file, etc.
>- Use Unix or zLinux (and perhaps others) pipe command '|' to pump something 
>into JES2
>
That would be /bin/submit which uses the Rexx submit() function.

>- Exits (My SMFU29 exit sort of does that automagically when a MANx fills 
>up. I said sort of because actually it issues 'S ' (not Submit) to 
>start an STC with that MANx dataset as parameter. Yet another job from a 
>library. ;-D )
>
>There are other sources from where jobs can be submitted...
>
- Certainly NJE.
- For a guest z/OS, another guest may spool punch to that guest's virtual
  reader.  IIRC, Ed J. (Mark Z.?) said that still works.  But that reader
  could be varied off.

Do any of these *not* use INTRDR?

>To track: 
>
>You can use RACF and SMF to track who is the OWNER of that job.
>Or better, use RACF to control usage of JOB accounting lines and monitor any 
>changes of libraries. Think about SMF 42 for member auditing.
>Implement better security in Control-M and Control-O using that OWNER in your 
>job schedule.
>
>(You can try to enforce job standards, but you will get some crazies who like 
>to bypass any standards you're trying to enforce.)
>
>Or, just close down all INTRDR for fun. Then you have no z/OS to play with, 
>but have lots of time to battle with angry users...
>
Restrict allocating SYSOUT(,INTRDR) to APF-authorized programs, then front-end
all (how many?) authorized programs/services to write tracking records (SMF?).
And you may find your worst offenders have fairly high privilege.

That *should not* involve a collateral requirement that all INTRDR input be 
fixed-80.

>Note: Not even JES2 can see from where the job is coming, because another 
>address space is placing contents from a source into INTRDR. Only when you 
>close that INTRDR, then JES2 picks up whatever there is and tries to interpret 
>it.
>
That just makes it harder to meet the OP's requirement which I see as
somewhat legitimate.

-- gil



Re: thoughts on z/OS ftp server enhancement.

2017-03-23 Thread Kirk Wolf
You could use the Co:Z Launcher, which works over a secure SSH connection
to distribute work between z/OS batch jobs and processing on a remote
server to this kind of ETL work.

For example:

//COZCB2  JOB (),'COZ'
//STEP1   EXEC PROC=COZPROC,
//ARGS='my...@linux1.myco.com'
//INPUT   DD DISP=SHR,DSN=MY.DSN
//STDIN   DD *
# this is a shell script that runs on the remote server
fromdsn //DD:INPUT | *some-transform-program*  >  result-file
//

Of course, it is easy with Unix pipes to run the data through several
transforms before (or never) hitting a target file.

Kirk Wolf
Dovetailed Technologies
http://dovetail.com

PS> The Co:Z Toolkit is available free under our Community License.
Commercial license and support agreements are also available.



On Wed, Mar 22, 2017 at 3:15 PM, John McKown 
wrote:

> I am wondering if anyone else thinks the following might be a nice
> enhancement to the z/OS FTP server. At present, when you transfer a file
> to/from another system, you can basically only do a BIN (null) or ASCII
> transformation. We have been doing a lot of ftp's to a Windows server, so
> we really need an ASCII transformation. The problem is that our real data,
> in a VSAM data set, has PACKED DECIMAL and 32 bit internal binary numbers
> and not just character data. So, I was thinking that it might be nice to
> have a FTP server command which would set up a "global" data transformation
> program as in intermediary. That is, the client (on Windows) would do
> something like:
>
> quote outxform somepgm
> get vsam.dataset
>
> And what the FTP server would do is invoke "somepgm" with a parameter of
> "vsam.dataset". The "somepgm" would allocate & open the given data set. It
> would then read the data; transform it; then return the transformed
> record(s) to ftp. This would be conceptually similar to what COBOL and SORT
> do when the SORT verb in a program has the USING INPUT PROCEDURE phrase.
> Perhaps the parameters to "somepgm" would be a character string and the
> address of a "subroutine" to call to return a record back to the ftp
> server.
>
> Or maybe something similar to how ftp can do an SQL query, but more
> generic:
> https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.
> 0/com.ibm.zos.v2r1.halu001/db2sqlquerysubmitftps.htm
>
>
> --
> "Irrigation of the land with seawater desalinated by fusion power is
> ancient. It's called 'rain'." -- Michael McClary, in alt.fusion
>
> Maranatha! <><
> John McKown
>
>
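The server-side transform John proposes would mostly be field decoding. As an illustration only (Python, with an invented record layout; the "outxform"/"somepgm" hook itself does not exist in the z/OS FTP server today), unpacking the two field types he names could look like this:

```python
import struct

def unpack_packed_decimal(field: bytes) -> int:
    """Decode an IBM packed-decimal (COMP-3) field: two BCD digits per
    byte, with the sign in the low nibble of the final byte."""
    digits = []
    for byte in field[:-1]:
        digits.append(byte >> 4)
        digits.append(byte & 0x0F)
    digits.append(field[-1] >> 4)        # last byte holds one digit + sign
    value = 0
    for d in digits:
        value = value * 10 + d
    sign = field[-1] & 0x0F              # 0xD = negative; 0xC/0xF = positive
    return -value if sign == 0xD else value

def unpack_fullword(field: bytes) -> int:
    """Decode a 32-bit big-endian signed binary (COMP) field."""
    return struct.unpack(">i", field)[0]

# Invented example record: 3-byte packed amount followed by a fullword count.
record = b"\x12\x34\x5c" + b"\x00\x00\x01\x00"
amount = unpack_packed_decimal(record[:3])   # 12345
count = unpack_fullword(record[3:])          # 256
```

A real transform would drive decoders like these from the record's copybook layout and hand the text form (CSV, XML, ...) back to the server.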



Fwd: track submitted jobs

2017-03-23 Thread Edward Gould
> Begin forwarded message:
> 
> From: Elardus Engelbrecht 
> Subject: Re: track submitted jobs
> Date: March 23, 2017 at 8:03:48 AM CDT
> To: IBM-MAIN@LISTSERV.UA.EDU
> Reply-To: IBM Mainframe Discussion List 
> 
> bILHA ELROY wrote:
> 
>> For accounting purposes we need to know the name of the library/member that 
>> the job was submitted from.
>> Is there any way we can find out. (The job was scheduled not through 
>> CONTROL-M and similar)
> 
> You already got one reply of "NO". There were some similar threads last year. 
> Same old "NO" story.
> 
> And Paul said "The most practical approach is to prohibit submitting jobs 
> except by your scheduler, and let that scheduler do the tracking." - I agree!
> 
> 
> Sources of jobs:
> 
> - Programs spitting something out in INTRDR (This is, for example, what I do 
> for every day. I use LISTC and CSI to build up DDs as concatenated input 
> before submit)
> - SJ in SDSF or similar products
> - FTP
> - Scheduler ( ... and Control-O which can submit something based on activated 
> rules)
> - ISPF panels for other products (DB2 utilities, zSecure, etc.) can build up 
> jobs for you.
> - "Launcher job" - A job which spits something in INTRDR - Note, "launcher 
> job" is not my invention, it was mentioned last year. [1] 
> - Submit it yourself from a PS dataset, OMVS file, etc.
> - Use Unix or zLinux (and perhaps others) pipe command '|' to pump something 
> into JES2
> - Exits (My SMFU29 exit sort of does that automagically when a MANx 
> fills up. I said sort of because actually it issues 'S ' (not 
> Submit) to start an STC with that MANx dataset as parameter. Yet another job 
> from a library. ;-D )
> 
> There are other sources from where jobs can be submitted...
> 
> 
> To track: 
> 
> You can use RACF and SMF to track who is the OWNER of that job.
> Or better, use RACF to control usage of JOB accounting lines and monitor any 
> changes of libraries. Think about SMF 42 for member auditing.
> Implement better security in Control-M and Control-O using that OWNER in your 
> job schedule.
> 
> (You can try to enforce job standards, but you will get some crazies who like 
> to bypass any standards you're trying to enforce.)
> 
> Or, just close down all INTRDR for fun. Then you have no z/OS to play with, 
> but have lots of time to battle with angry users...
> 
> Note: Not even JES2 can see from where the job is coming, because another 
> address space is placing contents from a source into INTRDR. Only when you 
> close that INTRDR, then JES2 picks up whatever there is and tries to 
> interpret it.
> 
> Perhaps there is some tracking software/method available...
> 
> Groete / Greetings
> Elardus Engelbrecht

One place I worked had strict job naming standards and it was enforced by 
various exits mostly SMF and JES.
*IF* the job *ONLY* came from this one production library, then it was given the 
RACF userid & password. If it didn't come from that library, it was flushed. 
If a user submitted a production job, he/she had better have the rights to access 
production datasets; if they didn't, it got a 913 abend.
We were merciless in our facility: production was KING, or emperor if you 
prefer.
There were no arguments, no maybes, no ifs, PERIOD.
The auditors came down on you like a hammer if they caught you, and they looked 
at everything.
A long time ago I was running a SMPE job that applied 5 years of maintenance 
(we were that far behind). I was getting logged all over the place because I 
was updating sys1 datasets. I got a call from the auditor asking what I was 
doing. I told him I would be happy to bring the output to him and explain every 
message (there were thousands). I brought the output up to him in several 
carts. I then proceeded to explain each message and why I needed update. After 
30 minutes or so his eyes glazed over from so much data he said fine.
I never had to explain again why I needed access to sys1 datasets.
The big issue is that you have to lock down *EVERYTHING* and stick to it and no 
BS either.
A few years later I was called into a meeting (unannounced) and sat in while a 
programmer answered for updating a dataset he shouldn't have (it was a 
semi-production DS, meaning it wasn't used in production but in system assurance 
testing). I got the general idea he thought he could bluff his way through, but 
he couldn't, and he was escorted out of the building right after the meeting.

In summary you must be serious and be prepared to back up every decision that 
you may make and have a damn good reason. And you have to have *EVERYTHING* 
locked down.

Ed






Re: Controlling TCPIP performance

2017-03-23 Thread Steve Beaver
FTPs rarely dominate anything.  The only throttle is the speed of the line, the 
capacity of the receiver, and what is happening on the LCU.

Steve  

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of van der Grijn, Bart (B)
Sent: Thursday, March 23, 2017 3:34 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Controlling TCPIP performance

Wouldn't that be determined by the priority of the application rather than by 
the TCPIP task? In this case, the FTP client or server.
Bart

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Tracy Adams
Sent: Thursday, March 23, 2017 3:26 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Controlling TCPIP performance

For obvious reasons we want to run the TCPIP address space at a very high 
dispatching priority.  There are times, though, when we want to throttle back 
certain functions of the TCPIP stack.  I will use FTP as the immediate 
example.  I really don't want a file transfer to dominate the system :-)  TIA 
for your thoughts and ideas.




Re: Controlling TCPIP performance

2017-03-23 Thread van der Grijn, Bart (B)
Wouldn't that be determined by the priority of the application rather than by 
the TCPIP task? In this case, the FTP client or server.
Bart

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Tracy Adams
Sent: Thursday, March 23, 2017 3:26 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Controlling TCPIP performance

For obvious reasons we want to run the TCPIP address space at a very high 
dispatching priority.  There are times, though, when we want to throttle back 
certain functions of the TCPIP stack.  I will use FTP as the immediate 
example.  I really don't want a file transfer to dominate the system :-)  TIA 
for your thoughts and ideas.



Re: Controlling TCPIP performance

2017-03-23 Thread Allan Staller
Class of Service. Check the TCPIP books...

HTH,

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Tracy Adams
Sent: Thursday, March 23, 2017 2:26 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Controlling TCPIP performance

For obvious reasons we want to run the TCPIP address space at a very high 
dispatching priority.  There are times, though, when we want to throttle back 
certain functions of the TCPIP stack.  I will use FTP as the immediate 
example.  I really don't want a file transfer to dominate the system :-)  TIA 
for your thoughts and ideas.









Controlling TCPIP performance

2017-03-23 Thread Tracy Adams
For obvious reasons we want to run the TCPIP address space at a very high 
dispatching priority.  There are times, though, when we want to throttle back 
certain functions of the TCPIP stack.  I will use FTP as the immediate 
example.  I really don't want a file transfer to dominate the system :-)  TIA 
for your thoughts and ideas.



Re: Easiest way to share incore data between batch jobs

2017-03-23 Thread Victor Gil
Thanks Denis, an interesting approach!

Will have to do some reading as this is my "terra incognita"...

-Victor-

-
How about using Unix System Services shared memory and optionally semaphores?
 
I found this, but it uses C.
http://www.infodd.com/images/infodd/downloads/I33.pdf
 
You can also do it from COBOL with the BPX1Mxx calls (like GT for get or AT for 
Attach), it can run unauthorized in keys 2, 8 and 9.
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.2.0/com.ibm.zos.v2r1.bpxb100/mgt.htm
 
Hope that helps.
Denis.
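Denis's pointer is to the USS shared-memory services (BPX1MGT and BPX1MAT are the callable equivalents of shmget and shmat). As a portable sketch of the same publish/attach pattern, using Python's multiprocessing.shared_memory as a stand-in with invented names, not the z/OS API itself:

```python
from multiprocessing import shared_memory

def publish(name: str, payload: bytes) -> shared_memory.SharedMemory:
    """Create a named shared-memory segment (the role BPX1MGT/shmget
    plays on z/OS) and copy the payload into it."""
    shm = shared_memory.SharedMemory(name=name, create=True, size=len(payload))
    shm.buf[:len(payload)] = payload
    return shm  # caller keeps it open; unlink() when finished

def attach_and_read(name: str, size: int) -> bytes:
    """Attach to an existing segment by name (the role BPX1MAT/shmat
    plays) and copy out the first `size` bytes."""
    shm = shared_memory.SharedMemory(name=name)
    try:
        return bytes(shm.buf[:size])
    finally:
        shm.close()
```

One job (or process) calls publish() once; any number of readers attach by the agreed name. On z/OS the equivalent "agreed name" is often the name/token pair mentioned elsewhere in this thread.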



Re: Easiest way to share incore data between batch jobs

2017-03-23 Thread Victor Gil
ITschak,

Really like this idea. Could you please elaborate a bit: how would a 
problem-state program load data into a common area? It needs to be 
store-protected to prevent overlays, so that would require the loader to be 
APF-authorized, no?

-Victor- 

--
If your interest is sharing, CICS can store a complete VSAM file in a
dataspace using standard I/O routines (GET, PUT, POINT, etc.). It is a few
CICS parameters away and involves no code change. I am not sure that EXCI
will save you CPU or elapsed time. BTW, there are a few products that store
data in storage and/or dataspaces (Matrix from Expanse comes to mind).
Another alternative is to store the dataset in a common area (maybe above
the bar) and store the start and end addresses in a single name/token pair.
You just need a loader program and a search one. This fits a non-updated
file.

HTH
ITschak



IBMLink/ServiceLink SIS function not working correctly?

2017-03-23 Thread Porowski, Kenneth
Every search I do seems to show hits only in Q, PDDB and Support, and even 
those don't return results.





Re: Easiest way to share incore data between batch jobs

2017-03-23 Thread Denis
How about using Unix System Services shared memory and optionally semaphores?
 
I found this, but it uses C.
http://www.infodd.com/images/infodd/downloads/I33.pdf
 
You can also do it from COBOL with the BPX1Mxx calls (like GT for get or AT for 
attach); it can run unauthorized in keys 2, 8 and 9.
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.2.0/com.ibm.zos.v2r1.bpxb100/mgt.htm
 
Hope that helps.
Denis.
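For what it's worth, the z/OS BPX services mentioned above implement the standard POSIX shmget/shmat model, so the pattern can be sketched on any platform. A minimal, platform-neutral illustration of the create/attach flow (segment name and record bytes are invented for the example), using Python's multiprocessing.shared_memory rather than the BPX calls themselves:

```python
from multiprocessing import shared_memory

def publish(name: str, data: bytes):
    """Create a named shared segment (the shmget/BPX1MGT step) and
    copy the records into it.  The returned handle must stay open
    for as long as the segment should exist."""
    shm = shared_memory.SharedMemory(create=True, size=len(data), name=name)
    shm.buf[:len(data)] = data
    return shm

def attach_and_read(name: str, size: int) -> bytes:
    """Attach to an existing segment by name (the shmat/BPX1MAT step)
    and copy the bytes out."""
    shm = shared_memory.SharedMemory(name=name)
    try:
        return bytes(shm.buf[:size])
    finally:
        shm.close()
```

A second process attaching with the same name sees the same bytes; the semaphores Denis mentions would be the serialization mechanism for updates.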
 
-Original Message-
From: IronSphere by SecuriTeam Software 
To: IBM-MAIN 
Sent: Thu, Mar 23, 2017 6:25 pm
Subject: Re: Easiest way to share incore data between batch jobs

If your interest is sharing, CICS can store a complete VSAM file in a
dataspace using standard I/O routines (GET, PUT, POINT, etc.). It is a few
CICS parameters away and involves no code change. I am not sure that EXCI
will save you CPU or elapsed time. BTW, there are a few products that store data
in storage and/or dataspaces (Matrix from Expanse comes to mind).
Another alternative is to store the dataset in a common area (may be above
the bar) and store the start and end addresses in a single name-token pair;
you just need a loader program and a search one. This fits a non-updated
(read-only) file.

HTH
ITschak

On Thu, Mar 23, 2017 at 5:52 PM, Victor Gil 
wrote:

> A co-worker suggested to save CPU by having one job to cache a VSAM file
> [which is frequently looked up by multiple jobs] and introduce a new "look
> up" API to "connect" to that job and locate a particular record with a
> given key.
>
> I am a bit outdated in current systems services, so my first suggestion
> was to use EXCI into a CICS region or call a DB2 stored procedure which
> would act as a "server", but the question is - is there an easier way to
> accomplish this in pure batch?  I am familiar with the cross-memory access
> but this would require heavy assembler coding, APF authorization, etc. all
> of which we are trying to avoid.
>
> TIA!
> -Victor-
>
>



-- 
ITschak Mugzach
*|** IronSphere Platform* *|** An IT GRC for Legacy systems* *| Automated
Security Readiness Reviews (SRR) **|*



Re: Easiest way to share incore data between batch jobs

2017-03-23 Thread PINION, RICHARD W.
As someone else has already mentioned Hyperbatch.

Here's a link to the manual.

https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.idag200/hbatch.htm

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of IronSphere by SecuriTeam Software
Sent: Thursday, March 23, 2017 1:25 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Easiest way to share incore data between batch jobs

If your interest is sharing, CICS can store a complete VSAM file in a dataspace 
using standard I/O routines (GET, PUT, POINT, etc.). It is a few CICS parameters 
away and involves no code change. I am not sure that EXCI will save you CPU or 
elapsed time. BTW, there are a few products that store data in storage and/or 
dataspaces (Matrix from Expanse comes to mind).
Another alternative is to store the dataset in a common area (may be above the 
bar) and store the start and end addresses in a single name-token pair;
you just need a loader program and a search one. This fits a non-updated 
(read-only) file.

HTH
ITschak

On Thu, Mar 23, 2017 at 5:52 PM, Victor Gil 
wrote:

> A co-worker suggested to save CPU by having one job to cache a VSAM 
> file [which is frequently looked up by multiple jobs] and introduce a 
> new "look up" API to "connect" to that job and locate a particular 
> record with a given key.
>
> I am a bit outdated in current systems services, so my first 
> suggestion was to use EXCI into a CICS region or call a DB2 stored 
> procedure which would act as a "server", but the question is - is 
> there an easier way to accomplish this in pure batch?  I am familiar 
> with the cross-memory access but this would require heavy assembler 
> coding, APF authorization, etc. all of which we are trying to avoid.
>
> TIA!
> -Victor-
>
>



--
ITschak Mugzach
*|** IronSphere Platform* *|** An IT GRC for Legacy systems* *| Automated 
Security Readiness Reviews (SRR) **|*



Re: Easiest way to share incore data between batch jobs

2017-03-23 Thread IronSphere by SecuriTeam Software
If your interest is sharing, CICS can store a complete VSAM file in a
dataspace using standard I/O routines (GET, PUT, POINT, etc.). It is a few
CICS parameters away and involves no code change. I am not sure that EXCI
will save you CPU or elapsed time. BTW, there are a few products that store data
in storage and/or dataspaces (Matrix from Expanse comes to mind).
Another alternative is to store the dataset in a common area (may be above
the bar) and store the start and end addresses in a single name-token pair;
you just need a loader program and a search one. This fits a non-updated
(read-only) file.

HTH
ITschak

On Thu, Mar 23, 2017 at 5:52 PM, Victor Gil 
wrote:

> A co-worker suggested to save CPU by having one job to cache a VSAM file
> [which is frequently looked up by multiple jobs] and introduce a new "look
> up" API to "connect" to that job and locate a particular record with a
> given key.
>
> I am a bit outdated in current systems services, so my first suggestion
> was to use EXCI into a CICS region or call a DB2 stored procedure which
> would act as a "server", but the question is - is there an easier way to
> accomplish this in pure batch?  I am familiar with the cross-memory access
> but this would require heavy assembler coding, APF authorization, etc. all
> of which we are trying to avoid.
>
> TIA!
> -Victor-
>
>



-- 
ITschak Mugzach
*|** IronSphere Platform* *|** An IT GRC for Legacy systems* *| Automated
Security Readiness Reviews (SRR) **|*



Re: Easiest way to share incore data between batch jobs

2017-03-23 Thread Oren, Yifat
Hi,

Are you thinking about something like Hyperbatch? 
https://goo.gl/xJkWB7

Thanks,
Yifat


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Victor Gil
Sent: 23 March 2017 17:52
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Easiest way to share incore data between batch jobs

A co-worker suggested to save CPU by having one job to cache a VSAM file [which 
is frequently looked up by multiple jobs] and introduce a new "look up" API to 
"connect" to that job and locate a particular record with a given key.

I am a bit outdated in current systems services, so my first suggestion was to 
use EXCI into a CICS region or call a DB2 stored procedure which would act as a 
"server", but the question is - is there an easier way to accomplish this in 
pure batch?  I am familiar with the cross-memory access but this would require 
heavy assembler coding, APF authorization, etc. all of which we are trying to 
avoid.

TIA!
-Victor-   



Re: Easiest way to share incore data between batch jobs

2017-03-23 Thread Farley, Peter x23353
Or the newer caching capabilities of SMB (System Managed Buffering), which is 
reputed to be more efficient than BLSR.  I have not personally tested that 
assertion in a real-world situation, so I can't say if it is true or not.

Peter
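Either option is a JCL-only change, with no program changes. A hedged sketch of both, dataset and ddnames invented; verify the exact subparameters against the JCL Reference for your release:

```
//* Option 1 - BLSR: route the application's DD through the BLSR
//*            subsystem, which finds the real dataset via CUSTBUF.
//CUSTFILE DD SUBSYS=(BLSR,'DDNAME=CUSTBUF','BUFND=100','BUFNI=50')
//CUSTBUF  DD DISP=SHR,DSN=PROD.CUSTOMER.KSDS
//*
//* Option 2 - SMB: request System-Managed Buffering via AMP
//*            (dataset must be SMS-managed, extended format).
//CUSTFILE DD DISP=SHR,DSN=PROD.CUSTOMER.KSDS,AMP=('ACCBIAS=DO')
```

ACCBIAS=DO biases the buffering for direct-access workloads; SYSTEM lets VSAM choose based on the ACB MACRF.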

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Feller, Paul
Sent: Thursday, March 23, 2017 12:28 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Easiest way to share incore data between batch jobs

Would it be just easier to look into something like BLSR (Batch Local Shared 
Resources) or some other type of buffering of the VSAM file to improve access 
and save on physical I/O?

Thanks..

Paul Feller
AGT Mainframe Technical Support

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Victor Gil
Sent: Thursday, March 23, 2017 10:52
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Easiest way to share incore data between batch jobs

A co-worker suggested to save CPU by having one job to cache a VSAM file [which 
is frequently looked up by multiple jobs] and introduce a new "look up" API to 
"connect" to that job and locate a particular record with a given key.

I am a bit outdated in current systems services, so my first suggestion was to 
use EXCI into a CICS region or call a DB2 stored procedure which would act as a 
"server", but the question is - is there an easier way to accomplish this in 
pure batch?  I am familiar with the cross-memory access but this would require 
heavy assembler coding, APF authorization, etc. all of which we are trying to 
avoid.
--




Re: Easiest way to share incore data between batch jobs

2017-03-23 Thread Lizette Koehler
Have you looked at RLS for VSAM?  Data is held in CF structure.

Lizette


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Victor Gil
> Sent: Thursday, March 23, 2017 8:52 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Easiest way to share incore data between batch jobs
> 
> A co-worker suggested to save CPU by having one job to cache a VSAM file
> [which is frequently looked up by multiple jobs] and introduce a new "look up"
> API to "connect" to that job and locate a particular record with a given key.
> 
> I am a bit outdated in current systems services, so my first suggestion was to
> use EXCI into a CICS region or call a DB2 stored procedure which would act as
> a "server", but the question is - is there an easier way to accomplish this in
> pure batch?  I am familiar with the cross-memory access but this would require
> heavy assembler coding, APF authorization, etc. all of which we are trying to
> avoid.
> 
> TIA!
> -Victor-



Re: Easiest way to share incore data between batch jobs

2017-03-23 Thread Feller, Paul
Would it be just easier to look into something like BLSR (Batch Local Shared 
Resources) or some other type of buffering of the VSAM file to improve access 
and save on physical I/O?

Thanks..

Paul Feller
AGT Mainframe Technical Support

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Victor Gil
Sent: Thursday, March 23, 2017 10:52
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Easiest way to share incore data between batch jobs

A co-worker suggested to save CPU by having one job to cache a VSAM file [which 
is frequently looked up by multiple jobs] and introduce a new "look up" API to 
"connect" to that job and locate a particular record with a given key.

I am a bit outdated in current systems services, so my first suggestion was to 
use EXCI into a CICS region or call a DB2 stored procedure which would act as a 
"server", but the question is - is there an easier way to accomplish this in 
pure batch?  I am familiar with the cross-memory access but this would require 
heavy assembler coding, APF authorization, etc. all of which we are trying to 
avoid.

TIA!
-Victor-   



Re: thoughts on z/OS ftp server enhancement.

2017-03-23 Thread Barry Merrill
The SAS ftp access method is used by many MXG sites that process 
their SMF/IMS/zVM/etc. data by executing MXG on ASCII platforms, 
directly creating the output SAS data library and using disk 
only for those output SAS datasets; that seems to meet the 
specifications outlined below.

Except that VSAM is not supported.

Barry


Merrilly yours,

 Herbert W. Barry Merrill, PhD
 President-Programmer
 Merrill Consultants
 MXG Software
 10717 Cromwell Drive  technical questions: supp...@mxg.com
 Dallas, TX 75229
 http://www.mxg.comadmin questions: ad...@mxg.com
 tel: 214 351 1966
 fax: 214 350 3694





-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Charles Mills
Sent: Wednesday, March 22, 2017 3:47 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: thoughts on z/OS ftp server enhancement.

I used to be in the mainframe to ASCII platform file transfer software 
business. There's a name for what you propose -- some 3-letter acronym -- but I 
have forgotten. There is a T in it for Transform. We spent a lot of time 
looking at this, because a recurring customer complaint was "we transferred our 
file and now it's unusable" and it ended up with the vendor arguing with the customer 
about why you could not translate packed and binary files to ASCII and have 
them be usable.

The problem we wrestled with is there are just so many variables in how record 
layouts work. Non-trivial commercial files inevitably are "well, if there is a 
P in position 51 then it's an accounts payable record and it looks like this, 
but if there is an R then it looks like this, except if there is a C in 
position 92, in which case it's a credit record ..."

You're right, an interesting FTP enhancement might be a generalization of SITE 
FILETYPE=JES|SQL, SITE FILETYPE=somepgm where somepgm would somehow transform a 
file and pass it to FTP. Then the customer could write a COBOL program, say, to 
convert all the packed fields to character on the fly during transmission.

Charles


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of John McKown
Sent: Wednesday, March 22, 2017 1:15 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: thoughts on z/OS ftp server enhancement.

I am wondering if anyone else thinks the following might be a nice enhancement 
to the z/OS FTP server. At present, when you transfer a file to/from another 
system, you can basically only do a BIN (null) or ASCII transformation. We have 
been doing a lot of ftp's to a Windows server, so we really need an ASCII 
transformation. The problem is that our real data, in a VSAM data set, has 
PACKED DECIMAL and 32 bit internal binary numbers and not just character data. 
So, I was thinking that it might be nice to have an FTP server command which 
would set up a "global" data transformation program as an intermediary. That 
is, the client (on Windows) would do something like:

quote outxform somepgm
get vsam.dataset

And what the FTP server would do is invoke "somepgm" with a parameter of 
"vsam.dataset". The "somepgm" would allocate & open the given data set. It 
would then read the data; transform it; then return the transformed
record(s) to ftp. This would be conceptually similar to what COBOL and SORT do 
when the SORT verb in a program has the USING INPUT PROCEDURE phrase.
Perhaps the parameters to "somepgm" would be a character string and the address 
of a "subroutine" to call to return a record back to the ftp server.

Or maybe something similar to how ftp can do an SQL query, but more generic:
https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.halu001/db2sqlquerysubmitftps.htm
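The hard part such a "somepgm" would do is the field conversion itself. As a small illustration of just one such transform (record layouts and field offsets would of course be site-specific), decoding an IBM packed-decimal (COMP-3) field into a signed integer looks like this, sketched in Python rather than COBOL:

```python
def unpack_comp3(raw: bytes) -> int:
    """Decode IBM packed decimal (COMP-3): two digits per byte,
    with the sign in the low nibble of the last byte
    (0xD = negative; 0xC/0xF = positive)."""
    digits = []
    for b in raw[:-1]:
        digits.append((b >> 4) & 0xF)   # high nibble
        digits.append(b & 0xF)          # low nibble
    last = raw[-1]
    digits.append((last >> 4) & 0xF)    # final digit
    sign = last & 0xF                   # sign nibble
    value = int("".join(str(d) for d in digits))
    return -value if sign == 0xD else value
```

So the bytes X'12345C' decode to 12345, and X'123D' to -123; the binary (COMP) fields John mentions would need a similar big-endian integer decode.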



Re: GREAT presentation on the history of the mainframe

2017-03-23 Thread Anne & Lynn Wheeler
Between 360 and 370 there was ACS/360 ... that was killed w/o ever being
announced (executives thought it might advance the state of the art too fast
and the company might lose control of the market). Note the discussion that
some of the ACS/360 features show up more than 20yrs later with es/9000
https://people.cs.clemson.edu/~mark/acs_end.html

Old discussion about justification for all 370s moving to dynamic
address translation. Problem was that MVT storage management was so bad,
regions had to be four times larger than normally used (standard 370/165
1mbyte configurations only can practically run four regions). Moving to
virtual memory, MVT could increase number regions by four times with
little or no paging.
http://www.garlic.com/~lynn/2011d.html#73

During Future System program (which was going to be completely different
from 370 and going to replace 370) internal politics was killing off 370
efforts; the lack of 370 products during FS period is credited with
giving clone processor makers a market foothold. With the death of FS,
there was mad rush to get products back into the 370 pipeline. 3033 and
3081 were kicked off in parallel. some detailed references on FS, 3033,
and 3081
http://www.jfsowa.com/computer/memo125.htm

303x external channels (channel director) was 158 engine with integrated
channel microcode (and no 370 microcode). 3033 started out as 168-3 logic
remapped to 20% faster chips. 3032 was 168-3 with new covers and using
channel director. 3031 was 158 engine with just 370 microcode (and no
integrated channel microcode) and 2nd 158 engine with integrated channel
microcode (and no 370 microcode) ... a 3031MP was four 158 engines ...
two dedicated for processors and two dedicated for channel directors.

The head of POK also managed to convince corporate to kill off the vm370
product, shut down the vm370 development group and transfer all the
people to POK (otherwise they weren't going to be able to ship MVS/XA
on time some 7-8 yrs later). They weren't going to tell the vm370 group until
the very last minute, to minimize the number of people that might
escape. Somehow the information leaked early and a number of the people
managed to find other employment in the Boston area (the joke was that the head of
POK was one of the major contributors to the new DEC VAX/VMS
project). Endicott did manage to save the vm370 product mission but had
to reconstitute a development group from scratch. During this period there were
customer comments about VM370 code quality on VMSHARE
http://vm.marist.edu/~vmshare

Some of the original VM370 people who went to POK did work on
the virtual machine tool facility in support of MVS/XA development; it
was never intended to be made available to customers. Later, when
customers weren't migrating to MVS/XA as planned, there was a decision to
release the tool as the migration aid. As part of the tool, there was
SIE (interpretive execution) microcode. SIE was never intended to have
production performance, in part because there was insufficient room for
the microcode, so it had to be swapped in and out. Old email discussion
that for 3090, SIE was designed for real production operation
(compared to 3081).
http://www.garlic.com/~lynn/2006j.html#email810630
http://www.garlic.com/~lynn/2003j.html#email831118

Old email about Amdahl's hypervisor: I had done a lot of work on
Endicott's ECPS microcode assist and gave presentations on the
implementation at monthly user group meetings.
http://www.garlic.com/~lynn/94.html#21
Amdahl would talk to me about their hypervisor implementation
http://www.garlic.com/~lynn/2006p.html#email801121

a number of yrs later IBM responds to hypervisor with PR/SM for 3090;
3090 announce 12Feb1985
https://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html

3090 service processor (3092) was originally to be a 4331 running
customized version of vm370 release 6 with all service screens done in
CMS IOS3270. The service processor was then enhanced to a pair of
redundant 4361s ... which also explains the requirement for a pair of 3370
FBA devices (even for MVS customers, although MVS has never supported FBA
devices). CKD devices are still required, even though no real CKD
devices have been made for decades ... simulated on industry-standard
fixed-block.

Other trivia: the original SQL/relational DBMS (System/R) was done in bldg 28
on a vm370 370/145. The official follow-on corporate DBMS was EAGLE. While the
company was preoccupied with EAGLE, we were able to do a technology transfer
to Endicott for release as SQL/DS. When EAGLE finally imploded, a
request was made about how long it would take to port System/R to MVS; it was
eventually announced as DB2, originally for decision support
only. Lots of history at the System/R reunion site
http://www.mcjones.org/System_R/
initial release 1983
https://en.wikipedia.org/wiki/IBM_DB2

other trivia ... before ms/dos
https://en.wikipedia.org/wiki/MS-DOS
there was seattle computer
https://en.wikipedia.org/wiki/Seattle_Computer_Products
before seattle computer 

Re: Easiest way to share incore data between batch jobs

2017-03-23 Thread Grinsell, Don
Can't that be done with a dataspace?

https://www.ibm.com/support/knowledgecenter/SSLTBW_2.1.0/com.ibm.zos.v2r1.ieaa500/ieaa50022.htm


--
 
Donald Grinsell, Systems Programmer
Enterprise Technology Services Bureau
SITSD/Montana Department of Administration
406.444.2983 (D)

"Man is still the most extraordinary computer of all."
~ John F Kennedy

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Victor Gil
> Sent: Thursday, March 23, 2017 9:52 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Easiest way to share incore data between batch jobs
> 
> A co-worker suggested to save CPU by having one job to cache a VSAM file
> [which is frequently looked up by multiple jobs] and introduce a new "look
> up" API to "connect" to that job and locate a particular record with a given
> key.
> 
> I am a bit outdated in current systems services, so my first suggestion was
> to use EXCI into a CICS region or call a DB2 stored procedure which would act
> as a "server", but the question is - is there an easier way to accomplish
> this in pure batch?  I am familiar with the cross-memory access but this
> would require heavy assembler coding, APF authorization, etc. all of which we
> are trying to avoid.
> 
> TIA!
> -Victor-
> 



Easiest way to share incore data between batch jobs

2017-03-23 Thread Victor Gil
A co-worker suggested saving CPU by having one job cache a VSAM file [which 
is frequently looked up by multiple jobs] and introducing a new "look-up" API to 
"connect" to that job and locate a particular record with a given key.

I am a bit outdated on current system services, so my first suggestion was to 
use EXCI into a CICS region or call a DB2 stored procedure which would act as a 
"server". But the question is: is there an easier way to accomplish this in 
pure batch?  I am familiar with cross-memory access, but that would require 
heavy assembler coding, APF authorization, etc., all of which we are trying to 
avoid.

TIA!
-Victor-
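For what it's worth, the "one caching job plus a look-up API" architecture can be prototyped with no cross-memory code at all by letting the server job listen on a local TCP socket. A minimal sketch of the idea (keys, records, and the one-request-per-connection protocol are all invented for illustration):

```python
import socket
import threading

TABLE = {}  # key -> record; the "server" job would load the VSAM file here once

def serve(host: str = "127.0.0.1") -> int:
    """Start the look-up server on an ephemeral port; return the port."""
    srv = socket.create_server((host, 0))
    port = srv.getsockname()[1]

    def loop():
        # One short-lived connection per look-up request.
        while True:
            conn, _ = srv.accept()
            with conn:
                key = conn.recv(256).decode().strip()
                conn.sendall(TABLE.get(key, "NOT FOUND").encode())

    threading.Thread(target=loop, daemon=True).start()
    return port

def lookup(port: int, key: str) -> str:
    """What each client job's look-up API would do: connect, ask, read."""
    with socket.create_connection(("127.0.0.1", port)) as conn:
        conn.sendall(key.encode())
        return conn.recv(4096).decode()
```

It trades a small amount of TCP overhead per look-up for zero assembler and zero APF authorization; whether that beats simply buffering the VSAM file well is something to measure.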



Re: track submitted jobs

2017-03-23 Thread Barry Merrill
Worst case scenario: the TYPE26 JES purge record contains three fields 
that you might be able to use for accounting purposes, using RACF to control
which USERID can access which library (though I don't think MEMBER-name control
is available).

  SUBMUSER $EBCDIC8. /*SMF26SUI*SUBMITTING*USERID */
  NOTIFYND $EBCDIC8. /*SMF26NN-JOB END EXECUTE*NOTIFY*NODE*/
  NOTIFYUS $EBCDIC8. /*SMF26NU-JOB END EXECUTE*NOTIFY*USERID*/
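Those are fixed-position EBCDIC fields in the SMF type 26 record, so extracting them from raw SMF data is mechanical once you have the displacements. A hedged sketch (the offsets below are placeholders, not the real SMF26 displacements; take those from the SMF manual or the MXG source):

```python
import codecs

# Placeholder offsets -- substitute the real SMF26 field displacements
# from the SMF manual or the MXG source before using.
FIELDS = {
    "SMF26SUI": (0, 8),   # submitting userid
    "SMF26NN":  (8, 8),   # notify node
    "SMF26NU":  (16, 8),  # notify userid
}

def extract(record: bytes, field: str) -> str:
    """Decode one fixed-position EBCDIC field and strip trailing blanks."""
    offset, length = FIELDS[field]
    return codecs.decode(record[offset:offset + length], "cp037").rstrip()
```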

Barry 

Merrilly yours,

 Herbert W. Barry Merrill, PhD
 President-Programmer
 Merrill Consultants
 MXG Software
 10717 Cromwell Drive  technical questions: supp...@mxg.com
 Dallas, TX 75229
 http://www.mxg.comadmin questions: ad...@mxg.com
 tel: 214 351 1966
 fax: 214 350 3694



-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of bILHA ELROY
Sent: Thursday, March 23, 2017 6:10 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: track submitted jobs

For accounting purposes we need to know the name of the library/member that the 
job was submitted from.

Is there any way we can find out? (The job was not scheduled through CONTROL-M 
or similar.)



Re: Spool file

2017-03-23 Thread Joseph Reichman
Thanks 

> On Mar 23, 2017, at 11:05 AM, Lizette Koehler  wrote:
> 
> Generally -
> 
> You can walk the CKPT anytime with $TCKPTDEF,RECONFIG=YES
> 
> It is long, but fairly safe.  You can cancel the process at any time, and JES2
> will not impact much until you end it or allow it to proceed with the
> reconfiguration.
> 
> You only need to cold start jes2 if you want to change the name of the SPOOL
> Volume.  Otherwise you can init other volumes with the mask selected and allow
> volumes to drain, and once drained, remove from the system.
> 
> One of the suggestions in best practice for JES2 CKPT is to have CKPT1 in CF
> Structure and CKPT2 on DASD.
> 
> 
> 
> 
> Lizette
> 
>> -Original Message-
>> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
>> Behalf Of Joseph Reichman
>> Sent: Thursday, March 23, 2017 2:36 AM
>> To: IBM-MAIN@LISTSERV.UA.EDU
>> Subject: Re: Spool file
>> 
>> Sorry, just got up; that was it.
>> 
>> 
>> But I ran into some other problems.
>> 
>> Can I move the checkpoint file to another pack as well?
>> 
>> Don't know why IBM set it up that way,
>> putting the spool and checkpoint file on the same pack as the parmlibs.
>> 
>> 
>>> On Mar 23, 2017, at 2:41 AM, Anthony Thompson 
>> wrote:
>>> 
>>> Of course, you know that implementing a change to the SPOOLDEF VOLUME
>> parameter requires a cold start of JES2? And, unless you're not interested in
>> keeping any of the old spool data, a spool offload/reload.
>>> 
>>> Ant.
>>> 
>>> -Original Message-
>>> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
>> Behalf Of Joseph Reichman
>>> Sent: Thursday, 23 March 2017 8:53 AM
>>> To: IBM-MAIN@LISTSERV.UA.EDU
>>> Subject: Re: Spool file
>>> 
>>> My parmlib member points to SPOOL as the volume but everything else shows
>> B2SYS.
>>> 
>>>   SDSF SPOOL DISPLAY S0W1   2% ACT  14095 FRE  13683 LINE 1-2 (2)
>>> 
>>> COMMAND INPUT ===>SCROLL ===>
>>> CSR
>>> NP   NAME   Status   TGPct TGNum TGUse Command  SAff  Ext LoCylLoTrk
>>> 
>>> B2SYS1 ACTIVE   5  3000   164  ANY   00  031B
>>> 
>>> B2SYS2 ACTIVE   2 11095   248  ANY   01  0006
>>> 
>>> 
>>> 3.4 Display
>>> DSLIST - Data Sets on volume SPOOL* Row 1 of
>>> 6
>>> Command ===>  Scroll ===>
>>> PAGE
>>> 
>>> 
>>> Command - Enter "/" to select action  Message
>>> Volume
>>> 
>>> ---
>>>SYS1.HASPACE
>>> SPOOL1
>>>SYS1.HASPACE
>>> SPOOL2
>>>SYS1.HASPACE
>>> SPOOL3
>>>SYS1.VTOCIX.SPOOL1
>>> SPOOL1
>>>SYS1.VTOCIX.SPOOL2
>>> SPOOL2
>>>SYS1.VTOCIX.SPOOL3
>>> SPOOL3
>>> * End of Data Set list
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> $dspool,long
>>> $HASP893 VOLUME(B2SYS1)
>>> VOLUME(B2SYS1)  STATUS=ACTIVE,DSNAME=SYS1.HASPACE,
>>>   SYSAFF=(ANY),TGNUM=3000,TGINUSE=164,
>>>   TRKPERTGB=3,PERCENT=5,RESERVED=NO,
>>>   MAPTARGET=NO
>>> $HASP893 VOLUME(B2SYS2)
>>> VOLUME(B2SYS2)  STATUS=ACTIVE,DSNAME=SYS1DisplKCELL=3,VOLUME=B2SYS
>>> 
>>> $DSPOOLDEF
>>> 
>>> JES parmlib member
>>> 
>>> SPOOLDEF BUFSIZE=3856,   /* MAXIMUM BUFFER SIZEc*/
>>>DSNAME=SYS1.HASPACE,
>>>FENCE=NO,   /* Don't Force to Min.Vol. oc*/
>>>SPOOLNUM=32,/* Max. Num. Spool Vols--- c*
>>>TGBPERVL=5, /* Track Groups per volume in BLOB  ownc*
>>>TGSIZE=33,  /* 30 BUFFERS/TRACK GROUPwnc*/
>>>TGSPACE=(MAX=26288, /* Fits TGMs into 4K Page  =(,  c*/
>>> WARN=80),  /*   =(,% onc*/
>>>TRKCELL=3,  /* 3 Buffers/Track-cell   c*/
>>>VOLUME=SPOOL/* SPOOL VOLUME SERIALc*/
>>> <---
>>> -Original Message-
>>> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
>> Behalf Of Lizette Koehler
>>> Sent: Wednesday, March 22, 2017 12:23 AM
>>> To: IBM-MAIN@LISTSERV.UA.EDU
>>> Subject: Re: Spool file
>>> 
>>> Only one HASPACE per JES2 Spool volume
>>> 
>>> Lizette
>>> 
 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
 On Behalf Of Lizette Koehler
 Sent: Tuesday, March 21, 2017 8:55 PM
 To: IBM-MAIN@LISTSERV.UA.EDU 
 Subject: Re: Spool file
 
 Issue a $DSPOOL,LONG  and see what it shows
 
 Issue $DSPOOLDEF and see what it shows
 
 Next go to ISPF 3.4
 
 Search
 
 **   in the dataset level
 SPOOL*  in the VOLSER and see what it shows
 
 I am not clear about your statement:   The spool 

Re: Spool file

2017-03-23 Thread Lizette Koehler
I seem to recall that IBM had a way to zap the volume mask for spool so you did
not have to cold start.  So it might be worth a call to IBM to see if that is
still available for your version of JES2.

Lizette


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Lizette Koehler
> Sent: Thursday, March 23, 2017 8:05 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Spool file
> 
> Generally -
> 
> You can walk the CKPT anytime with $TCKPTDEF,RECONFIG=YES
> 
> It is long, but fairly safe.  You can cancel the process at any time, and JES2
> will not impact much until you end it or allow it to proceed with the
> reconfiguration.
> 
> You only need to cold start jes2 if you want to change the name of the SPOOL
> Volume.  Otherwise you can init other volumes with the mask selected and allow
> volumes to drain, and once drained, remove from the system.
> 
> One of the suggestions in best practice for JES2 CKPT is to have CKPT1 in CF
> Structure and CKPT2 on DASD.
> 
> 
> 
> 
> Lizette
> 
> > -Original Message-
> > From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> > On Behalf Of Joseph Reichman
> > Sent: Thursday, March 23, 2017 2:36 AM
> > To: IBM-MAIN@LISTSERV.UA.EDU
> > Subject: Re: Spool file
> >
> > Sorry, just got up. That was it.
> >
> >
> > But I ran into some other problems.
> >
> > Can I move the checkpoint file to another pack as well?
> >
> > Don't know why IBM set it up that way,
> > putting the spool and checkpoint file on the same pack as the parmlibs.
> >
> >
> > > On Mar 23, 2017, at 2:41 AM, Anthony Thompson
> > > 
> > wrote:
> > >
> > > Of course, you know that implementing a change to the SPOOLDEF
> > > VOLUME
> > parameter requires a cold start of JES2? And, unless you're not
> > interested in keeping any of the old spool data, a spool offload/reload.
> > >
> > > Ant.
> > >
> > > -Original Message-
> > > From: IBM Mainframe Discussion List
> > > [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> > Behalf Of Joseph Reichman
> > > Sent: Thursday, 23 March 2017 8:53 AM
> > > To: IBM-MAIN@LISTSERV.UA.EDU
> > > Subject: Re: Spool file
> > >
> > > My parmlib member points to SPOOL as the volume but everything else
> > > shows
> > B2SYS.
> > >
> > >SDSF SPOOL DISPLAY S0W1   2% ACT  14095 FRE  13683 LINE 1-2 (2)
> > >
> > > COMMAND INPUT ===>SCROLL ===>
> > > CSR
> > > NP   NAME   Status   TGPct TGNum TGUse Command  SAff  Ext LoCylLoTrk
> > >
> > >  B2SYS1 ACTIVE   5  3000   164  ANY   00  031B
> > > 
> > >  B2SYS2 ACTIVE   2 11095   248  ANY   01  0006
> > > 
> > >
> > > 3.4 Display
> > > DSLIST - Data Sets on volume SPOOL* Row 1
> of
> > > 6
> > > Command ===>  Scroll ===>
> > > PAGE
> > >
> > >
> > > Command - Enter "/" to select action  Message
> > > Volume
> > > 
> > > 
> > > ---
> > > SYS1.HASPACE
> > > SPOOL1
> > > SYS1.HASPACE
> > > SPOOL2
> > > SYS1.HASPACE
> > > SPOOL3
> > > SYS1.VTOCIX.SPOOL1
> > > SPOOL1
> > > SYS1.VTOCIX.SPOOL2
> > > SPOOL2
> > > SYS1.VTOCIX.SPOOL3
> > > SPOOL3
> > > * End of Data Set list
> > > 
> > >
> > >
> > >
> > >
> > >
> > >
> > >  $dspool,long
> > >  $HASP893 VOLUME(B2SYS1)
> > > VOLUME(B2SYS1)  STATUS=ACTIVE,DSNAME=SYS1.HASPACE,
> > >SYSAFF=(ANY),TGNUM=3000,TGINUSE=164,
> > >TRKPERTGB=3,PERCENT=5,RESERVED=NO,
> > >MAPTARGET=NO
> > >  $HASP893 VOLUME(B2SYS2)
> > > VOLUME(B2SYS2)  STATUS=ACTIVE,DSNAME=SYS1DisplKCELL=3,VOLUME=B2SYS
> > >
> > >  $DSPOOLDEF
> > >
> > > JES parmlib member
> > >
> > > SPOOLDEF BUFSIZE=3856,   /* MAXIMUM BUFFER SIZEc*/
> > > DSNAME=SYS1.HASPACE,
> > > FENCE=NO,   /* Don't Force to Min.Vol. oc*/
> > > SPOOLNUM=32,/* Max. Num. Spool Vols--- c*
> > > TGBPERVL=5, /* Track Groups per volume in BLOB  ownc*
> > > TGSIZE=33,  /* 30 BUFFERS/TRACK GROUPwnc*/
> > > TGSPACE=(MAX=26288, /* Fits TGMs into 4K Page  =(,  c*/
> > >  WARN=80),  /*   =(,% onc*/
> > > TRKCELL=3,  /* 3 Buffers/Track-cell   c*/
> > > VOLUME=SPOOL/* SPOOL VOLUME SERIALc*/
> > > <---
> > > -Original Message-
> > > From: IBM Mainframe Discussion List
> > > [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> > Behalf Of Lizette Koehler
> > > Sent: Wednesday, March 22, 2017 12:23 AM
> > > To: IBM-MAIN@LISTSERV.UA.EDU
> > > Subject: Re: Spool file
> > >
> > > Only one HASPACE per JES2 Spool volume
> > >
> > > 

Re: Spool file

2017-03-23 Thread Lizette Koehler
Generally -

You can walk the CKPT anytime with $TCKPTDEF,RECONFIG=YES

It is long, but fairly safe.  You can cancel the process at any time, and JES2
will not impact much until you end it or allow it to proceed with the
reconfiguration.

You only need to cold start jes2 if you want to change the name of the SPOOL
Volume.  Otherwise you can init other volumes with the mask selected and allow
volumes to drain, and once drained, remove from the system.

One of the suggestions in best practice for JES2 CKPT is to have CKPT1 in CF
Structure and CKPT2 on DASD.




Lizette
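For reference, the commands involved look roughly like this (a sketch only; NEWVOL and OLDVOL are placeholder volsers, and exact syntax and options vary by JES2 release, so check the JES2 Commands manual first):

```
$D CKPTDEF                 display the current checkpoint definitions
$T CKPTDEF,RECONFIG=YES    start the checkpoint reconfiguration dialog
                           (JES2 prompts via WTORs; the dialog can be
                           cancelled before it commits anything)

$S SPL(NEWVOL),FORMAT      format and start a newly initialized spool volume
$P SPL(OLDVOL)             drain an old volume; remove it once it is empty
```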

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Joseph Reichman
> Sent: Thursday, March 23, 2017 2:36 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Spool file
> 
> Sorry, just got up. That was it.
> 
> 
> But I ran into some other problems.
> 
> Can I move the checkpoint file to another pack as well?
> 
> Don't know why IBM set it up that way,
> putting the spool and checkpoint file on the same pack as the parmlibs.
> 
> 
> > On Mar 23, 2017, at 2:41 AM, Anthony Thompson 
> wrote:
> >
> > Of course, you know that implementing a change to the SPOOLDEF VOLUME
> parameter requires a cold start of JES2? And, unless you're not interested in
> keeping any of the old spool data, a spool offload/reload.
> >
> > Ant.
> >
> > -Original Message-
> > From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Joseph Reichman
> > Sent: Thursday, 23 March 2017 8:53 AM
> > To: IBM-MAIN@LISTSERV.UA.EDU
> > Subject: Re: Spool file
> >
> > My parmlib member points to SPOOL as the volume but everything else shows
> B2SYS.
> >
> >SDSF SPOOL DISPLAY S0W1   2% ACT  14095 FRE  13683 LINE 1-2 (2)
> >
> > COMMAND INPUT ===>SCROLL ===>
> > CSR
> > NP   NAME   Status   TGPct TGNum TGUse Command  SAff  Ext LoCylLoTrk
> >
> >  B2SYS1 ACTIVE   5  3000   164  ANY   00  031B
> > 
> >  B2SYS2 ACTIVE   2 11095   248  ANY   01  0006
> > 
> >
> > 3.4 Display
> > DSLIST - Data Sets on volume SPOOL* Row 1 of
> > 6
> > Command ===>  Scroll ===>
> > PAGE
> >
> >
> > Command - Enter "/" to select action  Message
> > Volume
> > 
> > ---
> > SYS1.HASPACE
> > SPOOL1
> > SYS1.HASPACE
> > SPOOL2
> > SYS1.HASPACE
> > SPOOL3
> > SYS1.VTOCIX.SPOOL1
> > SPOOL1
> > SYS1.VTOCIX.SPOOL2
> > SPOOL2
> > SYS1.VTOCIX.SPOOL3
> > SPOOL3
> > * End of Data Set list
> > 
> >
> >
> >
> >
> >
> >
> >  $dspool,long
> >  $HASP893 VOLUME(B2SYS1)
> > VOLUME(B2SYS1)  STATUS=ACTIVE,DSNAME=SYS1.HASPACE,
> >SYSAFF=(ANY),TGNUM=3000,TGINUSE=164,
> >TRKPERTGB=3,PERCENT=5,RESERVED=NO,
> >MAPTARGET=NO
> >  $HASP893 VOLUME(B2SYS2)
> > VOLUME(B2SYS2)  STATUS=ACTIVE,DSNAME=SYS1DisplKCELL=3,VOLUME=B2SYS
> >
> >  $DSPOOLDEF
> >
> > JES parmlib member
> >
> > SPOOLDEF BUFSIZE=3856,   /* MAXIMUM BUFFER SIZEc*/
> > DSNAME=SYS1.HASPACE,
> > FENCE=NO,   /* Don't Force to Min.Vol. oc*/
> > SPOOLNUM=32,/* Max. Num. Spool Vols--- c*
> > TGBPERVL=5, /* Track Groups per volume in BLOB  ownc*
> > TGSIZE=33,  /* 30 BUFFERS/TRACK GROUPwnc*/
> > TGSPACE=(MAX=26288, /* Fits TGMs into 4K Page  =(,  c*/
> >  WARN=80),  /*   =(,% onc*/
> > TRKCELL=3,  /* 3 Buffers/Track-cell   c*/
> > VOLUME=SPOOL/* SPOOL VOLUME SERIALc*/
> > <---
> > -Original Message-
> > From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of Lizette Koehler
> > Sent: Wednesday, March 22, 2017 12:23 AM
> > To: IBM-MAIN@LISTSERV.UA.EDU
> > Subject: Re: Spool file
> >
> > Only one HASPACE per JES2 Spool volume
> >
> > Lizette
> >
> >> -Original Message-
> >> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
> >> On Behalf Of Lizette Koehler
> >> Sent: Tuesday, March 21, 2017 8:55 PM
> >> To: IBM-MAIN@LISTSERV.UA.EDU 
> >> Subject: Re: Spool file
> >>
> >> Issue a $DSPOOL,LONG  and see what it shows
> >>
> >> Issue $DSPOOLDEF and see what it shows
> >>
> >> Next go to ISPF 3.4
> >>
> >> Search
> >>
> >> **   in the dataset level
> >> SPOOL*  in the VOLSER and see what it shows
> >>
> >> I am not clear about your statement:   The spool volumes are still on the
> > same
> >> pack.  Please provide a display of what you are seeing.
> >>
> >>
> >> Lizette
> >>
> >>
> >>> -Original Message-
> 

Re: SAS user abend code documentation?

2017-03-23 Thread Don Poitras
In article <7157494272622550.wa.nitzibmgmx@listserv.ua.edu> you wrote:
> >We don't document internal abend codes as the vast majority will
> >(hopefully) never be seen. They can change from release to release
> >and even at a maintenance release. They're used when the program
> >can't continue to run for some reason. Usually, they are accompanied
> >by a message to the SAS log, but not always. The expected result is
> >that the customer will open a problem ticket. Sometimes, just the
> >abend code and a description of the job will be enough to shoot the
> >bug, but sometimes a dump is required.
> >
> >For u1335, you should have seen 'Free buffer overwritten.' in the
> >log. It just means that our internal heap has been corrupted. We
> >look for an eyecatcher on the linked-list of free blocks and if
> >it's not there, abend.
> Thanks Don,
> yes, we had that 'free buffer overwritten' in the log. Which is why I really 
> don't understand the relation to the RACF question (and the SAS admins also 
> don't understand it). And yes, the default group for all the users has a GID 
> (and an OMVS segment).
> Should we send the huge sysudump to SAS that got written? Apparently all the 
> jobs (both in prod and in AD) are doing similar things, and the exact same 
> job runs 'after sunset'.
> Barbara

I'm not in tech support, so I can't say why they asked about OMVS. If
they ask for a dump, they're probably going to ask for a SYSMDUMP 
rather than SYSUDUMP as we have tools that can inspect internal 
control blocks and such with IPCS. 'after sunset' is a new one on
me. With the days getting shorter, your batch window is closing fast.
:) (Sorry, couldn't resist.)

-- 
Don Poitras - SAS Development  -  SAS Institute Inc. - SAS Campus Drive
sas...@sas.com   (919) 531-5637Cary, NC 27513

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SAS user abend code documentation?

2017-03-23 Thread Barbara Nitz
>We don't document internal abend codes as the vast majority will
>(hopefully) never be seen. They can change from release to release
>and even at a maintenance release. They're used when the program
>can't continue to run for some reason. Usually, they are accompanied
>by a message to the SAS log, but not always. The expected result is
>that the customer will open a problem ticket. Sometimes, just the
>abend code and a description of the job will be enough to shoot the
>bug, but sometimes a dump is required.
>
>For u1335, you should have seen 'Free buffer overwritten.' in the
>log. It just means that our internal heap has been corrupted. We
>look for an eyecatcher on the linked-list of free blocks and if
>it's not there, abend.

Thanks Don,

yes, we had that 'free buffer overwritten' in the log. Which is why I really 
don't understand the relation to the RACF question (and the SAS admins also 
don't understand it). And yes, the default group for all the users has a GID 
(and an OMVS segment).

Should we send the huge sysudump to SAS that got written? Apparently all the 
jobs (both in prod and in AD) are doing similar things, and the exact same job 
runs 'after sunset'.

Barbara



Re: SAS user abend code documentation?

2017-03-23 Thread Don Poitras
We don't document internal abend codes as the vast majority will
(hopefully) never be seen. They can change from release to release
and even at a maintenance release. They're used when the program
can't continue to run for some reason. Usually, they are accompanied
by a message to the SAS log, but not always. The expected result is
that the customer will open a problem ticket. Sometimes, just the
abend code and a description of the job will be enough to shoot the
bug, but sometimes a dump is required.

For u1335, you should have seen 'Free buffer overwritten.' in the
log. It just means that our internal heap has been corrupted. We
look for an eyecatcher on the linked-list of free blocks and if
it's not there, abend.

 
In article 

 you wrote:
> Yeah, I sort of figured you had already checked it out, but, it never hurts 
> to ask.
> We don't use SAS here so I can't help much more.

> Good luck on your hunt.


> Charles (Chuck) Hardee
> Senior Systems Engineer/Database Administration
> EAS Information Technology

> Thermo Fisher Scientific
> 300 Industry Drive | Pittsburgh, PA 15275
> Phone +1 (724) 517-2633 | Mobile +1 (412) 877-2809 | FAX: +1 (412) 490-9230
> chuck.har...@thermofisher.com  | www.thermofisher.com

> WORLDWIDE CONFIDENTIALITY NOTE: Dissemination, distribution or copying of 
> this e-mail or the information herein by anyone other than the intended 
> recipient, or an employee or agent of a system responsible for delivering the 
> message to the intended recipient, is prohibited. If you are not the intended 
> recipient, please inform the sender and delete all copies.


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On 
> Behalf Of Barbara Nitz
> Sent: Thursday, March 23, 2017 8:23 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: SAS user abend code documentation?

> >Have you seen this http://support.sas.com/kb/54/206.html

> Chuck,
> yes, we did. And we are at a higher maintenance level already. One of the 
> problems is that u1335 seems to occur in all kinds of problems, which is why 
> I want to see what it actually *means*.

> Barbara


-- 
Don Poitras - SAS Development  -  SAS Institute Inc. - SAS Campus Drive
sas...@sas.com   (919) 531-5637Cary, NC 27513



Re: track submitted jobs

2017-03-23 Thread Elardus Engelbrecht
Elardus Engelbrecht wrote:

>And Pual said "The most practical approach is to prohibit submitting jobs 
>except by your scheduler, and let that scheduler do the tracking." - I agree!

Argh! To Paul Gilmartin, I humbly apologize for misspelling your valued name. 
It was an honest typo.

Sorry, Paul.

Groete / Greetings
Elardus Engelbrecht



Re: SAS user abend code documentation?

2017-03-23 Thread Elardus Engelbrecht
Barbara Nitz wrote:

>yes, we did. And we are at a higher maintenance level already. One of the 
>problems is that u1335 seems to occur in all kinds of problems, which is why I 
>want to see what it actually *means*.

I don't have SAS, but I have one question: are you getting the abend before 
and/or after that maintenance level?

Or, should I ask - at what level are you getting the user abend?

I am not logged in SAS-L anymore, perhaps you can ask there?

Groete / Greetings
Elardus Engelbrecht



Re: track submitted jobs

2017-03-23 Thread Elardus Engelbrecht
bILHA ELROY wrote:

>For accounting purposes we need to know the name of the library/member that 
>the job was submitted from.
>Is there any way we can find out. (The job was scheduled not through CONTROL-M 
>and similar)

You already got one reply of "NO". There were some similar threads last year. 
Same old "NO" story.

And Pual said "The most practical approach is to prohibit submitting jobs 
except by your scheduler, and let that scheduler do the tracking." - I agree!


Sources of jobs:

- Programs spitting something out in INTRDR (This is, for example, what I do 
for every day. I use LISTC and CSI to build up DDs as concatenated input before 
submit)
- SJ in SDSF or similar products
- FTP
- Scheduler ( ... and Control-O which can submit something based on activated 
rules)
- ISPF panels for other products (DB2 utilities, zSecure, etc.) can build up 
jobs for you.
- "Launcher job" - A job which spits something in INTRDR - Note, "launcher job" 
is not my invention, it was mentioned last year. [1] 
- Submit it yourself from a PS dataset, OMVS file, etc.
- Use Unix or zLinux (and perhaps others) pipe command '|' to pump something 
into JES2
- Exits (My SMFU29 exit does sort of that automagically when a MANx fills 
up. I say sort of because it actually issues 'S ' (not Submit) to 
start a STC with that MANx dataset as parameter. Yet another job from a 
library. ;-D )

There are other sources from where jobs can be submitted...
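A minimal "launcher job" of the kind listed above can be sketched in JCL (hypothetical dataset and member names; the internal-reader SYSOUT class is installation-dependent):

```
//LAUNCHER JOB (ACCT),'SUBMIT VIA INTRDR'
//* Copy a member containing complete JCL into the JES2 internal reader.
//SUBMIT   EXEC PGM=IEBGENER
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  DUMMY
//SYSUT1   DD  DSN=MY.JCL.LIB(CHILDJOB),DISP=SHR
//SYSUT2   DD  SYSOUT=(*,INTRDR)
```

Once the internal reader is closed, JES2 sees only the copied stream, not the library or member it came from, which is why this source cannot be tracked.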


To track: 

You can use RACF and SMF to track who is the OWNER of that job.
Or better, use RACF to control usage of JOB accounting lines and monitor any 
changes of libraries. Think about SMF 42 for member auditing.
Implement better security in Control-M and Control-O using that OWNER in your 
job schedule.

(You can try to enforce job standards, but you will get some crazies who like 
to bypass any standards you're trying to enforce.)

Or, just close down all INTRDR for fun. Then you have no z/OS to play with, but 
have lots of time to battle with angry users...

Note: Not even JES2 can see from where the job is coming, because another 
address space is placing contents from a source into INTRDR. Only when you 
close that INTRDR, then JES2 picks up whatever there is and tries to interpret 
it.

Perhaps there is some tracking software/method available...

Groete / Greetings
Elardus Engelbrecht

[1] - I once encountered a loop (definition of loop: 'look at definition of 
loop') from such a job. In one step there was a submitter step using 
SYSOUT=(?,INTRDR). That job also contained a submitter step. It really helped 
that we had set up JES2 so that duplicate jobs are made to wait instead of 
running immediately. Great fun ...



Re: SAS user abend code documentation?

2017-03-23 Thread Joao Bentes
Barbara,

Do those userids have a default group with an OMVS segment as well?

"Do the difficult things while they are easy and do the great things while 
they are small. A journey of a thousand miles must begin with a single 
step."
Laozi



From:   Barbara Nitz 
To: IBM-MAIN@LISTSERV.UA.EDU
Date:   2017-03-23 11:44
Subject:SAS user abend code documentation?
Sent by:IBM Mainframe Discussion List 



We are currently experiencing abend u1335 in SAS 9.4_m3 in different jobs. 
My first step would be to determine what that user abend actually tells me 
- but I cannot find the book containing the abend code meanings. Does 
anyone know what book contains that information? 

(We already have a ticket open with SAS, and we were told to make sure 
that the users have an OMVS segment because they don't. Well, they DO have 
an OMVS segment, and it is old news that bpx.default.user doesn't work 
anymore under 2.1)

Thanks, Barbara





Salvo disposto de outra forma acima: / Unless stated otherwise above:
Companhia IBM Portuguesa, S.A. 

Public limited company (Sociedade Anónima) with share capital of € 15,000,000 
Registered at the Lisbon Commercial Registry under single tax and 
registration number 500068801 
Edifício "Office Oriente" 
Rua do Mar da China, Nº 3 
Parque das Nações, 1990-138 LISBOA



Re: track submitted jobs

2017-03-23 Thread Paul Gilmartin
On 2017-03-23, at 05:24, Vernooij, Kees (ITOPT1) - KLM wrote:

> There was an extensive discussion about this subject a few months ago. 
> Short answer: No.
>  
Slightly longer answer: There are many ways to submit a job other than
from a "library/member".  The most practical approach is to prohibit
submitting jobs except by your scheduler, and let that scheduler do the
tracking.


>> -Original Message-
>> From: bILHA ELROY
>> Sent: 23 March, 2017 12:10
>> 
>> For accounting purposes we need to know the name of the library/member
>> that the job was submitted from.
>> 
>> Is there any way we can find out. (The job was scheduled not through
>> CONTROL-M and similar)

-- gil



Re: GREAT presentation on the history of the mainframe

2017-03-23 Thread Charles Mills
Intel had an operating system (ISIS) and a language. I worked for a hardware 
startup and we used them. It was an embedded, bare-bones OS that could be 
burned into ROM, PROM or EPROM.

I remember the language better, having struggled with it many a late night. It 
was called PL/M, programming language for microcomputers. Ha! You can find 
anything on Wikipedia: https://en.wikipedia.org/wiki/PL/M and 
https://en.wikipedia.org/wiki/ISIS_(operating_system) . (Did not realize it had 
been written by the late, great Gary K.) It had some quirks. ! (exclamation 
point) was the address-of operator, like & in C. Try spotting an extra or 
missing ! on a dot-matrix printer with a tired ribbon. 8-bit integers that were 
fundamentally unsigned but could be treated as signed. So you could say FOO=-1 
but then (FOO < 0) would be false because fundamentally -1 was just a synonym 
for 255.

Charles


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Tom Marchant
Sent: Thursday, March 23, 2017 4:58 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: GREAT presentation on the history of the mainframe

On Thu, 23 Mar 2017 02:27:32 +, scott Ford wrote:

>One of my first sysprog jobs was on a 370-155 un-dated ..no dynamic 
>address translation ran Intel's DOS look-a-like.

ITYM IBM Disk Operating System, which predated Intel by years.
It is nothing like MSDOS. Did Intel ever have a DOS?



Re: SAS user abend code documentation?

2017-03-23 Thread Hardee, Chuck
Yeah, I sort of figured you had already checked it out, but, it never hurts to 
ask.
We don't use SAS here so I can't help much more.

Good luck on your hunt.


Charles (Chuck) Hardee
Senior Systems Engineer/Database Administration
EAS Information Technology

Thermo Fisher Scientific
300 Industry Drive | Pittsburgh, PA 15275
Phone +1 (724) 517-2633 | Mobile +1 (412) 877-2809 | FAX: +1 (412) 490-9230
chuck.har...@thermofisher.com  | www.thermofisher.com



-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Barbara Nitz
Sent: Thursday, March 23, 2017 8:23 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: SAS user abend code documentation?

>Have you seen this http://support.sas.com/kb/54/206.html

Chuck,
yes, we did. And we are at a higher maintenance level already. One of the 
problems is that u1335 seems to occur in all kinds of problems, which is why I 
want to see what it actually *means*.

Barbara



Re: GREAT presentation on the history of the mainframe

2017-03-23 Thread John McKown
On Wed, Mar 22, 2017 at 9:27 PM, scott Ford  wrote:

> Charles,
>
> One of my first sysprog jobs was on a 370-155 un-dated ..no dynamic address
> translation ran Intel's DOS look-a-like.
>

​Was that DOS/MVT (Software Pursuits)?

https://books.google.com/books?id=P1x-hwc3_LcC=PA7=PA7=dos/mvt=bl=iKrEzIMXX5=W-VyQJs-Dd30FfQBssSFpx8c_NU=en=X=0ahUKEwjBle6Ez-zSAhUL5WMKHe3DAqgQ6AEILzAE#v=onepage=dos%2Fmvt=false

WARNING! That is a blast from the past and may negatively impact your
productivity today as you fade back into historic nostalgia. Ads for
ADM-3A series terminals. TOPS-10 & TOPS-20 software. 
​



>
> It was a wierd beast.
>
> Scott
>
>

-- 
"Irrigation of the land with seawater desalinated by fusion power is
ancient. It's called 'rain'." -- Michael McClary, in alt.fusion

Maranatha! <><
John McKown



Re: SAS user abend code documentation?

2017-03-23 Thread Barbara Nitz
>Have you seen this http://support.sas.com/kb/54/206.html

Chuck,
yes, we did. And we are at a higher maintenance level already. One of the 
problems is that u1335 seems to occur in all kinds of problems, which is why I 
want to see what it actually *means*.

Barbara



Re: GREAT presentation on the history of the mainframe

2017-03-23 Thread Tom Marchant
On Thu, 23 Mar 2017 02:27:32 +, scott Ford wrote:

>One of my first sysprog jobs was on a 370-155 un-dated ..no dynamic address
>translation ran Intel's DOS look-a-like.

ITYM IBM Disk Operating System, which predated Intel by years.
It is nothing like MSDOS. Did Intel ever have a DOS?

-- 
Tom Marchant



Re: SAS user abend code documentation?

2017-03-23 Thread Hardee, Chuck
Barbara,

Have you seen this http://support.sas.com/kb/54/206.html

Chuck

Charles (Chuck) Hardee
Senior Systems Engineer/Database Administration
EAS Information Technology

Thermo Fisher Scientific
300 Industry Drive | Pittsburgh, PA 15275
Phone +1 (724) 517-2633 | Mobile +1 (412) 877-2809 | FAX: +1 (412) 490-9230
chuck.har...@thermofisher.com  | www.thermofisher.com


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Barbara Nitz
Sent: Thursday, March 23, 2017 7:44 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: SAS user abend code documentation?

We are currently experiencing abend u1335 in SAS 9.4_m3 in different jobs. My 
first step would be to determine what that user abend actually tells me - but I 
cannot find the book containing the abend code meanings. Does anyone know what 
book contains that information? 

(We already have a ticket open with SAS, and we were told to make sure that the 
users have an OMVS segment because they don't. Well, they DO have an OMVS 
segment, and it is old news that bpx.default.user doesn't work anymore under 
2.1)

Thanks, Barbara



SAS user abend code documentation?

2017-03-23 Thread Barbara Nitz
We are currently experiencing abend u1335 in SAS 9.4_m3 in different jobs. My 
first step would be to determine what that user abend actually tells me - but I 
cannot find the book containing the abend code meanings. Does anyone know what 
book contains that information? 

(We already have a ticket open with SAS, and we were told to make sure that the 
users have an OMVS segment because they don't. Well, they DO have an OMVS 
segment, and it is old news that bpx.default.user doesn't work anymore under 
2.1)

Thanks, Barbara



Re: Spool file

2017-03-23 Thread PINION, RICHARD W.
I don't think HASPINDX is used anymore.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Edward Finnell
Sent: Thursday, March 23, 2017 3:05 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Spool file

Maybe the SDSF parms point to wrong SYS1.HASPINDX or HASPINDX is incorrectly 
cataloged?
 
 
In a message dated 3/23/2017 1:42:06 A.M. Central Daylight Time,  
anthony.thomp...@nt.gov.au writes:

Of course, you know that implementing a change to the SPOOLDEF VOLUME 
parameter requires a cold start of JES2? And, unless you're not interested in 
keeping any of the old spool data, a spool offload/reload.



FIRST TENNESSEE

Confidentiality notice: 
This e-mail message, including any attachments, may contain legally privileged 
and/or confidential information. If you are not the intended recipient(s), or 
the employee or agent responsible for delivery of this message to the intended 
recipient(s), you are hereby notified that any dissemination, distribution, or 
copying of this e-mail message is strictly prohibited. If you have received 
this message in error, please immediately notify the sender and delete this 
e-mail message from your computer.



Re: track submitted jobs

2017-03-23 Thread Vernooij, Kees (ITOPT1) - KLM
There was an extensive discussion about this subject a few months ago. 
Short answer: No.

Kees.

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On
> Behalf Of bILHA ELROY
> Sent: 23 March, 2017 12:10
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: track submitted jobs
> 
> For accounting purposes we need to know the name of the library/member
> that the job was submitted from.
> 
> Is there any way we can find out? (The job was not scheduled through
> CONTROL-M or similar.)
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
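
For after-the-fact tracking the short answer above stands: SMF does not record the submitting library/member. A common workaround going forward (a sketch only; the comment-card convention and helper below are illustrative, not an IBM or SMF facility) is to have whatever tooling submits the job stamp the source library and member into the JCL itself, so the origin at least appears in the job's JESJCL output:

```python
import re

def stamp_source(jcl: str, library: str, member: str) -> str:
    """Insert the submitting library(member) as a JCL comment card
    immediately after the JOB statement, so it shows up in the job's
    JESJCL output and can be searched later. Naive sketch: assumes
    the JOB statement fits on a single card."""
    tag = f"//* SUBMITTED FROM: {library}({member})"
    out = []
    stamped = False
    for line in jcl.splitlines():
        out.append(line)
        if not stamped and re.match(r"^//\S+\s+JOB\b", line):
            out.append(tag)  # comment card right after the JOB card
            stamped = True
    return "\n".join(out)

jcl = "//MYJOB   JOB (ACCT),'JOE'\n//STEP1   EXEC PGM=IEFBR14"
print(stamp_source(jcl, "MY.JCL.LIB", "MYJOB"))
```

A submit wrapper (REXX edit macro, scheduler exit, etc.) would apply this transformation before handing the JCL to the internal reader.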






track submitted jobs

2017-03-23 Thread bILHA ELROY
For accounting purposes we need to know the name of the library/member that the 
job was submitted from.

Is there any way we can find out? (The job was not scheduled through CONTROL-M
or similar.)



Re: Spool file

2017-03-23 Thread Joseph Reichman
Sorry, just got up; that was it.

But I ran into some other problems.

Can I move the checkpoint file to another pack as well?

Don't know why IBM set it up that way, putting the spool and checkpoint
files on the same pack as the parmlibs.


> On Mar 23, 2017, at 2:41 AM, Anthony Thompson  
> wrote:
> 
> Of course, you know that implementing a change to the SPOOLDEF VOLUME 
> parameter requires a cold start of JES2? And, unless you're not interested in 
> keeping any of the old spool data, a spool offload/reload.
> 
> Ant.
> 
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On 
> Behalf Of Joseph Reichman
> Sent: Thursday, 23 March 2017 8:53 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Spool file
> 
> My parmlib member points to SPOOL as the volume but everything else shows 
> B2SYS.
> 
>SDSF SPOOL DISPLAY S0W1   2% ACT  14095 FRE  13683 LINE 1-2 (2)
> 
> COMMAND INPUT ===>SCROLL ===>
> CSR  
> NP   NAME   Status   TGPct TGNum TGUse Command  SAff  Ext LoCylLoTrk
> 
>  B2SYS1 ACTIVE   5  3000   164  ANY   00  031B
> 
>  B2SYS2 ACTIVE   2 11095   248  ANY   01  0006
> 
> 
> 3.4 Display
> DSLIST - Data Sets on volume SPOOL*                       Row 1 of 6
> Command ===>                                         Scroll ===> PAGE
>
> Command - Enter "/" to select action     Message            Volume
> ---------------------------------------------------------------------
>          SYS1.HASPACE                                       SPOOL1
>          SYS1.HASPACE                                       SPOOL2
>          SYS1.HASPACE                                       SPOOL3
>          SYS1.VTOCIX.SPOOL1                                 SPOOL1
>          SYS1.VTOCIX.SPOOL2                                 SPOOL2
>          SYS1.VTOCIX.SPOOL3                                 SPOOL3
> ***************** End of Data Set list *****************
> 
> 
> 
> 
> 
> 
> 
>  $dspool,long  
>  $HASP893 VOLUME(B2SYS1)   
> VOLUME(B2SYS1)  STATUS=ACTIVE,DSNAME=SYS1.HASPACE,  
>SYSAFF=(ANY),TGNUM=3000,TGINUSE=164,
>TRKPERTGB=3,PERCENT=5,RESERVED=NO,  
>MAPTARGET=NO
>  $HASP893 VOLUME(B2SYS2)   
> VOLUME(B2SYS2)  STATUS=ACTIVE,DSNAME=SYS1DisplKCELL=3,VOLUME=B2SYS
> 
>  $DSPOLLDEF   
> 
> JES parmlib member
> 
> SPOOLDEF BUFSIZE=3856,      /* MAXIMUM BUFFER SIZE            */
>          DSNAME=SYS1.HASPACE,
>          FENCE=NO,          /* Don't Force to Min. Volumes    */
>          SPOOLNUM=32,       /* Max. Num. Spool Vols           */
>          TGBPERVL=5,        /* Track Groups per volume in BLOB*/
>          TGSIZE=33,         /* Buffers/Track Group            */
>          TGSPACE=(MAX=26288,/* Fits TGMs into 4K Page         */
>           WARN=80),         /* Warning threshold (%)          */
>          TRKCELL=3,         /* 3 Buffers/Track-cell           */
>          VOLUME=SPOOL       /* SPOOL VOLUME SERIAL            */
> <---
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On 
> Behalf Of Lizette Koehler
> Sent: Wednesday, March 22, 2017 12:23 AM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: Spool file
> 
> Only one HASPACE per JES2 Spool volume
> 
> Lizette
> 
>> -Original Message-
>> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] 
>> On Behalf Of Lizette Koehler
>> Sent: Tuesday, March 21, 2017 8:55 PM
>> To: IBM-MAIN@LISTSERV.UA.EDU 
>> Subject: Re: Spool file
>> 
>> Issue a $DSPOOL,LONG  and see what it shows
>> 
>> Issue $DSPOOLDEF and see what it shows
>> 
>> Next go to ISPF 3.4
>> 
>> Search
>> 
>> **   in the dataset level
>> SPOOL*  in the VOLSER and see what it shows
>> 
>> I am not clear about your statement:   The spool volumes are still on the
> same
>> pack.  Please provide a display of what you are seeing.
>> 
>> 
>> Lizette
>> 
>> 
>>> -Original Message-
>>> From: IBM Mainframe Discussion List 
>>> [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Joseph Reichman
>>> Sent: Tuesday, March 21, 2017 7:48 PM
>>> To: IBM-MAIN@LISTSERV.UA.EDU 
>>> Subject: Re: Spool file
>>> 
>>> I have 3 vols SPOOL1 SPOOL2 and SPOOL3
>>> 
>>> In the JES2 parmlib member, on the SPOOLDEF statement, I specify
>>> SPOOL for the volume, and yet after IPL'ing
>>>
>>> and going back to SDSF to look, the spool volumes are still on the
>>> same pack
>>> 
>>> 
>>> 
 On Mar 21, 2017, at 11:20 AM, Lizette Koehler 
  >
>>> wrote:
 
 The best practice for JES2 Spool file is standalone.  Never put 
 anything on the same volume as either the 

Re: thoughts on z/OS ftp server enhancement.

2017-03-23 Thread Timothy Sipples
If I understand the use case(s) correctly, IBM already implemented this
capability quite some time ago on z/OS. IBM Transformation Extender for
z/OS (IBM Program No. 5655-R99) provides this capability, and others.
Here's the link to the product page:

http://www.ibm.com/software/products/en/transformation-extender

Careful, though. File transfers tend to be overused. It's generally a
smarter idea for a variety of reasons to bring the processing and analysis
to the data rather than the other way around. File transfers are also
inherently batch-oriented, and the associated business processes often are
not (or at least shouldn't be).

So let's suppose something of a "worst case," that you have an application
written for a Microsoft .NET runtime environment, running on Microsoft
Windows Server, and you want to use that application to process/analyze
some data stored in VSAM. Can you do that, with the data in place -- live,
secured VSAM data that's also available to other applications? And with the
VSAM data presented to the Microsoft .NET application via ODBC in a
reasonably sensible form, just as it would be in some other database? You
sure can. IBM InfoSphere Classic Federation Server for z/OS (5655-IM4)
provides the means to do that. Here's the product page:

http://www.ibm.com/software/products/en/ibminfoclasfedeservforzos

And sometimes (or even very often) you should let the applications that
already know how to interpret VSAM data keep doing so. Often it's better to
provide APIs rather than try to provide direct data-level access. z/OS
Connect is a terrific way to do that, and if you have CICS Transaction
Server or IMS Transaction Manager (recent releases) you have z/OS Connect
Version 1.2. z/OS Connect Enterprise Edition (5655-CEE) is available as an
upgrade option.
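
To make the "APIs rather than file transfers" point concrete (a hedged sketch: the host, port, API path, and bearer-token scheme below are hypothetical, not a real installation), a distributed application would consume such a REST-exposed service directly instead of pulling the VSAM file across:

```python
import urllib.request

def build_request(host: str, api_path: str, token: str) -> urllib.request.Request:
    """Build (but do not send) a REST call to a z/OS Connect-style API.
    Host, path, and auth scheme are illustrative assumptions."""
    url = f"https://{host}{api_path}"
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/json"},
    )

req = build_request("mainframe.example.com:9443", "/accounts/12345", "TOKEN")
print(req.full_url)
# urllib.request.urlopen(req) would then return live JSON from the
# mainframe-hosted service, with no batch file transfer involved.
```

The data stays in place and under its existing security controls; the caller sees only a JSON payload.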


Timothy Sipples
IT Architect Executive, Industry Solutions, IBM z Systems, AP/GCG/MEA
E-Mail: sipp...@sg.ibm.com



Re: Spool file

2017-03-23 Thread Edward Finnell
Maybe the SDSF parms point to wrong SYS1.HASPINDX or HASPINDX is  
incorrectly cataloged?
 
 
In a message dated 3/23/2017 1:42:06 A.M. Central Daylight Time,
anthony.thomp...@nt.gov.au writes:

Of course, you know that implementing a change to the SPOOLDEF VOLUME
parameter requires a cold start of JES2? And, unless you're not interested in
keeping any of the old spool data, a spool offload/reload.





Re: Spool file

2017-03-23 Thread Anthony Thompson
Of course, you know that implementing a change to the SPOOLDEF VOLUME parameter 
requires a cold start of JES2? And, unless you're not interested in keeping any 
of the old spool data, a spool offload/reload.

Ant.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Joseph Reichman
Sent: Thursday, 23 March 2017 8:53 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Spool file

My parmlib member points to SPOOL as the volume but everything else shows B2SYS.

SDSF SPOOL DISPLAY S0W1   2% ACT  14095 FRE  13683 LINE 1-2 (2)

 COMMAND INPUT ===>SCROLL ===>
CSR  
 NP   NAME   Status   TGPct TGNum TGUse Command  SAff  Ext LoCylLoTrk

  B2SYS1 ACTIVE   5  3000   164  ANY   00  031B

  B2SYS2 ACTIVE   2 11095   248  ANY   01  0006
  

3.4 Display
DSLIST - Data Sets on volume SPOOL*                         Row 1 of 6
Command ===>                                           Scroll ===> PAGE

Command - Enter "/" to select action      Message             Volume
-----------------------------------------------------------------------
         SYS1.HASPACE                                         SPOOL1
         SYS1.HASPACE                                         SPOOL2
         SYS1.HASPACE                                         SPOOL3
         SYS1.VTOCIX.SPOOL1                                   SPOOL1
         SYS1.VTOCIX.SPOOL2                                   SPOOL2
         SYS1.VTOCIX.SPOOL3                                   SPOOL3
***************** End of Data Set list *****************

 





  $dspool,long  
  $HASP893 VOLUME(B2SYS1)   
VOLUME(B2SYS1)  STATUS=ACTIVE,DSNAME=SYS1.HASPACE,  
SYSAFF=(ANY),TGNUM=3000,TGINUSE=164,
TRKPERTGB=3,PERCENT=5,RESERVED=NO,  
MAPTARGET=NO
  $HASP893 VOLUME(B2SYS2)   
VOLUME(B2SYS2)  STATUS=ACTIVE,DSNAME=SYS1DisplKCELL=3,VOLUME=B2SYS

  $DSPOLLDEF   

JES parmlib member

SPOOLDEF BUFSIZE=3856,      /* MAXIMUM BUFFER SIZE            */
         DSNAME=SYS1.HASPACE,
         FENCE=NO,          /* Don't Force to Min. Volumes    */
         SPOOLNUM=32,       /* Max. Num. Spool Vols           */
         TGBPERVL=5,        /* Track Groups per volume in BLOB*/
         TGSIZE=33,         /* Buffers/Track Group            */
         TGSPACE=(MAX=26288,/* Fits TGMs into 4K Page         */
          WARN=80),         /* Warning threshold (%)          */
         TRKCELL=3,         /* 3 Buffers/Track-cell           */
         VOLUME=SPOOL       /* SPOOL VOLUME SERIAL            */
<---
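
As a rough sanity check on those SPOOLDEF values (an illustrative back-of-the-envelope estimate only, not JES2's exact allocation arithmetic, which also depends on device track geometry): the maximum addressable spool is roughly TGSPACE MAX track groups times TGSIZE buffers per track group times BUFSIZE bytes per buffer:

```python
def spool_capacity_bytes(tg_max: int, tg_size: int, buf_size: int) -> int:
    """Rough spool capacity: track groups x buffers per track group
    x bytes per buffer (values taken from the SPOOLDEF member above)."""
    return tg_max * tg_size * buf_size

capacity = spool_capacity_bytes(tg_max=26288, tg_size=33, buf_size=3856)
print(f"{capacity / 2**30:.1f} GiB")  # roughly 3.1 GiB
```

That puts the configured maximum well within a handful of mod-9 spool volumes.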
-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Lizette Koehler
Sent: Wednesday, March 22, 2017 12:23 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Spool file

Only one HASPACE per JES2 Spool volume

Lizette

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] 
> On Behalf Of Lizette Koehler
> Sent: Tuesday, March 21, 2017 8:55 PM
> To: IBM-MAIN@LISTSERV.UA.EDU 
> Subject: Re: Spool file
> 
> Issue a $DSPOOL,LONG  and see what it shows
> 
> Issue $DSPOOLDEF and see what it shows
> 
> Next go to ISPF 3.4
> 
> Search
> 
> **   in the dataset level
> SPOOL*  in the VOLSER and see what it shows
> 
> I am not clear about your statement:   The spool volumes are still on the
same
> pack.  Please provide a display of what you are seeing.
> 
> 
> Lizette
> 
> 
> > -Original Message-
> > From: IBM Mainframe Discussion List 
> > [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf Of Joseph Reichman
> > Sent: Tuesday, March 21, 2017 7:48 PM
> > To: IBM-MAIN@LISTSERV.UA.EDU 
> > Subject: Re: Spool file
> >
> > I have 3 vols SPOOL1 SPOOL2 and SPOOL3
> >
> > In the JES2 parmlib member, on the SPOOLDEF statement, I specify
> > SPOOL for the volume, and yet after IPL'ing
> >
> > and going back to SDSF to look, the spool volumes are still on the
> > same pack
> >
> >
> >
> > > On Mar 21, 2017, at 11:20 AM, Lizette Koehler 
> > >  >
> > wrote:
> > >
> > > The best practice for JES2 Spool file is standalone.  Never put 
> > > anything on the same volume as either the SPOOL Space or the CKPT
> datasets.
> > >
> > > That can lead to unintentional issues.
> > >
> > > Many shops in the past have tended towards multi-use volumes with 
> > > JES2.  This is not recommended.
> > >
> > > With storage arrays, you can code a volume as a MOD9 (9GB, 10,017
> > > Cylinders) and only place the ckpt or the spool space (not both) 
> > > on it. The array will typically only hold the amount of storage on 
> > > the
> > > mod9 as is allocated.  It does not hold the entire 10,017 
> > > cylinders