Re: Soft Capping

2016-06-30 Thread Steve Austin
Thanks for this, SYSEVENT REQLPDAT looks like the answer.

Steve

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Anthony Thompson
Sent: 30 June 2016 02:33
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Soft Capping

CVT + x'25C' = address of RMCT (SRM control table)
RMCT + x'E4' = address of RCT (SRM resource control table)
RCT + x'C4' = RCTLACS (4 bytes, unsigned, long-term no. of MSUs consumed by LPAR)
RCT + x'1C' = RCTIMGWU (4 bytes, unsigned, no. of MSUs available to LPAR)

If RCTLACS > RCTIMGWU, the LPAR is soft-capped (if they're equal, may be 
hard-capped, may be soft-capped, might just be busy).

SYSEVENT REQLPDAT returns the same information in fields LPDATAVGIMGSERVICE and 
LPDATIMGCAPACITY, as I think you've figured out. 
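
A minimal C sketch of that comparison, using the offsets quoted above (verify
them against your level of the IRARMCT/IRARCT mappings before relying on it;
it assumes a 31-bit z/OS C program and that the CVT pointer at PSA+x'10' can
be fetched in problem state):

/* Sketch only: chase CVT -> RMCT -> RCT and compare RCTLACS with RCTIMGWU. */
#include <stdio.h>

int main(void)
{
    char *cvt  = *(char **)16;                      /* CVT pointer at PSA+x'10' */
    char *rmct = *(char **)(cvt  + 0x25C);          /* CVTOPCTP -> RMCT         */
    char *rct  = *(char **)(rmct + 0xE4);           /* RMCT -> RCT              */
    unsigned int rctlacs  = *(unsigned int *)(rct + 0xC4);  /* MSUs consumed    */
    unsigned int rctimgwu = *(unsigned int *)(rct + 0x1C);  /* MSUs available   */

    if (rctlacs > rctimgwu)
        printf("Soft cap in effect: %u MSU used > %u MSU available\n",
               rctlacs, rctimgwu);
    else
        printf("No soft cap detected: %u MSU used <= %u MSU available\n",
               rctlacs, rctimgwu);
    return 0;
}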

Ant.  

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Steve Austin
Sent: Wednesday, 29 June 2016 10:16 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Soft Capping

Is there a way to determine programmatically if soft capping is being applied 
to a z/OS image?

Thanks





Re: JOB cards, procs, and TIOTs, o my!

2016-06-30 Thread Scott Ballentine
Phil,
  For your last question, take a look at the JCL Reference.  There's a chapter 
on started tasks that talks about some of this stuff.  (By the way, the doc 
uses the term "started job" for what you call a "proc with a JOB card", and 
"started proc" for the not-a-JOB case.)
  You might want to experiment with S MYPROC,JOBNAME=, and try running a 
batch job that has a step that EXECs a proc.

-Scott Ballentine
 z/OS Device Allocation 
 sbal...@us.ibm.com




DFHSM CDS backup versions

2016-06-30 Thread Vince Getgood
Hi all.
We ran out of scratch tapes on one of the systems I look after.  After 
investigation, it appears that DFRMM (or DFHSM) doesn't seem to be releasing 
old CDS backup tapes back to the scratch pool.

I have the following coded in ARCCMDxx for DFHSM CDS backups: -

SETSYS CDSVERSIONBACKUP(BACKUPCOPIES(4))  
SETSYS CDSVERSIONBACKUP(BACKUPDEVICECATEGORY(DASD))   
SETSYS CDSVERSIONBACKUP(DATAMOVER(DSS))   
SETSYS CDSVERSIONBACKUP( -
BACKUPCOPIES(4)  -
BACKUPDEVICECATEGORY(TAPE(NOPARALLEL -
   RETPD(0) UNITNAME(V3480)))-
DATAMOVER(HSM)   -
) 

When I issue a QUERY CDSVERSIONBACKUP, I get: -

ARC0101I QUERY CDSVERSIONBACKUP COMMAND STARTING ON 610
ARC0101I (CONT.) HOST=3
ARC0375I CDSVERSIONBACKUP, 611 
ARC0375I (CONT.) MCDSBACKUPDSN=HSM.MCDS.BACKUP,
ARC0375I (CONT.) BCDSBACKUPDSN=HSM.BCDS.BACKUP,
ARC0375I (CONT.) OCDSBACKUPDSN=HSM.OCDS.BACKUP,
ARC0375I (CONT.) JRNLBACKUPDSN=HSM.JRNL.BACKUP 
ARC0376I BACKUPCOPIES=0004, BACKUPDEVICECATEGORY=TAPE 612  
ARC0376I (CONT.) UNITNAME=V3480, DENSITY=*, RETPD=, NOPARALLEL,
ARC0376I (CONT.) LATESTFINALQUALIFIER=V0006021, DATAMOVER=HSM  
ARC0101I QUERY CDSVERSIONBACKUP COMMAND COMPLETED ON 613   
ARC0101I (CONT.) HOST=3

So we're backing up DFHSM CDS's to (virtual) tape, and keeping four old 
versions.  Froopy.

In RMM, I found MANY old versions of the CDS files on tape - going back to 
2015.  For instance: -

'HSM.MCDS.BACKUP.V0005355' with an expiration date of 2015/138.  

RMM shows these volumes with a status of MASTER, so RMM thinks they contain 
valid user data, which they really don't...

How do I get these tapes (preferably automatically) returned to the scratch 
pool?



Re: DFHSM CDS backup versions

2016-06-30 Thread Staller, Allan
Is dfHSM defined as an EDM (External Data Manager) to dfRMM?

If not, there is a section in the dfHSM Installation and Customization Guide on 
how to do this.

HTH,


We ran out of scratch tapes on one of the systems I look after.  After 
investigation, it appears that DFRMM (or DFHSM) doesn't seem to be releasing 
old CDS backup tapes back to the scratch pool.

.snippage

RMM shows these volumes with a status of MASTER, so RMM thinks they contain 
valid user data, which they really don't...

How do I get these tapes (preferably automatically) returned to the scratch 
pool?




Re: DFHSM CDS backup versions

2016-06-30 Thread Tom Conley

On 6/30/2016 10:55 AM, Vince Getgood wrote:

Hi all.
We ran out of scratch tapes on one of the systems I look after.  After 
investigation, it appears that DFRMM (or DFHSM) doesn't seem to be releasing 
old CDS backup tapes back to the scratch pool.

I have the following coded in ARCCMDxx for DFHSM CDS backups: -

SETSYS CDSVERSIONBACKUP(BACKUPCOPIES(4))
SETSYS CDSVERSIONBACKUP(BACKUPDEVICECATEGORY(DASD))
SETSYS CDSVERSIONBACKUP(DATAMOVER(DSS))
SETSYS CDSVERSIONBACKUP( -
BACKUPCOPIES(4)  -
BACKUPDEVICECATEGORY(TAPE(NOPARALLEL -
   RETPD(0) UNITNAME(V3480)))-
DATAMOVER(HSM)   -
)

When I issue a QUERY CDSVERSIONBACKUP, I get: -

ARC0101I QUERY CDSVERSIONBACKUP COMMAND STARTING ON 610
ARC0101I (CONT.) HOST=3
ARC0375I CDSVERSIONBACKUP, 611
ARC0375I (CONT.) MCDSBACKUPDSN=HSM.MCDS.BACKUP,
ARC0375I (CONT.) BCDSBACKUPDSN=HSM.BCDS.BACKUP,
ARC0375I (CONT.) OCDSBACKUPDSN=HSM.OCDS.BACKUP,
ARC0375I (CONT.) JRNLBACKUPDSN=HSM.JRNL.BACKUP
ARC0376I BACKUPCOPIES=0004, BACKUPDEVICECATEGORY=TAPE 612
ARC0376I (CONT.) UNITNAME=V3480, DENSITY=*, RETPD=, NOPARALLEL,
ARC0376I (CONT.) LATESTFINALQUALIFIER=V0006021, DATAMOVER=HSM
ARC0101I QUERY CDSVERSIONBACKUP COMMAND COMPLETED ON 613
ARC0101I (CONT.) HOST=3

So we're backing up DFHSM CDS's to (virtual) tape, and keeping four old 
versions.  Froopy.

In RMM, I found MANY old versions of the CDS files on tape - going back to 
2015.  For instance: -

'HSM.MCDS.BACKUP.V0005355' with an expiration date of 2015/138.

RMM shows these volumes with a status of MASTER, so RMM thinks they contain 
valid user data, which they really don't...

How do I get these tapes (preferably automatically) returned to the scratch 
pool?



Vince,

Please post the LV display for one of your DFHSM tapes.  I'm guessing 
it's retained with a VRS.


Regards,
Tom Conley



Re: Multithreaded output to stderr and stdout

2016-06-30 Thread Michael Knigge

Peter,


Yes, you are right, but AFAIK this is not the case for stdout and 
stderr. IIRC I've read in the C/C++ manual that stdout and stderr are 
handled differently.



bye,

Michael


Am 29.06.2016 um 20:52 schrieb Farley, Peter x23353:

In the C/C++ Programmers Guide under Input and Output there is this note:

"Avoiding Undesirable Results when Using I/O

File serialization is not provided for different tasks attempting to access the same 
file. When a C/C++ application is run on one task, and the same application or 
another C/C++ application is run on a different task, any attempts for both 
applications to access the same file is the responsibility of the application."

So the question then becomes whether or not the multiple Java threads are 
running on a single TCB or on multiple TCB's.  Not being a JVM expert I don't 
know the answer to that question.

I'm AssUMing that your "native C library" is a JNI application, so a second 
question would be whether JNI applications are allowed/disallowed from being used in 
multiple threads at the same time.  Which is something a Java guru would need to tell you.

I know I'm only asking more questions here, but I suspect that going down those 
roads may lead you to better answers.

HTH

Peter

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Michael Knigge
Sent: Wednesday, June 29, 2016 1:30 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Multithreaded output to stderr and stdout

All,

I have a multithreaded Java application that writes to stdout (and
stderr) thru a native C library using fputc and fwrite. Now, at our site 
everything is all right with output to stderr and stdout from multiple threads.

At a customer site, only output from the main thread is written to the "file" if 
the DD STDOUT (or STDERR) points to an MVS data set. All output from the other 
threads is discarded.

If the DD points to SYSOUT=* (instead of an MVS data set), the output from the 
threads is written to the "file" as expected.

Does anyone have an idea why? LE370 settings are equal to the settings
at our site...  Maybe a special SMS Setting?? I have no idea (and no
access to the customer site)

Thank you,

Michael
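
For what it's worth, a small stand-alone test along these lines could be run
under the same STDOUT DD to see whether the behaviour depends on the JNI
library at all. This is a hypothetical sketch, not the original code; it
assumes an LE environment with POSIX threads enabled:

/* Hypothetical repro: several threads writing to stdout with fwrite,
   the same calls the native library uses. */
#define _OPEN_THREADS 1              /* expose pthread declarations on z/OS */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static void *writer(void *arg)
{
    int  id = *(int *)arg;
    char line[64];
    int  i, n;

    for (i = 0; i < 10; i++) {
        n = sprintf(line, "thread %d line %d\n", id, i);
        fwrite(line, 1, (size_t)n, stdout);
        fflush(stdout);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    int       ids[NTHREADS];
    int       i;

    for (i = 0; i < NTHREADS; i++) {
        ids[i] = i;
        pthread_create(&tid[i], NULL, writer, &ids[i]);
    }
    for (i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);

    printf("main thread done\n");
    return 0;
}

If all four threads' lines reach the data set at one site but only the main
thread's at the other, the difference lies in the environment (DCB attributes,
LE run-time options, buffering) rather than in the library itself.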


Re: Multithreaded output to stderr and stdout

2016-06-30 Thread Michael Knigge

Tony,


I've tried to reproduce it locally on our z/OS system. I allocated a 
data set with RECFM=VB (and a large block size), and ran a second test with 
a RECFM=V data set. No difference on our z/OS system; it 
works like a charm in both cases...



Bye,

Michael



Am 29.06.2016 um 21:36 schrieb Tony Harminc:

On 29 June 2016 at 14:00, John McKown  wrote:


I may be blowing smoke here, if so I'm sure someone will point it out :-}.

I think you're on the right (write...?) track.

[...]

  A DD pointing to a sequential data set is _not_ designed to be
written to by two different DCBs concurrently. What I do in this case is
write to a UNIX file. Why? Because it works more like a JES2 sysout, which
is the subject of the next paragraph.

Now, with JES2 (or UNIX output), what happens? Well, it's more like a data
base. You still have two DCBs, but the actual write sends the data to JES2
and tells it to place it in the SPOOL file. So JES2 is accepting and
interleaving the data for you. As JR said, this is like what would happen
on a line printer.

I don't think this part is quite accurate... JES2 doesn't work like
HASP of old, where SYSOUT data was written to a virtual line printer,
line at a time.


It would accept and print each line as it is generated
and so would interleave the lines. This is also what happens with UNIX
files. The data is not sent to the disk, but to the UNIX kernel which
places it in a buffer and eventually writes it out. The UNIX kernel, like
JES2, is maintaining a "unified buffer" architecture behind the scenes.

Well... When you use QSAM on a JES2 SYSOUT dataset, most of the
buffering is done locally in a so-called unprotected buffer. Only when
that buffer is filled does the access method PUT routine issue SVC 111
(or a more modern PC, iirc?) to copy the buffer to one owned by JES2,
which is eventually written to disk. Those disk writes of blocks would
be centrally serialized, so that there would be no possibility of
overlaying already-written data. But it would mean that groups of
records would be interleaved rather than individual ones, and that
might or might not be easily discernible looking at the output.

What puzzles me in all this, though, is how it works fine at the
developer's site but not at the customer's. Could it be that there is
no blocking going on locally, but large(r) buffers are in use at the
customer site?

Tony H.
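
To make the per-block versus per-record point concrete: with the default full
buffering, whole groups of records sit in the local buffer before being handed
on, while line buffering (or an explicit flush) hands over each record as it is
produced. A small sketch, nothing z/OS-specific assumed beyond the standard C
library:

/* Sketch: influence how much output accumulates locally before it is
   passed on.  setvbuf must be called before the first write to the stream. */
#include <stdio.h>

int main(void)
{
    setvbuf(stdout, NULL, _IOLBF, 1024);   /* line buffering instead of full */

    fprintf(stdout, "record 1\n");
    fprintf(stdout, "record 2\n");
    fflush(stdout);                        /* or flush explicitly at any point */
    return 0;
}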



Re: Multithreaded output to stderr and stdout

2016-06-30 Thread Michael Knigge

Yes


At our site there is no difference if I use DSN= or SYSOUT=* in my 
JCL for STDOUT and STDERR. Output from all threads is written to the file.



At the customer site, if DSN=xxx is used, only output from the main 
thread is written to the file. If SYSOUT=* is used, the customer can see 
all output in SDSF.



Bye,

Michael



Am 29.06.2016 um 23:13 schrieb Clark Morris:

[Default] On 29 Jun 2016 12:37:05 -0700, in bit.listserv.ibm-main
t...@harminc.net (Tony Harminc) wrote:


On 29 June 2016 at 14:00, John McKown  wrote:


I may be blowing smoke here, if so I'm sure someone will point it out :-}.

I think you're on the right (write...?) track.

[...]

  A DD pointing to a sequential data set is _not_ designed to be
written to by two different DCBs concurrently. What I do in this case is
write to a UNIX file. Why? Because it works more like a JES2 sysout, which
is the subject of the next paragraph.

Now, with JES2 (or UNIX output), what happens? Well, it's more like a data
base. You still have two DCBs, but the actual write sends the data to JES2
and tells it to place it in the SPOOL file. So JES2 is accepting and
interleaving the data for you. As JR said, this is like what would happen
on a line printer.

I don't think this part is quite accurate... JES2 doesn't work like
HASP of old, where SYSOUT data was written to a virtual line printer,
line at a time.

I suspect that the problem is that STDOUT and STDERR are written to
DSN=something at the customer site and that the poster would see the
same result at his site, the difference being in how SYSOUT=something
is handled as opposed to DSN=something.

Clark Morris

It would accept and print each line as it is generated
and so would interleave the lines. This is also what happens with UNIX
files. The data is not sent to the disk, but to the UNIX kernel which
places it in a buffer and eventually writes it out. The UNIX kernel, like
JES2, is maintaining a "unified buffer" architecture behind the scenes.

Well... When you use QSAM on a JES2 SYSOUT dataset, most of the
buffering is done locally in a so-called unprotected buffer. Only when
that buffer is filled does the access method PUT routine issue SVC 111
(or a more modern PC, iirc?) to copy the buffer to one owned by JES2,
which is eventually written to disk. Those disk writes of blocks would
be centrally serialized, so that there would be no possibility of
overlaying already-written data. But it would mean that groups of
records would be interleaved rather than individual ones, and that
might or might not be easily discernible looking at the output.

What puzzles me in all this, though, is how it works fine at the
developer's site but not at the customer's. Could it be that there is
no blocking going on locally, but large(r) buffers are in use at the
customer site?

Tony H.



Behold the Fascinating Nightmare of Debugging a Computer from the 1950s

2016-06-30 Thread Edward Gould
Behold the Fascinating Nightmare of Debugging a Computer from the 1950s



It takes a lot more than some typing.

http://www.popularmechanics.com/technology/a21586/debugging-1959-vacuum-tape-drive/?mag=pop&list=nl_pnl_news&src=nl&date=063016
 

 

Sure brings back some memories that today's generation will never be able to 
relate to!


Re: Minimum Volume Sizes in the Wild

2016-06-30 Thread Bobbie Justice
We still have some mod 9s around, most of ours are mod 27 or 54.



Re: Behold the Fascinating Nightmare of Debugging a Computer from the 1950s

2016-06-30 Thread Tom Brennan
Way later than the 1950's, a tape operator called me to watch a Storage 
Tek reel drive do something strange:  He mounted a scratch tape and the 
job took off spinning the reels so fast that soon the metal capstan got 
hot enough to melt the tape, basically destroying both tape and capstan. 
 Then he pointed to the drive just to the left that had failed the same 
way a few minutes earlier.


Since the tape was moving so fast with no apparent i/o breaks, I thought 
I could reproduce the error by running a big IEBGENER with a large 
blocksize and BUFNO=255.  So I ran that, and sure enough ruined yet 
another capstan!  I think they were about $900 each, but at least we 
found the cause.


Edward Gould wrote:

Behold the Fascinating Nightmare of Debugging a Computer from the 1950s



It takes a lot more than some typing.

http://www.popularmechanics.com/technology/a21586/debugging-1959-vacuum-tape-drive/?mag=pop&list=nl_pnl_news&src=nl&date=063016  


Sure brings back some memories that today's generation will never be able to 
relate to!


Re: Behold the Fascinating Nightmare of Debugging a Computer from the 1950s

2016-06-30 Thread Paul Gilmartin
On Thu, 30 Jun 2016 16:52:17 -0700, Tom Brennan wrote:

>Way later than the 1950's, a tape operator called me to watch a Storage
>Tek reel drive do something strange:  He mounted a scratch tape and the
>job took off spinning the reels so fast that soon the metal capstan got
>hot enough to melt the tape, basically destroying both tape and capstan.
>  Then he pointed to the drive just to the left that had failed the same
>way a few minutes earlier.
>
>Since the tape was moving so fast with no apparent i/o breaks, I thought
>I could reproduce the error by running a big IEBGENER with a large
>blocksize and BUFNO=255.  So I ran that, and sure enough ruined yet
>another capstan!  I think they were about $900 each, but at least we
>found the cause.
> 
Ah!  Black Team at work!

Did StorageTek fix it, perhaps by capping the duty cycle?

It's hard to imagine that this was purely a matter of saturation.  I'd
expect more typical jobs to reach perhaps 75% and just take longer
to melt the tape.  Perhaps the stop-start sequence exercised the
pneumatics and supplied a cooling airflow.

CDC 6400 FORTRAN provided low-level I/O facilities.  With the analogue
of BUFNO=2 in my FORTRAN program I could keep a tape moving
nonstop.  The usual consequence was that the operator perceived
this as "runaway tape"  (all record gap) and cancelled the job unless I
had provided special instructions.  Never overheated.

IIRC, those same drives did high-speed rewind leaving the tape in the
vacuum columns, the capstan driving the tape and the hub servos
controlling the reels.  The heads were elevated from the tape.  A
photosensor detected when the tape got near the hub and slowed down
the process to finish at normal R/W speed.

(I may be confusing manufacturers.)

-- gil



Re: Behold the Fascinating Nightmare of Debugging a Computer from the 1950s

2016-06-30 Thread Edward Finnell
Those were the little foil reflectors, programmatically referred to as BOT 
and EOT. For partially overwritten tapes you could forward space to the EOT and 
do a Read Backwards.

Think most of the vendors were Potter Brumfield or Ampex rebranded. There 
were rumors of a 13k 9trk but the Carts appeared soon after.

In a message dated 6/30/2016 7:51:52 P.M. Central Daylight Time, 
000433f07816-dmarc-requ...@listserv.ua.edu writes:

The heads were elevated from the tape.  A
photosensor detected when the tape got near the hub and slowed down
the process to finish at normal R/W speed.




Re: Behold the Fascinating Nightmare of Debugging a Computer from the 1950s

2016-06-30 Thread Tom Brennan
I'm sure it got fixed because I never heard of the problem happening 
again.  But I guess my memory is selective because I only remember the 
fun part of making things smoke.  Sorry - no details about the fix.


Paul Gilmartin wrote:


Did StorageTek fix it, perhaps by capping the duty cycle?

