But isn't the spxtape usage cleared up by "set dump off#set dump dasd"?
That just made it grow...
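(If anyone wants to re-drive that experiment, here is a rough REXX sketch, illustrative only and not a supported tool, that issues the same SET DUMP OFF / SET DUMP DASD sequence and captures QUERY DUMP before and after so the "pages" value can be compared. The EXEC name and labels are made up, and it assumes a user ID with the privilege class needed for SET DUMP and QUERY DUMP.)

/* SETDUMP EXEC - illustrative sketch only, not a supported tool.    */
/* Re-drives SET DUMP OFF / SET DUMP DASD and shows the QUERY DUMP   */
/* response before and after, so the "pages" value can be compared.  */
Call ShowDump 'before'
'CP SET DUMP OFF'
'CP SET DUMP DASD'
Call ShowDump 'after'
Exit

ShowDump:
  Parse Arg label
  'EXECIO * CP (STRING QUERY DUMP'    /* CP response goes on the stack */
  Do While Queued() > 0
     Parse Pull line
     Say label':' line
  End
Return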

Q RECORDING shows nothing pending retrieval.
I also cleaned up spool so both systems have roughly the same number of spool 
files.


Marcy 

"This message may contain confidential and/or privileged information. If you 
are not the addressee or authorized to receive this for the addressee, you must 
not use, copy, disclose, or take any action based on this message or any 
information herein. If you have received this message in error, please advise 
the sender immediately by reply e-mail and delete this message. Thank you for 
your cooperation."


-----Original Message-----
From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On Behalf 
Of Tom Duerbusch
Sent: Tuesday, January 05, 2010 11:44 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: [IBMVM] Amount of CP dump space needed

There was a problem with SPXTAPE that I thought was fixed by z/VM 5.4.

But here is how we saw it on z/VM 5.2:

SPXTAPE tried to buffer as much as possible.
Because SPXTAPE is a CP function, all of its buffers end up requiring slots in 
the CP dump space.
When SPXTAPE finishes, there is no automatic way of reducing the number of 
slots used, since a CP abend might happen seconds or minutes after SPXTAPE 
ended, and then, well, "what if SPXTAPE was the cause?"

I take it there are other CP functions that may end up buffering or using 
storage that requires slots in the CP dump area.  Something like the records 
shown by "Q RECORDING": if nothing is retrieving them, CP is buffering them.

Of course, the page tables and the other storage-related control structures 
need to be dumped in case of a CP abend, so CP dump slots are allocated for 
them.  That means the memory size of the LPAR is a consideration, and so is 
the sum of the virtual storage sizes of all the machines that are logged on.
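(To put the slot counts in perspective against LPAR size: if the "pages" value in the QUERY DUMP response is read as 4 KB pages, a quick conversion like the REXX fragment below, illustrative only, turns it into gigabytes. On the figures quoted below, System A's 1140129 pages is roughly 4.35 GB and System J's 169959 pages roughly 0.65 GB, against 32G LPARs.)

/* DUMPSIZE EXEC - illustrative only; assumes the QUERY DUMP "pages"  */
/* value is a count of 4 KB pages.                                    */
Numeric Digits 12
Parse Arg pages .
If pages = '' Then pages = 1140129        /* System A figure quoted below */
gb = pages * 4096 / (1024 * 1024 * 1024)
Say pages 'dump pages is roughly' Format(gb,,2) 'GB'
Exit

For example, "DUMPSIZE 1140129" reports roughly 4.35 GB, while "DUMPSIZE 169959" reports roughly 0.65 GB.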

Tom Duerbusch
THD Consulting

>>> Marcy Cortes <marcy.d.cor...@wellsfargo.com> 1/5/2010 1:09 PM >>>
How is the amount of CP dump space calculated?

We have a system that seems to be using a lot more than the others.  
Everything is pretty much identical across the LPARs hardware-wise (access to 
all the same devices, etc.) and software-wise (z/VM 5.4 RSU 0902).  

Here's an example.

System A:

q dump                                    
DASD F1B0 dump unit CP IPL pages 1140129  
Ready; T=0.01/0.01 13:02:40               

System J:
q dump                                  
DASD F00D dump unit CP IPL pages 169959 
Ready; T=0.01/0.01 13:03:32             


Both systems have 32G (28 central, 4 expanded).  If anything, System A has less 
memory pressure (it does not page with 10 Linux guests) while System J does 
page (16 Linux guests).

I tried SET DUMP OFF and then SET DUMP DASD to see if that would change the 
numbers.  The number on System A grew (from 1131137 to 1140129) while System J 
did not change.

Why would they be so radically different?


Marcy 



"This message may contain confidential and/or privileged information. If you 
are not the addressee or authorized to receive this for the addressee, you must 
not use, copy, disclose, or take any action based on this message or any 
information herein. If you have received this message in error, please advise 
the sender immediately by reply e-mail and delete this message. Thank you for 
your cooperation."
