Re: PAV and VSE guest

2007-03-19 Thread Bill Bitner
Dave, you are probably correct in being cautious about making a sweeping change without looking for evidence first. For more details on the PAV support that went out with the APAR you referenced, see http://www.vm.ibm.com/perf/reports/zvm/html/520pav.html. Remember that if there is no queueing on I
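A minimal sketch of checking what that support sees on a given system, assuming the VM63952 service is applied and a class B user; the device number 200 is hypothetical:

  /* Sketch (REXX/CMS): inspect PAV configuration */
  'CP QUERY PAV ALL'   /* list all PAV base and alias devices CP knows about */
  'CP QUERY PAV 200'   /* show the base/alias binding for one device */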

Re: Adding help files

2007-03-19 Thread Graeme Moss
Hi Dennis, I have a feeling, based on my cheatsheet below, that rebuilds fail when applying maintenance because the storage is already in use. I haven't tested it, but maintenance goes on fine on systems without the increased HELPSEG

Adding help files

2007-03-19 Thread O'Brien, Dennis L
I'd like to add a few thousand help files (for VM:Manager) to MAINT 19D. What's the proper way to do this? If I just COPYFILE them to the disk, HELP works, but PUT2PROD chokes the next time it tries to build the help segment. Could the segment be getting full, or is there a VMSES command that sho
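A minimal sketch of one resize-and-rebuild sequence, assuming the stock z/VM 5.2 ESASEGS build lists and using the HELPSEG name from this thread; the page range is purely illustrative, so check QUERY NSS MAP first and verify the names against the Service Guide:

  /* Sketch (REXX/CMS): enlarge the skeleton, then rebuild via VMSES/E */
  'CP QUERY NSS MAP'                                  /* see the current segment layout */
  'CP DEFSEG HELPSEG 1300-14FF SR'                    /* hypothetical, larger page range */
  'VMFBLD PPF SEGBLD ESASEGS SEGBLIST HELPSEG (ALL'   /* rebuild and save the segment */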

Re: TCPNJE

2007-03-19 Thread Schuh, Richard
This is a first cut talking to a lab MVS system to verify that their TCPNJE is viable. The final configuration has not been established. It will probably include from 3 to 10 MVS systems and 2 VMs. KEEPALIV is the default for the PARM (CONFIG file) or DEFINE (command) for a TCPNJE link. Obviously
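For reference, a minimal sketch of such a link in the RSCS CONFIG file; the link name and host are hypothetical, and the KEEPALIV spelling follows this thread:

  * Sketch: TCPNJE link to one lab MVS system (link name and host hypothetical)
  LINKDEFINE MVSLAB TYPE TCPNJE
  PARM MVSLAB HOST=mvslab.example.com KEEPALIV=NO
  * Elsewhere in this thread the link stayed up with KEEPALIV=NO and
  * dropped with KEEPALIV=YES, so test both on your service level.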

Re: TCPNJE

2007-03-19 Thread Alan Altmark
On Monday, 03/19/2007 at 11:50 MST, "Schuh, Richard" <[EMAIL PROTECTED]> wrote: > What happened? Did you forget about a NOT operator somewhere? The link > stays alive with KEEPALIV=NO and drops with KEEPALIV=YES. Reminds me of > the OS/2 dialog boxes. "You told me to do x. Reply YES to not do it; NO > to go ahead and do it."

Re: PAV and VSE guest

2007-03-19 Thread Rob van der Heij
On 3/19/07, Kris Buelens <[EMAIL PROTECTED]> wrote: I guess that what you found is that VSE adheres to the old rules: it is useless to send an I/O to a disk that is busy, so VSE queues. VM can't change that. In that case, it might help to give VSE its disk space in less than full-pack minidisks
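A minimal sketch of what that could look like in the CP directory, splitting one 3390-3 into two minidisks so CP can overlap I/O where VSE would have queued; the volser, virtual addresses, and extents are hypothetical:

  * Sketch: two half-pack minidisks on one 3390-3 (cyl 0 keeps the label)
  MDISK 0200 3390 0001 1669 VSE001 MW
  MDISK 0201 3390 1670 1669 VSE001 MW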

Re: PAV and VSE guest

2007-03-19 Thread Kris Buelens
I guess that what you found is that VSE adheres to the old rules: it is useless to send an I/O to a disk that is busy, so VSE queues. VM can't change that. Minidisk cache can help or hurt VSE: CP's MDC will change the I/O to make it a full-track read. So if you work sequentially, it will help. I y
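If MDC turns out to hurt for a particular randomly accessed minidisk, one way to exempt just that disk is the MINIOPT directory statement; the addresses and extents below are hypothetical:

  * Sketch: leave MDC on system-wide, but turn it off for one minidisk
  MDISK 0201 3390 1670 1669 VSE001 MW
  MINIOPT NOMDC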

Re: HCPCLS174E Paging I/O error; IPL failed - on second level z/VM

2007-03-19 Thread Mike Walter
Stefan, As already mentioned, this looks suspiciously like you did not fully format ALL the cylinders required for CP's use. In this case that is the PAGE space, but CP also needs all the cylinders allocated to SPOL, DRCT, WARM, CKPT, and, if used, DUMP to be formatted with CPFMTXA (ICKDSF under the covers
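A minimal sketch of formatting and allocating a whole paging volume with CPFMTXA; the device number and volser are hypothetical, and the operand form is from memory, so check HELP CPFMTXA before running:

  /* Sketch (REXX/CMS): CP-format every cylinder and allocate it as PAGE */
  'CPFMTXA 200 PAG001 FORMAT 0 END PAGE'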

Re: PAV and VSE guest

2007-03-19 Thread Eric Schadow
Dave, ESCON or FICON? Eric Eric Schadow Mainframe Technical Support www.davisvision.com

Re: PAV and VSE guest

2007-03-19 Thread Dave Reinken
Well, I know that VSE can't do PAV, but z/VM 5.2 w/ APAR VM63952 can do it for VSE, providing that the volumes are minidisks. I just don't think that it is going to get me much (if anything) with a single guest. Would VM minidisk caching help throughput in a large batch environment? The manager i

Re: PAV and VSE guest

2007-03-19 Thread McBride, Catherine
Dave, you may want to cross-post this one on VSE-L... could generate some good dialogue.

Re: PAV and VSE guest

2007-03-19 Thread Eric Schadow
Dave, if you made the DASD mini-disks instead of DEDICATED, you can try VM minidisk caching. I am pretty sure PAV is z/OS only... All the regular tuning things can be reviewed also: VSAM buffer tuning, sequential file blocking, application s/w tuning, VSE or VM paging? etc. Eric A

PAV and VSE guest

2007-03-19 Thread Dave Reinken
I was recently reviewing this: http://www.vm.ibm.com/storman/pav/pav2.html at the behest of my manager. He is looking to extend the life of and better utilize our current hardware. We are running z/VM 5.2 on a z800, with a single z/VSE 3.1.2 guest, using Shark 2105-F20 disk. We currently use DEDICATED
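For what it's worth, the change under discussion amounts to a directory swap along these lines; the virtual and real device numbers and the volser are hypothetical:

  * Sketch: dedicated volume (old), which the new PAV support cannot manage
  *DEDICATE 0200 0A20
  * Full-pack minidisk on the same volume (new), eligible per VM63952
  MDISK 0200 3390 0000 END VSE001 MW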

Re: HCPCLS174E Paging I/O error; IPL failed - on second level z/VM

2007-03-19 Thread RPN01
Did you format the entire paging space? It needs to be pre-defined with 4K page blocks before it can be used. CMS minidisk volumes can be defined using ICKDSF by just formatting cyl 0 to contain the correct label, and everyone will handle formatting their own small piece of the world when the time comes
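To make the contrast concrete, a hedged ICKDSF sketch; the addresses, volsers, and 3390-3 cylinder range are hypothetical, and the operand spellings should be verified against the ICKDSF manual:

  * Sketch: CMS minidisk volume, label cylinder only; users format their own pieces
  CPVOLUME FORMAT UNITADDRESS(0200) VOLID(USR001) RANGE(0,0) NOVERIFY
  * Sketch: paging volume, every cylinder needs 4K page blocks before CP can use it
  CPVOLUME FORMAT UNITADDRESS(0300) VOLID(PAG001) RANGE(0,3338) NOVERIFY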

Re: TCPNJE

2007-03-19 Thread Schuh, Richard
What happened? Did you forget about a NOT operator somewhere? The link stays alive with KEEPALIV=NO and drops with KEEPALIV=YES. Reminds me of the OS/2 dialog boxes. "You told me to do x. Reply YES to not do it; NO to go ahead and do it." Regards, Richard Schuh

HCPCLS174E Paging I/O error; IPL failed - on second level z/VM

2007-03-19 Thread Stefan Raabe
Hello List, I was working on a second-level VM (z/VM 5.2, 5203RSU applied) when I got these messages after IPL, when trying to log on users: HCPCLS174E Paging I/O error; IPL failed HCPCLS059E AUTOLOG failed for AUTOLOG1 - IPL failed I was running with 384 Meg, so I made it 512 Meg and after

Re: Moving on

2007-03-19 Thread William Munson
Richard, Good luck! I enjoyed working with you. I also enjoyed the MAILBOOK product and all it could do. Thanx for all the help you gave me and all the others you helped in the VM community. thanx again Bill Munson IT Specialist Office of Information Technology State of New Jersey (609) 984-4

Re: Moving on

2007-03-19 Thread Peter Carrier
Hi Richard, We at MIT wish to thank you for your dedication and support for Mailbook over the years. While we're down to only a handful of users, it is still appreciated. Peter, and the folks at MIT