Re: MAXCONN
On Friday, 10/03/2008 at 03:52 EDT, Rob van der Heij <[EMAIL PROTECTED]> wrote:
> On Fri, Oct 3, 2008 at 9:46 AM, Alan Altmark <[EMAIL PROTECTED]> wrote:
> > A single IUCV connection handles multiple sockets.
>
> But your average application may not do that and could also establish
> multiple connections to the same stack, right?

Your "average application" will establish a single IUCV connection. Anyone
writing a C socket program will establish just one. Rexx Socket('INITIALIZE')
creates the IUCV connection, and I haven't seen any normal program ever
issue more than one INITIALIZE.

Alan Altmark
z/VM Development
IBM Endicott
Re: Paging volumes, size vs. number
Having a good paging system is as much about bandwidth as it is about
capacity, so I'd say the other responders are offering sound advice. Lack of
sufficient capacity will certainly hurt badly when you fill it up and run
out (causing a PGT004 abend), but lack of sufficient bandwidth will hurt
performance and throughput whenever the system is paging, and if bad enough,
can lead or contribute to abends of its own (e.g., FRF002).

- Bill Holder, z/VM Development, IBM
Re: Paging volumes, size vs. number
Probably depends on how much you plan to page! Do you plan to overcommit
memory like 6:1 or keep it sane? How robust does your paging system need to
be? We were in a situation where we had 100 servers (1/2 fat WebSphere) on a
system with only 28G. With 100 mod 3s on DS8000 over 8 channels, we could
routinely page in the tens of thousands per second. Now, we're not that
cruel to production :) Production pages very little and it could happily be
on a couple of mod 9s.

I'd also weigh the DASD vendor heavily. Some are better (way better) than
others at lots of writes of 4K blocks.

Marcy

-Original Message-
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On Behalf Of
Feller, Paul
Sent: Friday, October 03, 2008 12:02 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: [IBMVM] Paging volumes, size vs. number

If I could, I would stay with the 16 3390-3's. My reason is that the IO
load is spread over more volumes. Also, if I could, I would spread the
volumes over multiple CUs. That's how I look at it.

Paul Feller
AIT Mainframe Technical Support

-Original Message-
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On Behalf Of
O'Brien, Dennis L
Sent: Friday, October 03, 2008 11:11 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Paging volumes, size vs. number

We are sizing a new z/VM system for a Linux guest workload. We
traditionally use 3390-3 size devices for paging. We determined that we
need 16 3390-3's for this particular workload. Our DASD people asked if we
could use 3390-9's instead. Based on space, they want to give us 6 3390-9's
for paging (rounding up). Assuming we have four FICON channels, is there
any performance benefit to having more than 4 paging devices? I.e., is 16
devices better than 6 on 4 channels? The system isn't built yet, so using a
performance monitor isn't possible.

Dennis O'Brien

We are Borg of America. You will be assimilated. Resistance is futile.
Re: Paging volumes, size vs. number
I don't remember if these numbers are available from Performance Toolkit. I
think so. If you can look at numbers for a present workload, and if it's
different or you expect to see differences, you should just be able to
factor. What I'm getting at is that you should be able to look at present
DASD service times, and if 3390-3 numbers are all you have available, you'll
have to go with them and adjust for 3390-9. At any rate, given service times
and I/O rates, you should be able to come up with a SWAG for device
utilization. Adjust for expected I/O rates and see if your expected device
utilization will get up to or over 40%. That's about the % utilization where
a single-server queue with random interarrival times (a page pack) starts to
get into queuing trouble. At that point, the graph of expected service times
starts to get a really sharp upward bend in it.

Jim

O'Brien, Dennis L wrote:
> We are sizing a new z/VM system for a Linux guest workload. We
> traditionally use 3390-3 size devices for paging. We determined that we
> need 16 3390-3's for this particular workload. Our DASD people asked if
> we could use 3390-9's instead. Based on space, they want to give us 6
> 3390-9's for paging (rounding up). Assuming we have four FICON channels,
> is there any performance benefit to having more than 4 paging devices?
> I.e., is 16 devices better than 6 on 4 channels? The system isn't built
> yet, so using a performance monitor isn't possible.
>
> Dennis O'Brien
>
> We are Borg of America. You will be assimilated. Resistance is futile.

--
Jim Bohnsack
Cornell University
(972) 596-6377 home/office
(972) 342-5823 cell
[EMAIL PROTECTED]
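[Editor's note: Jim's 40% figure falls out of basic single-server queueing.
With utilization U and service time S, the expected response time is roughly
R = S / (1 - U), which bends sharply upward as U climbs. A minimal Python
sketch of the arithmetic — the I/O rate and service time below are made-up
illustration numbers, not measurements from any system:]

```python
def device_utilization(io_rate_per_sec: float, service_time_ms: float) -> float:
    """Utilization = arrival rate x service time (in consistent units)."""
    return io_rate_per_sec * (service_time_ms / 1000.0)

def mm1_response_time(service_time_ms: float, utilization: float) -> float:
    """Single-server (M/M/1) expected response time: R = S / (1 - U)."""
    if utilization >= 1.0:
        raise ValueError("device is saturated")
    return service_time_ms / (1.0 - utilization)

# Spreading a fixed paging load over more packs lowers per-device
# utilization and keeps response times off the steep part of the curve.
total_io_rate = 1200.0   # pages/sec for the whole system (assumed)
service_time = 4.0       # ms per I/O (assumed)

for packs in (6, 16):
    u = device_utilization(total_io_rate / packs, service_time)
    r = mm1_response_time(service_time, u)
    print(f"{packs} packs: {u:.0%} busy, ~{r:.1f} ms response")
# 6 packs: 80% busy, ~20.0 ms response
# 16 packs: 30% busy, ~5.7 ms response
```

Same channels, same total load: the 16-pack layout stays near the flat part
of the curve while the 6-pack layout is already deep into queuing trouble.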
Re: Paging volumes, size vs. number
If I could, I would stay with the 16 3390-3's. My reason is that the IO
load is spread over more volumes. Also, if I could, I would spread the
volumes over multiple CUs. That's how I look at it.

Paul Feller
AIT Mainframe Technical Support

-Original Message-
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On Behalf Of
O'Brien, Dennis L
Sent: Friday, October 03, 2008 11:11 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Paging volumes, size vs. number

We are sizing a new z/VM system for a Linux guest workload. We
traditionally use 3390-3 size devices for paging. We determined that we
need 16 3390-3's for this particular workload. Our DASD people asked if we
could use 3390-9's instead. Based on space, they want to give us 6 3390-9's
for paging (rounding up). Assuming we have four FICON channels, is there
any performance benefit to having more than 4 paging devices? I.e., is 16
devices better than 6 on 4 channels? The system isn't built yet, so using a
performance monitor isn't possible.

Dennis O'Brien

We are Borg of America. You will be assimilated. Resistance is futile.
Re: Paging volumes, size vs. number
How many packs you need depends on the I/O rate the paging subsystem is
required to handle. That is more important than the capacity, which you
already seem to know. A pack can sustain a certain I/O rate with a good
response time. For paging packs one usually recommends mdl 3 and not mdl 9;
CP will not use PAV for its paging. FICON channels will not be the limiting
factor; they can drive high loads, i.e. even with only one FICON channel,
16 mdl 3 packs would be better than 6 mdl 9s.

2008/10/3 O'Brien, Dennis L
> We are sizing a new z/VM system for a Linux guest workload. We
> traditionally use 3390-3 size devices for paging. We determined that we
> need 16 3390-3's for this particular workload. Our DASD people asked if
> we could use 3390-9's instead. Based on space, they want to give us 6
> 3390-9's for paging (rounding up). Assuming we have four FICON
> channels, is there any performance benefit to having more than 4 paging
> devices? I.e. is 16 devices better than 6 on 4 channels?
>
> The system isn't built yet, so using a performance monitor isn't
> possible.
>
> Dennis O'Brien
>
> We are Borg of America. You will be assimilated. Resistance is futile.

--
Kris Buelens, IBM Belgium, VM customer support
Re: z/Linux CONF file continuation syntax
I haven't looked it up, but I thought the DASD= parameter could take ranges
as well as individual addresses. Try

DASD="700,800-809,810-81x, ... more addresses"

You might even consolidate the two 8xx ranges as 800-81F or whatever your
highest 8xx set of 16 addresses is (3F, 8F, FF, etc).

/Tom Kern

On Fri, 3 Oct 2008 13:34:33 -0400, Martin, Terry R. (CMS/CTR) (CTR)
<[EMAIL PROTECTED]> wrote:
>Hi
>
>I am executing a CONF file and Parmfile so to bring my z/Linux guest
>into RESCUE mode. On the DASD statement of the CONF file I need to
>present all of the MDISK that are defined to this guest. The problem is
>that the number of MDISKs are too many for one line and I am not sure
>what the syntax is to continue the DASD statement on another line.
>Anyone have an idea?
>
>/* configuration information for autoinstall RH46 */
>HOSTNAME="E4CL021D.CMS.HHS.GOV"
>DASD="700,800,801,802,803,804,805,806,807,808,809,810,811,812,813,814"
>(This is the line I want to continue)
>NETTYPE="qeth"
>SUBCHANNELS="0.0.8300,0.0.8301,0.0.8302"
>PORTNAME=" "
>IPADDR="10.15.49.249"
>NETWORK="10.15.0.0"
>NETMASK="255.255.0.0"
>BROADCAST="10.15.0.255"
>GATEWAY="10.15.0.254"
>SEARCHDNS="cms.hhs.gov"
>LAYER2=0
>DNS="10.15.0.117"
>MTU="1500"
>
>Thank You,
>
>Terry Martin
>Lockheed Martin - Information Technology
>z/OS & z/VM Systems - Performance and Tuning
>Cell - 443 632-4191
>Work - 410 786-0386
>[EMAIL PROTECTED]
Re: z/Linux CONF file continuation syntax
>>> On 10/3/2008 at 1:34 PM, in message <[EMAIL PROTECTED]>,
"Martin, Terry R. (CMS/CTR) (CTR)" <[EMAIL PROTECTED]> wrote:
> Hi
>
> I am executing a CONF file and Parmfile so to bring my z/Linux guest
> into RESCUE mode. On the DASD statement of the CONF file I need to
> present all of the MDISK that are defined to this guest. The problem is
> that the number of MDISKs are too many for one line and I am not sure
> what the syntax is to continue the DASD statement on another line.
> Anyone have an idea?
>
> /* configuration information for autoinstall RH46 */
> HOSTNAME="E4CL021D.CMS.HHS.GOV"
> DASD="700,800,801,802,803,804,805,806,807,808,809,810,811,812,813,814"
> (This is the line I want to continue)
> NETTYPE="qeth"
> SUBCHANNELS="0.0.8300,0.0.8301,0.0.8302"
> PORTNAME=" "
> IPADDR="10.15.49.249"
> NETWORK="10.15.0.0"
> NETMASK="255.255.0.0"
> BROADCAST="10.15.0.255"
> GATEWAY="10.15.0.254"
> SEARCHDNS="cms.hhs.gov"
> LAYER2=0
> DNS="10.15.0.117"
> MTU="1500"

It all gets wrapped anyway, so if you put things all the way out to column
80, you can just keep typing in column one of the next line. In your case,
you can do this as well:

DASD="700,800-809,810-814"

I didn't include 80A through 80F because that would shift the device names
of the volumes from 810-814. If they had been used, however, you could just
do "800-814".

Mark Post
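[Editor's note: Mark's point about ranges silently covering 80A-80F is easy
to check mechanically. Here is a hypothetical Python helper — not part of
the installer, purely an illustration — that expands a DASD= value the way
the hexadecimal range syntax implies:]

```python
def expand_dasd(spec: str) -> list:
    """Expand a DASD= value such as "700,800-809,810-814".

    Device numbers are hexadecimal, so a range like "800-814" also
    includes 80A-80F -- exactly the pitfall Mark describes.
    """
    devices = []
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = part.split("-")
            # Preserve the width of the device numbers (e.g. 3 hex digits).
            devices.extend(format(n, "0{}x".format(len(lo)))
                           for n in range(int(lo, 16), int(hi, 16) + 1))
        else:
            devices.append(part.lower())
    return devices

print(expand_dasd("700,800-803"))
# ['700', '800', '801', '802', '803']
print(len(expand_dasd("800-814")))
# 21 -- not 15, because 80a-80f are swept in by the hex range
```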
z/Linux CONF file continuation syntax
Hi

I am executing a CONF file and Parmfile so as to bring my z/Linux guest
into RESCUE mode. On the DASD statement of the CONF file I need to present
all of the MDISKs that are defined to this guest. The problem is that the
number of MDISKs is too many for one line, and I am not sure what the
syntax is to continue the DASD statement on another line. Anyone have an
idea?

/* configuration information for autoinstall RH46 */
HOSTNAME="E4CL021D.CMS.HHS.GOV"
DASD="700,800,801,802,803,804,805,806,807,808,809,810,811,812,813,814"
(This is the line I want to continue)
NETTYPE="qeth"
SUBCHANNELS="0.0.8300,0.0.8301,0.0.8302"
PORTNAME=" "
IPADDR="10.15.49.249"
NETWORK="10.15.0.0"
NETMASK="255.255.0.0"
BROADCAST="10.15.0.255"
GATEWAY="10.15.0.254"
SEARCHDNS="cms.hhs.gov"
LAYER2=0
DNS="10.15.0.117"
MTU="1500"

Thank You,

Terry Martin
Lockheed Martin - Information Technology
z/OS & z/VM Systems - Performance and Tuning
Cell - 443 632-4191
Work - 410 786-0386
[EMAIL PROTECTED]
Re: MAXCONN
That is not a problem. We have control of the EXEC that creates the sockets.

Regards,
Richard Schuh

> -Original Message-
> From: The IBM z/VM Operating System
> [mailto:[EMAIL PROTECTED] On Behalf Of Rob van der Heij
> Sent: Friday, October 03, 2008 12:52 AM
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: MAXCONN
>
> On Fri, Oct 3, 2008 at 9:46 AM, Alan Altmark
> <[EMAIL PROTECTED]> wrote:
> > A single IUCV connection handles multiple sockets.
>
> But your average application may not do that and could also
> establish multiple connections to the same stack, right?
>
> From the VMDBK you follow the VMDIUCVB pointer to the IUCVB
> of the virtual machine. There you find the count for the
> number of active connections and the maximum from the
> directory. If you can't explain the number you see, look for
> Chuckie's LSTIUCV (or write your own to walk the chain of blocks).
>
> Rob
Re: MAXCONN
Thanks. That is what I thought, but wanted to verify.

Regards,
Richard Schuh

> -Original Message-
> From: The IBM z/VM Operating System
> [mailto:[EMAIL PROTECTED] On Behalf Of Alan Altmark
> Sent: Friday, October 03, 2008 12:46 AM
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: MAXCONN
>
> On Thursday, 10/02/2008 at 12:20 EDT, "Schuh, Richard"
> <[EMAIL PROTECTED]> wrote:
> > What consumes the connections specified in the MAXCONN option of the
> > TCPIP machine's directory? Does each open socket consume 1 connection,
> > or is it 1 connection for each guest that opens 1 or more sockets?
>
> A single IUCV connection handles multiple sockets.
>
> Alan Altmark
> z/VM Development
> IBM Endicott
Re: Tracking Hot Spots
I had come to that conclusion. There is no data that I can find in the
monitor records that can be used for this type of profiling.

Regards,
Richard Schuh

> -Original Message-
> From: The IBM z/VM Operating System
> [mailto:[EMAIL PROTECTED] On Behalf Of Alan Altmark
> Sent: Friday, October 03, 2008 1:32 AM
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: Tracking Hot Spots
>
> On Thursday, 10/02/2008 at 09:36 EDT, LOREN CHARNLEY
> <[EMAIL PROTECTED]> wrote:
> > Your best bet would be to investigate the Velocity Software Suite,
> > they are the only ones to have the performance monitor for VM.
>
> Loren, your statement that Velocity is the only vendor of performance
> monitoring software is incorrect. Both IBM and Velocity offer solutions.
>
> But a VM performance monitor is not an "execution profiler". To really
> know where a machine is spending its time, you have to look at an
> instruction trace. Depending on the workload, you MAY be able to get
> some idea by sampling the PSW as Rob suggests.
>
> Alan Altmark
> z/VM Development
> IBM Endicott
Paging volumes, size vs. number
We are sizing a new z/VM system for a Linux guest workload. We
traditionally use 3390-3 size devices for paging. We determined that we
need 16 3390-3's for this particular workload. Our DASD people asked if we
could use 3390-9's instead. Based on space, they want to give us 6 3390-9's
for paging (rounding up). Assuming we have four FICON channels, is there
any performance benefit to having more than 4 paging devices? I.e., is 16
devices better than 6 on 4 channels?

The system isn't built yet, so using a performance monitor isn't possible.

Dennis O'Brien

We are Borg of America. You will be assimilated. Resistance is futile.
Re: AUTOLOG
Thanks Alan, that is what I will do!

Thank You,

Terry Martin
Lockheed Martin - Information Technology
z/OS & z/VM Systems - Performance and Tuning
Cell - 443 632-4191
Work - 410 786-0386
[EMAIL PROTECTED]

-Original Message-
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On Behalf Of
Alan Altmark
Sent: Thursday, October 02, 2008 10:55 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: AUTOLOG

On Thursday, 10/02/2008 at 04:05 EDT, "Martin, Terry R. (CMS/CTR) (CTR)"
<[EMAIL PROTECTED]> wrote:
> Yes it was a privilege issue. Thanks again for all the responses!

Terry, you should do those CP commands in AUTOLOG1's profile, before you
XAUTOLOG RACFVM. AUTOLOG2 doesn't need any privilege except to XAUTOLOG
users.

Alan Altmark
z/VM Development
IBM Endicott
Re: VMARC download date
On Thu, Oct 2, 2008 at 5:41 PM, Schuh, Richard <[EMAIL PROTECTED]> wrote:

> You could make that perhaps a little better:
>    "...| sort 57.19 d | take 1 |..."

Or ".. | substr 57.19 | sort d | take 1 | .. " to reduce the working
storage for the sort stage (but only for very large data sets).

With the latest pipes, we can even make it much better*:

   spec printonly eof
      a: 57.19 -
      if first() then set #0:=a
      elseif a>>#0 then set #0:=a
      fi
   eof
      print #0 1

* But "better" is so subjective. The spec with the >> compare eliminates
the need for sorting and buffering the list; it just scans the records and
retains the largest entry. As I expected it uses ~100K less memory, and as
I should have predicted it uses a lot more CPU...

-Rob
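[Editor's note: Rob's tradeoff — sort-and-take versus a one-pass compare —
translates directly outside CMS Pipelines. A Python sketch of the two
approaches; the Pipelines column spec 57.19 (start column 57, length 19)
becomes the slice [56:75] here:]

```python
def latest_by_sort(lines):
    """The sort 57.19 d | take 1 approach: buffer all records,
    sort descending on the key columns, keep the first."""
    return sorted(lines, key=lambda l: l[56:75], reverse=True)[0]

def latest_by_scan(lines):
    """Rob's spec-with-compare idea: one pass, remember the record
    with the largest key. O(1) working storage instead of buffering
    the whole list, at the cost of a comparison per record."""
    best = None
    for line in lines:
        if best is None or line[56:75] > best[56:75]:
            best = line
    return best

# Records with 56 filler characters, then a 19-character sort key.
lines = ["x" * 56 + "%019d" % n for n in (3, 41, 7)]
print(latest_by_scan(lines)[56:])
# 0000000000000000041
```

Both return the same record; which is "better" depends, as Rob says, on
whether memory or CPU is the scarcer resource for the data set at hand.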
Re: Tracking Hot Spots
On Thursday, 10/02/2008 at 09:36 EDT, LOREN CHARNLEY <[EMAIL PROTECTED]>
wrote:
> Your best bet would be to investigate the Velocity Software Suite, they
> are the only ones to have the performance monitor for VM.

Loren, your statement that Velocity is the only vendor of performance
monitoring software is incorrect. Both IBM and Velocity offer solutions.

But a VM performance monitor is not an "execution profiler". To really know
where a machine is spending its time, you have to look at an instruction
trace. Depending on the workload, you MAY be able to get some idea by
sampling the PSW as Rob suggests.

Alan Altmark
z/VM Development
IBM Endicott
Re: MAXCONN
On Fri, Oct 3, 2008 at 9:46 AM, Alan Altmark <[EMAIL PROTECTED]> wrote:
> A single IUCV connection handles multiple sockets.

But your average application may not do that and could also establish
multiple connections to the same stack, right?

From the VMDBK you follow the VMDIUCVB pointer to the IUCVB of the virtual
machine. There you find the count for the number of active connections and
the maximum from the directory. If you can't explain the number you see,
look for Chuckie's LSTIUCV (or write your own to walk the chain of blocks).

Rob
Re: MAXCONN
On Thursday, 10/02/2008 at 12:20 EDT, "Schuh, Richard" <[EMAIL PROTECTED]>
wrote:
> What consumes the connections specified in the MAXCONN option of the
> TCPIP machine's directory? Does each open socket consume 1 connection, or
> is it 1 connection for each guest that opens 1 or more sockets?

A single IUCV connection handles multiple sockets.

Alan Altmark
z/VM Development
IBM Endicott