Re: RSCS-z/OS question....

2006-04-25 Thread Dave Jones
Many thanks to all those who responded to my query. I've got it 
working now. The list comes through again. ;-)


DJ

Roy, Bruce wrote:
Dave, 


Here's the RSCS definition for our MVS link, where 1C0 is the address of
our CTCA:


LINKDEF XMVS TYPE NJE  LINE 1C0  Q PRI  DP 4 NOAST
LINKDEF XMVS SLOWDOWN 9900 9500
   PARM XMVS BUFF=3840 ST=1 TA=0


Here's an RSCS GCS EXEC for handling a vCTCA and starting the link:
/**/
ADDRESS 'RSCS'
'CP DETACH 1C0'
'CP DEFINE CTCA 1C0'
'CP COUPLE 1C0 MVSTEST EC0'
'START XMVS'
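(You can drive an EXEC like this from the RSCS machine's PROFILE GCS, so
the vCTCA gets rebuilt and the link restarted automatically whenever RSCS
comes up.)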


Here's an MVS EXEC that our CMS users use to send a JOB to MVS: 
/* This exec sends a file to MVS as a job for execution. */

address 'COMMAND'
parse upper arg fn ft fm .
select
  when fn = '' then do            /* No argument passed */
    'CP SPOOL PUNCH RSCS'         /* Set PUNCH to send to RSCS */
    'CP TAG DEV PUNCH XMVS JOB'   /* Set TAG values so file goes to MVS */
    exit rc                       /* Exit exec */
  end
  when ft = '' then signal help   /* Only one argument passed */
  when fm = '' then fm = 'A'      /* If filemode is not passed, use the A-disk */
  otherwise nop
end /* select */
'CP SPOOL PUNCH RSCS'             /* Set PUNCH to send to RSCS */
'CP TAG DEV PUNCH XMVS JOB'       /* Set TAG values so file goes to MVS */
'PUNCH' fn ft fm '(NOHEADER'      /* PUNCH job with NOHEADER option */
exit rc
HELP:
say 'The MVS EXEC will issue the following commands in order to send a job to'
say 'MVS through RSCS:'
say '  CP SPOOL PUNCH RSCS'
say '  CP TAG DEV PUNCH XMVS JOB'
say '  PUNCH fn ft fm (NOHEADER'
say 'where fn ft fm is the filename, filetype, and filemode of the file.'
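With that in place, a user just types something like MVS PAYROLL JCL A
(the file name is purely illustrative) and the job lands in the JES2
input queue on the other side.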


Here's an MVSQ EXEC that's used to send commands to MVS (yes, it was
written before we had REXX!):
 
&TRACE OFF

&A1 = &1
&IF .&A1 EQ .? &GOTO -TELL
IDENTIFY (STACK LIFO
&READ VARS &USERID &D &D &D &RSCS
&IF .&A1 EQ . &A1 = &USERID
&A = &PIECE OF &A1 1 1
&IF .&A EQ .$ &GOTO -FIX
&A = $DJOBQ'
&A = &CONCAT OF &A &A1 '
-SMSG
&NODE = XMVS
EXECIO * CP (STEM &CPREP STRING TAG QUERY DEV PUN
&IF &CPREP0 LT 2 &GOTO -BYNODE
   &N = &LOCATION OF JOB &CPREP2
   &IF &N EQ 0 &GOTO -BYNODE
   &N = &N - 1
   &NODE = &SUBSTR OF &CPREP2 1 &N
-BYNODE
CP SMSG &RSCS CMD &NODE &A
&IF &RETCODE EQ -3 &TYPE Communications with MVS temporarily unavailable
&EXIT
-FIX
&A = &A1
&GOTO -SMSG
-TELL
&TYPE The format for the MVSQ command is:
&TYPE MVSQ jobname
&TYPE (where the userid is used as the jobname if no name is given)
&TYPE or
&TYPE MVSQ command
&TYPE (where 'command' is a JES2 command, preceded by a $)


If you need the MVS end of the definitions, let me know and I'll contact
our MVS colleague.  I'm pretty sure that the MVS end does NOT use VTAM,
or the LINKDEF TYPE in the above definition would be SNANJE.
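
From memory, the JES2 side of an NJE-over-CTC link looks roughly like
this -- node names and the device number here are invented, so treat it
as a sketch rather than our actual statements:

NJEDEF  OWNNODE=2,NODENUM=2,LINENUM=1   /* this JES2 is node 2     */
NODE(1) NAME=VMNODE                     /* the VM/RSCS node        */
NODE(2) NAME=MVSTEST                    /* this z/OS system        */
LINE(1) UNIT=EC0                        /* the CTC as z/OS sees it */

The operator then starts it with something like $S LINE(1) followed by
$SN,LINE=1.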

Hope this helps. 

Bruce Roy 
University of Connecticut


-Original Message-
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
Behalf Of Dave Jones
Sent: Thursday, April 20, 2006 2:19 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: RSCS-z/OS question

Hi, Gang

I have a problem I'm hoping somebody on this list can assist me 
with. I need to configure an NJE connection between a first-level 
RSCS server and a 2nd-level z/OS guest. The client wants to be able to 
submit z/OS batch-type jobs from a CMS user id on the 1st-level VM system 
to the 2nd-level z/OS one, and get the prt/pun output back to the CMS
user.

I have a virtual CTC connection set up between VM and z/OS already, and 
z/OS can see the CTC, vary it online, etc. What I am missing are the 
RSCS and JES2 (I think, or maybe I need to use VTAM?) line configuration
statements.

Does anyone have a set of working examples they would be willing to 
share? TIA.


Have a good one.

DJ


Re: Online Tutorial

2006-04-25 Thread Pamela Christina in z/VM - Endicott NY
Hi Seb,
The VM/ESA-based online tutorial was removed in 2004.
It was based on the book, CMS Primer.

There is a z/VM V5.1-based CMS Primer in the online library.
http://www.vm.ibm.com/library/

Here's the PDF of the CMS Primer,
 http://publibz.boulder.ibm.com/epubs/pdf/hcsd6b00.pdf

You can also download copies from the book center/bookshelves.


Regards, Pam C


3390 Mod 3 versus 3390 Mod 9s

2006-04-25 Thread Dusha, Cecelia Ms. WHS/ITMD
I have a question that pertains to performance.

 

We currently have 3390 mod 3 defined volumes.  The customer requires a
larger minidisk size than will fit on a 3390 mod 3.  We are planning
to create 3390 mod 9s for their larger minidisks.  Would someone explain
the performance hit that will occur by placing their data on a larger
volume?  Maybe it is insignificant, but I seem to recall the architecture
permits a limited number of concurrent accesses to the device.  If there
are a large number of users who require access at a given time, could the
users end up waiting for the device?

 

Please advise.

 

Thank you.

Cecelia Dusha

 


Re: 3390 Mod 3 versus 3390 Mod 9s

2006-04-25 Thread David Boyes
> We currently have 3390 mod 3 defined volumes.  The customer requires a
> larger mini disk size than what will fit on a 3390 mod 3.  We are
> planning to create 3390 mod 9s for their larger mini disks.

If it's a CMS user, use SFS. That's exactly what it's for, and all those
restrictions are pretty much moot. The semantics of SFS from a user
viewpoint -- with one exception -- are pretty much the same as
minidisks. The exception is what happens to open files if an application
is accidentally (or intentionally) interrupted -- on SFS, those files
get rolled back to their initial state unless you explicitly commit
interim changes. You can control this, but that's the default behavior.
IMHO, users depending on partial results should be corrected, but c'est
la vie. 
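
If an application really does want its interim results kept, it can
commit them explicitly. From REXX that's a one-liner through the CSL
interface -- a minimal sketch, with error handling mostly omitted:

/* Commit the current SFS work unit */
call csl 'DMSCOMM retcode reascode'
if retcode <> 0 then say 'Commit failed, reason code' reascode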

Note: do NOT put users in one of the default VMSYSx SFS pools -- you'll
hate yourself come your next upgrade! Create a separate pool for user
data. 
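Once the separate pool exists, enrolling a user is a one-liner -- the
pool name and allocation here are invented:

ENROLL USER CECELIA USRPOOL: (BLOCKS 500000

and the user gets at it with ACCESS USRPOOL:CECELIA. A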

> Would someone explain
> the performance hit that will occur by placing their data on a larger
> volume.

For background information, the problem is that in the XA/ESA I/O
architecture, there can be only one physical I/O in progress per device
number. If you create larger volumes, you still have the limitation of
one I/O per device number, and you've put more data behind that single
device number, which usually makes the bottleneck worse. That's what PAV
is supposed to help with - it provides additional device exposures per
device number for CP to use to schedule physical I/O. 
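
On the z/VM side (5.2, dedicated devices) the mechanics are just
attaching the base and its aliases to the guest -- device numbers below
are made up:

CP ATTACH E100 TO LINUX1        (the base device)
CP ATTACH E1FE-E1FF TO LINUX1   (its alias exposures)
CP QUERY PAV E100               (check the base/alias relationship)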

SFS uses multiple extents (it's really a cut-down version of DB/2 and
shares a lot of behaviors with DB/2 VM), and blocks from a file are
distributed across multiple physical disk extents internally. Given
that, you bypass the 1 I/O per device limitation in that record or block
I/O to a CMS file translates to multiple physical I/O operations on
multiple physical devices (multiple I/O ops can occur simultaneously on
different physical devices). You also get hierarchical directories and
file-level ACLs, and a bunch of other nice stuff that I suspect your
employer would like a lot. 

As a system admin, you do still have to think a little bit about how you
distribute the chunks of disk assigned to the SFS server, but it's at a
higher level than an individual user. An unfortunate side effect is
that using SFS will change your DR and backup strategy somewhat: you
can't just DDR an SFS filepool, and you need to understand the log
and recovery parts of SFS, but in general it's worth the effort.
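For the backup piece, the filepool server has its own utilities --
roughly, FILESERV BACKUP to dump a filepool and FILESERV RESTORE to bring
one back; check the CMS File Pool Planning, Administration, and Operation
book rather than trusting my memory on the operands.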


David Boyes
Sine Nomine Associates


Re: z/VM I/O Concurrency

2006-04-25 Thread Dave Jones

David, would it be possible for you to collect some performance data on
the I/O improvements you're seeing there? Perhaps some CP MONITOR data
that we could analyze? It would be nice to know if PAV is really 
something that Linux on z/VM sites should be taking advantage of.


What was that saying again? Can't measure it; not interested? 
Something like that, from somebody on the West Coast...does that ring
any bells? ;-)


DJ

David Kreuter wrote:

With PAV support for attached/dedicated devices you can achieve I/O
concurrency. I have been running some tests from Linux guests,
driving simple I/O, and the results with PAV are most impressive. I
have gotten as far as 1 base and 2 aliases so far (I'll keep
increasing the number of aliases until the break-even point). This is
with z/VM 520 and SUSE SLES9.

You need to do some setup in IOCP, z/VM, and linux (EVMS). Works
well.
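
The IOCP piece is just defining the aliases as 3390A devices behind the
same control unit as the 3390B bases. A made-up fragment, with channel
path statements omitted:

* addresses and counts are invented
CNTLUNIT CUNUMBR=E100,UNIT=2107
IODEVICE ADDRESS=(E100,16),CUNUMBR=E100,UNIT=3390B    bases
IODEVICE ADDRESS=(E1F0,8),CUNUMBR=E100,UNIT=3390A     PAV aliases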

I look forward to seeing this soon with my Oracle servers. David

-Original Message-
From: The IBM z/VM Operating System on behalf of Raymond Noal
Sent: Tue 4/25/2006 1:45 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: [IBMVM] z/VM I/O Concurrency

Dear List,

I was following a topic thread on the IBM-MAIN list server where
someone wanted to create multiple Linux LPARs instead of running
Linux under z/VM. One respondent stated that z/VM only allows one I/O
to a disk at a time. Is this really true (Allan??)? Does one I/O
per disk apply only to minidisks, or to attached disks as well? I
find it hard to believe that z/VM would be this restrictive.

Your thoughts.

HITACHI DATA SYSTEMS

Raymond E. Noal
Lab Manager, San Diego Facility
Office: (858) 537-3268
Cell: (858) 248-1172


Re: 3390 Mod 3 versus 3390 Mod 9s

2006-04-25 Thread Dusha, Cecelia Ms. WHS/ITMD
Presently the application is split onto several disks.  The data is to be
combined onto one disk.  The data is a NOMAD database.

The DASD is on a DS6800.

I thought PAV was not an option for VM.  Does z/VM 5.2 support PAV?

I had not considered SFS as a possibility...  SFS would add more overhead.

Thank you.

Cecelia Dusha
-Original Message-
From: Schuh, Richard [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, April 25, 2006 12:48 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: 3390 Mod 3 versus 3390 Mod 9s

Tom and Co.,

I see no statement that there is any intent to combine the 3 existing disks.
The note simply states that 3390-09s are to be created. If the existing
disks, excepting the one that requires more space, are to remain where they
are, there probably would be no performance hit if no other active minidisk
is created on the large volume. If this is an already existing minidisk that
is to be expanded, the existing use patterns would probably continue.

Regards,
Richard Schuh

 -Original Message-
From:   The IBM z/VM Operating System [mailto:[EMAIL PROTECTED]  On
Behalf Of Tom Duerbusch
Sent:   Tuesday, April 25, 2006 9:32 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: 3390 Mod 3 versus 3390 Mod 9s

If you were near the performance limits of your current (3) 3390-3
volumes, then you don't want to combine them onto a 3390-9.

To really know, you need to know the I/O rates on the 3 volumes.

Also, you need to know what DASD you are actually using.  Modern DASD
(RAID, that is) with sufficient cache can really sustain much higher
I/O rates than we are used to thinking about.

If this is for minidisks and not SFS, DB2, or guest machines, minidisk
cache will eliminate most of your concerns.

A standard CMS user only does 1 I/O at a time.  So if it is for just
one user, don't worry about it.  If it is for multiple users reading it,
then MDC takes over.  If you have multiple users writing to it, then we
go back to the I/O load.
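
A quick sanity check is CP QUERY MDCACHE, which shows whether minidisk
cache is enabled and how much storage it is using; your performance
monitor can tell you the hit ratio.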

Tom Duerbusch
THD Consulting

 [EMAIL PROTECTED] 4/25/2006 8:03 AM 
I have a question that pertains to performance.

 

We currently have 3390 mod 3 defined volumes.  The customer requires a
larger minidisk size than will fit on a 3390 mod 3.  We are planning
to create 3390 mod 9s for their larger minidisks.  Would someone explain
the performance hit that will occur by placing their data on a larger
volume?  Maybe it is insignificant, but I seem to recall the architecture
permits a limited number of concurrent accesses to the device.  If there
are a large number of users who require access at a given time, could the
users end up waiting for the device?

Please advise.

 

Thank you.

Cecelia Dusha

 


Re: z/VM I/O Concurrency

2006-04-25 Thread Schuh, Richard

Do you also still have a 601?

Regards,
Richard Schuh



-Original Message-
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED]] On Behalf Of Bruce Hayden
Sent: Tuesday, April 25, 2006 12:04 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: z/VM I/O Concurrency



We still have one in the museum (also known as the heritage center)
here at Endicott... :-)

On 4/25/06, Huegel, Thomas [EMAIL PROTECTED] wrote:

> And I thought I was the only one left that remembered the six foot
> tall disk drive on the RAMAC 305.

-- 
Bruce Hayden
IBM Global Technology Services System z Linux
Endicott, NY 








Re: z/VM I/O Concurrency

2006-04-25 Thread Jim Bohnsack


Wasn't that the one with the single read/write head on an arm that would
do a seek to the platter before it started going after the record? If
I remember correctly, CEs would normally keep a big pan underneath to
catch the constant drippings of hydraulic oil. I remember something
like that, although it may have been the S/360 reincarnation of the 2nd
generation drive, just as the 2311 bore a striking resemblance to a 1311.
Jim 
At 02:34 PM 4/25/2006, you wrote:
> And I thought I was the only one left that remembered the six foot tall
> disk drive on the RAMAC 305.

-Original Message-
From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On Behalf Of Schuh, Richard
Sent: Tuesday, April 25, 2006 1:13 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: z/VM I/O Concurrency



Regards,
Richard Schuh

> . Again, this is a HARDWARE restriction that goes back to the dark ages
> of S/360.

Or farther. Anyone remember the disks used by the 1410 (the model number
escapes me, as do so many things of late) or the RAMAC 305? Besides, the
S/360 introduced "The Age of Enlightenment", didn't it?
--
John McKown
Senior Systems Programmer
HealthMarkets
Keeping the Promise of Affordable Coverage
Administrative Services Group
Information Technology




Jim Bohnsack
Cornell Univ.
(607) 255-1760





Re: z/VM I/O Concurrency

2006-04-25 Thread Mike Myers
'fraid not! They had one at John Deere in Atlantic, Iowa. I was a field
engineer at the time and that was one of my accounts. It was glass
enclosed and you could watch the access arm move.

Mike Myers

----- Original Message -----
From: Huegel, Thomas
To: IBMVM@LISTSERV.UARK.EDU
Sent: Tuesday, April 25, 2006 2:34 PM
Subject: Re: z/VM I/O Concurrency

> And I thought I was the only one left that remembered the six foot tall
> disk drive on the RAMAC 305.
  
-Original Message-
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED]] On Behalf Of Schuh, Richard
Sent: Tuesday, April 25, 2006 1:13 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: z/VM I/O Concurrency

Regards,
Richard Schuh

> . Again, this is a HARDWARE restriction that goes back to the dark ages
> of S/360.

Or farther. Anyone remember the disks used by the 1410 (the model number
escapes me, as do so many things of late) or the RAMAC 305? Besides, the
S/360 introduced "The Age of Enlightenment", didn't it?
--
John McKown
Senior Systems Programmer
HealthMarkets
Keeping the Promise of Affordable Coverage
Administrative Services Group
Information Technology


Re: 3390 Mod 3 versus 3390 Mod 9s

2006-04-25 Thread Tom Duerbusch
VM supports PAV for guests.  I don't think it supports it natively, or
needs to.

PAV is a chargeable feature on the DS6800.  I elected not to buy it on
our DS6800.

Tom Duerbusch
THD Consulting

 [EMAIL PROTECTED] 4/25/2006 12:17 PM 
Presently the application is split onto several disks.  The data is to be
combined onto one disk.  The data is a NOMAD database.

The DASD is on DS6800.

I thought PAV was not an option for VM.  Does z/VM 5.2 support PAV?

I had not considered SFS as a possibility...  SFS would add more
overhead.

Thank you.

Cecelia Dusha
 


Re: DFSMS RMS ISPF

2006-04-25 Thread Shimon Lebowitz
It is correct.
We got rid of ISPF long ago (we had only installed
it for the NDM product to begin with) but we do use
RMS for the tape library.

Shimon

On 25 Apr 2006 at 16:24, Lee Stewart wrote:

> Hi all...I must be getting old, but
>
> I seem to remember not too long ago a discussion here about installing
> only the RMS part of DFSMS for 3494 support, and *I think* someone said
> you don't need ISPF for that part of DFSMS.
>
> I searched the archives several ways and couldn't find any discussion
> like that, so I'm asking here...  Is it correct?  And does anyone else
> remember that discussion, or am I...?
>
> Thanks as always,
> Lee
> --
> Lee Stewart, Senior SE
> Sirius Enterprise Systems Group
> Phone: (303) 798-2954
> Fax: (720) 228-2321
> [EMAIL PROTECTED]
> www.siriuscom.com


Re: DFSMS RMS ISPF

2006-04-25 Thread Lee Stewart

Thanks Marcy & Shimon...

Marcy Cortes wrote:

Yes, you are correct.   We have several systems with just RMS-only
DFSMS installed for 3494 support.   See the program directory for it.
http://www.vm.ibm.com/related/dfsms



Marcy Cortes


Lee Stewart, Senior SE
Sirius Enterprise Systems Group
Phone: (303) 798-2954
Fax: (720) 228-2321
[EMAIL PROTECTED]
www.siriuscom.com