Re: TCPNJE

2007-03-16 Thread Les Geer (607-429-3580)
>Not yet. I haven't been able to get in contact with anyone in the MVS or
>Network groups since Wednesday morning - it must be the cursed caller id
>thing - and could not pry the answer from them prior to that.
>
>
>Regards,
>Richard Schuh
>

Just for the heck of it, try using KEEPALIV=NO and see if that makes
a difference.
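(For reference, a hypothetical sketch of what that PARM statement might look
like in the RSCS CONFIG file -- the link name JES2NJE and the host address are
placeholders, and the operands are simply the ones quoted elsewhere in this
thread:

   PARM JES2NJE HOST=ipaddr ITO=100 KEEPALIV=NO

Exact operand names and defaults should be checked against the RSCS Planning
and Installation manual for your level.)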

Best Regards,
Les Geer
IBM z/VM and Linux Development


Re: TCPNJE

2007-03-16 Thread Schuh, Richard
Not yet. I haven't been able to get in contact with anyone in the MVS or
Network groups since Wednesday morning - it must be the cursed caller id
thing - and could not pry the answer from them prior to that. 


Regards, 
Richard Schuh 


-Original Message-
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
Behalf Of Les Geer (607-429-3580)
Sent: Friday, March 16, 2007 1:43 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: TCPNJE

>Since the JES side apparently has taken the link down, done a restart, 
>and is in the active state waiting for us to come back on, the 
>connection is immediate when RSCS does its restart.
>
>BTW, the KEEPALIV=YES in the PARM for the link had no effect.  The 
>full parm specification was "host=ipaddr ito=100 keepaliv=yes".
>
>Is there someone who has a working TCPNJE link between z/OS and VM who 
>would be willing to share their definitions for both RSCS and JES?

If you had not previously used the KEEPALIV= PARM, YES is the default.
Do we know what is taking the JES side down?  This isn't clear to me.

Best Regards,
Les Geer
IBM z/VM and Linux Development


Re: TCPNJE

2007-03-16 Thread Les Geer (607-429-3580)
>Since the JES side apparently has taken the link down, done a restart,
>and is in the active state waiting for us to come back on, the
>connection is immediate when RSCS does its restart.
>
>BTW, the KEEPALIV=YES in the PARM for the link had no effect.  The
>full parm specification was "host=ipaddr ito=100 keepaliv=yes".
>
>Is there someone who has a working TCPNJE link between z/OS and VM who
>would be willing to share their definitions for both RSCS and JES?

If you had not previously used the KEEPALIV= PARM, YES is the default.
Do we know what is taking the JES side down?  This isn't clear to me.

Best Regards,
Les Geer
IBM z/VM and Linux Development


Re: TCPNJE

2007-03-16 Thread Schuh, Richard
Since the JES side apparently has taken the link down, done a
restart, and is in the active state waiting for us to come back on, the
connection is immediate when RSCS does its restart.

BTW, the KEEPALIV=YES in the PARM for the link had no effect.  The full
parm specification was "host=ipaddr ito=100 keepaliv=yes".

Is there someone who has a working TCPNJE link between z/OS and VM who
would be willing to share their definitions for both RSCS and JES?
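(For reference, a minimal hypothetical sketch of the RSCS side only, built from
the PARM operands already quoted in this thread -- the link name JES2NJE and
the HOST value are placeholders, and the matching JES2 NJE definitions on the
z/OS side are not shown:

   LINKDEFINE JES2NJE TYPE TCPNJE
   PARM       JES2NJE HOST=ipaddr ITO=100 KEEPALIV=YES

The link would then be started with an RSCS START command for that link name.
Check the RSCS Planning and Installation manual for the full LINKDEFINE and
PARM syntax at your level.)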


Regards, 
Richard Schuh 


-Original Message-
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
Behalf Of Les Geer (607-429-3580)
Sent: Thursday, March 15, 2007 4:46 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: TCPNJE

>As far as I know, JES is aware of the state. Their side apparently has 
>an autostart because whenever the link drops and RSCS restarts it from 
>the VM side, the signons are done and the link is reestablished for another
>22 minutes.
>

But what is causing the JES side to drain?  Is it the keepalive (the same
thing that is causing RSCS to drain), or is the JES side going down first,
and then the keepalive takes down RSCS?

Best Regards,
Les Geer
IBM z/VM and Linux Development


Re: TCPNJE

2007-03-16 Thread Schuh, Richard
That is not explicitly coded, so it depends on the default setting. That said,
I believe the wording of the parameter documentation says that it is
needed for NJE or SNANJE links between VM and JES, or that a CONNECT can be used
in the JES initialization. It does not appear to be pertinent to a TCPNJE link.


Regards, 
Richard Schuh 


-Original Message-
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On Behalf Of 
Raymond Noal
Sent: Thursday, March 15, 2007 5:05 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: TCPNJE

Richard,

Just curious, do you have PATHMGR=NO for your RSCS JES/2 NODE definition?

HITACHI
 DATA SYSTEMS
Raymond E. Noal
Senior Technical Engineer
Office: (408) 970 - 7978 

-Original Message-
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On Behalf Of 
Schuh, Richard
Sent: Thursday, March 15, 2007 4:46 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: TCPNJE

As far as I know, JES is aware of the state. Their side apparently has an 
autostart because whenever the link drops and RSCS restarts it from the VM 
side, the signons are done and the link is reestablished for another
22 minutes.

Regards,
Richard Schuh 


-Original Message-
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
Behalf Of Les Geer (607-429-3580)
Sent: Thursday, March 15, 2007 2:12 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: TCPNJE

>You've gotta be kidding. That or your publications people are. The only
>mention of KEEPALIVE=YES in any RSCS manual is in a sample config 
>file in Appendix B of the RSCS Planning and Installation. Is this one 
>of those, "It is so obvious we don't need to document it," things?
>
>Where is the default documented?

It is documented in the RSCS Planning and Installation manual, although
the option is KEEPALIV= (not KEEPALIVE) and the default is YES.  So RSCS
would have enabled keepalive for the socket session; that is probably
what is happening.
From the other threads, if JES has defined an ITO= parameter, I would
think they would have sent a signoff record when the link went down, not
just left the link in a 'gone' state.  I bet something else is taking
the link away.  Is the JES side aware of the link state?

Best Regards,
Les Geer
IBM z/VM and Linux Development


Re: Historical curiosity question.

2007-03-16 Thread Tom Duerbusch
Why? That's easy.
 
CMS was developed in the '60s.  There was no concept of PCs or their disk 
structure at that time.  Memory was very expensive (hence the 512-byte blocks) 
and disk was too expensive to waste.  Most of what was going to live under CMS 
was files like the ones we XEDIT, not data files.  
 
The CMS minidisk structure is very efficient and very forgiving with crashes.  
It is very rare that a crash would corrupt a minidisk.  
 
And when CMS was put under CP, the only concern was being able to map multiple 
smaller minidisks onto a larger volume as efficiently as possible.  
 
Back in the early '70s, a programmer might cost you $10K.  A megabyte of main memory 
might cost you $1M.  With that kind of cost difference, you solved what 
problems you could with manpower.  
 
Most of us laughed when VSAM was announced.  Buffers in memory?  Forget that 
garbage!  We are paging too much as it is.  In '79, with the R* ("R-star") white paper 
(when the relational database concept was defined): never going to work!  Direct 
I/O!  Now that works!
 
I laugh at a lot of things we used to believe.  And in 10 years, I will laugh at 
what I believe now.
 
Tom Duerbusch
THD Consulting

>>> LOREN CHARNLEY <[EMAIL PROTECTED]> 3/15/2007 10:30 AM >>>
John,

I have a PF key set up in MAINT that lists mdisks in different ways, one of
which might be what you are looking for. I actually run this every time
I update the directory and run an edit on it; I can spot an overlap
easily this way as well.

PF06 DELAY DISKMAP USER#DIRMAP USER(GAPFILE INCLUDE LINKS#DIRECTXA (EDIT
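(A rough sketch of what that key does, assuming '#' is the default logical
line-end character: pressing it queues three commands that run in turn,
roughly:

   DISKMAP USER
   DIRMAP USER (GAPFILE INCLUDE LINKS
   DIRECTXA (EDIT

i.e. a minidisk map, a directory map with gap and link information, and a
DIRECTXA pass with the EDIT option to check the directory without putting it
online.)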

Loren Charnley, Jr.
IT Systems Engineer
Family Dollar Stores, Inc.
(704) 847-6961 Ext. 7043
(704) 708-7043
[EMAIL PROTECTED] 

-Original Message-
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
Behalf Of McKown, John
Sent: Thursday, March 15, 2007 10:43 AM
To: IBMVM@LISTSERV.UARK.EDU 
Subject: Historical curiosity question.

This is not important, but I just have to ask this. Does anybody know
why the original designers of VM did not do something for "minidisks"
akin to an OS/360 VTOC? Actually, it would be more akin to a "partition
table" on a PC disk. It just seems that it would be easier to maintain
if there were "something" on the physical disk which contained
information about the minidisks on it. Perhaps with information such as:
start cylinder, end cylinder, owning guest, read password, etc.
CP-owned volumes have an "allocation map"; this seems to me to be an
extension of that concept.

Just curious.

--
John McKown
Senior Systems Programmer
HealthMarkets
Keeping the Promise of Affordable Coverage
Administrative Services Group
Information Technology




Re: PERFKIT error

2007-03-16 Thread Dave Ross
You probably need APAR VM64152.

--- Mikhael Ramirez Joaquin <[EMAIL PROTECTED]>
wrote:

>  
> Hi Guys,
> 
> We are running z/VM 4.4 on one of our z9 boxes, and we
> implemented PERFKIT to monitor our z/VM. But when I'm
> running PERFKIT, after a few hours (almost a day) it
> dumps and gives me an error like the one below:
> 
> Dumping LOC R0979
> 
> Dumping LOC R097A
> 
> Dumping LOC R097B
> 
> Dumping LOC R097C
> 
> Dumping LOC R097D
> 
> Dumping LOC R097E
> 
> Dumping LOC R097F
> 
> Command complete
> 
> RDR FILE 0040 SENT FROM PERFSVM  PRT WAS 0040 RECS
> 411K CPY  001 A
> NOHOLD NOKEEP
> DMSABE141T Operation exception occurred at 80E4BCA8
> in routine PERFKIT
> 
> 
> Has anyone experienced this before?
> 
> Thanks for your replies!
> 
> Regards,
> 
> Mikhael
> 


Re: Historical curiosity question.

2007-03-16 Thread Rob van der Heij

On 3/16/07, Alan Altmark <[EMAIL PROTECTED]> wrote:


Sir, let's not throw the baby out with the bathwater, eh?  There are all
sorts of places where the underlying hardware does *not* shine through to
the guest.  Example: the integrated 3270 console.  VM continues to run
under VM just fine, albeit at a lower level of "awareness" of its
surroundings.


I did not mean to say that imperfect virtualization is bad. It's an
obvious trade-off, and in most cases a good thing, because for
virtualization to make sense you do not want to be bothered with the
obligations that come with having all the bits in place.
Virtualization works because you *can* abstract from the details.

What I tried to explain is that in-band controls mean that CP takes
part of the resources for itself and hands out the rest to the
guests. With out-of-band controls, CP requires some other technology
(outside the architecture) that it does not virtualize for a guest. At
that point you're unable to stack.

And the built-in 3270 that you bring up is indeed one of those,
because CP does not virtualize it for the guest (unlike the line mode
system console with VINPUT and friends). Fortunately we don't need
that to be virtualized to run VM because we have something else that
works.

Rob


Re: PERFKIT error

2007-03-16 Thread Aria Bamdad
This was a known bug.  Please apply the latest service.



On Fri, 16 Mar 2007 11:15:35 +0800 Mikhael Ramirez Joaquin said:
>
>Hi Guys,
>
>We are running z/VM 4.4 on one of our z9 boxes, and we implemented PERFKIT to
>monitor our z/VM. But when I'm running PERFKIT, after a few hours
>(almost a day) it dumps and gives me an error like the one
>below:
>
>Dumping LOC R0979
>
>Dumping LOC R097A
>
>Dumping LOC R097B
>
>Dumping LOC R097C
>
>Dumping LOC R097D
>
>Dumping LOC R097E
>
>Dumping LOC R097F
>
>Command complete
>
>RDR FILE 0040 SENT FROM PERFSVM  PRT WAS 0040 RECS 411K CPY  001 A
>NOHOLD NOKEEP
>DMSABE141T Operation exception occurred at 80E4BCA8 in routine PERFKIT
>
>
>Has anyone experienced this before?
>
>Thanks for your replies!
>
>Regards,
>
>Mikhael


Re: Historical curiosity question.

2007-03-16 Thread Alan Altmark
On Friday, 03/16/2007 at 10:32 CET, Rob van der Heij <[EMAIL PROTECTED]> 
wrote:

> Suppose IBM would come up with something on the HMC to maintain the CP
> directory. An easy-to-use GUI application that talked to CP through
> some new hack in the SCLP area. That would make it impossible to run
> VM under VM unless also major parts of the HMC were virtualized
> (unlikely, at best you would have an option to deal with multiple VM
> images).

Sir, let's not throw the baby out with the bathwater, eh?  There are all 
sorts of places where the underlying hardware does *not* shine through to 
the guest.  Example: the integrated 3270 console.  VM continues to run 
under VM just fine, albeit at a lower level of "awareness" of its 
surroundings.

So we design CP to exist in an environment where he may be frustrated by 
the lack of underlying hardware functionality.  When we reach a point where 
doing that becomes too expensive and we can no longer tolerate the missing 
function, we declare a new "Architectural Level Set" (ALS) and the Great 
Wheel begins another revolution.  Further, you can rest assured that we 
will adjust CP to virtualize the new ALS so that we may continue with VM 
under VM.  (We couldn't develop z/VM without it! :-) )

Alan Altmark
z/VM Development
IBM Endicott


Re: PERFKIT error

2007-03-16 Thread Kris Buelens

Such problems are for the support centre. (and keep the dump to send it in)

2007/3/16, Mikhael Ramirez Joaquin <[EMAIL PROTECTED]>:



Hi Guys,

We are running z/VM 4.4 on one of our z9 boxes, and we implemented PERFKIT to
monitor our z/VM. But when I'm running PERFKIT, after a few hours
(almost a day) it dumps and gives me an error like the one
below:

Dumping LOC R0979

Dumping LOC R097A

Dumping LOC R097B

Dumping LOC R097C

Dumping LOC R097D

Dumping LOC R097E

Dumping LOC R097F

Command complete

RDR FILE 0040 SENT FROM PERFSVM  PRT WAS 0040 RECS 411K CPY  001 A
NOHOLD NOKEEP
DMSABE141T Operation exception occurred at 80E4BCA8 in routine PERFKIT


Has anyone experienced this before?

Thanks for your replies!

Regards,

Mikhael





--
Kris Buelens,
IBM Belgium, VM customer support


Re: Historical curiosity question.

2007-03-16 Thread Rob van der Heij

On 3/16/07, Jeff Gribbin, EDS <[EMAIL PROTECTED]> wrote:


To allow complete virtualisation of minidisks of any size up to and
including full-pack. Virtualising a full-pack minidisk makes it
intrinsically impossible to save hypervisor-related information on the
physical pack that's being virtualised - there's nowhere to put it!


Indeed. But VM does not fully virtualize full-pack minidisks, because we
share the volser and some more that is found from there. This is why,
in general, we do not hand out minidisks that start at cylinder 0,
full-pack minidisks included. As the world is today, cylinder 0 is
something of the hypervisor's, and I don't see a problem in using it for
hypervisor things (like CSE tracks).


Remember - one is virtualising HARDWARE - so there's no scope
for "agreement" with the (software running in the) guests to not use part
of the pack - at best this would lead to a less-than-complete
virtualisation.


I beg to differ slightly, Sir.  (Warning: long post follows.)

Virtualization is never perfect, and differences will always shine
through at the edges (the most obvious is low-level timing, but there is
more). The cost of virtualization (hardware, overhead) gets higher
with increased perfection.
The reason this still works is that we have an abstraction of the
real hardware, and the guest is such that it does not care about
details beyond the abstraction (some guests are better than others in
that respect). It gets expensive to achieve high perfection at a low
level of abstraction. And unless one specifically cares about those
low-level details, the guest is better off not facing the real world
(e.g. being isolated from hardware errors).

Also remember that we have a mix of server virtualization and resource
virtualization which blurs the picture.

The mini disk is an imperfect (but cheap) implementation of a low
level abstraction. The main "defect" is the number of cylinders, but
for many purposes the virtualization is good enough because the guest
can live with that.
But at considerable additional cost, VM could have been designed to
span a mini disk over multiple volumes. With such support, you could
give out more perfect 3390's (i.e. not being restricted by the actual
models on the hardware). And VM could have played tricks not to
allocate real disk space for unused tracks. This is exactly what was
done in the RAMAC Virtual Array.

All VM configuration is in-band in that it lives on VM itself. VM is
its own hostage. This is a good thing, because that's what allows us to
run VM under VM, and why you cannot run LPAR within LPAR.

Suppose IBM would come up with something on the HMC to maintain the CP
directory. An easy-to-use GUI application that talked to CP through
some new hack in the SCLP area. That would make it impossible to run
VM under VM unless also major parts of the HMC were virtualized
(unlikely, at best you would have an option to deal with multiple VM
images).

PS: There are, of course, hierarchical directories (e.g. LDAP) that would
implicitly support such virtualization. I can see some way cool things
when a second-level VM system would inherit most of the host, except for
the parts that you made different.

Rob


Re: Historical curiosity question

2007-03-16 Thread Anne & Lynn Wheeler

"McKown, John" wrote:

This is not important, but I just have to ask this. Does anybody know
why the original designers of VM did not do something for "minidisks"
akin to an OS/360 VTOC? Actually, it would be more akin to a "partition
table" on a PC disk. It just seems that it would be easier to maintain
if there was "something" on the physical disk which contained
information about the minidisks on it. Perhaps with information such
as: start cylinder, end cylinder, owning guest, read password, etc.
CP-owned volumes have an "allocation map"; this seems to me to be an
extension of that concept.


CP67 had a global directory ... that was indexed and paged ... so it
didn't need an individual per-volume index.

it also avoided the horrendous overhead of the multi-track search that
os/360 used to search the volume VTOC on every open. lots of past
posts mention that the multi-track paradigm for the VTOC & PDS directory was an
io/memory trade-off ... the os/360 target in the mid-60s was to burn
enormous i/o capacity to avoid having an in-memory index.
http://www.garlic.com/~lynn/subtopic.html#dasd

that resource trade-off had changed by at least the mid-70s ...  and
it wasn't ever true for the machine configurations that cp67 ran on.

the other characteristic was that both cp67 and cms treated disks as
fixed-block architecture ... even if they were CKD ... CKD disks would
be formatted into fixed blocks ... and then treated as fixed-block
devices ... avoiding the horrible i/o performance penalty of ever
doing multi-track searches for looking up location and/or other
information on disk.

recent thread in bit.listserv.ibm-main
http://www.garlic.com/~lynn/2007e.html#35 FBA rant
http://www.garlic.com/~lynn/2007e.html#38 FBA rant
http://www.garlic.com/~lynn/2007e.html#39 FBA rant
http://www.garlic.com/~lynn/2007e.html#40 FBA rant
http://www.garlic.com/~lynn/2007e.html#42 FBA rant
http://www.garlic.com/~lynn/2007e.html#43 FBA rant
http://www.garlic.com/~lynn/2007e.html#46 FBA rant
http://www.garlic.com/~lynn/2007e.html#51 FBA rant
http://www.garlic.com/~lynn/2007e.html#59 FBA rant
http://www.garlic.com/~lynn/2007e.html#60 FBA rant
http://www.garlic.com/~lynn/2007e.html#63 FBA rant
http://www.garlic.com/~lynn/2007e.html#64 FBA rant
http://www.garlic.com/~lynn/2007f.html#0 FBA rant
http://www.garlic.com/~lynn/2007f.html#2 FBA rant
http://www.garlic.com/~lynn/2007f.html#3 FBA rant
http://www.garlic.com/~lynn/2007f.html#5 FBA rant
http://www.garlic.com/~lynn/2007f.html#12 FBA rant

the one possible exception was the loosely-coupled single-system-image
support done for the HONE system. HONE mini-disk volumes had an in-use
bitmap directory on each volume ... that was used to manage "LINK"
consistency across all machines in the cluster. it basically used
a channel program with a search operation to implement an i/o logical 
equivalent of the atomic compare&swap instruction ... avoiding having 
to do reserve/release with intervening i/o operations. I have some 
recollection of talking to the JES2 people about them trying a similar 
strategy for multi-system JES2 spool allocation. post from above 
mentioning the HONE "compare&swap" channel program for multi-system 
cluster operation

http://www.garlic.com/~lynn/2007e.html#38 FBA rant

HONE was a vm-based online interactive service for world-wide sales, marketing,
and field people. It originally started in the early 70s with a clone
of the science center's cp67 system
http://www.garlic.com/~lynn/subtopic.html#545tech

and eventually propagated to several regional US datacenters ...  and
also started to propagate overseas. I provided highly modified cp67
and then later vm370 systems for HONE operation for something like 15
yrs. I also handled some of the overseas clones ... like when EMEA
hdqtrs moved from the states to just outside paris in the early 70s.
In the mid-70s, the US HONE datacenters were consolidated in northern
cal. ... and single-system-image software support quickly emerged
... running multiple "attached processors" in cluster operation.  HONE
applications were heavily APL ... so it was quite compute intensive.
With four-channel controllers and string-switch ... you could get
eight system paths to every disk. Going with "attached processors"
... effectively two processors made use of a single set of channels
... so you could get 16 processors in single-system-image ... with
load-balancing and failure-fallover-recovery.

Later in the early 80s, the northern cal. HONE datacenter was
replicated first in Dallas and then a third center in Boulder ... for
triple redundancy, load-balancing and fall-over (in part out of concern about
natural disasters like earthquakes). 


lots of past posts mentioning HONE
http://www.garlic.com/~lynn/subtopic.html#hone

At one point in SJR after the 370/195 machine ... recent reference
http://www.garlic.com/~lynn/2007f.html#10 Beyond multicore
http://www.garlic.com/~lynn/2007f.html#11 Is computer history taught now?
http://www.garlic.com/~lynn/2007f.html#12 FBA rant

was replaced with mvs/168 system ... an