Re: Duplicate VOLID's

2010-08-30 Thread Schuh, Richard
But if the normally lower one does not respond ...


Regards,
Richard Schuh






From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On Behalf 
Of David Boyes
Sent: Friday, August 27, 2010 5:45 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: Duplicate VOLID's

Who's using it is a different issue, and with the size of systems we're 
starting to see, the name of the disk isn't going to tell you anything 
worthwhile anyway. 6 characters just isn't enough to overload with any useful 
meaning beyond a fairly small environment. I manage that problem in the change 
management system - OTRS is really quite nice for that. 8-)


On 8/27/10 7:21 PM, Schuh, Richard rsc...@visa.com wrote:

Order of acquisition will stay static. Is V04371 a spare or is it used by one 
of the n LPARs? If the latter, which one?



Duplicate VOLID's

2010-08-28 Thread Richard Corak

There is my post of Thu, 20 Nov 2003 07:34:22 -0500

Richard Corak


Re: Duplicate VOLID's

2010-08-27 Thread Michael MacIsaac
  That is absolutely the wrong thing to do. 
Is it so black and white?

Our team has been using this convention for a number of years and it has 
worked fine for us.  However, I will admit we have never been asked to 
migrate to a new disk array. I believe the main argument for not using the 
rdevs in the volser is that after migration, the volsers will be wrong 
(e.g. migrate VM at rdev  to a new DASD at rdev  and the 
volser convention is now wrong - the DASD at  now has a volser of 
VM).

I understand that changing the volsers throughout the (perhaps) many z/VM 
systems would be labor-intensive and perhaps error-prone (e.g. clipping 
the VM to VM and then making sure that all references to VM 
are updated correctly). I agree that allowing this would break the 
convention, and then things would become very confusing.

But, is there not a model whereby the new DASD can have the same rdev 
range as the old DASD?  For example:
1) Put the new DASD online assigning it a temporary range of rdevs (e.g. 
)
2) FLASHCOPY/DDR the old DASD to the new DASD (e.g.  = , and 
every other DASD in a z/VM LPAR)
3) Reassign the new DASD the old DASD's address ranges, and take the old 
DASD offline
4) Restart the LPAR.

This should work on paper, but I may be missing something obvious. Yes, it 
would require more interaction with the hardware guys.  Has anyone used 
this model?
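The four-step model above can be sketched as a toy simulation (plain Python, not CP or ICKDSF; the VM&lt;rdev&gt; labels and address values are hypothetical):

```python
# Toy model of the address-swap migration: copy each volume to the new
# array at a temporary address, then hand the copies the old addresses.

def migrate(old_array, temp_addrs):
    """Step 2: copy each old volume to a device at a temporary address.
    Step 3: give the copies the old addresses back."""
    new_array = {}
    for (addr, label), tmp in zip(sorted(old_array.items()), temp_addrs):
        new_array[tmp] = label  # FLASHCOPY/DDR old rdev -> temporary rdev
    # Reassign the new devices the old address range
    return {old: new_array[tmp]
            for old, tmp in zip(sorted(old_array), temp_addrs)}

old = {0x1000: "VM1000", 0x1001: "VM1001"}
new = migrate(old, [0x2000, 0x2001])
# After the swap, every label matches its address again, so the
# rdev-in-volser convention survives the migration:
assert all(f"VM{addr:04X}" == label for addr, label in new.items())
```

The point of the sketch is only the invariant checked by the final assert: if the new DASD ends up at the old address range, no volser ever needs clipping.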

I'm just trying to understand and appreciate the wisdom of the community, 
where many have much more experience as sysadmins than I. Thanks.

Mike MacIsaac mike...@us.ibm.com   (845) 433-7061

Re: Duplicate VOLID's

2010-08-27 Thread Ray Waters
I often bring up a second level VM system with the same volsers: 540RES, 
540WK1, 540WK2, ... I always use a higher address for the duplicated volume 
when I DDR copy. In other words, if 540WK1 is, say, 209, the DDR-copied volume 
would be something like 847.  This way, if 1st level VM happens to go down 
before I finish my testing, it should find the lower-addressed volumes 
(the production volumes) and use those to IPL. VM scans device addresses in 
ascending order, so lower addresses are examined before higher ones. The only 
way this would fail would be if VM fails on the read of the production VM 
volume or volumes. Once testing is completed, I ICKDSF these copied volumes 
back to their original state/volid. Been doing this for 30 years and haven't 
got burned yet, yet, yet.
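The ordering this strategy relies on can be illustrated with a small sketch (plain Python, not CP's actual volume-selection code; device numbers are the ones from the example):

```python
# When the same volser exists at two real addresses, a scan in ascending
# device-address order finds the lower (production) copy first.

def find_volume(devices, volser):
    """devices maps rdev -> volser; return the lowest rdev carrying volser."""
    matches = [rdev for rdev, v in sorted(devices.items()) if v == volser]
    return matches[0] if matches else None

devices = {0x847: "540WK1", 0x209: "540WK1", 0x208: "540RES"}
assert find_volume(devices, "540WK1") == 0x209  # production copy wins
assert find_volume(devices, "540RES") == 0x208
```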

Ray

From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On Behalf 
Of Scott Rohling
Sent: Thursday, August 26, 2010 9:45 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: Duplicate VOLID's

Would definitely agree an rdev specification on the SLOT def would be very 
useful.   I just recently built a 2nd level guest and neglected to relabel the 
volumes before they IPL'd the 1st level system ...  ugly.   Wouldn't have 
happened if the real address was specified...   good idea!

Scott Rohling
On Thu, Aug 26, 2010 at 7:58 PM, Tom Huegel 
tehue...@gmail.commailto:tehue...@gmail.com wrote:
So I guess there is no way to absolutely protect z/VM from using the wrong pack 
at IPL. Maybe a requirement? In SYSTEM CONFIG, allow an optional rdev on the SLOT 
definitions. Comments?

On Thu, Aug 26, 2010 at 5:15 PM, Schuh, Richard 
rsc...@visa.commailto:rsc...@visa.com wrote:
DIRMAINT is just a directory manager. It is similar to the directory manager 
component of VM:Secure. DIRMAINT does have the capability to do mass updates of 
the directory. VM:Secure does not. I have my own form of mass updater. I create 
code to perform the update of a generic single user and temporarily EXECLOAD it 
as PROFILE XEDIT. Then I run a pipe that looks something like this:

'PIPE < id list a | spec /vmsecure edit/ 1 w1 nw | cms | > log file a'

It usually runs quickly because our directory has fewer than 2000 userids in 
it. It might not be acceptable on a system with 1+ userids.
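A rough Python analogue of that Pipelines one-liner, for readers who don't speak PIPE (illustrative only; the real pipe reads the id list from a CMS file, executes each built command through the cms stage, and writes the responses to a log):

```python
# Build one "vmsecure edit <userid>" command per entry in an id list,
# keyed on the first word of each non-blank line -- the same transform
# the spec stage performs in the original pipe.

def build_commands(id_list):
    """One 'vmsecure edit' command per non-blank line, using word 1."""
    return [f"vmsecure edit {line.split()[0]}"
            for line in id_list if line.strip()]

cmds = build_commands(["MAINT", "TCPMAINT  owner: net team", ""])
assert cmds == ["vmsecure edit MAINT", "vmsecure edit TCPMAINT"]
```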

Regards,
Richard Schuh





From: The IBM z/VM Operating System 
[mailto:IBMVM@LISTSERV.UARK.EDUmailto:IBMVM@LISTSERV.UARK.EDU] On Behalf Of 
Scott Rohling
Sent: Thursday, August 26, 2010 4:56 PM

To: IBMVM@LISTSERV.UARK.EDUmailto:IBMVM@LISTSERV.UARK.EDU
Subject: Re: Duplicate VOLID's

Hmm..   RACF isn't really related, as it's protecting minidisks on z/VM, at 
least - and doesn't care about the volsers the minidisks are on.   The process 
for DIRMAINT is probably similar to the things that need doing on VM:Secure to do 
the directory changes:

-  Make a 'monolithic' copy of the directory and run a PIPE to change all 
volsers..  then initialize DIRMAINT using the new directory (USER INPUT)
-  Put the directory online (DIRM DIRECT)
-  Change EXTENT CONTROL similarly and do a DIRM RLDE

I'm in favor of labels using the rdev - unless you really have frequent changes 
of DASD - to me, the benefits outweigh the occasional need to update the 
directory.   YMMV, as this thread indicates.
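The first step above (the PIPE that changes all volsers) can be sketched like this (a hypothetical Python stand-in, not DIRMAINT code; the mapping and sample statements are invented, and a real pass would also need to handle DEDICATE statements and system areas before USER INPUT / DIRM DIRECT are run):

```python
# Sweep a monolithic directory file and swap old volsers for new ones in
# MDISK statements.

def swap_volsers(directory_lines, mapping):
    out = []
    for line in directory_lines:
        words = line.split()
        # "MDISK vdev devtype start size volser mode": volser is token 6
        if words and words[0].upper() == "MDISK" and len(words) >= 6:
            words[5] = mapping.get(words[5], words[5])
            line = " ".join(words)  # note: collapses original spacing
        out.append(line)
    return out

dirfile = ["USER LINUX01 XXXXXXXX 512M", "MDISK 0191 3390 100 50 VM1234 MR"]
new = swap_volsers(dirfile, {"VM1234": "LXU001"})
assert new == ["USER LINUX01 XXXXXXXX 512M",
               "MDISK 0191 3390 100 50 LXU001 MR"]
```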

Scott Rohling
On Thu, Aug 26, 2010 at 3:33 PM, Schuh, Richard 
rsc...@visa.commailto:rsc...@visa.com wrote:
That is absolutely the wrong thing to do. I am now suffering because someone 
else did that to DASD that is EMIF'd to 3 LPARs. (It was all ZLccuu.) It 
requires meticulous record keeping and is very error-prone. I did wipe out a 
disk needed by one system because the records I received were not complete. 
Fortunately, it was a disk that was to be used by a new system and had not been 
updated; it was easy to restore. Also, it is a huge headache if you ever 
replace your DASD. I don't know about RACF, but there is no mechanism built 
into VM:Secure for easily doing a mass update of volsers. (I know, you can 
change the volser with one command - if it is a VM:Secure-controlled disk and 
nobody is linked to it. The latter is hard to achieve around here.)

Regards,
Richard Schuh





From: The IBM z/VM Operating System 
[mailto:IBMVM@LISTSERV.UARK.EDUmailto:IBMVM@LISTSERV.UARK.EDU] On Behalf Of 
Michael MacIsaac
Sent: Thursday, August 26, 2010 10:55 AM

To: IBMVM@LISTSERV.UARK.EDUmailto:IBMVM@LISTSERV.UARK.EDU
Subject: Re: Duplicate VOLID's


I do know what addresses my system disks are on,
Ah! - an argument for the convention of using the RDEV as the last four 
characters of the volser :))

Mike MacIsaac mike...@us.ibm.commailto:mike...@us.ibm.com   (845) 433-7061






Re: Duplicate VOLID's

2010-08-27 Thread Tom Huegel
OK Ray, that is the best tip yet. But there are flaws: you are assuming that
you are the one in control. Picture a situation where there is a pool of
DASD that is available for 'anyone' to use, for anything they want to use
them for. They could have z/VM system volumes, z/OS, z/LINUX, z/VSE, or just
application data. At some point they all get reinitialized, but not until the
user is done, which could be weeks.
Also consider that there are several z/VM 'production' systems in different
LPARs or CECs. They can't all be at the lowest address..

Some others have suggested labeling schemes, i.e. VMrdev.
As for labeling the volumes with the rdev, i.e. VM1234, yes, I use that scheme
when returning the packs to the user pool.
Labeling the packs as VMrdev doesn't address the problem; I am not
interested in tracking those volids. I only care about xxxRES, xxxWK1, xxxWK2,
etc.


Re: Duplicate VOLID's

2010-08-27 Thread Scott Rohling
Yes --  that's a good strategy too ...   I usually pay attention to the
addresses - but this was our test/play system, so I didn't give it the careful
thought I normally would.   Higher addresses can also prevent an issue with
duplicates.

Scott Rohling


Re: Duplicate VOLID's

2010-08-27 Thread Ray Waters
Very true, Tom. I was just giving a situation with one second level guest. I 
realize some shops cannot control how many duplicated volsers there are and who 
is creating these duplicates (such as a school). My situation is a controlled 
environment, where only Tech Support does this kind of stuff, and we 
communicate well.

Ray


Re: Duplicate VOLID's

2010-08-27 Thread Schuh, Richard
Also, if you have DASD that is EMIF'd to n LPARs, how do you know (a) whether a 
disk is spare or in use if all of the DASD have volsers ZL, and (b) if in 
use, by which LPAR? I can see using VM as an indicator that a volume has 
not yet been deployed, but once it has, it needs to be labeled using another 
convention. Since we seem to replace our DASD farm every 2 or 3 years, the 
device number has no place in the scheme other than to denote a spare volume.

Regards,
Richard Schuh








Re: Duplicate VOLID's

2010-08-27 Thread Ron Schmiedge
Tom,

Been following this with interest. It sounds like your environment is
a lot different from that of most of the responders, so it has been
educational looking at what everyone else does.

You have said that you know the volume addresses of the production VM
volumes. If so, why not code the SYSTEM CONFIG to bring only those
online at IPL? Leave all the rest offline until you are up, then use
an exec (perhaps on AUTOLOG1) to bring the rest of the volumes online.
In ancient days, before there was a SYSTEM CONFIG, my AUTOLOG1 always
brought the volumes up and attached them to the system.

There have been two approaches for multiple VM res volumes discussed
on the list in the past. One is to make the production volumes have
different volsers than everyone else. You only need to do that for the
floor system, and let your many testers sort out "whose 540RES is
this?" If you have to do that for many LPARs, then you have the fun of
coming up with many unique names.

Personally, my test VM system uses volumes that are copies of the
floor system but one cylinder smaller, so I can leave the real volser
untouched and give my testbed minidisk volumes that start at cylinder 1.
IBM still makes RES, W01 and W02 stop 1 cylinder before full
(something they were doing when I put in my first VM system on
3310s in 1979), and I think that is so you can create a complete copy
on a second volume without needing to touch cyl 0 of the second volume.
These volumes are not IPLable on different LPARs, or on different machines
sharing the DASD subsystem, but it works okay on the same LPAR. It
works for me because I have one machine and no LPARs.

Ron




Re: Duplicate VOLID's

2010-08-27 Thread Schuh, Richard
It will only take one use of an incorrect spool volume to really burn you.

Regards,
Richard Schuh







Re: Duplicate VOLID's

2010-08-27 Thread Schuh, Richard
Do higher addresses really protect you? I do not think that there is a 
guarantee that they do. And if IBM does not guarantee it, depending on 
something that merely seems to be guaranteed is a risk that has no place in a 
production system. The volsers belonging to the production system should not 
be used for disks belonging to any second level system that can be IPLed on 
the bare iron or whose disks begin at cylinder 0. Yes, you have been doing it 
for 30 years. That is kind of like cheating on your spouse for 30 years and 
not getting caught. You only need to get caught once.


Regards,
Richard Schuh






From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On Behalf 
Of Scott Rohling
Sent: Friday, August 27, 2010 6:25 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: Duplicate VOLID's

Yes --  that's a good strategy too ...   I usually pay attention to the 
addresses - but this was our test/play system, so didn't give it the careful 
thought I normally would.   Higher addresses can also prevent an issue with 
duplicates

Scott Rohling

On Fri, Aug 27, 2010 at 6:47 AM, Ray Waters 
ray.wat...@opensolutions.commailto:ray.wat...@opensolutions.com wrote:
I often bring up a second level VM system with the same volsers: 540RES, 
540WK1, 540WK2, ... I always use a higher address for the duplicated volume 
when I DDR copy. In other words if 540WK1 is say 209, the DDR copied volume 
would be something like 847.  This way, if 1st level VM happens to go down, 
before I finish my testing, it should find the lower addressed volumes 
(production volumes) and use these to IPL with. VM will start looking at each 
address with lower addresses prior to higher addresses. The only way this would 
fail, would be if VM fails on the read of the production VM volume or volumes. 
Once testing is completed, I ICKDSF these copied volumes back to their original 
state/volid. Been doing this for 30 years and haven't got burned yet, yet, yet.

Ray

From: The IBM z/VM Operating System 
[mailto:IBMVM@LISTSERV.UARK.EDUmailto:IBMVM@LISTSERV.UARK.EDU] On Behalf Of 
Scott Rohling
Sent: Thursday, August 26, 2010 9:45 PM

To: IBMVM@LISTSERV.UARK.EDUmailto:IBMVM@LISTSERV.UARK.EDU
Subject: Re: Duplicate VOLID's

Would definitely agree an rdev specification on the SLOT def would be very 
useful.   I just recently built a 2nd level guest and neglected to relabel the 
volumes before they IPL'd the 1st level system ...  ugly.   Wouldn't have 
happened if the real address was specified...   good idea!

Scott Rohling
On Thu, Aug 26, 2010 at 7:58 PM, Tom Huegel 
tehue...@gmail.commailto:tehue...@gmail.com wrote:
So I guess there is no way to absolutely protect z/VM from using the wrong pack 
at IPL. Maybe a requirement? In SYSTEM CONFIG, allow an optional rdev on the SLOT 
definitions. Comments?

On Thu, Aug 26, 2010 at 5:15 PM, Schuh, Richard 
rsc...@visa.commailto:rsc...@visa.com wrote:
DIRMAINT is just a directory manager. It is similar to the directory manager 
component of VM:Secure. DIRMAINT does have the capability to do mass updates of 
the directory. VM:Secure does not. I have my own form of mass updater. I create 
code to perform the update of a generic single user and temporarily EXECLOAD it 
as PROFILE XEDIT. Then I run a pipe that looks something like this:

'PIPE  id list a | spec /vmsecure edit/ 1 w1 nw | cms |  log file a'

It usually runs quickly because our directory has fewer than 2000 userids in 
it. It might not be acceptable on a system with 1+ userids.
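For illustration, a rough Python re-creation of what that pipe's spec stage does (the file contents and command text here are assumptions based on the one-liner above, not a real VM:Secure interface):

```python
def build_commands(id_list_lines):
    """Mimic 'spec /vmsecure edit/ 1 w1 nw': take word 1 of each
    line and prefix it with the command text, skipping blank lines."""
    cmds = []
    for line in id_list_lines:
        words = line.split()
        if words:
            cmds.append(f"vmsecure edit {words[0]}")
    return cmds

ids = ["MAINT  some comment", "TCPIP", "", "LINUX01"]
print(build_commands(ids))
```

Each generated command would then be run under the temporarily EXECLOADed PROFILE XEDIT, with the output logged, just as the pipe does with its cms and > stages.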

Regards,
Richard Schuh





From: The IBM z/VM Operating System 
[mailto:IBMVM@LISTSERV.UARK.EDUmailto:IBMVM@LISTSERV.UARK.EDU] On Behalf Of 
Scott Rohling
Sent: Thursday, August 26, 2010 4:56 PM

To: IBMVM@LISTSERV.UARK.EDUmailto:IBMVM@LISTSERV.UARK.EDU
Subject: Re: Duplicate VOLID's

Hmm..   RACF isn't really related as it's protecting minidisks on z/VM, at 
least - and doesn't care about volsers the minidisks are on.    The process for 
DIRMAINT is probably similar to the things that need doing on VM:Secure to do 
the directory changes:

-  Make a 'monolithic' copy of the directory and run a PIPE to change all 
volsers..  then initialize DIRMAINT using the new directory (USER INPUT)
-  Put the directory online (DIRM DIRECT)
-  Change EXTENT CONTROL similarly and do a DIRM RLDE

I'm in favor of labels using the rdev - unless you really have frequent changes 
of DASD - to me, the benefits outweigh the occasional need to update the 
directory.   YMMV, as this thread indicates.

Scott Rohling
On Thu, Aug 26, 2010 at 3:33 PM, Schuh, Richard 
rsc...@visa.commailto:rsc...@visa.com wrote:
That is absolutely the wrong thing to do. I am now suffering because someone 
else did that to dasd that is EMIFd to 3 LPARS. (It was all ZLccuu). It 
requires meticulous record keeping and is very error prone. I did wipe out a 
disk needed by one system because the records I received were not complete. 
Fortunately, it was a disk

Re: Duplicate VOLID's

2010-08-27 Thread Alain Benveniste
Richard is right on this; as far as I remember from reading the online help, 
the order of chpids in the config is the way the rdevs are known by the system. 
We have 4 clones of our production + 2 ranges of contiguous addresses cloned on 
the same array... all the possibilities to IPL are driven by a specific config 
file. At SAPL, the IPLed dasd on the HMC is the filename of the config file. 
So all these files are propagated at clone time. Never had a problem that way.

Alain

Envoyé de mon iPhone

Le 27 août 2010 à 17:55, Schuh, Richard rsc...@visa.com a écrit :

 Do higher addresses really protect you? I do not think that there is a 
 guarantee that they do. And if IBM does not guarantee it, depending on 
 something that seems to be is a risk that has no place in a production 
 system. The volsers belonging to the production system should not be used for 
 disks belonging to any second level system that can be ipled on the bare iron 
 or whose disks begin at cylinder 0. Yes, you have been doing it for 30 years. 
 That is kind of like cheating on your spouse for 30 years and not getting 
 caught. You only need to get caught once.
  
 Regards, 
 Richard Schuh
 
  
 
  
 
 From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On 
 Behalf Of Scott Rohling
 Sent: Friday, August 27, 2010 6:25 AM
 To: IBMVM@LISTSERV.UARK.EDU
 Subject: Re: Duplicate VOLID's
 
 Yes --  that's a good strategy too ...   I usually pay attention to the 
 addresses - but this was our test/play system, so didn't give it the careful 
 thought I normally would.   Higher addresses can also prevent an issue with 
 duplicates
 
 Scott Rohling
 
 On Fri, Aug 27, 2010 at 6:47 AM, Ray Waters ray.wat...@opensolutions.com 
 wrote:
 I often bring up a second level VM system with the same volsers: 540RES, 
 540WK1, 540WK2, … I always use a higher address for the duplicated volume 
 when I DDR copy. In other words if 540WK1 is say 209, the DDR copied volume 
 would be something like 847.  This way, if 1st level VM happens to go down, 
 before I finish my testing, it should find the lower addressed volumes 
 (production volumes) and use these to IPL with. VM will start looking at each 
 address with lower addresses prior to higher addresses. The only way this 
 would fail, would be if VM fails on the read of  the production VM volume 
 or volumes. Once testing is completed, I ICKDSF these copied volumes back to 
 their original state/volid. Been doing this for  30 years and haven’t got 
 burned yet, yet, yet.
 
  
 
 Ray
 
  
 
 From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On 
 Behalf Of Scott Rohling
 Sent: Thursday, August 26, 2010 9:45 PM
 
 
 To: IBMVM@LISTSERV.UARK.EDU
 Subject: Re: Duplicate VOLID's
  
 
 Would definitely agree an rdev specification on the SLOT def would be very 
 useful.I just recently built a 2nd level guest and neglected to relabel 
 the volumes before they IPL'd the 1st level system ...  ugly.   Wouldn't have 
 happened if the real address was specified...   good idea!
 
 Scott Rohling
 
 On Thu, Aug 26, 2010 at 7:58 PM, Tom Huegel tehue...@gmail.com wrote:
 
 So I guess there is no way to absolutly protect z/VM from using the wrong 
 pack at IPL.. Maybe a requirement? In SYSTEM CONFIG allow optional rdev on 
 the SLOT deffinations. comments?
 
  
 
 On Thu, Aug 26, 2010 at 5:15 PM, Schuh, Richard rsc...@visa.com wrote:
 
 DIRMAINT is just a directory manager. It is similar to the directory manager 
 component of VM:Secure. DIRMAINT does have the capability to do mass updates 
 of the directory. VM:Secure does not. I have my own form of mass updater. I 
 create code to perform the update of a generic single user and temporarily 
 EXECLOAD it as PROFILE XEDIT. Then I run a pipe that looks something like 
 this:
 
  
 
 'PIPE  id list a | spec /vmsecure edit/ 1 w1 nw | cms |  log file a'
 
  
 
 It usually runs quickly because our directory has fewer than 2000 userids in 
 it. It might not be acceptable on a system with 1+ userids.
 
 Regards, 
 Richard Schuh
 
  
 
  
 
  
 
 From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On 
 Behalf Of Scott Rohling
 Sent: Thursday, August 26, 2010 4:56 PM 
 
 
 To: IBMVM@LISTSERV.UARK.EDU
 Subject: Re: Duplicate VOLID's
 
  
 
 Hmm..   RACF isn't really related as it's protecting minidisks on z/VM, at 
 least - and doesn't care about volsers the minidisks are on.    The 
 process for DIRMAINT is probably similar to the things that need doing 
 on VM:Secure to do the directory changes:
 
 -  Make a 'monolithic' copy of the directory and run a PIPE to change all 
 volsers..  then initialize DIRMAINT using the new directory (USER INPUT)
 -  Put the directory online (DIRM DIRECT)
 -  Change EXTENT CONTROL similarly and do a DIRM RLDE
 
 I'm in favor of labels using the rdev - unless you really have frequent 
 changes of DASD - to me, the benefits outweigh the occasional need to update

Re: Duplicate VOLID's

2010-08-27 Thread Alan Altmark
On Thursday, 08/26/2010 at 09:58 EDT, Tom Huegel tehue...@gmail.com 
wrote:
 So I guess there is no way to absolutly protect z/VM from using the 
wrong pack 
 at IPL.. Maybe a requirement? In SYSTEM CONFIG allow optional rdev on 
the SLOT 
 deffinations. comments?

Rich Corak has explained on previous occasions how volser recognition 
works.  If CP detects a duplicate volser for a device to be attached to 
SYSTEM at IPL, you will get
  HCP954I DASD rdev1 VOLID volid IS A DUPLICATE OF DASD rdev2
and you are responsible to ensure that rdev2 is the one you intended to be 
attached to the system.  Out of all the volumes on the system with that 
volid, the lowest *device number* (address) will win, without regard to 
who responds first.
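That tie-break can be sketched as follows (a toy model of the stated rule, not CP's actual code):

```python
def pick_winners(volumes):
    """Among duplicate volids, the lowest device number wins.
    volumes: list of (rdev, volid) tuples.
    Returns {volid: winning rdev}."""
    winners = {}
    for rdev, volid in sorted(volumes):   # ascending rdev order
        winners.setdefault(volid, rdev)   # first (lowest) rdev kept
    return winners

# Ray's scenario: production 540WK1 at 209, DDR copy at 847.
print(pick_winners([(0x847, "540WK1"), (0x209, "540WK1")]))
```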

As has been suggested, this is no guarantee since you can find yourself in 
trouble if you have a dasd problem or someone dinks with the I/O config 
and makes a mistake.  Being able to place the RDEV on the CP_owned and 
user_volume_ statements is one way to help mitigate the problem.

There is another way to identify devices: by their architected node 
element descriptor (NED).  Or in the lingua franca, universally unique 
identifiers (UUIDs).  When Single System Image hits the streets, you will 
find UUIDs very much in evidence, though not in the context of IPL or 
SYSTEM CONFIG.

But I could imagine something like this in SYSTEM CONFIG:

CP_Owned Slot 1 6X0RES ID 2105.000.IBM.13.3737504EE.0D0A 
CP_owned Slot 2 6X0TD1 ID 2107.900.IBM.13.29839621A.0D0A 
CP_owned Slot 3 6X0PG1 ID 2107.900.IBM.13.4924295DC.0D0A, (yes, there are 
two IDs for slot 3)
  2107.900.IBM.13.0358113AA.0D0A  (..choose in the 
order given)

so that you wouldn't have to worry about the RDEV.  However, there is a 
Heisenberg-esque trade-off to be made between flexibility and security. 
Good news - the above syntax ensures you won't have any bogus volumes. Bad 
news - it won't tolerate any copies or change in DASD, so it's generally 
hostile to a dynamic DR environment, and you can't get the NED until you 
IPL.

Hmmm... or... when IPL has finished, CP could write the found UUIDs to the 
warm start area and use those in preference to any other on a subsequent 
IPL.  For PPRC pairs, write both UUIDs and allow either.   If a volume 
with the needed UUID is not available, then (based on configuration) ask 
which RDEV to use a la z/OS, or just take the lowest-numbered one 
available.  In either case, update the UUID in the warm start area. 
h.

Alan Altmark
z/VM Development
IBM Endicott


Re: Duplicate VOLID's

2010-08-27 Thread Frank M. Ramaekers
Yes, and it's specified in examples in some documentation (Redbook Paper
Linux on IBM eserver zSeries and S/390: VSWITCH and VLAN Features of
z/VM 4.4).   Yeah, it's old!  :)

 
Frank M. Ramaekers Jr.
 
 

-Original Message-
From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On
Behalf Of Alan Altmark
Sent: Friday, August 27, 2010 1:38 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: Duplicate VOLID's

On Thursday, 08/26/2010 at 09:58 EDT, Tom Huegel tehue...@gmail.com 
wrote:
 So I guess there is no way to absolutly protect z/VM from using the 
wrong pack 
 at IPL.. Maybe a requirement? In SYSTEM CONFIG allow optional rdev on 
the SLOT 
 deffinations. comments?

Rich Corak has explained on previous occasions how volser recognition 
works.  If CP detects a duplicate volser for a device to be attached to 
SYSTEM at IPL, you will get
  HCP954I DASD rdev1 VOLID volid IS A DUPLICATE OF DASD rdev2
and you are responsible to ensure that rdev2 is the one you intended to
be 
attached to the system.  Out of all the volumes on the system with that 
volid, the lowest *device number* (address) will win, without regard to 
who responds first.

As has been suggested, this is no guarantee since you can find yourself
in 
trouble if you have a dasd problem or someone dinks with the I/O config 
and makes a mistake.  Being able to place the RDEV on the CP_owned and 
user_volume_ statements is one way to help mitigate the problem.

There is another way to identify devices: by their architected node 
element descriptor (NED).  Or in the lingua franca, universally unique 
identifiers (UUIDs).  When Single System Image hits the streets, you
will 
find UUIDs very much in evidence, though not in the context of IPL or 
SYSTEM CONFIG.

But I could imagine something like this in SYSTEM CONFIG:

CP_Owned Slot 1 6X0RES ID 2105.000.IBM.13.3737504EE.0D0A 
CP_owned Slot 2 6X0TD1 ID 2107.900.IBM.13.29839621A.0D0A 
CP_owned Slot 3 6X0PG1 ID 2107.900.IBM.13.4924295DC.0D0A, (yes, there
are 
two IDs for slot 3)
  2107.900.IBM.13.0358113AA.0D0A  (..choose in
the 
order given)

so that you wouldn't have to worry about the RDEV.  However, there is a 
Heisenberg-esque trade-off to be made between flexibility and security. 
Good news - the above syntax ensures you won't have any bogus volumes.
Bad 
news - it won't tolerate any copies or change in DASD, so it's generally

hostile to a dynamic DR environment, and you can't get the NED until you

IPL.

Hmmmorwhen IPL has finished, CP could write the found UUIDs to
the 
warm start area and use those in preference to any other on a subsequent

IPL.  For PPRC pairs, write both UUIDs and allow either.   If a volume 
with the needed UUID is not available, then (based on configuration) ask

which RDEV to use a la z/OS, or just take the lowest-numbered one 
available.  In either case, update the UUID in the warm start area. 
h.

Alan Altmark
z/VM Development
IBM Endicott

_
This message contains information which is privileged and confidential and is 
solely for the use of the
intended recipient. If you are not the intended recipient, be aware that any 
review, disclosure,
copying, distribution, or use of the contents of this message is strictly 
prohibited. If you have
received this in error, please destroy it immediately and notify us at 
privacy...@ailife.com.


Re: Duplicate VOLID's

2010-08-27 Thread Tom Huegel
BINGO!! Problem solved.

On Fri, Aug 27, 2010 at 11:38 AM, Alan Altmark alan_altm...@us.ibm.comwrote:

 On Thursday, 08/26/2010 at 09:58 EDT, Tom Huegel tehue...@gmail.com
 wrote:
  So I guess there is no way to absolutly protect z/VM from using the
 wrong pack
  at IPL.. Maybe a requirement? In SYSTEM CONFIG allow optional rdev on
 the SLOT
  deffinations. comments?

 Rich Corak has explained on previous occasions how volser recognition
 works.  If CP detects a duplicate volser for a device to be attached to
 SYSTEM at IPL, you will get
  HCP954I DASD rdev1 VOLID volid IS A DUPLICATE OF DASD rdev2
 and you are responsible to ensure that rdev2 is the one you intended to be
 attached to the system.  Out of all the volumes on the system with that
 volid, the lowest *device number* (address) will win, without regard to
 who responds first.

 As has been suggested, this is no guarantee since you can find yourself in
 trouble if you have a dasd problem or someone dinks with the I/O config
 and makes a mistake.  Being able to place the RDEV on the CP_owned and
 user_volume_ statements is one way to help mitigate the problem.

 There is another way to identify devices: by their architected node
 element descriptor (NED).  Or in the lingua franca, universally unique
 identifiers (UUIDs).  When Single System Image hits the streets, you will
 find UUIDs very much in evidence, though not in the context of IPL or
 SYSTEM CONFIG.

 But I could imagine something like this in SYSTEM CONFIG:

 CP_Owned Slot 1 6X0RES ID 2105.000.IBM.13.3737504EE.0D0A
 CP_owned Slot 2 6X0TD1 ID 2107.900.IBM.13.29839621A.0D0A
 CP_owned Slot 3 6X0PG1 ID 2107.900.IBM.13.4924295DC.0D0A, (yes, there are
 two IDs for slot 3)
  2107.900.IBM.13.0358113AA.0D0A  (..choose in the
 order given)

 so that you wouldn't have to worry about the RDEV.  However, there is a
 Heisenberg-esque trade-off to be made between flexibility and security.
 Good news - the above syntax ensures you won't have any bogus volumes. Bad
 news - it won't tolerate any copies or change in DASD, so it's generally
 hostile to a dynamic DR environment, and you can't get the NED until you
 IPL.

 Hmmmorwhen IPL has finished, CP could write the found UUIDs to the
 warm start area and use those in preference to any other on a subsequent
 IPL.  For PPRC pairs, write both UUIDs and allow either.   If a volume
 with the needed UUID is not available, then (based on configuration) ask
 which RDEV to use a la z/OS, or just take the lowest-numbered one
 available.  In either case, update the UUID in the warm start area.
 h.

 Alan Altmark
 z/VM Development
 IBM Endicott



Re: Duplicate VOLID's

2010-08-27 Thread Alan Altmark
On Friday, 08/27/2010 at 12:22 EDT, Mike Walter mike.wal...@hewitt.com 
wrote:
 
 There have been previous reliable posts (which I can't seem to find 
right
 now) relating that the order in which real devices (RDEVs) come online
 during IPL is NOT GUARANTEED to be in serially ascending order. 
Certainly
 not since the XA subsystem was implemented.  Perhaps not even before 
then,
 some ancient, little-used synapses are whispering contingent 
connection
 in my ear (along with all those other people who just whisper in my 
mind).

That's how it worked ages ago, but it was specifically re-engineered to 
work in ascending RDEV order, regardless of variation in DASD response 
times or subchannel assignments.

Alan Altmark
z/VM Development
IBM Endicott


Re: Duplicate VOLID's

2010-08-27 Thread Mike Walter
Well, there you go, Alan.  There's a reason I (and you) remember it from 
ages ago.
It's hard to break some habits, and even harder to unlearn some 
painfully learned knowledge.

BTW,  I *did* search the list many different ways for Rich Corak's posts 
(his name was one of the first search arguments I tried).  No joy.

Mike (grey-beard) Walter
Hewitt Associates
The opinions expressed herein are mine alone, not my employer's.



Alan Altmark alan_altm...@us.ibm.com 

Sent by: The IBM z/VM Operating System IBMVM@LISTSERV.UARK.EDU
08/27/2010 04:09 PM
Please respond to
The IBM z/VM Operating System IBMVM@LISTSERV.UARK.EDU



To
IBMVM@LISTSERV.UARK.EDU
cc

Subject
Re: Duplicate VOLID's






On Friday, 08/27/2010 at 12:22 EDT, Mike Walter mike.wal...@hewitt.com 
wrote:
 
 There have been previous reliable posts (which I can't seem to find 
right
 now) relating that the order in which real devices (RDEVs) come online
 during IPL is NOT GUARANTEED to be in serially ascending order. 
Certainly
 not since the XA subsystem was implemented.  Perhaps not even before 
then,
 some ancient, little-used synapses are whispering contingent 
connection
 in my ear (along with all those other people who just whisper in my 
mind).

That's how it worked ages ago, but it was specifically re-engineered to 
work in ascending RDEV order, regardless of variation in DASD response 
times or subchannel assignments.

Alan Altmark
z/VM Development
IBM Endicott





The information contained in this e-mail and any accompanying documents may 
contain information that is confidential or otherwise protected from 
disclosure. If you are not the intended recipient of this message, or if this 
message has been addressed to you in error, please immediately alert the sender 
by reply e-mail and then delete this message, including any attachments. Any 
dissemination, distribution or other use of the contents of this message by 
anyone other than the intended recipient is strictly prohibited. All messages 
sent to and from this e-mail address may be monitored as permitted by 
applicable law and regulations to ensure compliance with our internal policies 
and to protect our business. E-mails are not secure and cannot be guaranteed to 
be error free as they can be intercepted, amended, lost or destroyed, or 
contain viruses. You are deemed to have accepted these risks if you communicate 
with us by e-mail. 




Re: Duplicate VOLID's

2010-08-27 Thread Schuh, Richard
The order it searches is guaranteed, as is the order of non-responsive devices. 
If a spool volume does not respond but there is a duplicate volser in the 
configuration, you still have problems, especially if the operator says GO. 
Having a guaranteed order does lessen the threat considerably, but it does not 
completely eliminate it. There have only been two ways to eliminate it 
mentioned in this thread: 1. Ensure that each device has a unique volser, or 2. 
have the duplicates on devices that are Offline_at_IPL and wait until the 
system initialization completes before varying them on.


Regards,
Richard Schuh






From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On Behalf 
Of Mike Walter
Sent: Friday, August 27, 2010 2:14 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: Duplicate VOLID's


Well, there you go, Alan.  There's a reason I (and you) remember it from ages 
ago.
It's hard to break some habits, and even harder to unlearn some painfully 
learned knowledge.

BTW,  I *did* search the list many different ways for Rich Corak's posts (his 
name was one of the first search arguments I tried).  No joy.

Mike (grey-beard) Walter

Hewitt Associates
The opinions expressed herein are mine alone, not my employer's.


Alan Altmark alan_altm...@us.ibm.com

Sent by: The IBM z/VM Operating System IBMVM@LISTSERV.UARK.EDU

08/27/2010 04:09 PM
Please respond to
The IBM z/VM Operating System IBMVM@LISTSERV.UARK.EDU



To
IBMVM@LISTSERV.UARK.EDU
cc
Subject
Re: Duplicate VOLID's





On Friday, 08/27/2010 at 12:22 EDT, Mike Walter mike.wal...@hewitt.com
wrote:

 There have been previous reliable posts (which I can't seem to find
right
 now) relating that the order in which real devices (RDEVs) come online
 during IPL is NOT GUARANTEED to be in serially ascending order.
Certainly
 not since the XA subsystem was implemented.  Perhaps not even before
then,
 some ancient, little-used synapses are whispering contingent
connection
 in my ear (along with all those other people who just whisper in my
mind).

That's how it worked ages ago, but it was specifically re-engineered to
work in ascending RDEV order, regardless of variation in DASD response
times or subchannel assignments.

Alan Altmark
z/VM Development
IBM Endicott







Re: Duplicate VOLID's

2010-08-27 Thread Bill Munson
Mike, I understand your pain.  
Bill Bitner just this week had to tell me my thoughts on processor utilization 
were OUTDATED. 
So sad. 
Even grayer beard Bill Munson 

 


- Original Message -
From: Mike Walter [mike.wal...@hewitt.com]
Sent: 08/27/2010 04:14 PM EST
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: Duplicate VOLID's



Well, there you go, Alan.  There's a reason I (and you) remember it from 
ages ago.
It's hard to break some habits, and even harder to unlearn some 
painfully learned knowledge.

BTW,  I *did* search the list many different ways for Rich Corak's posts 
(his name was one of the first search arguments I tried).  No joy.

Mike (grey-beard) Walter
Hewitt Associates
The opinions expressed herein are mine alone, not my employer's.



Alan Altmark alan_altm...@us.ibm.com 

Sent by: The IBM z/VM Operating System IBMVM@LISTSERV.UARK.EDU
08/27/2010 04:09 PM
Please respond to
The IBM z/VM Operating System IBMVM@LISTSERV.UARK.EDU



To
IBMVM@LISTSERV.UARK.EDU
cc

Subject
Re: Duplicate VOLID's






On Friday, 08/27/2010 at 12:22 EDT, Mike Walter mike.wal...@hewitt.com 
wrote:
 
 There have been previous reliable posts (which I can't seem to find 
right
 now) relating that the order in which real devices (RDEVs) come online
 during IPL is NOT GUARANTEED to be in serially ascending order. 
Certainly
 not since the XA subsystem was implemented.  Perhaps not even before 
then,
 some ancient, little-used synapses are whispering contingent 
connection
 in my ear (along with all those other people who just whisper in my 
mind).

That's how it worked ages ago, but it was specifically re-engineered to 
work in ascending RDEV order, regardless of variation in DASD response 
times or subchannel assignments.

Alan Altmark
z/VM Development
IBM Endicott









*** IMPORTANT
NOTE*-- The opinions expressed in this
message and/or any attachments are those of the author and not
necessarily those of Brown Brothers Harriman  Co., its
subsidiaries and affiliates (BBH). There is no guarantee that
this message is either private or confidential, and it may have
been altered by unauthorized sources without your or our knowledge.
Nothing in the message is capable or intended to create any legally
binding obligations on either party and it is not intended to
provide legal advice. BBH accepts no responsibility for loss or
damage from its use, including damage from virus.


Re: Duplicate VOLID's

2010-08-27 Thread David Boyes
I’ve always liked Vnnnnn, where nnnnn is the order of acquisition in your 
organization, e.g. V00001 for the first volume acquired, V00002 for the second, 
etc. The V is there so as not to confuse commands that need a volser as input into 
thinking they’re dealing with a real address (once had that problem with a 
system that had all-numeric userids --- ick), and I’ve never rolled over within 
the same complex during my time there.

If you stick to decimal numbering most people don’t confuse them with 
addresses, and if you go through a lot of disks (as I assume Visa does), you’re 
still unlikely to replace 99,999 at a time.  Works with LPARs, VM, z/OS, pretty 
much any situation you come to.
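A minimal sketch of that order-of-acquisition labeling (five decimal digits assumed from the 99,999 figure above):

```python
def next_volser(existing):
    """Return the next Vnnnnn volser after the highest one in use.
    existing: iterable of volsers like 'V04371'."""
    top = max((int(v[1:]) for v in existing if v.startswith("V")),
              default=0)
    if top >= 99999:
        raise ValueError("5-digit sequence exhausted")
    return f"V{top + 1:05d}"

print(next_volser(["V00001", "V04371"]))   # "V04372"
```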


On 8/27/10 10:45 AM, Schuh, Richard rsc...@visa.com wrote:

Also, if you have dasd that is EMIFd to n LPARS, how do you know (a) whether a 
disk is spare or in use if all of the dasd have volsers ZL, and (b) if in 
use, by which LPAR? I can see using VM as an indicator that a volume has 
not yet been deployed, but once it has, it needs to be labeled using another 
convention. Since we seem to replace our DASD farm every 2 or 3 years, device 
number has no place in the scheme other than to denote a spare volume.
Regards,
Richard Schuh


Re: Duplicate VOLID's

2010-08-27 Thread David Boyes
On 8/27/10 1:38 PM, Alan Altmark alan_altm...@us.ibm.com wrote:

 
 But I could imagine something like this in SYSTEM CONFIG:
 
 CP_Owned Slot 1 6X0RES ID 2105.000.IBM.13.3737504EE.0D0A
 CP_owned Slot 2 6X0TD1 ID 2107.900.IBM.13.29839621A.0D0A
 CP_owned Slot 3 6X0PG1 ID 2107.900.IBM.13.4924295DC.0D0A, (yes, there are
 two IDs for slot 3)
   2107.900.IBM.13.0358113AA.0D0A  (..choose in the
 order given)

Meh. Kinda has the same problem that ATM addresses had -- too much detail
stored as context. 
 
 Hmmmorwhen IPL has finished, CP could write the found UUIDs to the
 warm start area and use those in preference to any other on a subsequent
 IPL.  For PPRC pairs, write both UUIDs and allow either.   If a volume
 with the needed UUID is not available, then (based on configuration) ask
 which RDEV to use a la z/OS, or just take the lowest-numbered one
 available.  In either case, update the UUID in the warm start area.
 h.

Much better. 


Re: Duplicate VOLID's

2010-08-27 Thread Tom Huegel
Who wants to write the requirement?

On Fri, Aug 27, 2010 at 4:04 PM, David Boyes dbo...@sinenomine.net wrote:

 On 8/27/10 1:38 PM, Alan Altmark alan_altm...@us.ibm.com wrote:


  But I could imagine something like this in SYSTEM CONFIG:
 
  CP_Owned Slot 1 6X0RES ID 2105.000.IBM.13.3737504EE.0D0A
  CP_owned Slot 2 6X0TD1 ID 2107.900.IBM.13.29839621A.0D0A
  CP_owned Slot 3 6X0PG1 ID 2107.900.IBM.13.4924295DC.0D0A, (yes, there are
  two IDs for slot 3)
2107.900.IBM.13.0358113AA.0D0A  (..choose in
 the
  order given)

 Meh. Kinda has the same problem that ATM addresses had -- too much detail
 stored as context.

  Hmmmorwhen IPL has finished, CP could write the found UUIDs to
 the
  warm start area and use those in preference to any other on a subsequent
  IPL.  For PPRC pairs, write both UUIDs and allow either.   If a volume
  with the needed UUID is not available, then (based on configuration) ask
  which RDEV to use a la z/OS, or just take the lowest-numbered one
  available.  In either case, update the UUID in the warm start area.
  h.

 Much better.



Re: Duplicate VOLID's

2010-08-27 Thread Schuh, Richard
Order of acquisition will stay static. Is V04371 a spare or is it used by one 
of the n LPARS? If the latter, which one?


Regards,
Richard Schuh






From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On Behalf 
Of David Boyes
Sent: Friday, August 27, 2010 4:00 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: Duplicate VOLID's

I've always liked Vn, where n is the order of acquisition in your 
organization, eg V1 for the first volume acquired, V2 for the second, 
etc. The V is there to not confuse commands that need a volser as input into 
thinking they're dealing with a real address (once had that problem with a 
system that had all numeric userids --- ick), and I've never rolled over within 
the same complex during my time there.

If you stick to decimal numbering most people don't confuse them with 
addresses, and if you go through a lot of disks (as I assume Visa does), you're 
still unlikely to replace 99,999 at a time.  Works with LPARs, VM, z/OS, pretty 
much any situation you come to.


On 8/27/10 10:45 AM, Schuh, Richard rsc...@visa.com wrote:

Also, if you have dasd that is EMIFd to n LPARS, how do you know (a) whether a 
disk is spare or in use if all of the dasd have volsers ZL, and (b) if in 
use, by which LPAR? I can see using VM as an indicator that a volume has 
not yet been deployed, but once it has, it needs to be labeled using another 
convention. Since we seem to replace our DASD farm every 2 or 3 years, device 
number has no place in the scheme other than to denote a spare volume.
Regards,
Richard Schuh


Re: Duplicate VOLID's

2010-08-27 Thread Schuh, Richard
Just submit Alan's suggestion as-is and make sure that he gets full credit for 
it. :-)


Regards,
Richard Schuh






From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On Behalf 
Of Tom Huegel
Sent: Friday, August 27, 2010 5:12 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: Duplicate VOLID's

Who wants to write the requirement?

On Fri, Aug 27, 2010 at 4:04 PM, David Boyes dbo...@sinenomine.net wrote:
On 8/27/10 1:38 PM, Alan Altmark alan_altm...@us.ibm.com wrote:


 But I could imagine something like this in SYSTEM CONFIG:

 CP_Owned Slot 1 6X0RES ID 2105.000.IBM.13.3737504EE.0D0A
 CP_owned Slot 2 6X0TD1 ID 2107.900.IBM.13.29839621A.0D0A
 CP_owned Slot 3 6X0PG1 ID 2107.900.IBM.13.4924295DC.0D0A, (yes, there are
 two IDs for slot 3)
   2107.900.IBM.13.0358113AA.0D0A  (..choose in the
 order given)

Meh. Kinda has the same problem that ATM addresses had -- too much detail
stored as context.

 Hmmmorwhen IPL has finished, CP could write the found UUIDs to the
 warm start area and use those in preference to any other on a subsequent
 IPL.  For PPRC pairs, write both UUIDs and allow either.   If a volume
 with the needed UUID is not available, then (based on configuration) ask
 which RDEV to use a la z/OS, or just take the lowest-numbered one
 available.  In either case, update the UUID in the warm start area.
 h.

Much better.
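The UUID-preference idea quoted above could look something like the following. This is a hypothetical sketch only: `pick_rdev`, the data shapes, and the sample device numbers are invented for illustration and are not CP's actual logic.

```python
# Hypothetical sketch of the warm-start UUID preference described above.
# The names and data shapes here are illustrative, not CP internals.

def pick_rdev(preferred_uuids, online_volumes):
    """Choose the RDEV for a CP-owned slot.

    preferred_uuids: UUIDs saved in the warm start area (one UUID, or two
    for a PPRC pair), in the order they should be tried.
    online_volumes: mapping of rdev (int) -> device UUID string.
    Returns the chosen rdev, or None if no preferred UUID is online (the
    caller would then prompt, a la z/OS, or fall back to the
    lowest-numbered candidate).
    """
    for uuid in preferred_uuids:
        matches = sorted(r for r, u in online_volumes.items() if u == uuid)
        if matches:
            # Lowest-numbered RDEV wins if a UUID somehow appears twice.
            return matches[0]
    return None

# Example: the first-choice UUID of a PPRC pair is found at rdev 0D0B.
volumes = {0x0D0A: "2107.900.IBM.13.0358113AA.0D0A",
           0x0D0B: "2107.900.IBM.13.4924295DC.0D0A"}
pair = ["2107.900.IBM.13.4924295DC.0D0A", "2107.900.IBM.13.0358113AA.0D0A"]
print(hex(pick_rdev(pair, volumes)))  # prints 0xd0b
```

The point of the sketch is that the warm-start record, not the volser, drives selection; the volser stays purely a human label.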



Re: Duplicate VOLID's

2010-08-27 Thread David Boyes
Who’s using it is a different issue, and with the size of systems we’re 
starting to see, the name of the disk isn’t going to tell you anything 
worthwhile anyway. 6 characters just isn’t enough to overload with any useful 
meaning beyond a fairly small environment. I manage that problem in the change 
management system — OTRS is really quite nice for that. 8-)


On 8/27/10 7:21 PM, Schuh, Richard rsc...@visa.com wrote:

Order of acquisition will stay static. Is V04371 a spare or is it used by one 
of the n LPARS? If the latter, which one?



Re: Duplicate VOLID's

2010-08-26 Thread Brian Nielsen
On Wed, 25 Aug 2010 23:07:04 -0500, Tom Huegel tehue...@gmail.com wrote:

In a normal production environment this is not such an insurmountable
problem.
The problem is this is a test lab, and I don't necessarily know what
happens to the different disk volumes.
I do know what addresses my system disks are on, but there may be copies
that someone was testing with floating around.
It may have been in a second level machine, or a first level test.
It's only a problem at IPL.

The suggestions for using ONLINE_AT_IPL and OFFLINE_AT_IPL are great when
you are in a configuration where you know what DASD device addresses are
available and your system disks have been restored to the appropriate
addresses.  Examples would be a 2nd level guest or a known 1st level LPAR.

If you want the flexibility to be able to use any LPAR configuration with
any DASD device addresses that happen to be configured to it, then you need
to take an extra step.  Build a 1-pack recovery system that allows all
devices to be online at IPL time.  You restore and IPL the recovery system,
which won't care what volids are on any DASD address except its own.  Then
you have multiple options: A) You could relabel all (or just the problem
subset of) the online DASD, thus solving your problem, or B) You could
restore your main system packs and use the recovery system to update the
SYSTEM CONFIG on your main RES volume with the appropriate ONLINE_AT_IPL
and OFFLINE_AT_IPL addresses.

Brian Nielsen


Re: Duplicate VOLID's

2010-08-26 Thread Michael MacIsaac
I do know what addresses my system disks are on,
Ah! - an argument for the convention of using the RDEV as the last four 
characters of the volser :)) 

Mike MacIsaac mike...@us.ibm.com   (845) 433-7061

Re: Duplicate VOLID's

2010-08-26 Thread Mark Wheeler

Alas, that's about the worst possible convention to adopt if you ever plan to 
replace your DASD farm. 
 
Mark Wheeler
UnitedHealth Group 




 


Date: Thu, 26 Aug 2010 13:55:05 -0400
From: mike...@us.ibm.com
Subject: Re: Duplicate VOLID's
To: IBMVM@LISTSERV.UARK.EDU


I do know what addresses my system disks are on, 
Ah! - an argument for the convention of using the RDEV as the last four 
characters of the volser :)) 

Mike MacIsaac mike...@us.ibm.com   (845) 433-7061   
  

Re: Duplicate VOLID's

2010-08-26 Thread RPN01
The obvious argument against using the rdev in the volser is  when you end
up needing to move the data to a new volume, or restore the pack after a
physical problem, then you no longer have a match between the volser and the
rdev, and it becomes very confusing from there.

There really isn't one ideal way to do it.

-- 
Robert P. Nix  Mayo Foundation.~.
RO-OC-1-18 200 First Street SW/V\
507-284-0844   Rochester, MN 55905   /( )\
-^^-^^
In theory, theory and practice are the same, but
 in practice, theory and practice are different.



On 8/26/10 12:55 PM, Michael MacIsaac mike...@us.ibm.com wrote:

 
 I do know what addresses my system disks are on,
 Ah! - an argument for the convention of using the RDEV as the last four
 characters of the volser :))
 
 Mike MacIsaac mike...@us.ibm.com   (845) 433-7061



Re: Duplicate VOLID's

2010-08-26 Thread Schuh, Richard
That is absolutely the wrong thing to do. I am now suffering because someone 
else did that to dasd that is EMIFd to 3 LPARS. (It was all ZLccuu). It 
requires meticulous record keeping and is very error prone. I did wipe out a 
disk needed by one system because the records I received were not complete. 
Fortunately, it was a disk that was to be used by a new system and had not been 
updated; it was easy to restore. Also, it is a huge headache if you ever 
replace your DASD. I don't know about RACF, but there is no mechanism built 
into VM:Secure for easily doing a mass update of volsers. (I know, you can 
change the volser with one command - if it is a VM:Secure-controlled disk and 
nobody is linked to it. The latter is hard to achieve around here.)

Regards,
Richard Schuh






From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On Behalf 
Of Michael MacIsaac
Sent: Thursday, August 26, 2010 10:55 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: Duplicate VOLID's


I do know what addresses my system disks are on,
Ah! - an argument for the convention of using the RDEV as the last four 
characters of the volser :))

Mike MacIsaac mike...@us.ibm.com   (845) 433-7061


Re: Duplicate VOLID's

2010-08-26 Thread Scott Rohling
Hmm..   RACF isn't really related as it's protecting minidisks on z/VM, at
least - and doesn't care about volsers the minidisks are on.    The process
for DIRMAINT is probably similar to the things that need doing on VM:Secure
to do the directory changes:

-  Make a 'monolithic' copy of the directory and run a PIPE to change all
volsers..  then initialize DIRMAINT using the new directory (USER INPUT)
-  Put the directory online (DIRM DIRECT)
-  Change EXTENT CONTROL similarly and do a DIRM RLDE
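The "change all volsers" pass over a monolithic directory can be sketched roughly as follows. This is illustrative Python, not the actual PIPE; `relabel`, the sample directory records, and the volsers are made up.

```python
# Illustrative stand-in for the "run a PIPE to change all volsers" step,
# applied to a monolithic USER DIRECT file. Sample records are made up.

def relabel(directory_lines, volser_map):
    """Return directory lines with old volsers replaced per volser_map.

    A real pass would restrict itself to the volser field of MDISK (and
    similar) records; this simple word-wise substitution assumes volsers
    never collide with other directory tokens.
    """
    out = []
    for line in directory_lines:
        words = [volser_map.get(w, w) for w in line.split(" ")]
        out.append(" ".join(words))
    return out

direct = ["USER LINUX01 PASSWORD 512M 1G G",
          "MDISK 0191 3390 100 5 VM0D0A MR"]
print(relabel(direct, {"VM0D0A": "LXU001"})[1])
# prints: MDISK 0191 3390 100 5 LXU001 MR
```

The rewritten file would then be fed back in via USER INPUT, as described above.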

I'm in favor of labels using the rdev - unless you really have frequent
changes of DASD - to me, the benefits outweigh the occasional need to
update the directory.   YMMV, as this thread indicates.

Scott Rohling

On Thu, Aug 26, 2010 at 3:33 PM, Schuh, Richard rsc...@visa.com wrote:

  That is absolutely the wrong thing to do. I am now suffering because
 someone else did that to dasd that is EMIFd to 3 LPARS. (It was all
 ZLccuu). It requires meticulous record keeping and is very error prone. I
 did wipe out a disk needed by one system because the records I received were
 not complete. Fortunately, it was a disk that was to be used by a new system
 and had not been updated; it was easy to restore. Also, it is a huge
 headache if you ever replace your DASD. I don't know about RACF, but there
 is no mechanism built into VM:Secure for easily doing a mass update of
 volsers. (I know, you can change the volser with one command - if it is a
 VM:Secure-controlled disk and nobody is linked to it. The latter is hard to
 achieve around here.)

 Regards,
 Richard Schuh




  --
 *From:* The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] *On
 Behalf Of *Michael MacIsaac
 *Sent:* Thursday, August 26, 2010 10:55 AM

 *To:* IBMVM@LISTSERV.UARK.EDU
 *Subject:* Re: Duplicate VOLID's


 I do know what addresses my system disks are on,
 Ah! - an argument for the convention of using the RDEV as the last four
 characters of the volser :))

 Mike MacIsaac mike...@us.ibm.com   (845) 433-7061




Re: Duplicate VOLID's

2010-08-26 Thread Schuh, Richard
DIRMAINT is just a directory manager. It is similar to the directory manager 
component of VM:Secure. DIRMAINT does have the capability to do mass updates of 
the directory. VM:Secure does not. I have my own form of mass updater. I create 
code to perform the update of a generic single user and temporarily EXECLOAD it 
as PROFILE XEDIT. Then I run a pipe that looks something like this:

'PIPE  id list a | spec /vmsecure edit/ 1 w1 nw | cms |  log file a'

It usually runs quickly because our directory has fewer than 2000 userids in 
it. It might not be acceptable on a system with 1+ userids.

Regards,
Richard Schuh






From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On Behalf 
Of Scott Rohling
Sent: Thursday, August 26, 2010 4:56 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: Duplicate VOLID's

Hmm..   RACF isn't really related as it's protecting minidisks on z/VM, at 
least - and doesn't care about volsers the minidisks are on.    The process for 
DIRMAINT is probably similar to the things that need doing on VM:Secure to do 
the directory changes:

-  Make a 'monolithic' copy of the directory and run a PIPE to change all 
volsers..  then initialize DIRMAINT using the new directory (USER INPUT)
-  Put the directory online (DIRM DIRECT)
-  Change EXTENT CONTROL similarly and do a DIRM RLDE

I'm in favor of labels using the rdev - unless you really have frequent changes 
of DASD - to me, the benefits outweigh the occasional need to update the 
directory.   YMMV, as this thread indicates.

Scott Rohling




Re: Duplicate VOLID's

2010-08-26 Thread Tom Huegel
So I guess there is no way to absolutely protect z/VM from using the wrong
pack at IPL. Maybe a requirement? In SYSTEM CONFIG allow an optional rdev on
the SLOT definitions. Comments?






Re: Duplicate VOLID's

2010-08-26 Thread Scott Rohling
Then it sounds like changing volids isn't such a big deal?   ;-)
Automation can really help simplify the annoying stuff..

Scott Rohling






Re: Duplicate VOLID's

2010-08-26 Thread Scott Rohling
Would definitely agree an rdev specification on the SLOT def would be very
useful.I just recently built a 2nd level guest and neglected to relabel
the volumes before they IPL'd the 1st level system ...  ugly.   Wouldn't
have happened if the real address was specified...   good idea!

Scott Rohling

On Thu, Aug 26, 2010 at 7:58 PM, Tom Huegel tehue...@gmail.com wrote:

 So I guess there is no way to absolutely protect z/VM from using the wrong
 pack at IPL. Maybe a requirement? In SYSTEM CONFIG allow an optional rdev on
 the SLOT definitions. Comments?








Re: Duplicate VOLID's

2010-08-26 Thread Dale R. Smith
On Thu, 26 Aug 2010 18:58:08 -0700, Tom Huegel tehue...@gmail.com wrote:

So I guess there is no way to absolutely protect z/VM from using the wrong
pack at IPL. Maybe a requirement? In SYSTEM CONFIG allow optional rdev on
the SLOT definitions. Comments?


I would rather see z/VM issue prompt messages the way z/OS does for 
duplicate volsers/duplicate SYSRES packs:

IEA213A DUPLICATE VOLUME volname FOUND ON DEVICES dev1 AND dev2. REPLY 
DEVICE NUMBER WHICH IS TO REMAIN OFFLINE

IEA214A DUPLICATE SYSRES volname FOUND ON DEVICE dev. VERIFY THAT CORRECT 
DEVICE WAS USED FOR IPL. DUPLICATE DEVICE WILL REMAIN OFFLINE. 
REPLY 'CONT' TO CONTINUE IPL

This gives you/operations the chance to pick which duplicate volume 
should be offline (or which one should be online), instead of z/VM 
picking one for you and probably using the wrong one.  This is one of the 
few things that z/OS does better than z/VM!  :-)

-- 
Dale R. Smith


Re: Duplicate VOLID's

2010-08-26 Thread Rich Smrcina
 How much 'protection' is really necessary here?  If something like this is implemented 
*AND* the wrong device number is used in the slot definition, your system could be 
rendered non-IPLable because the volid on the slot doesn't match the one on the volume.  
What takes precedence?  Should the IPL stop and CP go through a dialog asking a bunch of 
silly questions about which volume you 'really' intended to use?


The volumes should be labeled properly in the first place.

I'll vote strongly negative for a requirement to change this process for the reasons 
indicated so far.


On 08/26/2010 09:45 PM, Scott Rohling wrote:
Would definitely agree an rdev specification on the SLOT def would be very useful.
I just recently built a 2nd level guest and neglected to relabel the volumes before 
they IPL'd the 1st level system ...  ugly.   Wouldn't have happened if the real 
address was specified...   good idea!


Scott Rohling



--
Rich Smrcina
Phone: 414-491-6001
http://www.linkedin.com/in/richsmrcina

Catch the WAVV! http://www.wavv.org
WAVV 2011 - April 15-19, 2011 Colorado Springs, CO


Re: Duplicate VOLID's

2010-08-26 Thread Tom Huegel
One can't always be sure the packs are labeled properly. Besides, why depend
on people when software can do it better? I suspect the reason for VM just
choosing by volid goes back to when the disk packs were removable: put your
second level testRES on the shelf when you were done. Putting an OPTIONAL
rdev on the SLOT makes sense, similar to a DEDICATE statement with both
rdev and volid.

If there is no rdev on the SLOT and there are dups, or the volid on the
rdev doesn't match, then ask questions.
Maybe be able to run a mini editor from SAPL to correct SYSTEM CONFIG
errors.
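The proposed check might behave along these lines. This is a hypothetical sketch: no such SLOT operand exists today, and `resolve_slot` and the sample volids are invented for illustration.

```python
# Sketch of the proposed SLOT-with-optional-rdev check (hypothetical):
# if an rdev is coded, the volid there must match; otherwise fall back
# to volid search, prompting on duplicates or a mismatch.

def resolve_slot(volid, online, rdev=None):
    """online: mapping rdev -> volid.  Returns the rdev to use, or the
    string 'PROMPT' when the operator should be asked."""
    if rdev is not None:
        # rdev pins the choice; a mismatched label is a question, not a guess.
        return rdev if online.get(rdev) == volid else "PROMPT"
    matches = [r for r, v in online.items() if v == volid]
    if len(matches) == 1:
        return matches[0]
    return "PROMPT"          # none found, or duplicates

online = {0x100: "6X0RES", 0x200: "6X0RES", 0x101: "6X0TD1"}
print(resolve_slot("6X0RES", online, rdev=0x100))  # prints 256 (0x100)
print(resolve_slot("6X0RES", online))              # prints PROMPT
```

Being optional, the operand would leave today's behavior unchanged for slots that omit it.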



On Thu, Aug 26, 2010 at 10:09 PM, Rich Smrcina r...@velocitysoftware.com wrote:

  How much 'protection' is really necessary here?  If something like this is
 implemented *AND* the wrong device number is used in the slot definition,
 your system could be rendered non-IPLable because the volid on the slot
 doesn't match the one on the volume.  What takes precedence?  Should the IPL
 stop and CP go through a dialog asking a bunch of silly questions about
 which volume you 'really' intended to use?

 The volumes should be labeled properly in the first place.

 I'll vote strongly negative for a requirement to change this process for
 the reasons indicated so far.


 On 08/26/2010 09:45 PM, Scott Rohling wrote:

 Would definitely agree an rdev specification on the SLOT def would be very
 useful.I just recently built a 2nd level guest and neglected to relabel
 the volumes before they IPL'd the 1st level system ...  ugly.   Wouldn't
 have happened if the real address was specified...   good idea!

 Scott Rohling


  --
 Rich Smrcina
 Phone: 414-491-6001
 http://www.linkedin.com/in/richsmrcina

 Catch the WAVV! http://www.wavv.org
 WAVV 2011 - April 15-19, 2011 Colorado Springs, CO



Duplicate VOLID's

2010-08-25 Thread Tom Huegel
I am looking for a way to verify at IPL time that z/VM is using the volumes
I intended.
It is possible that there are more than one volume with a volid of xxxRES
xxxWK1 xxxWK2 etc.
I could put something in the AUTOLOG1 PROFILE EXEC to do Q DASD and verify
that xxxRES is at address x'100' and xxxWK1 is at x'101' and xxxWK2 is at
x'102'.. etc. but that can be messy and difficult to maintain.
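Such an AUTOLOG1 check could be scripted along these lines. This is an illustrative sketch only: the output records, `check_dasd`, and the sample volsers are made up, not actual QUERY DASD response format.

```python
# Illustrative sketch of the AUTOLOG1 idea: parse QUERY-DASD-style
# records and compare against an expected address -> volser table.
# Record layout and sample volsers are made up for the example.

def check_dasd(query_lines, expected):
    """query_lines: records like 'DASD 0100 CP OWNED VMBRES 55'.
    expected: dict of address -> volser.
    Returns a list of (address, expected_volser, actual_volser) mismatches."""
    actual = {}
    for line in query_lines:
        parts = line.split()
        if len(parts) >= 5 and parts[0] == "DASD":
            actual[parts[1]] = parts[4]
    return [(addr, vol, actual.get(addr))
            for addr, vol in expected.items() if actual.get(addr) != vol]

out = ["DASD 0100 CP OWNED VMBRES 55",
       "DASD 0101 CP OWNED VMBWK1 12"]
print(check_dasd(out, {"0100": "VMBRES", "0101": "VMBWK2"}))
# prints: [('0101', 'VMBWK2', 'VMBWK1')]
```

As noted in the thread, though, by the time AUTOLOG1 runs the system has already IPLed from whatever volumes CP picked, so this only detects the problem after the fact.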

Then I thought I could do something in the SYSTEM CONFIG file.
SYSTEM_IDENTIFIER_2094_123ABC_GOODSYS
SYSTEM_IDENTIFIER_DEFAULT_BADSYS

IMBED -SYSTEM- CONFIG

In GOODSYS CONFIG I would have all of the normal stuff.
In BADSYS CONFIG maybe just a bunch of SAY 'WRONG SYSTEM' statements..

This doesn't verify anything beyond that this is the correct xxxRES for this
LPAR. If the volume was cloned (DDR) it would pass this test anyway.

Just curious how others handle this, if at all.

Thanks for any thoughts.


Re: Duplicate VOLID's

2010-08-25 Thread Schuh, Richard
You could change the SAPL defaults on IPLable volumes so that it would be 
obvious to everyone which disk was IPLed. That is assuming that whoever IPLs 
will look at the comments before hitting PF10.

Also, make sure that no two of the systems have a matching volser in the spool 
configuration. That could be very bad. (On the other hand, if you do include a 
few duplicates in the spool, the operators will also get messages about spool 
errors and have to give permission for the start-up to continue. Just hope they 
are wise enough to stop the process and call you.)


Regards,
Richard Schuh






From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On Behalf 
Of Tom Huegel
Sent: Wednesday, August 25, 2010 4:49 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Duplicate VOLID's

I am looking for a way to verify at IPL time that z/VM is using the volumes I 
intended.
It is possible that there are more than one volume with a volid of xxxRES 
xxxWK1 xxxWK2 etc.
I could put something in the AUTOLOG1 PROFILE EXEC to do Q DASD and verify that 
xxxRES is at address x'100' and xxxWK1 is at x'101' and xxxWK2 is at x'102'.. 
etc. but that can be messy and difficult to maintain.

Then I thought I could do something in the SYSTEM CONFIG file.
SYSTEM_IDENTIFIER_2094_123ABC_GOODSYS
SYSTEM_IDENTIFIER_DEFAULT_BADSYS

IMBED -SYSTEM- CONFIG

In GOODSYS CONFIG I would have all of the normal stuff.
In BADSYS CONFIG maybe just a bunch of SAY 'WRONG SYSTEM' statements..

This doesn't verify anything beyond that this is the correct xxxRES for this 
LPAR. If the volume was cloned (DDR) it would pass this test anyway.

Just curious how others handle this, if at all.

Thanks for any thoughts.




Re: Duplicate VOLID's

2010-08-25 Thread Rich Smrcina
 Cloned systems are immediately assigned a new set of volids.  You may want to 
consider assigning new sets of volids.


An alternative is to take the other systems devices offline at IPL, but if they move for 
whatever reason that is one more thing to keep track of and maintain.


On 08/25/2010 06:49 PM, Tom Huegel wrote:

I am looking for a way to verify at IPL time that z/VM is using the volumes I 
intended.
It is possible that there are more than one volume with a volid of xxxRES xxxWK1 
xxxWK2 etc.
I could put something in the AUTOLOG1 PROFILE EXEC to do Q DASD and verify that xxxRES 
is at address x'100' and xxxWK1 is at x'101' and xxxWK2 is at x'102'.. etc. but that 
can be messy and difficult to maintain.

Then I thought I could do something in the SYSTEM CONFIG file.
SYSTEM_IDENTIFIER_2094_123ABC_GOODSYS
SYSTEM_IDENTIFIER_DEFAULT_BADSYS
IMBED -SYSTEM- CONFIG
In GOODSYS CONFIG I would have all of the normal stuff.
In BADSYS CONFIG maybe just a bunch of SAY 'WRONG SYSTEM' statements..
This doesn't verify anything beyond that this is the correct xxxRES for this LPAR. If 
the volume was cloned (DDR) it would pass this test anyway.

Just curious how others handle this, if at all.
Thanks for any thoughts.



--
Rich Smrcina
Phone: 414-491-6001
http://www.linkedin.com/in/richsmrcina

Catch the WAVV! http://www.wavv.org
WAVV 2011 - April 15-19, 2011 Colorado Springs, CO


Re: Duplicate VOLID's

2010-08-25 Thread Feller, Paul
We code stuff in the SYSTEM CONFIG to control which lpar sees which volumes at 
IPL time.  It does require some work to set up and to maintain it.

Paul Feller
AIT Mainframe Technical Support

From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On Behalf 
Of Tom Huegel
Sent: Wednesday, August 25, 2010 6:49 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Duplicate VOLID's

I am looking for a way to verify at IPL time that z/VM is using the volumes I 
intended.
It is possible that there are more than one volume with a volid of xxxRES 
xxxWK1 xxxWK2 etc.
I could put something in the AUTOLOG1 PROFILE EXEC to do Q DASD and verify that 
xxxRES is at address x'100' and xxxWK1 is at x'101' and xxxWK2 is at x'102'.. 
etc. but that can be messy and diffacult to maintain.

Then I thought I could do something in the SYSTEM CONFIG file.
SYSTEM_IDENTIFIER_2094_123ABC_GOODSYS
SYSTEM_IDENTIFER_DEFAULT_BADSYS

IMBED -SYSTEM- CONFIG

In GOODSYS CONFIG I would have all of the normal stuff.
In BADSYS CONFIG maybe just a bunch of SAY 'WRONG SYSTEM' statements..

This doesn't verify anything beyond that this is the correct xxxRES for this 
LPAR. If the volume was cloned (DDR) it would pass this test anyway.

Just curious how others handle this, if at all.

Thanks for any thoughts.




Re: Duplicate VOLID's

2010-08-25 Thread Alan Altmark
On Wednesday, 08/25/2010 at 07:49 EDT, Tom Huegel tehue...@gmail.com 
wrote:
 I am looking for a way to verify at IPL time that z/VM is using the 
volumes I 
 intended.
 It is possible that there are more than one volume with a volid of 
xxxRES 
 xxxWK1 xxxWK2 etc.
 I could put something in the AUTOLOG1 PROFILE EXEC to do Q DASD and 
verify that 
 xxxRES is at address x'100' and xxxWK1 is at x'101' and xxxWK2 is at 
x'102'.. 
 etc. but that can be messy and difficult to maintain.  
 Then I thought I could do something in the SYSTEM CONFIG file.
 SYSTEM_IDENTIFIER_2094_123ABC_GOODSYS
 SYSTEM_IDENTIFIER_DEFAULT_BADSYS 
 IMBED -SYSTEM- CONFIG 
 In GOODSYS CONFIG I would have all of the normal stuff. 
 In BADSYS CONFIG maybe just a bunch of SAY 'WRONG SYSTEM' statements.. 
 This doesn't verify anything beyond that this is the correct xxxRES for 
this 
 LPAR. If the volume was cloned (DDR) it would pass this test anyway. 
 Just curious how others handle this, if at all. 
 Thanks for any thoughts. 

You need to use the DEVICES OFFLINE_AT_IPL and ONLINE_AT_IPL statements in 
SYSTEM CONFIG that I discussed in a July post.  
http://listserv.uark.edu/scripts/wa.exe?A2=ind1007&L=IBMVM&P=R1084

Checking AFTER the system has IPLed is too late.
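A sketch of the approach those statements enable -- take everything offline at IPL, then bring online only the ranges this LPAR should own. The device ranges are placeholders, not taken from the thread:

```
/* Sketch: hide all devices at IPL, then expose only the ranges   */
/* this system should own.  Ranges below are placeholders.        */
Devices ,
   Offline_at_IPL  0000-FFFF ,
   Online_at_IPL   0100-01FF ,
   Online_at_IPL   0300-033F
```

Because the statements are evaluated during IPL, a stray clone sitting at an excluded address is never seen, which addresses the cloned-volume case the AUTOLOG1 check cannot.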

Alan Altmark
z/VM Development
IBM Endicott


Re: Duplicate VOLID's

2010-08-25 Thread Tom Huegel
In a normal production environment this is not an insurmountable problem.
The problem is this is a test lab, and I don't necessarily know what happens
to the different disk volumes.
I do know what addresses my system disks are on, but there may be copies
that someone was testing with floating around.
It may have been in a second-level machine, or a first-level test.
It's only a problem at IPL.



On Wed, Aug 25, 2010 at 10:08 PM, Feller, Paul pfel...@aegonusa.com wrote:

  We code stuff in the SYSTEM CONFIG to control which lpar sees which
 volumes at IPL time.  It does require some work to set up and to maintain
 it.



 *Paul Feller*
 *AIT Mainframe Technical Support*

  *From:* The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] *On
 Behalf Of *Tom Huegel
 *Sent:* Wednesday, August 25, 2010 6:49 PM
 *To:* IBMVM@LISTSERV.UARK.EDU
 *Subject:* Duplicate VOLID's



 I am looking for a way to verify at IPL time that z/VM is using the volumes
 I intended.

 It is possible that there are more than one volume with a volid of xxxRES
 xxxWK1 xxxWK2 etc.

 I could put something in the AUTOLOG1 PROFILE EXEC to do Q DASD and verify
 that xxxRES is at address x'100' and xxxWK1 is at x'101' and xxxWK2 is at
  x'102'.. etc. but that can be messy and difficult to maintain.



 Then I thought I could do something in the SYSTEM CONFIG file.

 SYSTEM_IDENTIFIER_2094_123ABC_GOODSYS

  SYSTEM_IDENTIFIER_DEFAULT_BADSYS



 IMBED -SYSTEM- CONFIG



 In GOODSYS CONFIG I would have all of the normal stuff.

 In BADSYS CONFIG maybe just a bunch of SAY 'WRONG SYSTEM' statements..



 This doesn't verify anything beyond that this is the correct xxxRES for
  this LPAR. If the volume was cloned (DDR) it would pass this test anyway.



  Just curious how others handle this, if at all.



 Thanks for any thoughts.