FTP of CMS file as attachment - PJBR

2009-06-16 Thread jose raul baron
Hi list, I would like to ask if there's a way to e-mail a CMS file as an
attachment and not as the text itself. We have been trying to achieve this
but haven't succeeded yet. All we get is the file as text, not as
attachment. 

Any help will be very welcome. 
Thanks in advance, 


Raul Barón
Dpto. Sistemas
CALCULO S.A. 
E-mail: jba...@calculo-sa.es 


VMBackup question

2009-06-16 Thread Jim Bohnsack
We have installed a Luminex virtual tape system.  My question isn't how 
to make it work.  It seems to be doing fine.  The only problem and the 
reason for my question is that I have the virtual tapes defined as 3490C 
tapes.  My quandary is the fact that I have the virtual tapes defined as 
the VMBackup Primary Resource Pool and a 3494 robot with 3590 
drives, located in a different location, as the VMBackup Copy Resource 
Pool.  It works just fine for backups. 

Restores, however, result in VMBackup picking a 3590 tape rather than 
one of the 3490 tapes.  I know, or think I know, that I can get around 
that using VMBackup Job Template files and specifying the input tape 
volser and/or density.  That's a little awkward.  I want the restores to 
be able to be done from the VMBackup screens so our admin. people can do 
them.  The VMBackup System Prog. Ref says that the tapes for a restore 
are chosen from a density preference list with the higher density tapes, 
naturally, being chosen first.  That's logical, but not what I want. 

Does anyone have any idea as to how to force VMBackup to use a different 
list?  I have a question open with CA, but I was hoping that someone has 
faced the same question.


Jim

--
Jim Bohnsack
Cornell University
(972) 596-6377 home/office
(972) 342-5823 cell
jab...@cornell.edu


Re: FTP of CMS file as attachment - PJBR

2009-06-16 Thread Thomas Kern
For email, not FTP, use the MAILIT package from the IBM Downloads page. It
can do binary files like PDFs too.
 
/Tom Kern

On Tue, 16 Jun 2009 12:25:08 +0200, jose raul baron jba...@calculo-sa.es
wrote:

Hi list, I would like to ask if there's a way to e-mail a CMS file as an
attachment and not as the text itself. We have been trying to achieve this
but haven't succeeded yet. All we get is the file as text, not as
attachment. 

Any help will be very welcome. 
Thanks in advance, 


Raul Barón
Dpto. Sistemas
CALCULO S.A. 
E-mail: jba...@calculo-sa.es 


Re: FTP of CMS file as attachment - PJBR

2009-06-16 Thread Bill Munson
Your subject line says FTP, but your question says E-MAIL?

E-Mail can be done 2 ways from CMS:

get the excellent product MAILBOOK from SineNomine 
or the very fine product MAILIT from the IBM VM download page

both can send CMS files as attachments

good luck
 
Bill Munson 
Sr. z/VM Systems Programmer 
Brown Brothers Harriman & Co.
525 Washington Blvd. 
Jersey City, NJ 07310 
201-418-7588

President MVMUA
http://www2.marist.edu/~mvmua/





From: jose raul baron jba...@calculo-sa.es
Sent by: The IBM z/VM Operating System IBMVM@LISTSERV.UARK.EDU
Date: 06/16/2009 06:25 AM
Please respond to: The IBM z/VM Operating System IBMVM@LISTSERV.UARK.EDU
To: IBMVM@LISTSERV.UARK.EDU
Subject: FTP of CMS file as attachment - PJBR

Hi list, I would like to ask if there's a way to e-mail a CMS file as an
attachment and not as the text itself. We have been trying to achieve this
but haven't succeeded yet. All we get is the file as text, not as
attachment. 

Any help will be very welcome. 
Thanks in advance, 


Raul Barón
Dpto. Sistemas
CALCULO S.A. 
E-mail: jba...@calculo-sa.es 





Re: FTP of CMS file as attachment - PJBR

2009-06-16 Thread Marci Beach
Or you can use SENDFILE from CMS to send an attachment via the MIME 
option:

sendfile cms file fm to use...@xx.xx.com (mime ascii-attach 
or
sendfile cms file fm to use...@xx.xx.com (mime binary-attach

depending on whether it is an ASCII or BINARY file.

Marci Beach
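
For anyone who wants to script that choice, here is a minimal REXX sketch 
(not from the thread) wrapping the SENDFILE options shown above; the exec 
name MAILATT and the BINTYPES list are made-up placeholders to adapt locally:

/* MAILATT EXEC -- sketch only: mail a CMS file as a MIME attachment,  */
/* choosing ASCII-ATTACH or BINARY-ATTACH from the file type.          */
parse upper arg fn ft fm addr .
if fn='' | ft='' | fm='' | addr='' then do
   say 'Usage: MAILATT fn ft fm userid@hostname'
   exit 24
   end
bintypes = 'PDF ZIP JPG GIF VMARC MODULE'  /* hypothetical binary list */
if wordpos(ft,bintypes) > 0 then opt = 'BINARY-ATTACH'
else opt = 'ASCII-ATTACH'
'SENDFILE' fn ft fm 'TO' addr '(MIME' opt
exit rc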
 
 



From: jose raul baron jba...@calculo-sa.es
To: IBMVM@LISTSERV.UARK.EDU
Date: 06/16/2009 10:28 AM
Subject: FTP of CMS file as attachment - PJBR



Hi list, I would like to ask if there's a way to e-mail a CMS file as an
attachment and not as the text itself. We have been trying to achieve this
but haven't succeeded yet. All we get is the file as text, not as
attachment. 

Any help will be very welcome. 
Thanks in advance, 


Raul Barón
Dpto. Sistemas
CALCULO S.A. 
E-mail: jba...@calculo-sa.es 





Re: VMBackup question

2009-06-16 Thread Imler, Steven J
Your VM:Backup System Programmer's Reference is correct ... the higher-density
media will always be preferred in the volser list sent to VM:Tape.  

The simplest thing I can think to do is use a VM:Tape COMMAND EXIT that
swaps the 2 volsers in the list.  

Alternatively, I suppose you could outcode (CHECKOUT) the 3590 tapes;
I think that would prevent them from being selected (but you'd need to
remember to check them back in before they can be scratched for reuse). 

JR (Steven) Imler
CA 
Senior Sustaining Engineer
Tel: +1-703-708-3479
steven.im...@ca.com



-Original Message-
From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On
Behalf Of Jim Bohnsack
Sent: Tuesday, June 16, 2009 10:34 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: VMBackup question

We have installed a Luminex virtual tape system.  My question isn't how 
to make it work.  It seems to be doing fine.  The only problem and the 
reason for my question is that I have the virtual tapes defined as 3490C 
tapes.  My quandary is the fact that I have the virtual tapes defined as 
the VMBackup Primary Resource Pool and a 3494 robot with 3590 
drives, located in a different location, as the VMBackup Copy Resource 
Pool.  It works just fine for backups. 

Restores, however, result in VMBackup picking a 3590 tape rather than 
one of the 3490 tapes.  I know, or think I know, that I can get around 
that using VMBackup Job Template files and specifying the input tape 
volser and/or density.  That's a little awkward.  I want the restores to 
be able to be done from the VMBackup screens so our admin. people can do 
them.  The VMBackup System Prog. Ref says that the tapes for a restore 
are chosen from a density preference list with the higher density tapes, 
naturally, being chosen first.  That's logical, but not what I want. 

Does anyone have any idea as to how to force VMBackup to use a different 
list?  I have a question open with CA, but I was hoping that someone has 
faced the same question.

Jim

-- 
Jim Bohnsack
Cornell University
(972) 596-6377 home/office
(972) 342-5823 cell
jab...@cornell.edu


Re: Control Data Backups

2009-06-16 Thread Schuh, Richard
Thanks for the tip regarding filemode being a no-no.

Regards, 
Richard Schuh 

 

 -Original Message-
 From: The IBM z/VM Operating System 
 [mailto:ib...@listserv.uark.edu] On Behalf Of Rob van der Heij
 Sent: Monday, June 15, 2009 5:10 PM
 To: IBMVM@LISTSERV.UARK.EDU
 Subject: Re: Control Data Backups
 
 On Tue, Jun 16, 2009 at 1:44 AM, Schuh, 
 Richardrsc...@visa.com wrote:
 
  Setting up 2 test servers is not a problem. Devising the conditions 
  where one might cause the other problems could be difficult when 
  resources are limited. First-hand experience in the real 
 world would 
  be a much better measure.
 
 There was discussion about this on the list in July 2008.
 We used to have a setup like this where the PROFILE EXEC of 
 SFS1 would access its directory in SFS2 and refer to that 
 file mode on the DMSPARMS. My problem with that was that when 
 SFS2 was restarted for some reason, the directory was 
 released in SFS1 and nothing would help access it (apart from 
 poking in SFS control blocks to issue a CMS command to fix it).
 
 Sue pointed out that the proper way is to refer to the SFS 
 directory rather than the file mode. So the only thing to 
 take care of is that you schedule the control data backup 
 often enough that it does not happen by accident when you 
 have the other SFS down for service.
 
 Rob
 

Re: FTP of CMS file as attachment - PJBR

2009-06-16 Thread Mike Walter
Have you tried: SENDFILE fn ft fm TO addressee (SMTP MIME

And... what is the - PJBR appended to the Subject: of your posts?

Mike Walter
Hewitt Associates
Any opinions expressed herein are mine alone and do not necessarily 
represent the opinions or policies of Hewitt Associates.


Hi list, I would like to ask if there's a way to e-mail a CMS file as an
attachment and not as the text itself. We have been trying to achieve this
but haven't succeeded yet. All we get is the file as text, not as
attachment. 

Any help will be very welcome. 
Thanks in advance, 


Raul Barón
Dpto. Sistemas
CALCULO S.A. 
E-mail: jba...@calculo-sa.es 





Re: VMBackup question

2009-06-16 Thread Mike Walter
A heavily stripped-down version of our VM:Tape Command Exit is pasted 
below.  We do quite a bit more in our full exit (e.g., permitting 
operations to redirect VM:Backup recall requests to different campuses on 
different drives at will, handling Disaster Recovery recall mounts 
automatically, etc.), but this stripped-down example should get you 
started. 

You should probably adjust a few of the variables at the top (VMBonly=, 
VMBsvms=, VMAsvms=) to meet your specific needs.  I find that real-world 
examples help a bit more than most doc, since they often cover situations 
that might not be obvious from the doc.

Mike Walter
Hewitt Associates
Any opinions expressed herein are mine alone and do not necessarily 
represent the opinions or policies of Hewitt Associates.

/* Prolog; See Epilog for additional information   *
 * Exec Name - $COMMAND EXEC (VMTAPE Command Exit)  *
 * Supported By  -  *
 * Status - Version 1, Release 2.0   *
 */
/* Address COMMAND */  /* --- Not a good idea in this Exit ---*/

   parse source xos xct xfn xft xfm xcmd xenvir .
   parse upper arg parms 0 operands '(' options ')' parmrest
   /* Variables beginning with '?' are typically binary flags ON|OFF*/
   self=userid()

  'EXTRACT /COMMAND/USERID/'

   VMBonly='VMBACKUP V2BACKUP'
   VMBsvms='VMBACKUP VMBARCH  VMBMPC' ,
   'V2BACKUP V2BARCH  V2BMPC'
   VMAsvms='VMARCH   V2ARCH   V3ARCH' ,
   'VMMDARCH V2MDARCH V3MDARCH'

   /* Note: there will be one cmd per active VM:Oper TAPEMGR user   */
   /*   (it can look like a bug, but it's just them synching up)*/

   /* say 'Cmd from' user.1':' command.1 */

   /* - */
   /* Look for command: */
   /*   MOUNT volser1 vdev  (WRITE RETPD nnn ANYTAPE*/
   /* or (for restores, perhaps such as D.R.)   */
   /*   MOUNT (volsr1 volsr2 ) vdev (READ LABEL SL ANYTAPE  */
   /* or    MOUNT (volsr1 ) vdev (READ LABEL SL ANYTAPE */
   /*   */
   /* E.g.  MOUNT (318321 701007 ) 310 (READ LABEL SL ANYTAPE   */
   /* - */

   parse var user.1 l6user1 7 .
   ?multvols=0
   If pos(')',command.1)>0 then
  Do
parse var command.1 cmd '(' vols ')' vdev '('mntopts ')' ,
  1 . w2 .
cmd=strip(cmd,'B')
If words(vols)>1 then    /* Might be handy to know */
   ?multvols=1
  End
   Else
  Do
parse var command.1 cmd vols postvols '('mntopts
cmd=strip(cmd,'B')
  End

   /* Exit quickly if not a VM:Backup server recall mount */
   If \abbrev('MOUNT',cmd,1) then Call Exit 0
   If \?multvols then Call Exit 0

   ?read=wordpos('READ',mntopts)>0   /* R/O mount?  */
   If \?read then Call Exit 0        /* No, pass on cmd unchanged */

   /* Exit quickly if not a VM:Backup server */
   If wordpos(user.1,'VMBACKUP VMBARCH VMBMPC V2BACKUP')=0  ,
      & l6user1<>'VMBRES'  then      /* Any VMBRESnn server */
      Call Exit 0                    /* Exit if not VM:Backup svm.  */

   /* - */
   /* At this point the MOUNT cmd is from VMBACKUP|VMBARCH|VMBRESnn,*/
   /* is READ only, and had a parenthesis around the volser(s). */
   /* - */

   say 'Input:' command.1
   parse var vols vol1 vol2 .

   /* - */
   /* At this point the MOUNT cmd is from VMBACKUP|VMBARCH|VMBRESnn,*/
   /* is READ only, and had a parenthesis around the volser(s). */
   /* - */

   /* - */
   /* If mounter is a VMBACKUP machine, the mount is a READ (recall)*/
   /* swap the volsers per local requirements.  */
   /* - */

   newcmd=cmd '('vol2 vol1')' vdev '('mntopts
  'SET COMMAND' newcmd
   Call Exit 0


Call Exit 0

/** */
/*   Sub-Routines below this point  */
/** */

Exit:
   parse arg exitrc .
   If verify(exitrc,0123456789)=0 then Exit exitrc


/* Epilog ***
 * Function  -  *
 * Component of  -  *
 * Command format-  *
 * 

Re: VMBackup question

2009-06-16 Thread Jim Bohnsack
Steve--That wasn't the answer I wanted to hear, but it is the type I 
expected.  We don't use VM:Tape, just VM:Backup.  A work-around I've 
thought of is to do the backups, which happen in the middle of the 
night, in 2 separate runs: first a backup specifying the 3590 tape pool 
and then another backup specifying the virtual tape pool.  They wouldn't 
be exact, but at least taking the default on a restore of the most 
recent backup would pull the virtual tapes rather than the 3590 
cartridges. 

On the other hand, I'll have to talk with the group to see if it really 
makes any difference as to which gets pulled for a restore.  The result 
is going to be the same.  One purpose of the virtual tape was to allow 
us to get an old 3494 out of use and off maintenance.  Mission 
accomplished, even though it's not esthetically pleasing.


Jim

Imler, Steven J wrote:

Your VM:Backup System Programmer's Reference is correct ... the higher-density
media will always be preferred in the volser list sent to VM:Tape.

The simplest thing I can think to do is use a VM:Tape COMMAND EXIT that
swaps the 2 volsers in the list.

Alternatively, I suppose you could outcode (CHECKOUT) the 3590 tapes;
I think that would prevent them from being selected (but you'd need to
remember to check them back in before they can be scratched for reuse).

JR (Steven) Imler
CA
Senior Sustaining Engineer
Tel: +1-703-708-3479
steven.im...@ca.com





--
Jim Bohnsack
Cornell University
(972) 596-6377 home/office
(972) 342-5823 cell
jab...@cornell.edu


Re: FTP of CMS file as attachment - PJBR

2009-06-16 Thread David Boyes
 Hi list, I would like to ask if there's a way to e-mail a CMS file as
 an
 attachment and not as the text itself. We have been trying to achieve
 this
 but haven't succeeded yet. All we get is the file as text, not as
 attachment.

Two ways you could do it: use XMITIP or (for a more full-function solution) 
use MAILBOOK. 


Re: FTP of CMS file as attachment - PJBR

2009-06-16 Thread Lionel Dyck

David - has someone ported XMITIP to CMS?

I have to admit that I've not tried it there but it is written in REXX
although the file I/O is the z/OS EXECIO.



Lionel B. Dyck, z/Linux Virtualization Specialist
IBM Global Services - Kaiser Permanente Team
Linux on System z Service Delivery Team
925-926-5332 (8-473-5332) | E-Mail: ld...@us.ibm.com
AIM: lbdyck | Yahoo IM: lbdyck


I never guess. It is a capital mistake to theorize before one has data.
Insensibly one begins to twist facts to suit theories, instead of theories
to suit facts.
- Sir Arthur Conan Doyle


   
From:    David Boyes dbo...@sinenomine.net
To:      IBMVM@LISTSERV.UARK.EDU
Date:    06/16/2009 08:56 AM
Subject: Re: FTP of CMS file as attachment - PJBR
Sent by: The IBM z/VM Operating System IBMVM@LISTSERV.UARK.EDU
   





 Hi list, I would like to ask if there's a way to e-mail a CMS file as
 an
 attachment and not as the text itself. We have been trying to achieve
 this
 but haven't succeeded yet. All we get is the file as text, not as
 attachment.

Two ways you could do it : Use XMITIP or (for a more full-function
solution) use MAILBOOK.



Re: FTP of CMS file as attachment - PJBR

2009-06-16 Thread David Boyes
The early version I have (really REALLY old) seems to run OK in linemode. I 
don't know if the modern version would. Certainly all the ISPF goodies in the 
modern one don't work.


I have to admit that I've not tried it there but it is written in REXX although 
the file I/O is the z/OS EXECIO.


Re: VMBackup question

2009-06-16 Thread Michael Coffin
Can the Luminex virtual tapes be defined as a higher density than the
real 3590's?  

PS:  What is the purpose of creating backups with virtual tapes AND real
3590 tapes?  Are the real tapes sent offsite or something?

-Mike

-Original Message-
From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On
Behalf Of Jim Bohnsack
Sent: Tuesday, June 16, 2009 11:48 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: VMBackup question


Steve--That wasn't the answer I wanted to hear, but it is the type I 
expected.  We don't use VM:Tape, just VM:Backup.  A work-around I've 
thought of is to do the backups, which happen in the middle of the 
night, in 2 separate runs: first a backup specifying the 3590 tape pool 
and then another backup specifying the virtual tape pool.  They wouldn't 
be exact, but at least taking the default on a restore of the most 
recent backup would pull the virtual tapes rather than the 3590 
cartridges. 

On the other hand, I'll have to talk with the group to see if it really 
makes any difference as to which gets pulled for a restore.  The result 
is going to be the same.  One purpose of the virtual tape was to allow 
us to get an old 3494 out of use and off maintenance.  Mission 
accomplished, even though it's not esthetically pleasing.

Jim


-- 
Jim Bohnsack
Cornell University
(972) 596-6377 home/office
(972) 342-5823 cell
jab...@cornell.edu


Re: FTP of CMS file as attachment - PJBR

2009-06-16 Thread Hughes, Jim
Look at SENDFILE.  There are some options that may work for you.


Jim Hughes
603-271-5586
It is fun to do the impossible.

==-Original Message-
==From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On
==Behalf Of jose raul baron
==Sent: Tuesday, June 16, 2009 6:25 AM
==To: IBMVM@LISTSERV.UARK.EDU
==Subject: FTP of CMS file as attachment - PJBR
==
==Hi list, I would like to ask if there's a way to e-mail a CMS file as an
==attachment and not as the text itself. We have been trying to achieve
==this
==but haven't succeeded yet. All we get is the file as text, not as
==attachment.
==
==Any help will be very welcome.
==Thanks in advance,
==
==
==Raul Barón
==Dpto. Sistemas
==CALCULO S.A.
==E-mail: jba...@calculo-sa.es


Re: VMBackup question

2009-06-16 Thread Hughes, Jim
You can run multiple backup jobs at the same time using virtual tape
drives.

Then you can stack the virtual tape volumes on a 3590 later on.

That's my guess.


Jim Hughes
603-271-5586
It is fun to do the impossible.

==-Original Message-
==From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu]
On
==Behalf Of Michael Coffin
==Sent: Tuesday, June 16, 2009 12:10 PM
==To: IBMVM@LISTSERV.UARK.EDU
==Subject: Re: VMBackup question
==
==Can the Luminex virtual tapes be defined as a higher density than the
==real 3590's?
==
==PS:  What is the purpose of creating backups with virtual tapes AND real
==3590 tapes?  Are the real tapes sent offsite or something?
==
==-Mike


Re: FTP of CMS file as attachment - PJBR

2009-06-16 Thread Gentry, Stephen
Marci, thanks for that tidbit of info. I could have used this in the past.  
Now, I've got a solution looking for a problem.

Steve

 



From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On Behalf 
Of Marci Beach
Sent: Tuesday, June 16, 2009 10:38 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: FTP of CMS file as attachment - PJBR

 


Or you can use SENDFILE from CMS to send an attachment via the MIME option: 

sendfile cms file fm to use...@xx.xx.com (mime ascii-attach 
or 
sendfile cmd file fm to use...@xx.xx.com (mime binary-attach 

depending on if it is an ASCII or BINARY file. 

Marci Beach 
  
  



From: jose raul baron jba...@calculo-sa.es
To: IBMVM@LISTSERV.UARK.EDU
Date: 06/16/2009 10:28 AM
Subject: FTP of CMS file as attachment - PJBR

 






Hi list, I would like to ask if there's a way to e-mail a CMS file as an
attachment and not as the text itself. We have been trying to achieve this
but haven't succeeded yet. All we get is the file as text, not as
attachment. 

Any help will be very welcome. 
Thanks in advance, 


Raul Barón
Dpto. Sistemas
CALCULO S.A. 
E-mail: jba...@calculo-sa.es 






Re: FTP of CMS file as attachment - PJBR

2009-06-16 Thread Lionel Dyck
Interesting - I may have to try it.



Lionel B. Dyck, z/Linux Virtualization Specialist
IBM Global Services - Kaiser Permanente Team
Linux on System z Service Delivery Team
925-926-5332 (8-473-5332) | E-Mail: ld...@us.ibm.com
AIM: lbdyck | Yahoo IM: lbdyck


I never guess. It is a capital mistake to theorize before one has data.
Insensibly one begins to twist facts to suit theories, instead of theories
to suit facts.
- Sir Arthur Conan Doyle



  
From:    David Boyes dbo...@sinenomine.net
To:      IBMVM@LISTSERV.UARK.EDU
Date:    06/16/2009 09:10 AM
Subject: Re: FTP of CMS file as attachment - PJBR
Sent by: The IBM z/VM Operating System IBMVM@LISTSERV.UARK.EDU
  

  





The early version I have (really REALLY old) seems to run OK in linemode. I
don’t know if the modern version would. Certainly all the ISPF goodies in
the modern one don’t work.




I have to admit that I've not tried it there but it is written in REXX
although the file I/O is the z/OS EXECIO.






Re: FTP of CMS file as attachment - PJBR

2009-06-16 Thread Edward M Martin
Hello Raul,

Use the SENDFILE with ( MIME ASCII-ATTACH

Sendfile fn ft fm u...@name.com ( mime ascii-attach subject 'this is a file 
sent as an attachment'

z/VM 5.3

Ed Martin
Aultman Health Foundation
330-363-5050
ext 35050

-Original Message-
From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On Behalf 
Of jose raul baron
Sent: Tuesday, June 16, 2009 6:25 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: FTP of CMS file as attachment - PJBR

Hi list, I would like to ask if there's a way to e-mail a CMS file as an
attachment and not as the text itself. We have been trying to achieve this
but haven't succeeded yet. All we get is the file as text, not as
attachment. 

Any help will be very welcome. 
Thanks in advance, 


Raul Barón
Dpto. Sistemas
CALCULO S.A. 
E-mail: jba...@calculo-sa.es 



Nevermind: REXX SOCKET statement question

2009-06-16 Thread Gentry, Stephen
Never mind, found my answer.

 



From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On
Behalf Of Gentry, Stephen
Sent: Tuesday, June 16, 2009 10:14 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: REXX SOCKET statement question

 

Regarding the REXX SOCKET statement with the RECV parameter: there is a
maxlength parameter.  If this parameter is not coded, what is the
maximum number of characters the RECV command can handle/receive?

Thanks,

Steve



DB2 Problem

2009-06-16 Thread Tom Duerbusch
I don't believe that this is a DB2 Server code problem; it's just how I'm 
coding it that is the problem <g>.

I have a table that a process adds records to.

On an hourly basis, I kick off a job that copies all the records in that table, 
inserts them into another table, and then deletes all records in the first 
table, all within the same LUW.

However, if the process that adds records to the table is adding records 
during this merge/purge process, some records are deleted without being merged. 
 I didn't think that was supposed to happen.

After stripping out all the other code and coding the remaining code in a DB2 
Batch Utility step, I see that I do have a problem.

I don't know if I'm confusing DB2, as the table I insert from is a View.  The 
table I delete from is the real table.

ARI0801I DBS Utility started: 06/16/09 10:29:35.
 AUTOCOMMIT = OFF ERRORMODE = OFF   
 ISOLATION LEVEL = REPEATABLE READ  
-- CONNECT SYSA IDENTIFIED BY ;  
ARI8004I User SYSA connected to server STLDB01. 
ARI0500I SQL processing was successful. 
ARI0505I SQLCODE = 0 SQLSTATE = 0 ROWCOUNT = 0  
-- 
-- COMMENT 'PAYROLL SYSTEM'
-- 
-- LOCK DBSPACE PERSHIST IN EXCLUSIVE MODE;        <=== lock the 
dbspace of ershist_xx
ARI0500I SQL processing was successful. 
ARI0505I SQLCODE = 0 SQLSTATE = 0 ROWCOUNT = 0  
-- INSERT INTO STL01.ERS_HISTORY_A 
--SELECT * FROM ASN.ERSHIST_X; 
ARI0500I SQL processing was successful. 
ARI0505I SQLCODE = 0 SQLSTATE = 0 ROWCOUNT = 3       <=== I've inserted 
3 records
-- 
-- DELETE FROM   ASN.CDERS_HISTORY 
-- ;   
ARI0501I An SQL warning has occurred.
 Database manager processing is completed.   
 Warning may indicate a problem. 
ARI0505I SQLCODE = 0 SQLSTATE = 01504 ROWCOUNT = 30705   <=== I've deleted 
30705 records
ARI0502I Following SQL warning conditions encountered:   
 NULLWHERE   
 
-- COMMIT WORK; 
ARI0500I SQL processing was successful.  
ARI0505I SQLCODE = 0 SQLSTATE = 0 ROWCOUNT = 0   
-- SET ERRORMODE OFF;   
ARI0899I ...Command ignored. 
--  
ARI0802I End of command file input.  
ARI8997I ...Begin COMMIT processing. 
ARI0811I ...COMMIT of any database changes successful.   
ARI0809I ...No errors occurred during command processing.
ARI0808I DBS processing completed: 06/16/09 10:30:31.


I don't really have a good option for stopping the process that adds records.  
99.99% of the time, no records are added during the merge/purge process.   
However, if a batch job adds/changes/deletes a lot of records in a single LUW, 
as in 100,000 or more, it is possible that the merge/purge runs while records 
are still being added, in which case I might lose some records.

I thought locking the DBSPACE that I'm doing the merge/purge from would do the 
trick.  I thought that the process that was adding records would be held on a 
LOCK and wait (perhaps until a -911, in which case it would delay and restart), 
but the lock didn't seem to do the trick.

Repeatable Read plus locking the DBSPACE didn't do what I needed.  Is there 
another option, short of taking the database down to Single User Mode or 
terminating the process that is adding records, to avoid losing records in the 
merge/purge process?  

Thanks

Tom Duerbusch
THD Consulting
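
One way to close that window, if the history table has a unique key column, is 
to delete only the rows that were actually copied instead of issuing an 
unqualified DELETE.  A minimal DBS Utility sketch of that idea (not from the 
thread; HIST_KEY is a hypothetical column standing in for whatever unique key 
the table really has):

-- Sketch only: HIST_KEY is a hypothetical unique key column.
-- Copy everything visible now, then delete only rows whose key
-- is already present in the archive table.
INSERT INTO STL01.ERS_HISTORY_A
   SELECT * FROM ASN.ERSHIST_X;
DELETE FROM ASN.CDERS_HISTORY
   WHERE HIST_KEY IN (SELECT HIST_KEY FROM STL01.ERS_HISTORY_A);
COMMIT WORK;

Any row added after the copy is not yet in ERS_HISTORY_A, so it survives 
until the next hourly run.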


Re: REXX SOCKET statement question

2009-06-16 Thread Miguel Delapaz
Steve,

According to the doc, the default is 10,000 bytes.

Regards,
Miguel Delapaz
z/VM Development

Re: DB2 Problem

2009-06-16 Thread Graves Nora E
Well, I'm hoping you were updating the table names for security
purposes, and that's the error below.  In this example, the table you're
using for the SELECT (ASN.ERSHIST_X) is not the same table that you
specify in the DELETE (ASN.CDERS_HISTORY).


Nora 

-Original Message-
From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On
Behalf Of Tom Duerbusch
Sent: Tuesday, June 16, 2009 1:38 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: DB2 Problem

I don't believe that this is a DB2 Server code problem, just how I'm
coding it, is a problem G.

I have a table, that a process adds records to it.

On an hourly basis, I kick off a job that copies all the records in that
table, inserts them into another table, and then deletes all records in
the first table, all within the same LUW.

However, if the process that adds records to the table, is adding
records during this merge/purge process, some records are deleted
without being merged.  I didn't think that was suppose to happen.

After stripping out all the other code, and coding the remaining code in
a DB2 Batch Utility step, I see that I do have a problem.

I don't know if I'm confusing DB2, as the table I insert from, is a
View.  The table I delete from, is the real table.

ARI0801I DBS Utility started: 06/16/09 10:29:35.
 AUTOCOMMIT = OFF ERRORMODE = OFF   
 ISOLATION LEVEL = REPEATABLE READ  
-- CONNECT SYSA IDENTIFIED BY ;  
ARI8004I User SYSA connected to server STLDB01. 
ARI0500I SQL processing was successful. 
ARI0505I SQLCODE = 0 SQLSTATE = 0 ROWCOUNT = 0  
-- 
-- COMMENT 'PAYROLL SYSTEM'
-- 
-- LOCK DBSPACE PERSHIST IN EXCLUSIVE MODE;=== lock
the dbspace of ershist_xx
ARI0500I SQL processing was successful. 
ARI0505I SQLCODE = 0 SQLSTATE = 0 ROWCOUNT = 0  
-- INSERT INTO STL01.ERS_HISTORY_A 
--SELECT * FROM ASN.ERSHIST_X; 
ARI0500I SQL processing was successful. 
ARI0505I SQLCODE = 0 SQLSTATE = 0 ROWCOUNT = 3  === I've
inserted 3 records
-- 
-- DELETE FROM   ASN.CDERS_HISTORY 
-- ;   
ARI0501I An SQL warning has occurred.
 Database manager processing is completed.   
 Warning may indicate a problem. 
ARI0505I SQLCODE = 0 SQLSTATE = 01504 ROWCOUNT = 30705   === I'ved
deleted 30705 records
ARI0502I Following SQL warning conditions encountered:   
 NULLWHERE   
 
-- COMMIT WORK; 
ARI0500I SQL processing was successful.  
ARI0505I SQLCODE = 0 SQLSTATE = 0 ROWCOUNT = 0   
-- SET ERRORMODE OFF;   
ARI0899I ...Command ignored. 
--  
ARI0802I End of command file input.  
ARI8997I ...Begin COMMIT processing. 
ARI0811I ...COMMIT of any database changes successful.   
ARI0809I ...No errors occurred during command processing.
ARI0808I DBS processing completed: 06/16/09 10:30:31.


I don't really have a good option for stopping the process that adds
records.  99.99% of the time, no records are added during the
merge/purge process.   However, if a batch job add/chg/deleted a lot of
records in a signal LUW, as in 100,000 or more, it is possible that the
merge/purge runs while records were still being added, which then I
might loose some records.

I thought locking the DBSPACE that I'm doing the merge/purge from, would
do the trick.  I thought that the process that was adding records, would
be held on a LOCK, and wait (perhaps till -911, in which case it will
delay and restart), but the lock didn't seem to do the trick.

Between Repeatable Read and Locking the DBSPACE didn't do what I needed.
Is there another option, without taking down the database to Single User
Mode, or terminating the process that is adding records, not to loose
records in the merge/purge process?  

Thanks

Tom Duerbusch
THD Consulting


Re: DB2 Problem

2009-06-16 Thread Tom Duerbusch
ERSHIST_X is a view of table CDERS_HISTORY.

Tom Duerbusch
THD Consulting

 Graves Nora E nora.e.gra...@irs.gov 6/16/2009 1:08 PM 
Well, I'm hoping you were updating the table names for security
purposes, and that's the error below.  In this example, the table you're
using for the SELECT (ASN.ERSHIST_X) is not the same table that you
specify in the DELETE (ASN.CDERS_HISTORY).


Nora 



Re: REXX SOCKET statement question

2009-06-16 Thread Gentry, Stephen
Yes, I saw that (about an hour after I posted to the list).  We are
consistently getting our data cut off at 2920 bytes and can't figure out
why. Packet size in TCPIP is 8992.

I've left the maxlength field blank and I've plugged in 1 and still
get the same results.  We're checking now to see if there is a character
string of some kind that the SOCKET RECV is interpreting as end of data.
The SOCKET program does receive the total amount being transmitted to
it, but it appears to arrive in 2920-byte blocks.  Might be time for level 2
support. 8-)

Steve

 



From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On
Behalf Of Miguel Delapaz
Sent: Tuesday, June 16, 2009 1:39 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: REXX SOCKET statement question

 

Steve,

According to the doc, the default is 10,000 bytes.

Regards,
Miguel Delapaz
z/VM Development 
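
For what it's worth, a single RECV only returns whatever TCP has delivered so 
far, so an application-level message generally has to be reassembled in a loop 
rather than with one call.  A rough REXX sketch of that loop (not from the 
thread): sockid and msglen are hypothetical variables for the connected socket 
number and the expected message length, and the parse assumes RECV returns 
just a return code followed by the data; if your level also returns a byte 
count, add that field to the parse (check the TCP/IP Programmer's Reference):

buffer = ''                            /* reassemble the whole message  */
do while length(buffer) < msglen
   ans = Socket('RECV', sockid, msglen - length(buffer))
   parse var ans rc data               /* assumed layout: rc, then data */
   if rc \= 0 then leave               /* socket error: stop reading    */
   if data = '' then leave             /* peer closed the connection    */
   buffer = buffer || data             /* keep the 2920 (or whatever)   */
end                                    /* bytes and go back for more    */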



Re: DB2 Problem

2009-06-16 Thread Graves Nora E
Does the view match the table?   I've created views that have WHERE
clauses in them to restrict the data that is retrieved, things like
WHERE FISCAL_YEAR = 2009.  If that's the case, the 2 statements are
not looking at the same set of rows.


Nora 
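
As a concrete illustration of the kind of restricted view Nora describes (all 
names here are hypothetical, not from the thread):

-- A view like this returns only a subset of the base table's rows, so a
-- SELECT through the view and a DELETE against the base table would not
-- touch the same set of rows.
CREATE VIEW ASN.HIST_2009 AS
   SELECT * FROM ASN.SOME_HISTORY
   WHERE FISCAL_YEAR = 2009;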

-Original Message-
From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On
Behalf Of Tom Duerbusch
Sent: Tuesday, June 16, 2009 2:14 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: DB2 Problem

ERSHIST_X is a view of table CDERS_HISTORY.

Tom Duerbusch
THD Consulting


DFSMS RMSMASTER fails to initialize if library is offline

2009-06-16 Thread Marcy Cortes
We've got multiple tape libraries now and so multiple RM_AUTO_LIBRARY 
statements.

RMSMASTR has a nasty habit of failing to initialize when one of these libraries 
is unavailable for some reason instead of skipping it and moving on.   There 
are multiple reasons why a library might not be available (maintenance, running 
in a different location, etc.) and we'd like it just to put out an error 
message and move on instead of stopping.

Is that behavior configurable?

Marcy 





Re: DB2 Problem

2009-06-16 Thread Bill Pettit
Do you have the REXX SQL option available to you?

Bill

-Original Message-
From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On Behalf 
Of Tom Duerbusch
Sent: Tuesday, June 16, 2009 10:38 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: DB2 Problem


I don't believe that this is a DB2 Server code problem, just how I'm coding it, 
is a problem G.

I have a table, that a process adds records to it.

On an hourly basis, I kick off a job that copies all the records in that table, 
inserts them into another table, and then deletes all records in the first 
table, all within the same LUW.

However, if the process that adds records to the table, is adding records 
during this merge/purge process, some records are deleted without being merged. 
 I didn't think that was suppose to happen.

After stripping out all the other code, and coding the remaining code in a DB2 
Batch Utility step, I see that I do have a problem.

I don't know if I'm confusing DB2, as the table I insert from, is a View.  The 
table I delete from, is the real table.

ARI0801I DBS Utility started: 06/16/09 10:29:35.
 AUTOCOMMIT = OFF ERRORMODE = OFF
 ISOLATION LEVEL = REPEATABLE READ
-- CONNECT SYSA IDENTIFIED BY ;
ARI8004I User SYSA connected to server STLDB01.
ARI0500I SQL processing was successful.
ARI0505I SQLCODE = 0 SQLSTATE = 0 ROWCOUNT = 0
--
-- COMMENT 'PAYROLL SYSTEM'
--
-- LOCK DBSPACE PERSHIST IN EXCLUSIVE MODE;=== lock the 
dbspace of ershist_xx
ARI0500I SQL processing was successful.
ARI0505I SQLCODE = 0 SQLSTATE = 0 ROWCOUNT = 0
-- INSERT INTO STL01.ERS_HISTORY_A
--SELECT * FROM ASN.ERSHIST_X;
ARI0500I SQL processing was successful.
ARI0505I SQLCODE = 0 SQLSTATE = 0 ROWCOUNT = 3  === I've inserted 
3 records
--
-- DELETE FROM   ASN.CDERS_HISTORY
-- ;
ARI0501I An SQL warning has occurred.
 Database manager processing is completed.
 Warning may indicate a problem.
ARI0505I SQLCODE = 0 SQLSTATE = 01504 ROWCOUNT = 30705   === I'ved deleted 
30705 records
ARI0502I Following SQL warning conditions encountered:
 NULLWHERE

-- COMMIT WORK;
ARI0500I SQL processing was successful.
ARI0505I SQLCODE = 0 SQLSTATE = 0 ROWCOUNT = 0
-- SET ERRORMODE OFF;
ARI0899I ...Command ignored.
--
ARI0802I End of command file input.
ARI8997I ...Begin COMMIT processing.
ARI0811I ...COMMIT of any database changes successful.
ARI0809I ...No errors occurred during command processing.
ARI0808I DBS processing completed: 06/16/09 10:30:31.


I don't really have a good option for stopping the process that adds records.  
99.99% of the time, no records are added during the merge/purge process.   
However, if a batch job add/chg/deleted a lot of records in a signal LUW, as in 
100,000 or more, it is possible that the merge/purge runs while records were 
still being added, which then I might loose some records.

I thought locking the DBSPACE that I'm doing the merge/purge from, would do the 
trick.  I thought that the process that was adding records, would be held on a 
LOCK, and wait (perhaps till -911, in which case it will delay and restart), 
but the lock didn't seem to do the trick.

Between Repeatable Read and Locking the DBSPACE didn't do what I needed.  Is 
there another option, without taking down the database to Single User Mode, or 
terminating the process that is adding records, not to loose records in the 
merge/purge process?

Thanks

Tom Duerbusch
THD Consulting


Re: DFSMS RMSMASTER fails to initialize if library is offline

2009-06-16 Thread Romanowski, John (OFT)
When we asked that, IBM Support said it's not configurable.

 -Original Message-
 From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On
 Behalf Of Marcy Cortes
 Sent: Tuesday, June 16, 2009 1:05 PM
 To: IBMVM@LISTSERV.UARK.EDU
 Subject: DFSMS RMSMASTER fails to initialize if library is offline

 We've got multiple tape libraries now and so multiple RM_AUTO_LIBRARY
 statements.

 RMSMASTR has a nasty habit of failing to initialize when one of these
 libraries is unavailable for some reason instead of skipping it and
 moving on.   There are multiple reasons why a library might not be
 available (maintenance, running in a different location, etc.) and we'd
 like it just to put out an error message and move on instead of
 stopping.

 Is that behavior configurable?

 Marcy







Re: DB2 Problem

2009-06-16 Thread Tom Duerbusch
The view is a join of two tables.  The view always has the same number of 
records as the base table.  I'm joining descriptions into the table, instead 
of having just the description codes.  Referential integrity makes sure there 
is a match.

Also, if there are no records being added, the merge record count always equals 
the purge record count.

Tom Duerbusch
THD Consulting

 Graves Nora E nora.e.gra...@irs.gov 6/16/2009 1:54 PM 
Does the view match the table?   I've created views that have WHERE
clauses in them to restrict the data that is retrieved, things like
WHERE FISCAL_YEAR = 2009.  If that's the case, the 2 statements are
not looking at the same set of rows.


Nora 

-Original Message-
From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On
Behalf Of Tom Duerbusch
Sent: Tuesday, June 16, 2009 2:14 PM
To: IBMVM@LISTSERV.UARK.EDU 
Subject: Re: DB2 Problem

ERSHIST_X is a view of table CDERS_HISTORY.

Tom Duerbusch
THD Consulting

 Graves Nora E nora.e.gra...@irs.gov 6/16/2009 1:08 PM 
Well, I'm hoping you were updating the table names for security
purposes, and that's the error below.  In this example, the table you're
using for the SELECT (ASN.ERSHIST_X) is not the same table that you
specify in the DELETE (ASN.CDERS_HISTORY).


Nora 

-Original Message-
From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On
Behalf Of Tom Duerbusch
Sent: Tuesday, June 16, 2009 1:38 PM
To: IBMVM@LISTSERV.UARK.EDU 
Subject: DB2 Problem

I don't believe that this is a DB2 Server code problem, just how I'm
coding it, is a problem G.

I have a table, that a process adds records to it.

On an hourly basis, I kick off a job that copies all the records in that
table, inserts them into another table, and then deletes all records in
the first table, all within the same LUW.

However, if the process that adds records to the table, is adding
records during this merge/purge process, some records are deleted
without being merged.  I didn't think that was suppose to happen.

After stripping out all the other code, and coding the remaining code in
a DB2 Batch Utility step, I see that I do have a problem.

I don't know if I'm confusing DB2, as the table I insert from, is a
View.  The table I delete from, is the real table.

ARI0801I DBS Utility started: 06/16/09 10:29:35.
 AUTOCOMMIT = OFF ERRORMODE = OFF   
 ISOLATION LEVEL = REPEATABLE READ  
-- CONNECT SYSA IDENTIFIED BY ;  
ARI8004I User SYSA connected to server STLDB01. 
ARI0500I SQL processing was successful. 
ARI0505I SQLCODE = 0 SQLSTATE = 0 ROWCOUNT = 0  
-- 
-- COMMENT 'PAYROLL SYSTEM'
-- 
-- LOCK DBSPACE PERSHIST IN EXCLUSIVE MODE;=== lock
the dbspace of ershist_xx
ARI0500I SQL processing was successful. 
ARI0505I SQLCODE = 0 SQLSTATE = 0 ROWCOUNT = 0  
-- INSERT INTO STL01.ERS_HISTORY_A 
--SELECT * FROM ASN.ERSHIST_X; 
ARI0500I SQL processing was successful. 
ARI0505I SQLCODE = 0 SQLSTATE = 0 ROWCOUNT = 3  === I've
inserted 3 records
-- 
-- DELETE FROM   ASN.CDERS_HISTORY 
-- ;   
ARI0501I An SQL warning has occurred.
 Database manager processing is completed.   
 Warning may indicate a problem. 
ARI0505I SQLCODE = 0 SQLSTATE = 01504 ROWCOUNT = 30705   === I'ved
deleted 30705 records
ARI0502I Following SQL warning conditions encountered:   
 NULLWHERE   
 
-- COMMIT WORK; 
ARI0500I SQL processing was successful.  
ARI0505I SQLCODE = 0 SQLSTATE = 0 ROWCOUNT = 0   
-- SET ERRORMODE OFF;   
ARI0899I ...Command ignored. 
--  
ARI0802I End of command file input.  
ARI8997I ...Begin COMMIT processing. 
ARI0811I ...COMMIT of any database changes successful.   
ARI0809I ...No errors occurred during command processing.
ARI0808I DBS processing completed: 06/16/09 10:30:31.


I don't really have a good option for stopping the process that adds
records.  99.99% of the time, no records are added during the
merge/purge process.   However, if a batch job add/chg/deleted a lot of
records in a single LUW, as in 100,000 or more, it is possible that the
merge/purge runs while records were still being added, in which case I
might lose some records.

Re: DB2 Problem

2009-06-16 Thread Tom Duerbusch
Perhaps.

This is on VSE which doesn't have REXX SQL from IBM.  However, we do have 
REXXSQL from SPR which does the same thing.

BTW, the original program was written in REXXSQL.  I tore everything out and 
put it in the DB2 Batch Utility to eliminate the possibility of a commit 
being accidentally put in by the product developers.

Tom Duerbusch
THD Consulting

 Bill Pettit bi...@ormutual.com 6/16/2009 1:59 PM 
Do you have the REXX SQL option available to you?

Bill

-Original Message-
From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On Behalf 
Of Tom Duerbusch
Sent: Tuesday, June 16, 2009 10:38 AM
To: IBMVM@LISTSERV.UARK.EDU 
Subject: DB2 Problem


I don't believe that this is a DB2 Server code problem; it's just a problem 
with how I'm coding it <g>.

I have a table that a process adds records to.

On an hourly basis, I kick off a job that copies all the records in that table, 
inserts them into another table, and then deletes all records in the first 
table, all within the same LUW.

However, if the process that adds records to the table is adding records 
during this merge/purge process, some records are deleted without being merged. 
 I didn't think that was supposed to happen.

After stripping out all the other code, and coding the remaining code in a DB2 
Batch Utility step, I see that I do have a problem.

I don't know if I'm confusing DB2, as the table I insert from is a view.  The 
table I delete from is the real table.

ARI0801I DBS Utility started: 06/16/09 10:29:35.
 AUTOCOMMIT = OFF ERRORMODE = OFF
 ISOLATION LEVEL = REPEATABLE READ
-- CONNECT SYSA IDENTIFIED BY ;
ARI8004I User SYSA connected to server STLDB01.
ARI0500I SQL processing was successful.
ARI0505I SQLCODE = 0 SQLSTATE = 0 ROWCOUNT = 0
--
-- COMMENT 'PAYROLL SYSTEM'
--
-- LOCK DBSPACE PERSHIST IN EXCLUSIVE MODE;=== lock the 
dbspace of ershist_xx
ARI0500I SQL processing was successful.
ARI0505I SQLCODE = 0 SQLSTATE = 0 ROWCOUNT = 0
-- INSERT INTO STL01.ERS_HISTORY_A
--SELECT * FROM ASN.ERSHIST_X;
ARI0500I SQL processing was successful.
ARI0505I SQLCODE = 0 SQLSTATE = 0 ROWCOUNT = 3  === I've inserted 
3 records
--
-- DELETE FROM   ASN.CDERS_HISTORY
-- ;
ARI0501I An SQL warning has occurred.
 Database manager processing is completed.
 Warning may indicate a problem.
ARI0505I SQLCODE = 0 SQLSTATE = 01504 ROWCOUNT = 30705   === I've deleted 
30705 records
ARI0502I Following SQL warning conditions encountered:
 NULLWHERE

-- COMMIT WORK;
ARI0500I SQL processing was successful.
ARI0505I SQLCODE = 0 SQLSTATE = 0 ROWCOUNT = 0
-- SET ERRORMODE OFF;
ARI0899I ...Command ignored.
--
ARI0802I End of command file input.
ARI8997I ...Begin COMMIT processing.
ARI0811I ...COMMIT of any database changes successful.
ARI0809I ...No errors occurred during command processing.
ARI0808I DBS processing completed: 06/16/09 10:30:31.


I don't really have a good option for stopping the process that adds records.  
99.99% of the time, no records are added during the merge/purge process.   
However, if a batch job add/chg/deleted a lot of records in a single LUW, as in 
100,000 or more, it is possible that the merge/purge runs while records were 
still being added, in which case I might lose some records.

I thought locking the DBSPACE that I'm doing the merge/purge from would do the 
trick.  I expected the process that was adding records to be held on a LOCK 
and wait (perhaps until a -911, in which case it would delay and restart), 
but the lock didn't seem to do the trick.

Neither Repeatable Read nor locking the DBSPACE did what I needed.  Is there 
another option, short of taking the database down to Single User Mode or 
terminating the process that is adding records, that keeps the merge/purge 
process from losing records?

Thanks

Tom Duerbusch
THD Consulting


Re: DFSMS RMSMASTER fails to initialize if library is offline

2009-06-16 Thread Marcy Cortes
IBM, has anyone submitted a requirement? 

Marcy 



-Original Message-
From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On Behalf 
Of Romanowski, John (OFT)
Sent: Tuesday, June 16, 2009 12:13 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: [IBMVM] DFSMS RMSMASTER fails to initialize if library is offline

When we asked that, IBM Support said it's not configurable.

 -Original Message-
 From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On
 Behalf Of Marcy Cortes
 Sent: Tuesday, June 16, 2009 1:05 PM
 To: IBMVM@LISTSERV.UARK.EDU
 Subject: DFSMS RMSMASTER fails to initialize if library is offline

 We've got multiple tape libraries now and so multiple RM_AUTO_LIBRARY
 statements.

 RMSMASTR has a nasty habit of failing to initialize when one of these
 libraries is unavailable for some reason instead of skipping it and
 moving on.   There are multiple reasons why a library might not be
 available (maintenance, running in a different location, etc.) and we'd
 like it just to put out an error message and move on instead of
 stopping.

 Is that behavior configurable?

 Marcy







VMSES/E service application backout

2009-06-16 Thread daver--
Hi, I did an oops. So to speak. I am running a z/VM 5R4.0 (service level
0801) system and downloaded the latest RSU and COR for it. In the fine
tradition of measuring once and cutting twice, I put the COR on first
which has some prerequisites that I believe are in the RSU. Oops. I ran
the following:
-SERVICE ALL COR_DOCS (against the COR documentation envelope)
-VMFUPDAT SYSMEMO
-SERVICE ALL COR_FILE (against the COR envelope)

It moved a bunch of stuff around and ultimately gave me:
ST:VMFAPP2102I 19 of 19 PTFs processed   
ST:VMFAPP2105I VMFAPPLY processing completed unsuccessfully. 
ST:The Apply list HCPVM contains 26 PTFs.
ST:7 PTFs were already applied.  
ST:9 PTFs passed validation. 
ST:0 PTFs were included and passed validation.   
ST:0 PTFs were excluded or require excluded PTFs.
ST:10 PTFs failed
WN:VMFAPP2103W The Software Inventory has not been updated   
...among other things.

So I've hung myself on a SERVICE RESTART such that it wants me to fix
that which I have hosed up by getting ahead of myself before moving on.
Which I suppose I could do, but the RSU is really big and I guess I
would have to be pulling pieces or go re-order the ones I need etc. It
would be really nice at this point to just dump what I did and start
over with the RSU. Or force through what was good on the COR, then apply
the RSU, then reapply the COR. I _do_ have a full system backup, but it
would sure be nice to not have to roll back any user's disks and then
have to monkey them back around. I suppose I could restore it and clip
it and then just grab MAINT's minidisks instead.

Suggestions? Any gun/foot warnings I should be thinking about?

Thanks.


Re: DB2 Problem

2009-06-16 Thread Bill Pettit
Can you add another SELECT statement with the same WHERE clause as your 
select-insert, and write a temporary control file with each record containing 
enough information to identify the selected row?  Then, after the select-insert 
runs, read the file back and use it to do the deletes through RXSQL.
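
In SQL terms, the idea is to capture a key for each row being copied and then 
delete only those keys.  A sketch, with a made-up key column HIST_KEY standing 
in for whatever columns really identify a row:

   SELECT HIST_KEY                      -- one row per record about to be
     FROM ASN.ERSHIST_X;                -- copied; write each key to a CMS
                                        -- control file

   DELETE FROM ASN.CDERS_HISTORY        -- run once per key read back from the
    WHERE HIST_KEY = ?;                 -- control file; the ? is filled in by
                                        -- the calling program

Rows added by the feeding process after the keys were captured are not in the 
control file, so the delete never touches them.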

Bill



Re: DFSMS RMSMASTER fails to initialize if library is offline

2009-06-16 Thread David Boyes
 IBM, has anyone submitted a requirement?
 Marcy

If not, I just wrote it up and will submit it via WAVV. 


Re: Control Data Backups

2009-06-16 Thread C. Lawrence Perkins
They're both quite large - the data disks are 3 3390-27 but they both enjoy 
a working vacation in terms of activity.   Some huge files get written to 
them every morning, then users read them all day long.  The user community 
is also large, but not volatile.  The CDB's take place about 3am when no 
creatures are stirring, not even mice.


On Mon, 15 Jun 2009 16:47:37 -0700, Schuh, Richard rsc...@visa.com wrote:

How large and active (creating, modifying or deleting files, actively 
modifying permissions, adding/deleting users, etc.) are your two filepools? 

Regards, 
Richard Schuh 

 

 -Original Message-
 From: The IBM z/VM Operating System 
 [mailto:ib...@listserv.uark.edu] On Behalf Of C. Lawrence Perkins
 Sent: Friday, June 12, 2009 6:39 PM
 To: IBMVM@LISTSERV.UARK.EDU
 Subject: Re: Control Data Backups
 
 On Fri, 12 Jun 2009 14:52:40 -0700, Schuh, Richard rsc...@visa.com wrote:
 
 Suppose I have two filepools called SFS1 and SFS2. Would having each do 
 control data backups to the other cause problems. For purposes of this 
 discussion, assume that they would not do the backups concurrently and 
 that neither catalogs nor filespaces share space on the same disks.
 
 Regards,
 Richard Schuh
 
 
 I've been doing just that for 4 years.  My SFS1 and SFS2 are not only 
 not on the same disks, they're on two different z/VM systems in an ISFC 
 collection.


Re: Control Data Backups

2009-06-16 Thread Schuh, Richard
Mine are 23 3390-03s for one, only 15 in the other. Unfortunately, the user 
community is small but active. There are several CDBs in any given day.

I think what I will do is create a third filepool whose only purpose is to host 
the backups from the other two. That would simplify the coordination of the 
backups. Put each on a 2-hour CDB cycle with one on the even numbered hour and 
the other on the odd. That frequency would probably be often enough to keep 
LUWs from ever being suspended.

Regards, 
Richard Schuh 

 


Re: VMBackup question

2009-06-16 Thread Jim Bohnsack




I don't think I can define them at a higher density. It's been too
long (thankfully) since I've been in that depth of channel programming
and working to the real device. I had to code (3490C on the TAPE WVOL1
command to label tapes that would work. I may be able to change them
in VMBackup now that they are labeled, however. I'll try.

The purpose of creating backups with virtual tapes and real 3590
cartridges is just for separation of the backup media. We had (have)
two 3494 robots a mile or more apart on the campus. Wouldn't help much
in a nuclear blast but then I wouldn't care. Actually, I may because I
now live and work 1,500 miles from either backup site. No danger of a
flood on the campus. It would have to float Noah's ark.

Jim

Michael Coffin wrote:

  Can the Luminex virtual tapes be defined as a higher density than the
"real" 3590's?  

PS:  What is the purpose of creating backups with virtual tapes AND real
3590 tapes?  Are the "real" tapes sent offsite or something?

-Mike

-Original Message-
From: The IBM z/VM Operating System [mailto:IBMVM@LISTSERV.UARK.EDU] On
Behalf Of Jim Bohnsack
Sent: Tuesday, June 16, 2009 11:48 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: VMBackup question


Steve--That wasn't the answer I wanted to hear but is the type I 
expected.  We don't use VM:Tape, just VM:Backup.  A work-around I've 
thought of is to do the backups, which happen in the middle of the 
night, in 2 separate runs.  First a backup specifying the 3590 tape pool 
and then another backup specifying the virtual tape pool.  They wouldn't 
be exact, but at least taking the default on a restore of the most 
recent backup would pull the virtual tapes rather than the 3590 
cartridges. 

On the other hand, I'll have to talk with the group to see if it really 
makes any difference as to which gets pulled for a restore.  The result 
is going to be the same.  One purpose of the virtual tape was to allow 
us to get an old 3494 out of use and off maintenance.  Mission 
accomplished, even though not aesthetically pleasing.

Jim

Imler, Steven J wrote:
  
  
Your VM:Backup System Programmer is correct ... the "higher" density 
media will always be preferenced in the volser list sent to VM:Tape. 

The simplest thing I can think to do is use a VM:Tape COMMAND EXIT 
that swaps the 2 volsers in the list. 

Alternatively, I suppose you could "outcode" (CHECKOUT) the 3590 
tapes; I think that would prevent them from being selected (but you'd 
need to remember to check them back in before they can be scratched 
for reuse). 

JR (Steven) Imler
CA 
Senior Sustaining Engineer
Tel: +1-703-708-3479
steven.im...@ca.com





-- 
Jim Bohnsack
Cornell University
(972) 596-6377 home/office
(972) 342-5823 cell
jab...@cornell.edu




Re: DB2 Problem

2009-06-16 Thread Alan Ackerman

You could add a FLAG, normally NULL, to mark the records you plan to insert 
and then delete. 
That would ignore the added records.

UPDATE ASN.CDERS_HISTORY SET FLAG=1;
INSERT INTO STL01.ERS_HISTORY_A
   SELECT * FROM ASN.ERSHIST_X
    WHERE FLAG=1;
DELETE FROM ASN.CDERS_HISTORY
 WHERE FLAG=1;
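
To set that up, the base table needs the extra nullable column, and the view 
would have to be recreated so it exposes it; the column name and data type 
below are only illustrative:

   ALTER TABLE ASN.CDERS_HISTORY
         ADD FLAG SMALLINT;            -- NULL unless something sets it, so
                                       -- rows the feeding process adds during
                                       -- the merge/purge keep a NULL flag

Because the feeding process never sets FLAG, anything it inserts after the 
UPDATE keeps a NULL flag and is ignored by both the INSERT and the DELETE.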

Alan Ackerman
Alan (dot) Ackerman (at) Bank of America (dot) com 


Re: VMSES/E service application backout

2009-06-16 Thread Alan Altmark
On Tuesday, 06/16/2009 at 04:42 EDT, daver-- da...@reinken.us wrote:

 So I've hung myself on a SERVICE RESTART such that it wants me to fix
 that which I have hosed up by getting ahead of myself before moving on.
 Which I suppose I could do, but the RSU is really big and I guess I
 would have to be pulling pieces or go re-order the ones I need etc. It
 would be really nice at this point to just dump what I did and start
 over with the RSU. Or force through what was good on the COR, then apply
 the RSU, then reapply the COR. I _do_ have a full system backup, but it
 would sure be nice to not have to roll back any user's disks and then
 have to monkey them back around. I suppose I could restore it and clip
 it and then just grab MAINT's minidisks instead.
 
 Suggestions? Any gun/foot warnings I should be thinking about?

Call the Support Center and let them help you.

Alan Altmark
z/VM Development
IBM Endicott