Re: Hardware-assisted compression: not CPU-efficient?

2010-12-02 Thread Andrew N Wilt
Ron,
Thank you for the good response. It is true that the DFSMSdss
COMPRESS keyword and the HWCOMPRESS keyword do not perform the same type of
compression. As Ron said, the COMPRESS keyword uses a Huffman
encoding technique and works remarkably well on runs of repeated bytes (just
the kind of data you see on system volumes). The HWCOMPRESS keyword uses a
dictionary-based method and reportedly works well on typical customer data.
The CPU utilization of HWCOMPRESS (dictionary-based) is indeed higher
because of the extra work it does. So choose the type of compression that
suits your CPU utilization needs and your data.
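As a sketch of the difference (the volume serial, data set name, and DD names here are invented), the two flavors are selected with alternative keywords on the DUMP command:

```jcl
//DUMPIT   EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//DASD     DD UNIT=3390,VOL=SER=SYSRES,DISP=SHR
//TAPE     DD DSN=HLQ.BACKUP.FULL,DISP=(,CATLG,DELETE),
//            UNIT=3490,LABEL=(1,SL)
//SYSIN    DD *
 DUMP FULL INDDNAME(DASD) OUTDDNAME(TAPE) HWCOMPRESS
/*
```

Replacing HWCOMPRESS with COMPRESS requests the cheaper Huffman-style encoding instead; the keyword choice is what trades CPU cost against compression behavior as described above.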
Tape hardware compaction was mentioned elsewhere in this thread.
If you have it available, that's what I would go for. The main intent of
the HWCOMPRESS keyword was to provide dictionary-based compression for
the cases where you were using software encryption and thus couldn't
utilize the compaction of the tape device.

Thanks,

 Andrew Wilt
 IBM DFSMSdss Architecture/Development
 Tucson, Arizona


IBM Mainframe Discussion List IBM-MAIN@bama.ua.edu wrote on 12/02/2010
04:20:15 PM:

 From: Ron Hawkins ron.hawkins1...@sbcglobal.net
 To: IBM-MAIN@bama.ua.edu
 Date: 12/02/2010 04:21 PM
 Subject: Re: Hardware-assisted compression: not CPU-efficient?

 Tony,

 You are surprised, and then you explain your surprise by agreeing with
me.
 I'm confused.

 I'm not sure if you realized that the Huffman encoding technique used by
 the DFSMSdss COMPRESS keyword is not a dictionary-based method, and has a
 symmetrical CPU cost for compression and decompression.

 Finally, as I mentioned in another email, there may be intrinsic Business
 Continuance value in taking advantage of the asymmetric CPU cost to speed
 up local recovery of an application, or Disaster Recovery that is based on
 DFSMSdss restores. An improvement in recovery time may be worth the
 increased cost of the backup.

 Ron

  -Original Message-
  From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
 Behalf Of
  Tony Harminc
  Sent: Thursday, December 02, 2010 9:09 AM
  To: IBM-MAIN@bama.ua.edu
  Subject: Re: [IBM-MAIN] Hardware-assisted compression: not
CPU-efficient?
 
  On 2 December 2010 05:53, Ron Hawkins ron.hawkins1...@sbcglobal.net
 wrote:
   Johnny,
  
   The saving in hardware assisted compression is in decompression - when
  you read it. Look at what should be a much lower CPU cost to decompress
  the files during restore and decide if the speed of restoring the data
  concurrently is worth the increase in CPU required to back it up in the
  first place.
 
  I am a little surprised at this. Certainly for most of the current
  dynamic dictionary based algorithms (and many more as well),
  decompression will always, except in pathological cases, be a good
  deal faster than compression. This is intuitively obvious, since the
  compression code must not only go through the mechanics of
  transforming input data into the output codestream, but must do it
  with some eye to actually compressing as best it can with the
  knowledge available to it, rather than making things worse. The
  decompression simply takes what it is given, and algorithmically
  transforms it back with no choice.
 
  Whether a hardware assisted - which in this case means one using the
  tree manipulation instructions - decompression is disproportionately
  faster than a similar compression, I don't know, but I'd be surprised
  if it's much different.
 
  But regardless, surely it is a strange claim that an installation
  would use hardware assisted compression in order to make their
  restores faster, particularly at the expense of their dumps. What
  would be the business case for such a thing? How many installations do
  restores on any kind of regular basis? How many have a need to have
  them run even faster than they do naturally when compared to the
  dumps?
 
  Tony H.
 
  --
  For IBM-MAIN subscribe / signoff / archive access instructions,
  send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
  Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: Question About DFDSS :FCNOCOPY/FCWITHDRAW

2008-05-10 Thread Andrew N Wilt
Esmie,
  I would add the UTILMSG=YES parameter to the backup EXEC statements.
This will tell you if DFSMSdss is invoking ICKDSF to init the volumes
instead of simply issuing the FCWITHDRAW after the DUMP is completed. If
ICKDSF is being invoked, it is because the VTOC tracks of the DASD volume
are in a target FlashCopy relationship, and issuing an FCWITHDRAW against
them could well cause the volume to be online but invalid (because the VTOC
location now contains the residual data from before the FlashCopy was done
to it).
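A sketch of the change, using the step name from the job you posted (the PARM is the only part that matters here):

```jcl
//* UTILMSG=YES echoes messages from utilities DFSMSdss invokes
//* (such as ICKDSF) into the DFSMSdss SYSPRINT output.
//BACKUP1 EXEC PGM=ADRDSSU,PARM='UTILMSG=YES',TIME=60
```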

Thanks,

 Andrew Wilt
 IBM DFSMSdss Architecture/Development


IBM Mainframe Discussion List IBM-MAIN@BAMA.UA.EDU wrote on 05/08/2008
08:43:05 AM:

 From: esmie moo
 To: IBM-MAIN
 Date: 05/08/2008 08:46 AM
 Sent by: IBM Mainframe Discussion List IBM-MAIN@BAMA.UA.EDU
 Subject: Question About DFDSS :FCNOCOPY/FCWITHDRAW

 Good Morning Gentle Readers,

   I am investigating a problem with a backup (backups of SNAP
 volumes) that is executed daily.  For some reason, in the last 2 days
 the backup has been taking 12-15 hours to execute.  I covered all
 angles: changes to the job, z/OS version, tape mounts/tape drives,
 scheduling, environment changes, etc.  Nothing has been changed.  I
/snip

   //BACKUP1 EXEC PGM=ADRDSSU,TIME=60
 //SYSPRINT DD SYSOUT=*
 //SYSUDUMP DD SYSOUT=*
 //DEV1LST  DD SYSOUT=*
 //INDEV    DD UNIT=3390,VOL=SER=SNAP01,DISP=SHR
 //OUTDEV   DD DSN=SYS2.BACKUP1.OUT.SYS001(+1),
 //            DISP=(,CATLG,DELETE),
 //            DCB=GDGDSCB,
 //            UNIT=3490,VOL=(,RETAIN),
 //            LABEL=(01,SL)
 //SYSIN    DD *
 DUMP FULL INDD(INDEV) OUTDD(OUTDEV) CAN OPT(4) FCWITHDRAW
 //*
 //BACKUP2 EXEC PGM=ADRDSSU,TIME=60
 //SYSPRINT DD SYSOUT=*
 //SYSUDUMP DD SYSOUT=*
 //DEV1LST  DD SYSOUT=*
/snip



Re: Bruce Black passed away

2007-11-06 Thread Andrew N Wilt
I am very sad to hear this news. I had always looked forward to meeting
Mr. Black someday. I had great respect for him through his responses
on this list. His ability and willingness to answer questions about FDR
and DFSMSdss equally spoke volumes to me about his character. I was
always impressed that he answered questions fairly and didn't try to
'sell' FDR over DFSMSdss. His kind of character seems to be increasingly
rare. I will miss his contributions to the list.

Thanks,

 Andrew Wilt
 IBM DFSMSdss Architecture/Development
 Tucson, Arizona


IBM Mainframe Discussion List IBM-MAIN@BAMA.UA.EDU wrote on 11/05/2007
09:47:08 AM:

 I am very sorry to say that Bruce Black passed away this past weekend.

 Those of you that knew Bruce know that he had been in poor health
 for some time, but things were looking better. So this has come as a
 surprise to many of us.

 The folks at Innovation will keep Bruce's email address active for
 some time, so if you want to send condolences to the family you can
 send them to Bruce's email address at Innovation and they will
 forward them along to his family.

 Russell Witt



Re: flash copy and relationships, using HSM for dump to tape.

2007-07-13 Thread Andrew N Wilt
When you use ICKDSF to init a volume, it will issue an FCWITHDRAW for
that volume just as Joel said. Since the volume is newly inited, then there
is no reason to have a relationship as all those tracks are free space.

If you are using DFSMSdss to dump, we have an FCWITHDRAW keyword
that requests us to issue the FCWITHDRAW against the volume after we
have successfully dumped. For full volume dumps, this keyword was
enhanced in APAR OA18929 so that if the VTOC tracks are the target of
a FlashCopy relationship, we would have ICKDSF do an init of the volume
instead of the FCWITHDRAW. Previously, simply withdrawing the relationship
could leave the volume unusable until it was initialized, because track 0
would point to the location of the FlashCopied VTOC, which was no longer
there once the relationship was withdrawn.

This APAR would only save you steps if you were doing Dump Conditioning
Full volume copies with DFSMSdss followed by full volume dumps of the
copy.
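For reference, a hedged sketch of that dump-conditioning sequence (the PROD, COND, and TAPE DD names and their volumes are invented, and the DD statements themselves are omitted for brevity):

```jcl
//CONDSTEP EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 COPY FULL INDDNAME(PROD) OUTDDNAME(COND) -
      DUMPCONDITIONING FASTREPLICATION(REQUIRED) FCNOCOPY
/*
//DUMPSTEP EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 DUMP FULL INDDNAME(COND) OUTDDNAME(TAPE) FCWITHDRAW
/*
```

The FCWITHDRAW on the dump step is where the OA18929 behavior applies: if the conditioned volume's VTOC tracks are still a FlashCopy target, ICKDSF is invoked instead of a bare withdraw.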

Thanks,

 Andrew Wilt
 IBM DFSMSdss Architecture/Development
 Tucson, Arizona

IBM Mainframe Discussion List IBM-MAIN@BAMA.UA.EDU wrote on 07/12/2007
06:04:18 PM:

 Interesting.  I have always thought in terms of stopping the COPY
 relationship and any unnecessary track copy activity as soon as possible
 when the target was no longer needed, so it had never occurred to me to
 re-initialize the FlashCopy target first and then try to do the FCWITHDR.

 We always do it in the sequence FCESTABL-NOCOPY, dump, vary offline,
 FCWITHDR, then re-initialize.  That works without any problems 99.9% of
 the time, but occasionally a FlashCopy even with NOCOPY will have
 already completed normally, and the FCWITHDR will get a failure that
 must be tolerated.

 The FCESTABL NOCOPY can apparently terminate on its own (at least on the
 IBM 2107) if no further action is required because all in-use tracks on
 the source volume at the time of the FCESTABL have been over-written on
 either the source volume (forcing a physical copy of the track to the
 target) or on the target volume (making the original source volume track
 irrelevant to the target).  I would also swear that I have seen
 instances on our IBM 2107 with FlashCopy V2 where it seems as if the
 controller decided it was cheaper to just copy the few remaining
 tracks than maintain the copy relationship.

 Perhaps z/OS is smart enough to recognize when you re-initialize the
 target volume and destroy the VTOC that you have logically overwritten
 all tracks on the target volume, and that is the point at which the COPY
 relationship gets terminated.  If that is indeed the case, then perhaps
 we can simplify our procedures by relying on the INIT to accomplish the
 withdrawal (if it hasn't already terminated on its own), and at the same
 time eliminate the need to handle the occasional random FCWITHDR failure.

 Is there anyone at IBM that can verify the hypothesis that a
 re-initialize of the FlashCopy target volume is sufficient to always
 terminate a FCESTABL NOCOPY relationship under FlashCopy V2?




Re: Copy DSS Dump File to DASD

2007-07-05 Thread Andrew N Wilt
Tom,
  I'm sorry, but there is no way that I know of to do that.
Thanks,

 Andrew Wilt
 IBM DFSMSdss Architecture/Development


IBM Mainframe Discussion List IBM-MAIN@BAMA.UA.EDU wrote on 07/05/2007
11:28:50 AM:

 I think I'm out of luck, but wanted to verify first. We're z/OS 1.7.

 We've had a number of small DSS DUMP files written to tape that I want to
 move to DISK. Unfortunately, it seems that the dump files on tape get
 created with a block size of about 64K, and from the DSS manual:

  The COPYDUMP command cannot change the block size of the DFSMSdss dump
 data set. If you are copying a dump data set to a DASD device, the source
 block size must be small enough to fit on the target device.

 Is there no way to copy these DSS dump files to disk other than restoring
 the files (renaming them of course) and backing them up again?

 Tom Chicklon



Re: ADRUIXIT

2007-06-01 Thread Andrew N Wilt
Hello Errol,
  I hope this can allay your fears. The default tape blocksize of 64K
was introduced in 1988 in APAR PL18193, so your DR site should be able to
read DFSMSdss dump tapes with that block size. What OA13742 did was cause
the block size we were actually using to be put into the tape label, where
only 0 was put there before. If your DR site has a level of z/OS that is
out of support, you may not have the Open/Close/End-of-Volume (O/C/EOV)
APAR (OA09868) that allows an open to be performed against a tape with a
blocksize greater than 32K by a DFSMSdss that does not have OA13742
applied.

  If you can't apply those APARs, then I think your idea of ADRUIXIT
should work. One small thing with your assembler below is that I believe
you need to return the value 4 to indicate that you changed the parameters.
In your example, I think you are returning the value 2.

Thanks,

 Andrew Wilt
 IBM DFSMSdss Architecture/Development
 Tucson, Arizona


IBM Mainframe Discussion List IBM-MAIN@BAMA.UA.EDU wrote on 05/31/2007
08:01:20 AM:

 Hi John! We thought that the default tape BLKSIZE had been changed to 65K
 from 32K in z/OS 1.4. Our DR site, which has an older z/OS, cannot read
 the 65K BLKSIZE tapes. We want to write the tapes at 32K.

 My latest research into the problem shows that APAR OA13742 changed the
 way the output tape is written and puts the actual Blocksize in the
 tape label.
 It used to be 0.

 My issue is that the exit (ADRUIXIT) would work for JCL driven ADRDSSU
 invocations but not from HSM invocations.

 Hope that explains why



Re: Help needed : ADR421E

2007-05-02 Thread Andrew N Wilt
Hi John,
I think that is just a requirement of DFSMSdss: when it is to process
a Catalog, you have to specify the fully qualified name. I don't know
off the top of my head what the rationale at the time was, but I imagine it
is to protect you from accidentally processing a User Catalog. Most
shops wouldn't want all of their users locked out of using data sets just
because DFSMSdss had an exclusive enqueue on the Catalog that
those data sets were cataloged in.

From your SYSIN statements, it seems that the SYS2.AM.MYCAT
data set got caught in the INC(**) net, so you ran into the requirement
that a Catalog be specified with a fully qualified name.
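One way around it, as a sketch reusing the DD names from your job (CATTAPE is an invented second output DD for the catalog's own dump), is to exclude the catalog from the wildcard pass and dump it by its fully qualified name separately:

```jcl
 DUMP DATASET(FILTERDD(INDD)) -
      LOGINDDNAME(DISK24) OUTDD(TAPE24)
 DUMP DATASET(INCLUDE(SYS2.AM.MYCAT)) -
      LOGINDDNAME(DISK24) OUTDD(CATTAPE)
```

with SYS2.AM.MYCAT added to the EXCLUDE list in your INDD filter so the wildcard pass skips it.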

Thanks,

 Andrew Wilt
 IBM DFSMSdss Architecture/Development
 Tucson, Arizona


IBM Mainframe Discussion List IBM-MAIN@BAMA.UA.EDU wrote on 05/02/2007
04:18:26 AM:

 Hallo To All,

   I am executing a DFDSS logical backup of MULTIPLE volumes and I
 received the following message:
   ADR421E (025)-DTDSC(01), DATA SET SYS2.AM.MYCAT NOT PROCESSED,
 FULLY QUALIFIED NAME NOT PASSED, 2

 I verified the dataset which is a CATALOG and it is correct.  I did
 a listcat and all looks okay.  The manual says to resubmit the job
 by passing the qualified name.

   I ran an individual backup of the dataset and it worked fine.  Is
 there something that I should look out for?

   Below are the control cards used for the volume backup:

   DUMP DATASET(FILTERDD(INDD)) -
LOGINDDNAME(DISK24) -
OUTDD(TAPE24) ALLDATA(*) SHARE -
ALLEXCP SPHERE SELECTMULTI(FIRST) -
TOL(IOER ENQF) OPTIMIZE(4) WAIT(0,0)
 INCLUDE(**) -
 EXCLUDE(UCAT.**,SYS1.VVDS.**, -
 IOAX.CTDV.**, -
 SYS2.AUXILARY.UCAT, -
 JES150.CED.FILLER.**)

   Thanks



  Send instant messages to your online friends
http://au.messenger.yahoo.com



Re: Help needed : ADR421E

2007-05-02 Thread Andrew N Wilt
Yes. Either way should work. (**,SYS2.AM.MYCAT) or (SYS2.AM.MYCAT,**)

Thanks,

 Andrew Wilt
 IBM DFSMSdss Architecture/Development
 Tucson, Arizona


IBM Mainframe Discussion List IBM-MAIN@BAMA.UA.EDU wrote on 05/02/2007
07:16:54 AM:

 So Andrew, are you saying that

 Include(**,SYS2.AM.MYCAT)would have worked? Or does the catalog need
 to be addressed in a separate step?




Re: DFSMSdss DOC APAR OA20117 (was RE: HSM Missing Member from Recalled Dataset -- Update

2007-03-28 Thread Andrew N Wilt
I'd like to second John's request/recommendation on adding your
voices to either the marketing request or the SHARE requirement.
The more people who add their voices/votes to a particular change,
the more it helps raise its priority so that the change gets done.

Thanks,

 Andrew Wilt
 IBM DFSMSdss Architecture/Development
 Tucson, Arizona

IBM Mainframe Discussion List IBM-MAIN@BAMA.UA.EDU wrote on 03/26/2007
01:26:32 PM:


 The subject DOC APAR closed today, 26 March.

 I VIGOROUSLY RECOMMEND anybody who uses DFSMSdss full-volume or
 track-range (aka physical) DUMP / RESTORE for D/R, data movement, etc.
 to READ AND UNDERSTAND this DOC APAR.  You **ARE** at risk of losing
 data or data integrity via DFSMSdss Full-volume or track-range (aka
 physical) DUMP / RESTORE if you have ANY OTHER PRODUCT that depends
 upon the setting of the change bit in the Format-1 DSCB.

 The DFSMSdss Level 2 rep also submitted Marketing Request #
 MR0302074136, requesting that a switch (similar to the RESET keyword
 on DUMP) be provided to allow the user to specify how DFSMSdss should
 handle the change bit at RESTORE time.  Those interested should add
 their voices via appropriate channels.

 SHARE members who have not done so, please vote on Requirement #
 SSMVSS07002, which requests a design change to DFSMSdss' default
 behavior on full-volume RESTORE.

 -jc-



Re: DFSMSdss exit to list dumped or copied datasets

2007-03-06 Thread Andrew N Wilt
Hello,

If you are using the Application Programming Interface for DFSMSdss, then
you have access to the various exits that ADRDSSU calls during its
processing. Exit option 23 would be the one to look for. It is the Data Set
Processed Notification exit, and comes with return codes indicating
successful processing (0), processed with warning (4), error during
processing (8), etc.

If you don't have your own program that invokes ADRDSSU, then I would
capture the SYSPRINT and look at the end of the ADR messages. There should
be an ADR454I message listing the data sets that were successfully
processed. If any data sets were not successfully processed, you should
see an ADR455W message listing those data sets.

Thanks,

 Andrew Wilt
 IBM DFSMSdss Architecture/Development
 Tucson, Arizona


IBM Mainframe Discussion List IBM-MAIN@BAMA.UA.EDU wrote on 03/06/2007
12:52:30 AM:

 Hello,

 I'm wondering if there is an existing exit (or an ADRDSSU statement
 option; but I don't find it...) that allows us to print the successfully
 and unsuccessfully dumped or copied datasets to datasets...

 Thanks a lot in advance for your lights,

 Regards,

 Romain



Re: QUESTION ON DFDSS - DELCATE PARM

2006-11-17 Thread Andrew N Wilt
Hello,
  The DELETECATALOGENTRY keyword may be responsible for what you are
seeing. However, it should only be issuing a DELETE NOSCRATCH for data set
names that are catalog entries, not for aliases. It could be that your
alias is exactly the same as the name of a data set being restored. If you
still have output from that user's job, look for ADR464I messages
indicating that an entry was deleted.

Thanks,

 Andrew Wilt
 IBM DFSMSdss Architecture/Development
 Tucson, Arizona


IBM Mainframe Discussion List IBM-MAIN@BAMA.UA.EDU wrote on 11/15/2006
06:02:06 PM:

 Hi,

   I am trying to diagnose a problem with missing alias.  I think I
 may have tracked down the culprit.  One of our users is doing a full
 volume LOGICAL restore using the following parms:
   PARALLEL
   RESTORE DATASET(INCL(**)) -
        TGTGDS(ACTIVE) -
        INDD(TAPEV07) OUTDD(GIDM07) SPHERE -
        DELCATE

   Could the DELCATE parm be causing the problems?



Re: Ralph Griswold

2006-10-13 Thread Andrew N Wilt
 I was very saddened to hear that Dr. Griswold had passed away. I took a
 course from him at the University of Arizona on his Icon programming
 language. He was a great teacher. I was always impressed by his
 down-to-earth style, especially in light of his lifetime of
 accomplishments.

 Thanks,

 Andrew Wilt
 IBM DFSMSdss Architecture/Development
 Tucson, Arizona



 Patrick O'Keefe [EMAIL PROTECTED]
 Sent by: IBM Mainframe Discussion List IBM-MAIN@BAMA.UA.EDU
 10/13/2006 01:26 PM

 Please respond to
 IBM Mainframe Discussion List IBM-MAIN@BAMA.UA.EDU


 A while ago somebody (Shmuel, maybe?) mentioned in passing that Ralph
 Griswold had died.  Nobody seemed to pick up on that.  It was worth
 noting.  He was a big name in non-numeric computing.  Creator of
 SNOBOL among other things.

 Pat O'Keefe





Re: JCL - Copy Tape

2006-08-30 Thread Andrew N Wilt
Radoslaw and all,
  You are correct that ADRDSSU writes 64KB blocks. In APAR OA13742, we
made a change to how we open the output data set when writing to tape.
This change allows our 64KB blocksize to show up in the new blocksize
field and be recorded by tape management systems.

Thanks,

 Andrew Wilt
 IBM DFSMSdss Architecture/Development
 Tucson, Arizona


snip

 I was wrong. I confused block count field of HDR1/EOF1/EOV1 with block
 size field of HDR2/EOF2/EOV2. Indeed, zero's means you should read bytes
 71-80 for block size. BTW: ADRDSSU does not use LBI, but writes 64kB
 blocks. It always sets block size as 0. That's why ADRDSSU dumps cannot
 be copied using legal methods, like IEBGENER.


 --
 Radoslaw Skorupka
 Lodz, Poland



Re: HDS backup process (resend)

2006-03-09 Thread Andrew N Wilt
Dennis,
  I believe that if you have the FlashCopy v2 Extension for your HDS,
DFSMSdss should act just as it does with an ESS and attempt to use the
FlashCopy interface to do an instant copy of the volume. (I can't say for
sure, since I haven't had the chance to try it on an HDS. We don't have
any lying around to play on.)
  If you don't have the Extension, then we would revert to using normal
I/O on the volume, and the copy process would take longer to complete.
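In control-statement terms, that would look something like this (the SRC and TGT DD names are invented; FASTREPLICATION(REQUIRED) makes the copy fail rather than silently fall back to the slower I/O path, which is one way to find out whether the fast-replication interface is actually being used):

```jcl
 COPY FULL INDDNAME(SRC) OUTDDNAME(TGT) -
      DUMPCONDITIONING FASTREPLICATION(REQUIRED)
```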

Thanks,

 Andrew Wilt
 IBM DFSMSdss Architecture/Development
 Tucson, Arizona


IBM Mainframe Discussion List IBM-MAIN@BAMA.UA.EDU wrote on 03/09/2006
10:53:52 AM:

 Retry sending this post

 Since we are not able to purchase something like FDR Instant, I'm looking
 for other ways to avoid having to clip volume labels.  Has anyone had any
 experience copying volumes with DFDSS COPY with the dumpconditioning and
 fastreplication keywords on an HDS 9900?  Is ShadowImage invoked?

 Also, what functionality does the ShadowImage FlashCopy v2 Extension
 provide?  What is the difference if I run DFDSS COPY with
 dumpconditioning with and without this extension?
 Thank you

 - enD sin



Re: DFSMSdss COPY FULL with Flashcopy V2

2006-01-31 Thread Andrew N Wilt
Mark,
  I looked through our publications for a specific mention of what you
would like to have documented and couldn't find one. I will see about
getting it included in the next round of publications.
  I would like to take the opportunity to share our rationale and accept
your feedback on it. In this particular scenario, you have requested
DFSMSdss to take the image or contents of your source volume and write it
to your target volume. The contents of the target volume for a full volume
copy have traditionally been considered unwanted. (If you wanted them, you
wouldn't have used that volume as the target for the copy.) Likewise, if
the target volume is the target of a FlashCopy relationship, then you must
not be interested in keeping the relationship, since that is equivalent to
keeping the original contents of the target volume. Therefore, it is
assumed to be acceptable for the relationship to be removed so that the
copy can proceed. If you think about it, had you specified FR(NONE) on the
second COPY FULL, we would have used normal I/O and the target volume
would again not be in a FlashCopy relationship with the original volume A.
  We didn't want to force users to manually remove the FlashCopy
relationship when they wanted to use that target volume as the target of
many different source volumes (dumping them in between, or otherwise not
needing the data on the target volume).
Thanks,

 Andrew Wilt
 IBM DFSMSdss Architecture/Development


IBM Mainframe Discussion List IBM-MAIN@BAMA.UA.EDU wrote on 01/31/2006
01:23:21 PM:

 I ran a DSS Copy step using: FULL, FASTREP(REQ), FCNOCOPY, DUMPCOND
 to set up a FC volume pair between volumeA(source) and volumeB(target).
 But instead of following this with a DSS Dump step from volumeB, I
 mistakenly ran another DSS Copy step to set up a FC volume pair
 between volumeC(source) and volumeB(target).
 And it worked.  It dropped the pairing between volumeA and volumeB
 and created the pairing between volumeC and volumeB.
 I thought that there would be some protection to prevent selecting a
 target volume that was already in a Flashcopy relationship.

 Can anyone point me to documentation that states this is WAD?


 Mark Kliner
 Systems Programmer
 Vision Service Plan
 Rancho Cordova, CA



Re: Basic question on deleting datasets

2006-01-30 Thread Andrew N Wilt
Rex,
  The REPLACEUNCONDITIONAL keyword isn't documented in the z/OS V1R4
manuals because it was a feature we shipped via SPE (APAR) long after z/OS
V1R4 was in the field. If you check a current version of the publications,
say z/OS V1R7, it is documented there.
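A sketch of how the keyword fits Rex's refresh scenario (the HLQs and volume serials are invented, and the exact keyword interplay should be verified against a current manual):

```jcl
 COPY DATASET(INCLUDE(PRODHLQ.**)) -
      INDYNAM(PRD001) OUTDYNAM(TRN001) -
      RENAMEUNCONDITIONAL((PRODHLQ.**,TRAINHLQ.**)) -
      REPLACEUNCONDITIONAL
```

With REPLACEUNCONDITIONAL, the copy overwrites the existing renamed targets, so the separate IDCAMS delete pass becomes unnecessary.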

Thanks,

 Andrew Wilt
 IBM DFSMSdss Architecture/Development
 Tucson, Arizona


IBM Mainframe Discussion List IBM-MAIN@BAMA.UA.EDU wrote on 01/30/2006
01:47:53 PM:

 Don gets the big THANK YOU!  REPLACEUNCONDITIONAL is not documented in
 the DSS reference manual but it does just what I need.  Hmm,
 undocumented features.  I thought only M$ did that ;-)

 As for the others, thanks for the replies.  Unfortunately I forgot to
 make part of my request clear.  There are about 20 packs we need to do
 this with and we are trying to get this set up so the operators can run
 it (or schedule it to run automatically) without intervention.


 Rex

 -Original Message-
 From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
 Behalf Of Imbriale, Donald (Exchange)
 Sent: Monday, January 30, 2006 2:11 PM
 To: IBM-MAIN@BAMA.UA.EDU
 Subject: Re: Basic question on deleting datasets


 In your DSS copy job, use REPLACEUNCONDITIONAL.

 Don Imbriale

 -Original Message-
 From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On
 Behalf
 Of Pommier, Rex R.
 Sent: Monday, January 30, 2006 2:50 PM
 To: IBM-MAIN@BAMA.UA.EDU
 Subject: Basic question on deleting datasets
 
 Hi.
 
 I'm afraid I'm in a can't see the forest because of the trees
 situation which probably has a very easy answer that I can't see.
 
 I have a disk pack that I need to get a large subset of the datasets
 off
 it as well as uncataloged.  Is there some way using IDCAMS or DFSMSdss
 to accomplish this using wildcards so I don't have to list each dataset

 separately?  IDCAMS apparently only allows me to use a single-qualifier

 wildcard which won't do it.
 
 Unless somebody has a better idea as how to proceed here.  The
 situation
 is that we have a training system here that is a clone of our
 production.  On a semi-regular basis we are requested to refresh the
 training system.  The process is to delete the training system datasets

 then use DFSMSdss COPY to copy the datasets from production to
 training.
 I have tried just using DFSMSdss COPY with REPlace but apparently the
 REPlace parameter of DSS COPY doesn't work with the RENameUnconditional

 because DSS complains about the output datasets already being there.
 
 I don't want to have to list all the datasets individually in IDCAMS
 DELETE statements because that will be a maintenance nightmare.
 
 Environment is z/OS 1.4.
 
 What am I missing?
 
 Rex


 ***
 Bear Stearns is not responsible for any recommendation, solicitation,
 offer or agreement or any information about any transaction, customer
 account or account activity contained in this communication.
 ***



Re: Copy Extended-format VSAM Sphere

2006-01-04 Thread Andrew N Wilt
John,
  Off the top of my head, I think the reason the offline volumes
are being requested for mount is RACF (security) processing.
Because the source data set catalog is available to the target system,
DFSMSdss sees that the original data set is cataloged. Since it is
cataloged, we need to check your authority to restore that
data set. I think that as part of checking the authority, Catalog may be
called, which may need to read the VVRs from the volumes (which are
offline to that system). I would try specifying the ADMINISTRATOR
keyword so that we bypass the RACF checking for the data set
being restored.
  I think the rest of what you are trying to do should work
just fine. I think it is just the odd situation of the source catalog
being available to the target system, but not the source volumes, that is
the problem.
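A sketch of the restore with that keyword added (the DD names, HLQs, and target catalog name are invented; I believe ADMINISTRATOR also requires the appropriate STGADMIN RACF FACILITY-class authorization on the target system):

```jcl
 RESTORE DATASET(INCLUDE(OLDHLQ.**)) -
      INDDNAME(TAPE) OUTDYNAM(TGT001) -
      RENAMEUNCONDITIONAL((OLDHLQ.**,NEWHLQ.**)) -
      SPHERE RECATALOG(NEWUCAT) -
      ADMINISTRATOR
```

SPHERE carries the base cluster, alternate index, and path together, and RECATALOG points the restored entries at the target image's catalog instead of the source's.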

Thanks,

 Andrew Wilt
 IBM DFSMSdss Architecture/Development
 Tucson, Arizona


IBM Mainframe Discussion List IBM-MAIN@BAMA.UA.EDU wrote on 01/04/2006
02:26:10 PM:

 Hi, All,

 We have need to copy an extended-format VSAM sphere consisting of the
base
 cluster, one alternate index and one path from one image to another,
 changing the HLQ and target catalog in the process.  So far we've
 successfully DUMPed the sphere (logical dump) to tape using DFSMSdss, but
 all attempts so far to RESTORE it (with RENUNC) to the target system have
 failed.

 Both images are at z/OS 1.5; both images are members of the same Parallel
 Sysplex; and the source image's *Catalog* is accessible on the target
image
 (but the source image's volumes are offline to it) (maybe that's the
 show-stopper?).  Most of the failed attempts are requesting that the
 source volumes be mounted on the target image, but we want the RESTORE
to
 go to target image volumes.

 Can this be done using DFSMSdss DUMP / RESTORE? or should we look at
 (perhaps) IDCAMS EXPORT / IMPORT? or maybe just REPRO the base cluster to
 tape, DEFINE an empty cluster on the target image, REPRO from tape,
DEFINE
 the alternate index and path, and BLDINDEX?

 TIA for any ideas,

 -jc-
