Re: How often does SMS update its free space information?

2014-04-09 Thread David Devine
Hi Kees,
if you issue a D SMS,OPTIONS command it will display your current settings; 
DINTERVAL is the one to look for, and it probably has the default setting of 150 
seconds.
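
If you want to capture that display from a batch job rather than at a console, 
a batch sdsf sketch (assuming you are authorised to issue system commands 
through sdsf):

//JS0010   EXEC PGM=SDSF
//ISFOUT   DD SYSOUT=*
//ISFIN    DD *
/D SMS,OPTIONS
/*

Then look for DINTERVAL in the ISFOUT output.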

However, I think your issue is more fundamental.

From DFSMS Implementing System-Managed Storage:

SMS interfaces with the system resource manager (SRM) to select from the 
eligible volumes in the primary volume list. SRM uses device delays as one of 
the criteria for selection and does not prefer a volume if it is already 
allocated in the jobstep. This is useful for batch processing when the data set 
is accessed immediately after creation. It is, however, not useful for database 
data that is reorganized at off-peak hours. 

Which implies that if you are reorging a tablespace with multiple physical 
datasets, extents or partitions, and these are allocated to your reorg job, it 
won't touch the packs they are on even if they have enough freespace.

Info on the primary, secondary and tertiary volume lists is in the same manual. 
I believe that quiesced volumes go into the secondary pool.
 
B.T.W. what z/OS version are you on? I seem to recall that older versions used 
to treat enabled and quiesced volumes the same.

And finally there is the way your reorgs run.

My understanding is that the reorg allocates a copy dataset, populates it, 
applies any updates, flips it over to become the true dataset (also renaming 
the original) and, if the flip is successful, then deletes the original.
 
It sounds like you are expecting to reuse space that has been deleted by the 
first successful partition reorg, and so on for further reorgs; if the 
allocation remains on the pack until step end, that won't happen.

hope this is of use.

kind regards,
Dave
  
**

Hello group,

We see SMS allocating new datasets on volumes where we cannot explain it. In 
each storagegroup we have one or more quiesced volumes, on which SMS is 
supposed to allocate datasets only when all the enabled volumes in the 
storagegroup are filled to their max (migration high percentage). This triggers 
us to have a look at the storagegroup and add space to it.

During DB2 reorgs we see datasets being allocated on quiesced volumes, although 
space should be available on the enabled volumes in the storage group. Tracing 
in detail, we see DBM1 deleting and defining parts of the tablespace at very 
high speed. When a part of the table space has been deleted, that makes space 
available for the define of the new part, but this space is not used by SMS. 
Our assumption is that SMS does not update its administration when DBM1 has 
deleted a part, so when DBM1 defines a new part of the table space, SMS must 
look for free space based on its 'aged' information. Considering the speed of 
DBM1's delete and define actions, it appears as if space for the entire new 
tablespace must be found as if the old tablespace were still fully present. 
When the reorg has finished, we have a large amount of space allocated on the 
quiesced volumes, with ample space for it available on the enabled volumes.

At what moments, at what rate, or by which other algorithm does SMS update 
its free space information for the volumes of a storagegroup? I could not find 
it in the SMS information.

Thanks,
Kees.




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Why does a dataset that should never migrate does occassionally (hsm)

2014-02-24 Thread David Devine
Hi Mike,
I think you have 2 basic problems.
The first is that primary days in your mgmtclas definition is set to blank, 
and as far as hsm is concerned, the default is 2 days if there is no entry.
Which leads on to the second problem: it looks like your job scheduler 
allocates but does not open the dataset until it needs it, so the last 
referenced date field does not get updated until you run your plans or a job 
is submitted from it
(if it's a job library).
Getting migrated is a big indicator that the dsn is free at that point and not 
allocated.

As per other repliers, checking the hsm logs for the dataset will tell you the 
reason for the migration.

Also, as the other repliers have said, the simple way round it is to change the 
mgmtclass to one where primary days is a decent figure or cmd/migrate is none.

If you've already got suitable mgmtclases available, a simple alter against the 
dataset can change it and bring in your new one.
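
For example, a batch sketch of that alter (the dataset and class names here 
are made up; MANAGEMENTCLASS is the IDCAMS ALTER operand):

//ALTERMC  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* point the dataset at a no-migrate management class */
  ALTER PROD.SCHED.JOBLIB -
        MANAGEMENTCLASS(MCNOMIG)
/*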

kind regards
Dave

***
Hi, we have a mainframe dataset that is used daily in our automated
scheduling processes.
Occasionally, our ops support gets paged out because a process is delayed,
and it turns out it was because
said dataset was migrated and had to be recalled.
This dataset's management class has NOLIMIT on expires, and blanks for
'Primary Days'.
Partial Release is 'yes', so could hsm be migrating this dsn in primary
space management, despite us having
primary days set to blanks?
And if so, short of setting partial release to no, how can we keep an sms
managed dataset from ever migrating at all?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Stacking SMS and non-SMS disk on same HSM dump tape

2013-10-09 Thread David Devine
Hello,
at my place we get both sms & nonsms disk volumes on one stack volume and have 
done for yonks; sms volumes first, then nonsms; we run maxdumptasks(4) (the 
default is 2 if not specified) and stack(40).

Where we have two dumpclasses (e.g. sysdaily & weekly) which coincide on a given 
dumping day, (a) it does not stack different dumpclasses on the same tape and 
(b) we have 2 packs in both dumpclasses and it dumps them onto separate tapes 
from the other packs in the same dumpclasses.

That's how it is here. The packs are dumped about 2 hours into a 3+ hour 
process.

Sysdaily only does 12 volumes, so it's not as if the 2 on their own just happen 
to coincide with filling a tape with 40 and they're next.

If you run an hsend list dumpvolume ods('my.output.dataset'), I'm betting the 
volumes you're interested in are in different dumpclasses.
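
If you'd rather run that list from batch than from a tso session, a sketch 
using the tso HSENDCMD command (the output dataset name is a placeholder):

//LISTDV   EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
 HSENDCMD WAIT LIST DUMPVOLUME ODS('MY.OUTPUT.DATASET')
/*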
  
here's an extract from an old one of mine:-

EF3023 UNEXP  POR3590  WEEKLY   2013/08/10 Y
   001  PFDB01 N2013/07/20 21:55:57 
   002  PFDB02 N2013/07/20 21:58:13 
EF5140 EXPIR  POR3590  SYSDAILY 2013/07/22 Y
   001  PFDB01 N2013/07/20 21:55:57 
   002  PFDB02 N2013/07/20 21:58:13   

regards
Dave

***
z/OS 1.12 and above offer the ability for stack(255)

HTH,

Each DUMPCLASS specifies DAY(n) determining the day of the week it runs, 
UNIT(3590-1), and STACK(99) to fill up a tape.  TAPEUTILIZATION specifies 97% 
of a 3590-1.  AUTODUMPSTART specifies a three hour window of which no more than 
15 minutes is being used.
/snip

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: DCOLLECT QUESTION -RESULTS PUZZLING -

2013-07-22 Thread David Devine
Hi Esmee,
It is a little convoluted, but if you follow the code in the job deck, you can 
see it calls ACBQBAR7; check that code and you find that it calls ACBQADR3, and 
in there you find this:-

 DO WHILE(eof = 'no')
   dcolrec = rec.1
   /* check to see if this is a D record */
   IF ((SUBSTR(dcolrec,5,1) = 'D') & (SUBSTR(dcolrec,5,2) ¬= 'DC')) THEN
    DO
      /************************/
      /* initialize variables */
      /************************/

So it's only ever going to select D type records.

Regards,
  Dave

***

Dave,
 
Thanks for the info. However, I cannot find where I am selecting the D type 
records. Could you point it out?
 
Thanks

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Orphaned ICF catalog in the VVDS

2013-07-19 Thread David Devine
Norbert, you are the man!
A very nifty piece of code.

Don't know why i missed your reply first time round.

I carried out the missing part of the solution when i got in this morning, (out 
of time last night) of recataloging the vvds entry in the re-created, missing 
catalog, then ran the del nscr for the vvds against said catalog.
Diagnosis is now clean!
Just deleted the catalog to keep it all tidy.

regards,
 Dave

***
On Thu, 18 Jul 2013 13:44:50 -0400, Rouse, Willie wrote:

> I guess I don't understand: "put a corresponding entry into the dummy bcs"

Recatalog the VVDS into the dummy bcs.

I posted a complete job to the list on 5 July: 
http://www.mail-archive.com/ibm-main@listserv.ua.edu/msg15771.html


1. Create a new catalog with the old non-existent catalog name
2. Recatalog the VVDS
3. Delete (NOSCRATCH) the VVDS
4. Delete the catalog

//*
//S1        EXEC PGM=IDCAMS
//*
//SYSPRINT  DD SYSOUT=*
//VVDS      DD DISP=SHR,DSN=SYS1.VVDS.Vvol001,
//             VOL=SER=vol001,UNIT=SYSDA
//SYSIN     DD *
  DIAG VVDS IFILE(VVDS)
  DEF UCAT(NAME(old.catname) -
       VOL(vol002) -
       CYL(1 1) -
       FSPC(0 0)) -
       DATA(CISZ(4096)) -
       INDEX(CISZ(4096))
  DEF CL(NAME(SYS1.VVDS.Vvol001) -
       VOL(vol001) -
       NONINDEXED -
       RECATALOG) -
       CAT(old.catname)
  DEL SYS1.VVDS.Vvol001 -
       FILE(VVDS) -
       NOSCRATCH -
       CAT(old.catname)
  DEL old.catname UCAT
  DIAG VVDS IFILE(VVDS)
/*
//


  



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: DCOLLECT QUESTION -RESULTS PUZZLING -

2013-07-19 Thread David Devine
Hi Esmee,
You've answered your own question.
The problem is that you want to get a report on M type (migration) dcollect 
records, and the job deck you are running will only select D type dataset 
records.

If you are running the DFSMSrmm tape management system, it actually includes a 
dfsort/icetool report deck to strip out dcollect migration data.
From the tso ispf panel for RMM, select 1 User, then R Report, then just hit 
enter to create a list of reports; the one you want is
ARCGDM01 DCOLLECT MIGRATION DATA etc.

If you don't have RMM you can use the z/OS DFSMS Access Method Services manual, 
Appendix F, to get a breakdown of M type records and their relative positions 
and lengths, and build your own rexx or sort.
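
If you go the sort route, a minimal sketch that just strips the M type records 
out of a dcollect output file for further processing (dataset names are 
placeholders, and the INCLUDE positions assume standard VB dcollect output, 
where the 2 byte record type field lands at positions 9-10 once the RDW is 
counted):

//STRIPM   EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DISP=SHR,DSN=MY.DCOLLECT.OUTPUT
//SORTOUT  DD DSN=MY.DCOLLECT.MRECS,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(5,5),RLSE)
//SYSIN    DD *
  SORT FIELDS=COPY
  INCLUDE COND=(9,2,CH,EQ,C'M ')
/*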

There are probably examples on this forum you can use.

Good luck!

Dave  



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Orphaned ICF catalog in the VVDS

2013-07-18 Thread David Devine
Willie, hello,
Can you post the relevant snippet from your diagnose output so we can see 
exactly what's what?
Thanks
Dave

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Orphaned ICF catalog in the VVDS

2013-07-18 Thread David Devine
Hi Willie,
I believe that Allan has hit the nail on the head for the solution, but I think 
you may also have to put a corresponding entry into the dummy bcs so that the 
delete nscr will match up on both sides.
Just tried it on a pack on our sysprog lpar in the same situation: recreated 
the lost catalog and got an IDC3012I on the delete nscr for the 
sys1.vvds.vx.

An inelegant but quick and easy solution would be to disnew the volume, empty 
it out (hsm migrate or dfdss copy etc.), then delete the vvds and define a nice 
new empty one, then enable the volume back to sms.
Sod's law will probably have it that there's a permanently allocated dataset on 
the volume to put the kibosh on that idea!

If you fancy being a guinea pig, there's VVDSFIX, an unsupported IBM program 
you can download.

You can read about and download this tool from this URL:
http://www.ibm.com/support/docview.wss?uid=isg3S1000618

It was in a share presentation a few years back.
Looks like it could be viable.

Never used it, but i'm sure there are people around who have.

Dave

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Missing HSM backups

2013-07-17 Thread David Devine
Hello Richard,

there is a diagnostic patch mentioned in one of the share "Managing HSM" 
sessions which should show up why datasets are not being backed up.
You are only supposed to use it at IBM's request, but it seems that you are at 
that point anyway.

  • PATCH .BGCB.+24 X'FF'
• Used to determine why SMS-managed data sets are not being selected during
volume backup
• These patches produce a lot of messages
• ARC1245I with Reason Codes GT 90 for migrations
• ARC1334I with Reason Codes GT 90 for backups
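
If you want to set the patch on from batch, a sketch via batch sdsf (assuming 
your hsm started task is called DFHSM; substitute your own name):

//JS0010   EXEC PGM=SDSF
//ISFOUT   DD SYSOUT=*
//ISFIN    DD *
/F DFHSM,PATCH .BGCB.+24 X'FF'
/*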

I've not used it personally, so exercise some caution and give it a whirl on 
your sysprog lpar first after you've checked it out.

Also I've a vague memory of an issue with volume dump jobs running against 
packs (or it might have been defrag/compactor type jobs) resetting the 
"dataset backed up" flag in the vtoc, so hsm won't back the dataset up.

So I would check what activity went on against the pack in question, where your 
dsn didn't get backed up, in the time between dsn creation and hsm backup.

Regards
 Dave
 








***

Richard,

The two datasets are in the same management class in the same storage group but 
on different packs.


David G. Schlecht | Information Technology Professional
State of Nevada | Department of Administration | Enterprise IT Services
T:(775)684-4328 | F: (775) 684‐4324 | E:dschle...@admin.nv.gov


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Richard Marchant
Sent: Tuesday, July 16, 2013 12:13 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Missing HSM backups

David,

Where are the two GDGs located? Are they on different SMS groups or different 
non-SMS volumes? If they are separated are you allowing HSM to perform 
autobackup on both entities? 

Richard

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of David G. Schlecht
Sent: Monday, July 15, 2013 6:54 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Missing HSM backups

We have two lpars plexed, sharing DASD and catalogs and run HSM on each. Lpar-1 
is supposed to handle the backups.
Lpar-1: AUTOBACKUPSTART(1700 1800 1900)
Lpar-2: AUTOBACKUPSTART(  ) 

Backups start at 17:00 and typically end around 17:11, hence, it seems 
sufficient time is provided.


David G. Schlecht | Information Technology Professional State of Nevada | 
Department of Administration | Enterprise IT Services
T:(775)684-4328 | F: (775) 684‐4324 | E:dschle...@admin.nv.gov


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Ulrich Krueger
Sent: Monday, July 15, 2013 9:24 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Missing HSM backups

David,
Is it possible that your time window for DFHSM backups is not long enough for 
the incremental backups to complete? 


Regards,
Ulrich Krueger


-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of David G. Schlecht
Sent: Monday, July 15, 2013 9:17 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: Missing HSM backups

A new wrinkle and possibly related. I have a job that creates two GDGs in the 
same Management Class. Both should be backed up. However, one gets backed up 
while the other doesn't. The backup for the first shows up in HSM's job logs at 
the point of backup but the second is entirely absent from HSM logs.

Even if HSM is having trouble backing up a dataset, shouldn't it be showing up 
in the job logs?




David G. Schlecht | Information Technology Professional State of Nevada | 
Department of Administration | Enterprise IT Services
T:(775)684-4328 | F: (775) 684‐4324 | E:dschle...@admin.nv.gov

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to 
lists...@listserv.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
 
 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Old usercatalogs with IMBED and REPLICATE

2013-07-17 Thread David Devine
Hiya,
Just out of interest, how many extents are these catalogs in? If they haven't 
been reorged in 10 years they are probably due for some attention: not just a 
straight tidy-up reorg (which will also remove the imbed & replicate), but also 
the opportunity to make changes for performance enhancements.
Run some catalog address space displays and see how they are performing.
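For example, this modify command (from a console, or via batch sdsf) gets the 
catalog address space to report its performance counters:

  F CATALOG,REPORT,PERFORMANCE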
regards,
 Dave



 





--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Missing HSM backups

2013-07-10 Thread David Devine
Hello,
just a thought, but are the datasets being backed up via an hbackds command? 
They might have an explicit retaindays coded (new in z/OS 1.11, I think), which 
would negate the mgmtclas settings.

Also, have another look through the hsm backup logs when expirebv is running 
for the datasets in question.
The datasetname would have the dfhsm.back.t** prefix, so all you could 
recognise would be the 1st & 2nd qualifiers rather than the full datasetname, 
but there should still be an entry, which would give you the deletion criteria.

If your system runs daily dcollects with the hsm migd & bacd parms you could 
strip out (sas or rexx) pertinent data (see AMS for Catalogs, Appendix F, 
record type B) about your missing datasets to help narrow things down.

Good hunting! 

 Dave 

***

 EXPIREBV is run daily.

 FWIW, this same problem plagues certain GDG datasets as well. However, this 
 is not a system-wide problem.





David G. Schlecht | Information Technology Professional
State of Nevada | Department of Administration | Enterprise IT Services
T:(775)684-4328 | F: (775) 684‐4324 | E:dschle...@admin.nv.gov


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: RMM: Best, safest way to mass delete tape volumes

2013-05-08 Thread David Devine
Jeff hello,
firstly, just to clarify the situation with the tapes, as the first part of 
your mail states the tapes were destroyed, but the 2nd part implies they are 
still extant in the silos.

If they still exist & you still have paths to the drives (not removed from 
HCD), you may want to data wipe/erase the volumes prior to removal and 
subsequent destruction.
RMM has this facility; refer to running edginers ERASE in the Implementation & 
Customization Guide, chapter 18, & also the changevolume command in Managing 
and Using Removable Media.

This would be a due diligence exercise so you can prove to 
management/audit/compliance that you have taken reasonable precautions about 
data security before the tapes leave site.
Also known as C.Y.A 

*** Do it all under your site's change management procedures ***

That aside, yes, you are quite correct that you will get issues with the 
journal filling and the master file filling up (fragmentation) if you just 
steam on in, so some basic checks are in order.

(a) check your current rmm housekeeping jobs and make sure they're actually 
working properly and that you are also backing up the master file and journal 
at least daily (the edghskp backup will also null the journal when the backup 
is successful; edgbkup will only back up). How many versions are kept?

(b) do a listcat on your master file (usually rmm.master) and look at the high 
used rba, and also volume check under tso 3.4 (or similar). Is it in extents? 
How many? Is there plenty of freespace on the volume to do multiple extends? 
You may want to reorg or resize the masterfile before you start.
It's a vsam ksds, so beware the basic 4gb limitation; redefine as an extended 
dataset if needed.
Reorg & resizing are all covered in the manual.
 
(c) same with rmm.journal: how big is it? Do you want to resize?
 
(d) check the rmm startup parms; the warning for journal full should be there, 
80% is usual. You may have automation to watch for this and submit a job to 
back up the master file and null the journal.

As to the doing: once you are happy with a, b, c and d above, I'd suggest 
batches of 500 to start; you can always ramp up when you are comfortable with 
it.

Under your change management protocols, schedule a large window on your plex 
when it's at a quiet point tapewise, as, if it all goes horribly wrong, ALL 
your tape processing is stuffed till you get it sorted out.

An approach would be:-

(1) ad hoc job to backup the control dataset and journal and null the journal.
(2) submit a batch job to run the tso command
rmm deletevolume x force
(a sketch of such a job follows below). I don't believe you can do ranges, so 
it's a command for each individual volume.
(3) when 2 is successful (try displaying the volumes via the tso rmm 
interface; they should be gone)
rmm deleterack x — you can put a count field here, but it may be 
preferable/safer to do it for each volume.
(4) keep an eye on the master file & journal after each batch, and run an 
additional ad hoc backup (and journal null) as needed, or every 10,000 or so, 
and definitely at the end.
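
For (2) & (3), the sort of batch job I mean looks something like this (a 
sketch; the volsers are placeholders, and you'd generate one command per 
volume; a rexx or an ISPF edit macro makes short work of building the deck):

//DELVOLS  EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
 RMM DELETEVOLUME 000001 FORCE
 RMM DELETEVOLUME 000002 FORCE
 RMM DELETERACK 000001
 RMM DELETERACK 000002
/*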

N.B. as you will be doing multiple additional backups, make sure you don't 
overwrite them with subsequent runs, so have a 2nd step to copy to gdgs with a 
large limit!

When you are all done, you may well want to reorg or resize the master file to 
get it nice & tidy again.

regards

Dave

P.S. all opinions are my own and offered on a no-liability basis.
 
 



***


 A few months ago all of our 3480/3490 tapes were destroyed. Something 

like over 30,000. Not to mention the REEL tapes that are still defined 
but non-existent.

I would like to remove these from the RMM database.

Something is telling me in the back of my mind: don't try to do it all 
at once. I have nightmares of journals filling up, CDS corruption, etc, 
and other Bad Things, all of which I'd rather avoid.

My environment:
z/OS 1.13 on z10, current maintenance.
3 LPAR resource sharing SYSPLEX.

I have been told that the only valid tape volumes are 05*. Anything else 
is obsolete and can be deleted. Oh, and these are all non-SMS and are 
actually STK/Oracle 94somethingorother residing in a STK/Oracle Silo. If 
that makes a difference.

So, what is a good way to approach this?
Any suggestions welcome, and thanks in advance,
Jeff



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Speed up DASD copy

2013-03-22 Thread David Devine
Hello,
with regard to the undocumented use of the optimize parameter on dfdss copy, 
those nice people at Ibm in Dfdss RD have opened a marketing req to get it 
fully supported and also an apar which will ignore it if coded with copy.   

We have the following Marketing Requirement opened:
User Group Number - MR0318131821

We also have an APAR to ignore the keyword for now:
OA41764

regards
Dave

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: DFHSM QUESTION - RECALLING DSN TO ITS ORIGINAL VOLUME

2013-03-21 Thread David Devine
Hi Willie,
its not as simple as getting it recalled to the same volume. I believe it's 
still the case that at jes startup it maps the dataset extents so you would 
need to get it back to exactly the same place(s) on the pack.
The best solution is to make all your jes proclibs non migrateable.
A simple alter mgmtclas to something suitable will do the job.

Dave

***

Good Day Readers,
 
Several jobs are failing due to the following :
 
IEC143I 213-2C,IFG0194D,JES2,JES2,SYS00035-0009,9987,SMTP01,SYS2.HESP.BPROCLIB 
IEF196I IEC143I 213-2C,IFG0194D,JES2,JES2,SYS00035-0009,9987,SMTP01, 
IEF196I SYS2.HESP.BPROCLIB 
$HASP307 HESPD01 UNABLE TO OPEN PROC02
IEFC452I HESPD01 - JOB NOT RUN - JCL ERROR
 
The problem was caused by the library, which was migrated by HSM and then 
recalled to another volume - SMTP03. The volumes in this storage group are SMS 
managed.
 
I was able to get our MVS support folks to refresh the JES2 proc, which 
rectified the problem (jobs were rerun successfully).
 
My question is, should a problem of this nature occur again (pds migrated), how 
can I force HSM to recall the dsn to the original volume? In this case the 
dsn was recalled to a volume other than the original.
 
Thanks.





--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: DS6800 problem (?)

2013-03-14 Thread David Devine
Hi Miklos,
Some quick questions:-
(a) did you have purge coded in the sysin of your dfdss copy decks?
(b) did you copy the ipl volumes when their system was down? (via another lpar)
(c) did you copy the source volume to a target with exactly the same cylinder 
count?
(d) you mention "the message complains about the output volume"; can you copy 
that in so we can see specifically?
Dave
***
 Hi

In some previous message I have described that we are moving now from 
the zDASD storage system to DS6800.
We have a relatively large number of volumes (140), so we decided to 
move all the volumes via DFDSS COPY to the proper new addresses. It worked for 
every volume except the z/OS 1.13 system volumes (Z1DT11 and Z1DT12); it worked 
for the z/OS 1.12 system volumes.
We got output track format errors, and nobody can say the reason till now:
- if I change the output (target), the error moves to the new volume, but 
the message complains about the output volume
- if I COPY logically by dataset, I get errors for
   sys1.linklib
   cee.sceerun
   sys1.shasmig
   csf.scsfmod0
As a last attempt I copied these datasets again with IEBCOPY and tried to IPL, 
and the new system is working
(or at least I think so).

I lost my confidence in DS6800, zDASD and the IBM support, but 
maybe somebody has seen something like this.

-- 
Kind regards, / Mit freundlichen Grüßen
Miklos Szigetvari

Research  Development
ISIS Papyrus Europe AG

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Speed up DASD copy

2013-03-14 Thread David Devine
Hello,
further to this query and the oddity of Optimize being tolerated for dfdss full 
copy but not documented: I emailed one of the guys who presents the "What's 
new in dss" share sessions asking for clarification, and they were kind 
enough to send it on to R&D, who gave the response.

Hi David,

Thank you for your questions.  You are correct, Optimize is only valid for 
dump.  Optimize for copy was prototyped 20+ years ago, but was never intended 
to be shipped as part of the product.  I understand you do see some performance 
gains when running your copy commands, but the code is not fully developed.  We 
will take an APAR and are considering two approaches: 
1.  ignore the OPTIMIZE keyword for the COPY command.  This means that 
existing jobs that specify OPTIMIZE on COPY that run today would continue to 
run, however, they will be forced to OPTIMIZE(1). 
2.  cause a syntax error when OPTIMIZE is specified on the COPY command.  
This means existing jobs that specify OPTIMIZE on COPY that run today would now 
fail.

We would appreciate any comments you have on whether we should take one 
approach over the other.  Also, if you would like to see OPTIMIZE for COPY 
fully supported I would ask you open a marketing requirement.  A requirement 
will help get that work prioritized amongst other requirements.

Thank you again,
Robert Gensler
DFSMSdss Architecture and Development
Tucson, AZ

Nice people to talk to.

In the short term, I think the first apar option is probably desirable, rather 
than the second, as they need to cater for the fact it's unsupported and 
mitigate any unforeseen consequences of its usage; but longer term, getting it 
fully deployed & supported on copy, if only for copy full, would be the way to 
go, so I'd say opening a marketing requirement would be the preferred approach.

An invitation hard to refuse.
The more requests they get, the more likely it is to happen, so anyone 
attending the next few shares could bring it up too.

Dave
 

***
Miklos,

Others may want to correct me, but from what I've observed OPT(4) is only used 
for the DUMP command.

COPY will accept OPT(4), but all you seem to get is  OPT(3).

Ron

 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU]
 On Behalf Of Miklos Szigetvari
 Sent: Thursday, March 07, 2013 4:34 AM
 To: IBM-MAIN@LISTSERV.UA.EDU
 Subject: Re: [IBM-MAIN] Speed up DASD copy
 
   Hi
 
Dfdss happily accepted the OPT(4) keyword; the copy time dropped from
10 minutes to about 7 minutes.
The real copy will go from ESCON channels to FICON.
We have to do this only once, as BusTech gave up support for zDASD.
 
 Thank you


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Enqueue during VSAM REPRO - Who is the culprit

2013-03-11 Thread David Devine
Hi Lizette,
I think DanD hit it on the head with his suggestion about using infile & 
outfile dd statements instead of ids & ods.
IDS & ODS use a disp of old during dynamic allocation.
Ids  Ods use a disp of old during dynamic allocation.   

I have a sneaky feeling that your job is contending with itself because grs or 
mim can't always see quickly enough that the dataset is free'd after the 
del/def.
As the message is an IKJ its implying that its all done within a TSO rexx 
routine and rexx is quite well know for erratic behaviour.

Do you have explicit close  free's coded in your rexx for the dataset prior to 
the final repro?  

In any event, I'd suggest changing your repro to hardcoded infile outfile dd's 
with a disp of shr.

regards,
 Dave

***
Sorry about the delay in responding.  I had a network issue that was not 
resolved until now.
 

The correct order is

1)  Allocate new VSAM Temp file
2)  Split off the archive records to a GDG, and the records to be reloaded 
into the VSAM Temp File.
3)  Del/Def the original file name
4)  Repro the VSAM Temp back into the Original name

Yes, the original is the one that is enqueued.  I should have used the term 
TEMP rather than NEW in my example.

This is a process that will archive of older records from the vsam dataset to 
a GDG and also create a temp VSAM data set that holds the records to get 
loaded back into the file (don't ask, this is the vendor's process).

The main reason for this posting is to see if this is still annoying.  That 
you can get the message DATA SET IN USE, but that the process is so quick 
there is no trail of the holder.  And from this discussion it seems this is 
still an issue.

Thanks for all the good commentary.

Lizette
 
 Sometimes (not always) when this job runs I get
 
 REPRO IDS(NEW.VSAM.FILE)-
 .   ODS(CURRENT.VSAM.FILE) REUSE
 .IKJ56225I DATA SET CURRENT.VSAM.FILE ALREADY IN USE, TRY LATER
 .IKJ56225I DATA SET IS ALLOCATED TO ANOTHER JOB OR USER

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Enqueue during VSAM REPRO - Who is the culprit

2013-03-11 Thread David Devine
Hi Lizette,
Thanks for your clarification. 
How about posting out the whole kit & caboodle of what you've got so we can all 
see directly?
Shouldn't take up too much space.
 
Dave 
This is all native commands, no rexx.
Each step is a separate JCL step.
Lizette


Paul,

I think DanD hit it on the head with his suggestion about using infile & 
outfile dd statements instead of ids & ods.
IDS & ODS use a disp of old during dynamic allocation.


> IDS, even?  I'd expect SHR to suffice for IDS.

> Nope

I have a sneaky feeling that your job is contending with itself, because grs or 
mim can't always see quickly enough that the dataset is freed after the 
del/def.
As the message is an IKJ, it implies that it's all done within a TSO rexx 
routine, and rexx is quite well known for erratic behaviour.

> I believe Rexx messages are IRX; TSO are IKJ.

Quite so, hence "implying".
My expectation was that it would be contained within some rexx.
Different strokes for different folks. Does anyone really use native tso these 
days for file manipulation? How retro :-)

> The news of Rexx's erratic behavior hadn't reached me. Can you
> cite an example?

Yep, practically anything that's poorly coded. Ever forget to initialise a 
variable? Or delstack? Results not always as designed! :-)
  
Do you have explicit close & free's coded in your rexx for the dataset prior 
to the final repro?

> Can contention occur within a single address space?

With poorly written code or implementation? Been there, so I wouldn't be 
surprised; hence "sneaky feeling" :-)

In any event, I'd suggest changing your repro to hardcoded infile & outfile 
dd's with a disp of shr.
 
> Wouldn't EXC be safer for OUTFILE?

Yes, but you have that already via the dynamic ids, and it's failing to work.
I've suggested that Lizette posts out what she's got, which would help zero in 
on the issue.
   
If we can get to the root cause then we can get the correct dispositions all 
the way through.

TTFN
It's kipper tie time!

Dave

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Speed up DASD copy

2013-03-07 Thread David Devine
Hi Miklos,
I'm making the assumption that you are referring to dfdss.
The traditional way of speeding up dfdss (adrdssu) copies (or dumps) is by 
using the OPT keyword.
I believe the default is (3), which is 5 tracks read at a time.
OPT(1) is 1 track, OPT(2) 2 tracks, OPT(3) 5 tracks and OPT(4) 15 tracks read 
at a time.

If you are doing a full volume copy i expect your sysin should look something 
like this:-

COPY FULL ALLX ALLDATA(*) IDY(TS0002) ODY(XX12C0) -
COPYVOLID PURGE OPT(4)

or you may use INDD & OUTDD for your volume selection.

Other criteria to consider: region size on your jobcard or pgm; the number of 
available channels; whether you want to copy your disks while in use or not.

Using a product like Softek's tdmf will allow for in-flight disk copies.
If you don't have it, then dfdss should only be used for packs not in use.

With dfdss you need to cater for the volume ids on the source & target volumes, 
so you may want to add steps to clip (ickdsf reformat) the source volume 
before the dfdss copy and clip the target volume to the original source name 
after the copy.
This will keep it isolated and stop inadvertent access to the original source 
volume.
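
The clip itself is a one-line ickdsf job; a sketch (the unit address and 
volsers are placeholders, and the volume should be offline to the clipping 
system):

//CLIP     EXEC PGM=ICKDSF
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  REFORMAT UNITADDRESS(0A30) VERIFY(TS0002) VOLID(XXTEMP)
/*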

Good luck!

Dave
***
 Hi

 We need to copy a large number of DASD volumes (from zDASD to DS6800).
 Any option to speed up?
 Seems CONCURRENT is not supported by the zDASD.
Kind regards, / Mit freundlichen Grüßen
Miklos Szigetvari

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: DFHSM QUESTION - DUPLEXING CHECKLIST

2013-02-27 Thread David Devine
Hi Esmee,
Not strictly dfhsm, but you should also consider your actual tape environment.
Assuming you have at least 2 separate libraries, you need to ensure that your 
copy volumes are directed to a 2nd library.

If you have virtual tape libraries, I would also consider it good practice to 
have separate storagegroups/pools within the libraries so that dfhsm migration 
and backup tapes end up on different physical volumes when migrated off of 
cache onto back end stacked tapes.
There are 2 points to that:-
(a) if you have mass dataset recalls which hit dozens of virtual tapes, 
having a separate pool for mig tapes will reduce the amount of back end tape 
mounts going on.
(b) you can (and do) end up losing physical stacked volumes every now and then, 
losing possibly thousands of virtual volumes, so it would simplify recovery.

If you only have one big virtual library for everything in your datacentre, 
then having separate pools for master and alternate tapes would be pretty 
essential; it would also be worthwhile for each plex to be distinct.

These are generalisations, as it always depends on each individual company's 
setup and working practices.
If you are in a multi grid setup you may not think it worthwhile.
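
And on the mechanics, since backup and migration duplexing are separate 
settings (as Allan notes below), the ARCCMDxx operands look something like this 
(a sketch; check the exact operands in the DFSMShsm Storage Administration 
Reference):

  SETSYS DUPLEX(BACKUP(Y) MIGRATION(Y))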

kind regards,
Dave

***
> Each recycle process only takes 50% more drives: three instead of two.

Unless it has recently changed, one annoying thing about DFHSM in a 
tape-drive constrained environment is there is no easy way to put an 
absolute upper bound on the maximum number of drives needed concurrently 
by all DFHSM tasks.  Yes, you can separately restrict the number of 
daily backup tasks, daily ML2 movement tasks, and migration recall 
tasks; but this doesn't really address the unpredictable concurrent 
demand from asynchronous ML2 recall requests and demand backup requests 
that may go directly to tape; and those requests can occur at 
inconvenient times, like when ML2 migration, or auto-backup to tape, or 
recycle is in progress.  If you want to more-tightly restrict the total 
number of drives used by unscheduled DFHSM tasks during scheduled tasks 
with high drive demand, there is no easy way to do just that, and the 
greater drive requirements with duplexing can make this more of an issue.

 I would never use DFHSM with today's high-capacity cartridges without 
duplexing, but in some environments duplexing may require adding 
 additional drives.
 JC Ewing

On 02/26/2013 12:30 PM, Staller, Allan wrote:
 1) be aware that duplexing for backup and migration are separate commands.
 2) recycle, ml2 migration and volume backup will now take twice as many tape 
 drives as before. You may have to change the limits on
 the number of backup tasks, migration tasks and recycle tasks you run 
 concurrently to fit within physical constraint (# of tape drives available).
 3) Plan on a program of recycling all existing media to get them duplexed. 
 You can use tapecopy, but recycle will most likely be more efficient.

 HTH,

 snip
 We are looking at implementing DUPLEXING for DFHSM.  My question is,  besides 
 the parms in ARCCMD9A which needs to be update are there other parms that 
 need to be modified?
 /snip

 ...


-- 
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 





--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: DB2 V10 upgrade - SMS allocation failure

2013-02-21 Thread David Devine
Hi Jim,
It's this one that's the issue:-

IGD17279I 12 VOLUMES WERE REJECTED BECAUSE OF INSUFF TOTAL SPACE

Which translates to:- 

The primary quantity requested was larger than the total capacity of the 
largest
available volume. See z/OS DFSMSdfp Storage Administration for a detailed 
explanation.

Because you are not selecting a dataclas, you do not get space reduction, 
multi-volume allocation, etc., so dfp tries to squash a quart into a pint pot.

The resolution is to (a) reduce your primary allocation (with a decent 
secondary) and (b) update your acs routines to select a suitable dataclas that 
is both multi-volume and extended format.

Good luck!

Dave  

***

cross posted to DB2-L and IBM-MAIN.

I'm trying to run DSNTIJEN to upgrade to DB2 V10 new function but the job
is failing on the reorg of SPT01.  Following are the SMS messages.

IGD17272I VOLUME SELECTION HAS FAILED FOR INSUFFICIENT SPACE FOR  808
DATA SET DB2E.IMAGCOPY.SPT01.D2013052.T125957
JOBNAME (DB2DBAA ) STEPNAME (ENFM0027)
PROGNAME (DSNUTILB) DDNAME (SYS1)
REQUESTED SPACE QUANTITY = 4349339 KB
STORCLAS (SCSMS) MGMTCLAS () DATACLAS ()
STORGRPS (SGDB2E  )
IGD17272I VOLUME SELECTION HAS FAILED FOR INSUFFICIENT SPACE FOR  809
DATA SET DB2E.IMAGCOPY.SPT01.D2013052.T125957
JOBNAME (DB2DBAA ) STEPNAME (ENFM0027)
PROGNAME (DSNUTILB) DDNAME (SYS2)
REQUESTED SPACE QUANTITY = 3038417 KB
STORCLAS (SCSMS) MGMTCLAS () DATACLAS ()
STORGRPS (SGDB2E  )

and here are the DB2 messages.

DSNU1035I   052 13:59:57.54 DSNUJTDR - TEMPLATE STATEMENT PROCESSED
SUCCESSFULLY
DSNU050I052 13:59:57.54 DSNUGUTC -  REORG TABLESPACE DSNDB01.SPT01
SHRLEVEL REFERENCE LOG NO COPYDDN(SYSCOPY)
CONVERTV10 RETRY 255 TIMEOUT TERM RETRY_DELAY 1 DRAIN_WAIT 1 SORTDATA
SORTDEVT SYSDA SORTNUM 99
DSNU3343I ! 052 13:59:57.54 DSNURFIT - REAL-TIME STATISTICS INFORMATION
MISSING FOR TABLESPACE DSNDB01.SPT01
DSNU3343I ! 052 13:59:57.54 DSNURFIT - REAL-TIME STATISTICS INFORMATION
MISSING FOR INDEX SYSIBM.DSNSPT01
DSNU3343I ! 052 13:59:57.54 DSNURFIT - REAL-TIME STATISTICS INFORMATION
MISSING FOR INDEX SYSIBM.DSNSPT02
DSNU1015I   052 14:00:40.18 DSNUGDYN - ERROR ALLOCATING DATA SET
DSN=DB2E.IMAGCOPY.SPT01.D2013052.T125957
CODE=X'970C'
DSNU1042I   052 14:00:40.18 DSNUGDYN - START OF IDCAMS MESSAGES
IKJ56893I DATA SET DB2E.IMAGCOPY.SPT01.D2013052.T125957 NOT ALLOCATED+
IGD17273I ALLOCATION HAS FAILED FOR ALL VOLUMES SELECTED FOR DATA SET
DB2E.IMAGCOPY.SPT01.D2013052.T125957
IGD17277I THERE ARE (12) CANDIDATE VOLUMES OF WHICH (12) ARE ENABLED OR
QUIESCED
IGD17290I THERE WERE 1 CANDIDATE STORAGE GROUPS OF WHICH THE FIRST 1
WERE ELIGIBLE FOR VOLUME SELECTION.
THE CANDIDATE STORAGE GROUPS WERE:SGDB2E
IGD17279I 12 VOLUMES WERE REJECTED BECAUSE OF INSUFF TOTAL SPACE
DSNU1043I   052 14:00:40.19 DSNUGDYN - END OF IDCAMS MESSAGES
DSNU012I052 14:00:52.45 DSNUGBAC - UTILITY EXECUTION TERMINATED,
HIGHEST RETURN CODE=8
Following is the available space.

Volser Unit Devtyp   FTrk   LTrk  FCyl  LCyl   DSCB %Use Mnt VTOC SGRP
DB2E12 0D33 3390-9  49926  49926  3328  3328   7447    0 Prv Actv SGDB2E
DB2E11 0D34 3390-9  49926  49926  3328  3328   7447    0 Prv Actv SGDB2E
DB2E10 0D96 3390-9  49926  49926  3328  3328   7447    0 Prv Actv SGDB2E
DB2E01 0DA4 3390-9   5969   1800   391   120   5360   88 Prv Actv SGDB2E
DB2E02 0DA5 3390-9   7080   1650   451   110   2588   85 Prv Actv SGDB2E
DB2E03 0DA6 3390-9   2848   2835   189   189   2219   94 Prv Actv SGDB2E
DB2E04 0DA7 3390-9   7853   6540   521   436   8194   84 Prv Actv SGDB2E
DB2E05 0DA8 3390-9  11827  10800   787   720   5979   76 Prv Actv SGDB2E
DB2E06 0DA9 3390-9  14333  10650   952   710   7369   71 Prv Actv SGDB2E
DB2E07 0DAA 3390-9   9203   5205   604   347   7227   81 Prv Actv SGDB2E
DB2E09 0DAE 3390-9  49916  49895  3327  3326   7446    0 Prv Actv SGDB2E
DB2E08 0DAF 3390-9   7046   6990   469   466   7424   85 Prv Actv SGDB2E


I don't understand why I have 2 x IGD17272I messages here but even so I
make each of the allocation requests to be approx 6000 cylinders.  Why
can't the allocation be done from the available space above.

Jim McAlpine

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: DB2 V10 upgrade - SMS allocation failure

2013-02-21 Thread David Devine
Hi Jim,
what are your primary & secondary allocations? If your dataclas doesn't have 
space constraint relief, it will still try to get it in one lump.
If I'm doing my calculations right (4349339 KB at roughly 850 KB per 3390 
cylinder), that's about 5120 cyls for the first one.
Try an allocation of 1000,100.
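
For what it's worth, a sketch of the sort of template override I mean (the 
operands are assumed from the DB2 utility TEMPLATE syntax; adjust the names 
and numbers to your site):

TEMPLATE SYSCOPY
  DSN(DB2E.IMAGCOPY.&TS..D&DATE..T&TIME.)
  DATACLAS(DCDB2)
  SPACE(1000,100) CYL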

Cheers

Dave  

***
David, I've added a dataclas to the DB2 template definition which has a
volcount of 5 but I'm still getting -

IGD17272I VOLUME SELECTION HAS FAILED FOR INSUFFICIENT SPACE FOR  227
DATA SET DB2E.IMAGCOPY.SPT01.D2013052.T144057
JOBNAME (DB2DBAA ) STEPNAME (ENFM0027)
PROGNAME (DSNUTILB) DDNAME (SYS1)
REQUESTED SPACE QUANTITY = 4349339 KB
STORCLAS (SCSMS) MGMTCLAS () DATACLAS (DCDB2)
STORGRPS (SGDB2E  )
IGD17272I VOLUME SELECTION HAS FAILED FOR INSUFFICIENT SPACE FOR  228
DATA SET DB2E.IMAGCOPY.SPT01.D2013052.T144057
JOBNAME (DB2DBAA ) STEPNAME (ENFM0027)
PROGNAME (DSNUTILB) DDNAME (SYS2)
REQUESTED SPACE QUANTITY = 3038417 KB
STORCLAS (SCSMS) MGMTCLAS () DATACLAS (DCDB2)
STORGRPS (SGDB2E  )
Jim

On Thu, Feb 21, 2013 at 2:27 PM, David Devine david.dev...@sse.com wrote:

 Hi Jim,
 It's this one that's the issue:-

 IGD17279I 12 VOLUMES WERE REJECTED BECAUSE OF INSUFF TOTAL SPACE

 Which translates to:-

 The primary quantity requested was larger than the total capacity of the
 largest
 available volume. See z/OS DFSMSdfp Storage Administration for a detailed
 explanation.

 Because you are not selecting a dataclas, you do not get space reduction,
 multivolumes etc so dfp tries to squash a quart into a pint pot.

 The resolution is to (a) reduce your primary allocation (with a decent
 secondary) and (b) update your acs routines to select a suitable dataclas
 that is both multi-volume and extended fortmat.

 Good luck!

 Dave

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: want to purge all my jobs ended with a RC=0 automatically

2013-02-21 Thread David Devine
Hi Yann,
there is a big flaw in this, in that a job ending with a rc of zero doesn't 
always mean that it has worked successfully as designed.
I think you would be better off using RMDS (or a similar product) to offload 
production joblogs from your spool to disk and have sms assign a mgmtclas to 
delete them after x days.

This would get them off your spool and still allow the opportunity to check 
them if needed.
There would be an overhead in disk usage, but it's probably worth it.

For everything else there's a load of options on the jes commands to delete 
after hours/days.
POJQ allows job masking.
   
There may well be a jes exit which will achieve what you are looking for, but 
that's not really my field.
A trawl through the jes manuals should identify possibilities.

regards
Dave

***

Hi all,

For some volume reasons, we want to automatically purge jobs that ended with 
RC=0.
But we don't want to do it with a JES2 command, for CPU and spool reasons. We 
want it to happen automatically.
I'm sure somebody has already had the same need, no?

Any Idea ?

Thank in advance ...

Yann


 


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: JES2 Hist from Jack Schudel(ret.)

2013-02-08 Thread David Devine
Historically, "spool" has had a long association with the weaving/rope making 
industry and thence with wire cable & electrical wiring.

Possibly the imagery of how an industrial weaving loom worked fitted in with 
the concept of what was trying to be achieved; and, last but not least, it may 
well have been developed at IBM Spokane and the guys there wanted to put their 
own stamp on it and were looking for an SPO prefix that matched up.
   
After all, DFH = Denver Foot Hills, IDC = International Data Corporation.

What better way to be remembered if you aren't allowed to stick your personal 
name on it.

And for another bit of history: paper tape/punched cards were used in 
industrial weaving looms back in the late 1800's to mass produce carpet and 
other fabrics to the same design & quality.

As of a few years ago, there was at least one working museum weaving shop in 
the UK midlands where it could be seen in action: about a 100 yard loop of 
heavy duty punched card.

Regards,
 Dave

***
In 51143860.7030...@acm.org, on 02/07/2013
   at 05:27 PM, Joel C. Ewing jcew...@acm.org said:

The Wood history of HASP/JES2 left hanging the question about the
origin of the term spooling. 

Do you consider SPOOL System, 7070-IO-076 to be of sufficient
antiquity?

'SPOOL has become a common verb, but originally was itself an
acronym signifying Simultaneous Peripheral Operations On Line. This
acronym originated with the 7070 computer, which had a system of
interrupts that let one program a peripheral activity (e.g.,
card-to-tape, tape-to-print, tape-to-card) while a main program was
running.' (Dictionary of IBM Jargon, Tenth Edition)

Since early spooling systems staged unit records  to a spool of 
magnetic tape,

I've only seen reels of tape called spools in an audio or TV context;
prior to cartridges and MSS the terms I heard were reel and tape
volume.

-- 
 Shmuel (Seymour J.) Metz, SysProg and JOAT
 Atid/2http://patriot.net/~shmuel
We don't care. We don't have to care, we're Congress.
(S877: The Shut up and Eat Your spam act of 2003)

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN





   


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: blksize for ML2 migrated dataset via HSM

2013-02-07 Thread David Devine
Hello,
The datamover parameter of DSS or HSM only applies to control dataset backups 
and is not for general usage.

Hsm has used a 16k blocksize for backup and migration tapes since inception, 
possibly because originally it was a good performance match for the specs of 
the existing 3420 reel tape drives and the upcoming 3480 cartridge tape drives.
(People who read the announcement specs in the early 80's, feel free to dive 
in!)
Well overdue for an update.

Dumps, however, are straight dfdss and, depending on what z/OS release you are 
on, the blksize will be the default of 256K (z/OS 1.12 up) or 64K (z/OS 1.11 
down), or even 32K if you use the patch!

While we are at it, an ML3 level for long term archive datasets (greater than 
5 years, say) would be good to split them out from all the other stuff on ML2.

regards,
 Dave


***
  Hello,
I observed a blksize of 16k for all ml2 migrated datasets (seen in the CA1 
product). I use DSS as datamover and
I'm wondering about the blksize. Why doesn't dss use 256k
since z/OS 1.12?
Hopefully somebody will have an answer. Thanks.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: DFSMSHSM Management Class Question

2013-02-06 Thread David Devine
Hello,
(a) are you sure that the management class in question is using expire 
date/days rather than expire non-usage?
It's very easy to inadvertently reset the last reference date on a dataset.
(b) if it was migrated, is hsm secondary space management working properly? 
Look at the logs for that dataset and see if there's a mention.
As previously stated, there may be a return code against it.
The usual issue is rc=53, which means hsm expects it to have a backup but it 
doesn't, so hsm won't delete it.
A simple hsm backup will sort that one, but if you have multiple rc 53's you 
may have a flaw in your sms routines, like allocating a dataset with a backup 
mgmtclas to a no-backup storage group, or hsm died during its automatic backup 
cycle sometime and the flags are out of sync.

For migrated rc=53 datasets it's recall and backup.
If you've got multiple 53's during primary space management, a backvol total 
for the pack concerned will sort it.

It's good practice to undertake occasional backvol total backups for all 
storagegroups/packs, where every dataset on a pack will be backed up regardless 
of a recent backup (if the mgmtclas specifies it), to tidy up issues like this.
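
A batch sketch of that backvol total for one pack (the volser is a 
placeholder):

//BACKTOT  EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
 HSENDCMD WAIT BACKVOL VOLUMES(VOL001) TOTAL
/*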

There is a patch you can use to disable the check for a backup before expiry, 
but this is a sledgehammer approach.
You are better off checking for errors now and then to identify flaws.

Regards,
 Dave

***
   Uriel,

I think David's on to something here.
 
HSM will not delete a dataset that is supposed to have a backup but doesn't. 
Check your HSM space management job output and look for code 53 errors. These 
are delete candidates that don't have backups so HSM can't delete them. Do your 
datasets show up on this list? If so, they need backups or need manual 
intervention.



David G. Schlecht | Information Technology Professional
State of Nevada | Department of Administration | Enterprise IT Services
T:(775)684-4328 | F: (775) 684‐4324 | E:dschle...@admin.nv.gov -- New Address

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of O'Brien, David W. (NIH/CIT) [C]
Sent: Tuesday, October 09, 2012 11:35 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: DFSMSHSM Management Class Question

Uriel,

Do the unexpired datasets have HSM backups?  

-Original Message-
From: Uriel Carrasquilla [mailto:uriel.carrasqui...@mail.mcgill.ca]
Sent: Tuesday, October 09, 2012 2:32 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: DFSMSHSM Management Class Question

I have a situation that I am hoping someone can shine some light.

I am conflicted by a behavior in our zOS 1.11 system.

We have a Management Class defined to expire datasets after a given number of 
days.

I know it is working because I found some datasets about to expire, waited for 
them to expire, and now I can see they are no longer in the catalog (the 
management class stipulates that we keep one backup for a few years after 
deleting).

But, I also noticed that some files that have exactly the same management class 
and should have been expired/deleted a long time ago are still catalogued.  I 
recalled one of them and it had no expiration date and confirmed that it 
belongs to the same management class.

I am trying to figure out why this behavior is taking place but don't know 
where to go next.

Any suggestions would be greatly appreciated.

Sincerely yours,

Uriel



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: BYPASSING RECALL OF MIGRATED DSNS WHILE ATTEMPTING ALTER

2013-02-01 Thread David Devine
Hi,
sorry to say it's still only mgmtclas and storclas that can be ams altered on a 
migrated dataset without forcing a recall.
Hopefully you can recall them without filling up your sms pools!
And because you have updated the catalog entry they will end up being 
physically migrated again even if you have the tapemigration(reconnect parm 
coded in your hsm system.

Of more interest is how you got here in the first place.
Was someone doing dfdss restores without the tgtgds(active) parm? or iffy jcl? 
i seem to recall that disp of New,keep on a gdg made them deffered. 

What is your gds_reclaim entry set to in your igdsmsxx sys1.parmlib member? 
default is yes.

Regards

Dave 
 



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: mainframe selling points

2013-01-31 Thread David Devine
Looking at processor and software costs in isolation doesn't tell the whole 
story.

Yes, software costs are a big chunk, but doesn't Microsoft charge like a rhino 
for each Windows licence?

What would you attach your E5-2600 blade to, and using what? Fibre or ethernet? 
Whose disk systems? Tape for backup?
How resilient is it? How many staff would it take to manage?

The elephant in the room is reliability.
 
Z/series and associated kit is solid and dependable (barring a few exceptions), 
having grown organically over 50 years.

How much down time do you get from windows or unix farms?
Would you risk running your key billing platforms on flaky kit? If you can't 
send your bills out, you can't get your money in.

Cheap kit is cheap for any number of reasons, but often due to poor quality 
components, build processes and quality control, as its working life is only 
expected to be a few years.

Ever been bitten by tin whiskers from lead-free solder? Or duff capacitors?
  
  
I recall an article from IEEE about 20 years ago looking at Microsoft's Hotmail 
service.
Running on 200 quad4 pentiums, 10% were out of action at any given time.
The whole shebang could have run on 3 S/390s with far better service to the 
customer.

I doubt much has changed.  

Z/series has had such nice-to-haves as GDPS (for about 10-15 years), multiple 
pathing to devices, and system managed storage (25+).
It's only in the last few years that other platforms have started to catch up 
in these areas, and their idea of multiple paths is generally 2.
(This is a broad sweep; there may well be kit out there that's all singing and 
dancing.)
(IBM i series & p series could be classed as junior mainframes, having evolved 
from System 34 & 36 (cut down System 360's), and are slowly getting Z/series 
features.)

Staff costs?
Once you've got a Z/series site set up with skilled support staff (not 
including application programmers & developers), you can pretty much expand up 
to 10 times the kit and plexes (and probably a lot more) with minor staff 
increases, if any.

How many people does it take to manage windows or unix estates? Where I've 
worked over the years, you are talking 4 or 5 times as many people as mainframe 
support staff.
And that's just support.

Once you include the dozens of Android, java  C++ developers and proggies you 
are going to need to actually produce something worthwhile, you can only afford 
to buy cheap kit!
   
This is why you need to consider total cost of ownership, which is not solely 
limited to financial payback period and capital depreciation write-offs; staff 
& running costs are often overlooked, and reliability frequently is.

Well, that's my rant over for the moment.

TTFN 

Dave 

P.S. Yes, I am quite biased.

imugz...@gmail.com (Itschak Mugzach) writes:
 So why don't you save the money and run your corporate network from the
 mainframe ;-)

There is discussion in the LinkedIn Enterprise Systems group that 4% of IBM 
revenue is mainframe hardware sales, but the mainframe business is 25% of 
total revenue ... for every dollar of hardware, customers are paying 
$5.25 for software, services, and storage.
http://lnkd.in/mjYX6H

A maxed-out z196 with 80 processors & a rating of 50 BIPS goes for $28M, or 
$560,000/BIPS ... however, on avg. customers are paying a total of $175M 
(i.e. 6.25 times the base hardware cost, aka the difference between 4% of 
revenue for just hardware and a total of 25% of revenue) ... or $3.5M/BIPS.

As I've mentioned several times, by comparison IBM has a base list price 
of $1815 for an e5-2600 blade rated at 527 BIPS, or $3.44/BIPS (a factor 
of a million times difference).

-- 
virtualization experience starting Jan1968, online at home since Mar1970
--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: DS8300 HMC WEBUI logon is forced closed

2013-01-22 Thread David Devine
Just to give an update.
We have just started our annual LMC upgrade plan for our DS8300s and found 
that the problem has been re-introduced with code bundle 64.36.63.0, LMC 
5.4.36.140.
So we had to go through the port firewall enabling stuff again.

Apparently the fix for the problem (CMVC 27500) will not be deployed on code 
for R4 boxes (64.xx code) but will be on more recent DS8xxx models (R5/6/7).

regards,
 Dave



   

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: SMS COMMAND VIA BATCH

2013-01-17 Thread David Devine
An alternative way to skin this particular cat would be to use batch SDSF.

//JS0015   EXEC PGM=SDSF
//ISFOUT   DD   SYSOUT=*
//ISFIN    DD   *
/RO *ALL,D SMS,SG(STANDARD),LISTVOL
/*

The command string is limited to 42 characters, so it's not great, but it has 
the benefit that if you can issue commands via SDSF, you can use this.

Dave

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: Deleting a nonexisting volser from hsm

2013-01-09 Thread David Devine
Michael, it's the T record that's needed:
FIXCDS T 01-300706- CREATE
Then try the DELVOL.
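
If you want it repeatable, the same pair can be driven through the TSO 
terminal monitor program in batch; a minimal sketch with a placeholder volser, 
and choose MIGRATION or BACKUP on the DELVOL according to how HSM knew the 
volume:

//HSMCMDS  EXEC PGM=IKJEFT01
//SYSTSPRT DD   SYSOUT=*
//SYSTSIN  DD   *
  HSENDCMD WAIT FIXCDS T VOLSER CREATE
  HSENDCMD WAIT DELVOL VOLSER MIGRATION(PURGE)
/*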

Dave

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: TS7740 and VTS B20

2012-11-28 Thread David Devine
Hello again.
Forgot to mention that it's not just HSM, with its own internal tape catalog, 
that you need to watch out for; there are some report-archiving products that 
have one too.

Infopac/Mobius/ViewDirect by ASG Software is one such (OpenTech's TapeCopy can 
handle this, I believe), and I seem to remember CA-Dispatch was another.
These products have their own tape-copying / report-copying utilities (probably 
an add-on cost) but they may well be slower than something like TapeCopy.

As Phil Esterhaus used to say at the end of roll call on Hill Street Blues: 
"Hey, let's be careful out there."

Dave



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: TS7740 and VTS B20

2012-11-27 Thread David Devine
Hi,
a lot will depend on the timescales you've been given to complete the 
migration and the overall number of volsers that need to be copied.

It may well be that you can get away with just recycling HSM ML2 & backup vols 
and not actually migrating anything else, as they'll all expire & scratch 
naturally over a few weeks (see the sketch below).
Not very likely, though.
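
If you do go the recycle route, the command side is simple; a minimal batch 
sketch, with an assumed PERCENTVALID threshold you would tune for your site:

//RECYC    EXEC PGM=IKJEFT01
//SYSTSPRT DD   SYSOUT=*
//SYSTSIN  DD   *
  HSENDCMD WAIT RECYCLE ALL EXECUTE PERCENTVALID(10)
/*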

If you are using CA-1 or CA-TLMS, then they have Copycat, which you could use. 
You may even get it gratis, if it's not already installed, if your site has a 
generic use-any-CA-product licence.

IBM's tape2tape is basically REXX code which will interrogate RMM and build 
job decks to copy your volumes, using ADRDSSU for DFDSS vols and IEBGENER for 
flat files, and then an IDCAMS DELETE/DEFINE to update the catalog entries 
with the new volser(s) for each dataset.
It is pretty limited, and I had to hack it around to cater for other formats 
(FDR etc.) when I used it.
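
The catalog flip those generated jobs do at the end is worth understanding; a 
minimal IDCAMS sketch with placeholder dataset name, device type and volser:

//RECAT    EXEC PGM=IDCAMS
//SYSPRINT DD   SYSOUT=*
//SYSIN    DD   *
  /* Uncatalog the old entry, then recatalog on the new volser */
  DELETE HLQ.TAPE.DSN NOSCRATCH
  DEFINE NONVSAM(NAME(HLQ.TAPE.DSN) -
         DEVICETYPES(3490) -
         VOLUMES(NEWVOL))
/*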

N.B. The vast majority of tape-copying utilities do NOT update HSM metadata 
(its internal tape catalog), so they are useless for ML2/backups/dumps.

A very good exception is TapeCopy, shipped by OpenTech, which caters for HSM 
ML2 & backup tapes (not sure about dumps) and which also cross-references the 
back-end tapes, selecting volsers by back-end tape to minimise thrashing.

There are a lot of products out there to do the job, as you can see from the 
list in the manual referenced earlier.
Some will work with the tape management system(s) at your site; some won't.

In a nutshell, you pays your money and you takes your choice, but don't copy 
stuff which will expire naturally in a few days/weeks.

Good luck!

Dave

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: DS8300 HMC WEBUI logon is forced closed

2012-11-14 Thread David Devine
Hi Wayne,
As to IBMers, it was just our account support guy and the UK back office.
Basically, I did the legwork and identified the cause, and they had to change 
the setting, as we couldn't do it directly.

I stumbled across the problem while looking into an email issue on one of our 
DS8300s, so I had to get the IE disconnect sorted first before getting onto 
the email problem.
(We generally use the CLI rather than the web GUI, so we were unaware we had 
the problem.)

Unfortunately, I no longer have the output from the searches I did on the IBM 
support web & Google, so I can't give you specific OA numbers etc., but it IS 
a known issue.

I'd give your regular support guy a ring and run through the problem with him. 
If he's local, he may well swing by and do it on the fly (raising his own work 
case), and you can check it while you're there.

If not just raise a hardware call with them for each specific DS8xxx.

I see no harm in giving you the case references that we raised for the work, 
if they want to clarify the resolution:
A173BVW, A173YP2, B1720LB.

In our case it was only port 9960 that needed the firewall setting, but you 
need all three ports, so make sure they check them all.

regards

Dave

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN


Re: DS8300 HMC WEBUI logon is forced closed

2012-11-13 Thread David Devine
Hello,
I went through this a few weeks ago myself.
The problem appears to have come in with HMC V7R3.4.0.

A lot of faffing about trawling through IBM & some Googling before I got to 
the answer, which is down to the port firewall settings on the DS8300 LAN 
adaptor panel.

Ports in question are 443, 8443 and 9960. 

Have a look at the Hardware Management console V7 Redbook which should give you 
some useful input.

I believe the specific port with the problem is 9960, but you do need all 
three firewall ports configured.

We don't have direct HMC access to the LAN firewall settings on our boxes (we 
only have the customer ID), so we had to raise calls with IBM for them to dial 
in and correct it.
Each affected DS8xxx HMC needs to be rebooted afterwards.

Once they did that, the problem was gone.

Good Luck!

Dave  



   

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN