Re: PDSE

2012-01-21 Thread Edward Jaffe

On 1/20/2012 6:07 PM, Mike Schwab wrote:

I migrate and recall them.
With the original release of PDSEs.


At this point, I imagine Paul is lamenting the lack of an emoticon conveying 
sarcasm.


--
Edward E Jaffe
Phoenix Software International, Inc
831 Parkview Drive North
El Segundo, CA 90245
310-338-0400 x318
edja...@phoenixsoftware.com
http://www.phoenixsoftware.com/

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: OT: Backup drives for a PC

2012-01-21 Thread R.S.

On 2012-01-21 03:13, Clark Morris wrote:
[...]


Both my wife and I are adding family pictures so that is part of the
reason for the capacity.  In response to Radoslaw, I am getting the
second drive so that I can rotate the drives between my home and that
of another family member 160 kilometers away.  Right now I have a
Western Digital external desktop drive for the purpose and I was
assuming the difference between that and a portable drive was how the
hard drive is mounted inside the box.  Based on some bad user reviews
that I saw for a Seagate drive that was used for a similar purpose, I
then had to question whether the additional expense of portable drive
was justified for the backup since a backup is no good if you can't
read it.


Yes, I forgot to mention enclosure ("jewel box") features. It is important to 
choose a good quality enclosure (regardless of the drive manufacturer) with 
good shock absorption.
I haven't seen any formal test or comparison, but I can compare some drives I 
own or have seen: the Transcend 2.5" is described as very shock-proof and 
looks the part. The Seagate 2.5" is ...nice and shiny ;-))) The Iomega 3.5" is 
clearly described as "do not move while operating, avoid any shocks at all 
times".



Regarding photos: There is a method to make them smaller *without* 
losing quality. Digital cameras usually don't have enough time or processing 
power to compress well. A PC program can re-save the picture with no format 
conversion, under the same name, etc. Usually this saves 60-70% of the space 
(100 MB down to roughly 30 MB). I thoroughly compared some pictures before 
and after, including under magnification: no perceptible differences. The 
whole process can be performed as a batch job, although with no DDs ;-)

Application used: the freeware IrfanView.


Regards
--
Radoslaw Skorupka
Lodz, Poland




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: PDSE

2012-01-21 Thread Peter Relson
how does IBM suggest doing a compress on a Linklist lib that needs 
compressing, inquiring minds would love to know 

There is no suggestion. This is simply not an operation that is supported 
or can be supported in general.

Peter Relson
z/OS Core Technology Design

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: PDSE

2012-01-21 Thread Scott Ford
Dave,
I am finally in the 'new world order of IT', where planning is a four-letter word.

Sent from my iPad
Scott Ford
Senior Systems Engineer
www.identityforge.com



On Jan 20, 2012, at 7:07 PM, Gibney, Dave gib...@wsu.edu wrote:

 In theory and with proper planning, one should not need to compress active 
 linklisted libraries, PDS or PDSE.
 After all, you should not be updating your live system datasets. Maintenance 
 should be done on copies and brought in via (rolling) IPL.
 
 The idea of application libraries in the linklist is not something I like, 
 but they can certainly be dynamically added after IPL and then removed, 
 updated/compressed, and re-added for maintenance. Even here, a copy with decent 
 change control is a better idea.
 
 After saying this, I do have a couple libraries where this need arises. But, 
 with them, I do know which address spaces are using them and I am not risking 
 full system outage if I cheat.
 
 Dave Gibney
 Information Technology Services
 Washington State University
 
 
 -Original Message-
 From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On
 Behalf Of Scott Ford
 Sent: Friday, January 20, 2012 12:46 PM
 To: IBM-MAIN@bama.ua.edu
 Subject: Re: PDSE
 
 So this has my curiosity piqued: how does IBM suggest doing a compress on
 a Linklist lib that needs compressing, inquiring minds would love to know ?
 
 Sent from my iPad
 Scott Ford
 Senior Systems Engineer
 www.identityforge.com
 
 
 
 On Jan 20, 2012, at 8:09 AM, Peter Relson rel...@us.ibm.com wrote:
 
 Try running an IEBCOPY compress against the data set (option Z against the
 TSO-ISPF display of the PDSE). It might be a little complicated as it's
 in the Linklist, and so DISP=OLD wouldn't work (still allocated to LLA),
 so you'll have to use DISP=SHR in a batch job.
 
 This is not good advice, in general. Of course it's your system, so
 shooting yourself in the foot is always an option you are allowed to take.
 
 The system allocates LNKLST data sets for a reason -- so that you can't
 get the data set DISP=OLD which in turn means that if you're doing things
 right you will not be able to do such damaging operations as compress
 (where for compress doing things right means getting the data set
 DISP=OLD, and by damaging I mean damaging to other processes that
 might
 have knowledge of where a member is within the data set, not damaging
 to
 the data set itself).
 
 Bluntly, if you compress a LNKLST data set without DISP=OLD, then don't
 complain if something related to that data set no longer works.
 
 If you must compress the data set, then get it out of the LNKLST and out
 of LLA management. And note that there is no fully safe way to do the
 former unless you have added the data set to the now-activated LNKLST
 set
 after IPL and are able to terminate/restart all jobs that started after
 that LNKLST set was activated.
 
 Should compress require DISP=OLD? Maybe. But that's unlikely to change,
 and definitely won't change as a system default (one could imagine giving
 the customer a knob to ask that for their system the default be that).
 
 Peter Relson
 z/OS Core Technology Design
 
 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
 
 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
 
 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: PDSE

2012-01-21 Thread Joel C. Ewing

On 01/21/2012 07:54 AM, Peter Relson wrote:

how does IBM suggest doing a compress on a Linklist lib that needs
compressing, inquiring minds would love to know


There is no suggestion. This is simply not an operation that is supported
or can be supported in general.

Peter Relson
z/OS Core Technology Design



So the only functionally-equivalent, officially-sanctioned way to 
accomplish this goal is still to
(1) create a new dataset with a different name and copy the data to it,
(2) modify PARMLIB LNKLST defs to replace the old library in linklist 
with the new at next IPL (see the PROGxx sketch below),
(3) IPL.
And if for some reason you really must have the original dataset name, 
repeat the process to get back to the old name.
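
For step (2), a hedged sketch of the kind of PROGxx LNKLST statements 
involved, with hypothetical data set and LNKLST-set names (a real member 
would carry ADD statements for every library in the set, with the new 
DSNAME replacing the old one):

LNKLST DEFINE   NAME(LNKLST01)
LNKLST ADD      NAME(LNKLST01) DSNAME(SYS2.APPL.LINKLIB.NEW)
LNKLST ACTIVATE NAME(LNKLST01)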


All the other techniques that have been described here in the past to 
achieve this and bypass or defer the need for an IPL either don't 
guarantee the new library will be seen by all address spaces or carry 
some risk.  While those of us who have been around long enough are 
fairly certain of specific cases at our own installation where the risks 
of alternative methods are small enough and acceptable, it is 
understandable that IBM does not wish to endorse techniques whose 
success depends on SysProg competence and judgement and also in many 
cases upon the tacit cooperation of Murphy in keeping unrelated system 
failures from occurring in a narrow transition window during which 
libraries and PARMLIB might be in a state where successful IPL and 
recovery from system failure would not be possible (without an independent 
z/OS recovery system).


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: PDSE

2012-01-21 Thread Scott Ford
Peter,

I agree with everyone, but I am finding that lost knowledge and old habits 
prevail.
I learned MVS from an old timer, older than I am (I am 61), so I ask a lot of 
questions and refine how I approach things. Compression of link libraries was 
one of those areas: I have seen compression on the fly done for years, no 
damage, no harm. Most of the shops I worked in had a maintenance cycle; in 
some others, maintenance time was non-existent.

I don't feel shops nowadays are really planning. I don't know why, but as a 
vendor I have been seeing a lot of that lately. I learned via GSS working in 
NYC and saw planning work.



Sent from my iPad
Scott Ford
Senior Systems Engineer
www.identityforge.com



On Jan 21, 2012, at 8:54 AM, Peter Relson rel...@us.ibm.com wrote:

 how does IBM suggest doing a compress on a Linklist lib that needs 
 compressing, inquiring minds would love to know 
 
 There is no suggestion. This is simply not an operation that is supported 
 or can be supported in general.
 
 Peter Relson
 z/OS Core Technology Design
 
 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: PDSE

2012-01-21 Thread Paul Gilmartin
On Sat, 21 Jan 2012 10:02:25 -0600, Joel C. Ewing wrote:

So the only functionally-equivalent, officially-sanctioned way to
accomplish this goal is still to
(1) create a new dataset with a different name and copy the data to it,
(2) modify PARMLIB LNKLST defs to replace the old library in linklist
with the new at next IPL,
(3) IPL.
And if for some reason you really must have the original dataset name,
repeat the process to get back to the old name.
 
Can LINKLIST contain aliases?  If so:

(0) Place the alias name in PARMLIB LINKLIST defs;
IDCAMS DEFINE ALIAS to the real data set name.
(1) create a new dataset with a different name and copy the data to it,
(2) IDCAMS DELETE ALIAS; DEFINE ALIAS to identify the new data set.
(3) LLA REFRESH to identify members in the new data set.

Why not?  (I know; I've been corrupted by UNIX symbolic links, and
imagine aliases are similar.)
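
For what it's worth, a hedged IDCAMS sketch of steps (2) and (3) as described, 
with hypothetical names (whether LNKLST/LLA behave usefully with the alias is 
exactly the open question):

//SWAP     EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DELETE (SYS2.LINKLIB.ALIAS) ALIAS
  DEFINE ALIAS(NAME(SYS2.LINKLIB.ALIAS) RELATE(SYS2.LINKLIB.NEW))
/*

followed by the operator command F LLA,REFRESH for step (3).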

-- gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: PDSE

2012-01-21 Thread Joel C. Ewing

On 01/21/2012 10:25 AM, Paul Gilmartin wrote:

On Sat, 21 Jan 2012 10:02:25 -0600, Joel C. Ewing wrote:


So the only functionally-equivalent, officially-sanctioned way to
accomplish this goal is still to
(1) create a new dataset with a different name and copy the data to it,
(2) modify PARMLIB LNKLST defs to replace the old library in linklist
with the new at next IPL,
(3) IPL.
And if for some reason you really must have the original dataset name,
repeat the process to get back to the old name.


Can LINKLIST contain aliases?  If so:

(0) Place the alias name in PARMLIB LINKLIST defs;
 IDCAMS DEFINE ALIAS to the real data set name.
(1) create a new dataset with a different name and copy the data to it,
(2) IDCAMS DELETE ALIAS; DEFINE ALIAS to identify the new data set.
(3) LLA REFRESH to identify members in the new data set.

Why not?  (I know; I've been corrupted by UNIX symbolic links, and
imagine aliases are similar.)

-- gil


Even if an alias name is acceptable in linklist, that still wouldn't solve 
the problem.  Any system enqueues are always done on the real name at 
the time of initial allocation for linklist, and the physical location 
and size of the dataset become fixed to the linklist once the dataset 
is initially allocated and are not changed by a REFRESH.  If an active 
linklist dataset is renamed, cataloged elsewhere, or even deleted, 
linklist still points to the same original DASD tracks.


The only way to physically change the location and/or size of a linklist 
library is to deallocate and reallocate it, which requires activating a 
new linklist definition and eliminating any usage of the prior active 
linklist.  The latter is the difficult part because it can only be done 
by forcing a linklist update on  all long running address spaces, some 
of which cannot be stopped/restarted without an IPL and others of which 
cannot be restarted without disruption to end users.  As has been 
discussed in the past on this list, forcing a linklist update on an 
arbitrary running address space is an unnatural act that involves 
risk, and could in the worst case cause z/OS system failure and force an 
IPL.
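
For reference, a hedged sketch of the console sequence that forcing step 
implies, with a hypothetical LNKLST set name (check the exact syntax in MVS 
System Commands before trying it); the UPDATE command is the "unnatural act" 
carrying the risk just described:

D PROG,LNKLST
   (display the LNKLST set currently in use)
SETPROG LNKLST,ACTIVATE,NAME=LNKLST02
   (newly started address spaces pick up the new set)
SETPROG LNKLST,UPDATE,JOB=*
   (attempt to switch already-running address spaces to the new set)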


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: PDSE

2012-01-21 Thread R.S.

Well,
THE ONLY 100% POLITICALLY CORRECT WAY to solve the problem with LNKLST 
libraries:

DO NOT UPDATE.

Any exception to the rule above requires special consideration.
E.g. you can replace a single module, but you have to check the free space 
first. Then you have to schedule a PDS compress at the next service window.
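
A minimal sketch of such a compress job for the service window (hypothetical 
library name; compress-in-place applies to a PDS, a PDSE does not need it), 
using DISP=OLD per the earlier discussion, i.e. only once the library is out 
of LNKLST/LLA use:

//COMPRESS EXEC PGM=IEBCOPY
//SYSPRINT DD  SYSOUT=*
//LIB      DD  DSN=SYS2.APPL.LINKLIB,DISP=OLD
//SYSIN    DD  *
  COPY OUTDD=LIB,INDD=LIB
/*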

For mass updates - which should mean SMP/E APPLY - you should work on a 
cold set of libraries, not on the live system, so you can compress the 
libraries right after the update.


Trick: if you really need to update some library frequently, then remove 
it from the IPL-time PROGxx and add the library after IPL. There is a 
difference: the IPL-time LNKLST is "more equal" than a dynamically added one.
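
A hedged sketch of the post-IPL add, with hypothetical names (the equivalent 
LNKLST statements can instead go in a PROGxx member activated with SET 
PROG=xx):

SETPROG LNKLST,DEFINE,NAME=LNKAPPL,COPYFROM=CURRENT
SETPROG LNKLST,ADD,NAME=LNKAPPL,DSNAME=SYS2.APPL.LINKLIB
SETPROG LNKLST,ACTIVATE,NAME=LNKAPPL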


HTH
--
Radoslaw Skorupka
Lodz, Poland




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


IPLs and system maintenance was Re: PDSE

2012-01-21 Thread Clark Morris
On 21 Jan 2012 08:07:01 -0800, in bit.listserv.ibm-main you wrote:

On 01/21/2012 07:54 AM, Peter Relson wrote:
 how does IBM suggest doing a compress on a Linklist lib that needs
 compressing, inquiring minds would love to know

 There is no suggestion. This is simply not an operation that is supported
 or can be supported in general.

 Peter Relson
 z/OS Core Technology Design


So the only functionally-equivalent, officially-sanctioned way to 
accomplish this goal is still to
(1) create a new dataset with a different name and copy the data to it,
(2) modify PARMLIB LNKLST defs to replace the old library in linklist 
with the new at next IPL,
(3) IPL.
And if for some reason you really must have the original dataset name, 
repeat the process to get back to the old name.

All the other techniques that have been described here in the past to 
achieve this and bypass or defer the need for an IPL either don't 
guarantee the new library will be seen by all address spaces or carry 
some risk.  While those of us who have been around long enough are 
fairly certain of specific cases at our own installation where the risks 
of alternative methods are small enough and acceptable, it is 
understandable that IBM does not wish to endorse techniques whose 
success depends on SysProg competence and judgement and also in many 
cases upon the tacit cooperation of Murphy in keeping unrelated system 
failures from occurring in a narrow transition window during which 
libraries and PARMLIB might be in a state where successful IPL and 
recovery from system failure would not be possible (without an independent 
z/OS recovery system).

This discussion reminds me of when I was in a shop that had both
Tandem and an IBM mainframe.  The notice for a systems maintenance
upgrade on the Tandem was that the operator would institute a simple
procedure at a specific time with no outage.  I think at one time IBM
owned a computer company (Sequent?) that claimed to be able to do the
same thing.  I know that I was impressed by the Tandem capability.

Clark Morris

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: PDSE

2012-01-21 Thread John Gilmore
It may perhaps be time to restate the obvious.

LINKLST has come to be  used in situations remote from its original
narrow focus, which was to improve real and virtual program-fetch
performance from certain system datasets.

It does this well, but it was never intended that volatile program
libraries---those whose members are frequently replaced---be included in it.

The natural, appropriate update time for LINKLST is IPL time.  Some
sysprogs in some shops may well be able to cheat on this in some
situations; but the competence and judg[e]ment required to do so are
personal; and these cheats should not be institutionalized; in
particular, their use should never be delegated to people of 'junior
understanding'.  They should probably not even be talked about much.

So far as I can judge from what has been posted here, the real problem
is not an arcane, PDSE-related one.  It is that some libraries that do
not belong there have made their way into LINKLST.


-- 
John Gilmore, Ashland, MA 01721 - USA

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: PDSE

2012-01-21 Thread Scott Ford
Exactly, John


Sent from my iPad
Scott Ford
Senior Systems Engineer
www.identityforge.com



On Jan 21, 2012, at 1:27 PM, John Gilmore johnwgilmore0...@gmail.com wrote:

 It may perhaps be time to restate the obvious.
 
 LINKLST has come to be  used in situations remote from its original
 narrow focus, which was to improve real and virtual program-fetch
 performance from certain system datasets.
 
 It does this well, but it was never intended that volatile program
 libraries---those whose members are frequently replaced---be included in it.
 
 The natural, appropriate update time for LINKLST is IPL time.  Some
 sysprogs in some shops may well be able to cheat on this in some
 situations; but the competence and judg[e]ment required to do so are
 personal; and these cheats should not be institutionalized; in
 particular, their use should never be delegated to people of 'junior
 understanding'.  They should probably not even be talked about much.
 
 So far as I can judge from what has been posted here, the real problem
 is not an arcane, PDSE-related one.  It is that some libraries that do
 not belong there have made their way into LINKLST.
 
 
 -- 
 John Gilmore, Ashland, MA 01721 - USA
 
 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Doc of RECFM, LRECL, BLKSIZE for INTRDR and SYSIN?

2012-01-21 Thread Paul Gilmartin
I'm trying to find documentation on limits on RECFM, LRECL, and BLKSIZE
for SYSIN data sets and SYSOUT data assigned to INTRDR.  I find some
terse description in:

Title: z/OS V1R13 DFSMS Using Data Sets
Document Number: SC26-7410-11

3.5.2 SYSIN Data Set

Which states that the minimum LRECL for a SYSIN data set is 80.
It gives no maximum.  32760?  Perhaps an RCF is in order.  If the
information is not now available, which publication should supply
it?

But I get lost in control blocks.  I shouldn't need to be a
systems programmer to get the information I need.  I find very
little useful additional information in the JCL Reference, which
mentions that LRECL is permitted on DD * statements, but
gives no maximum.  And experiment shows that while LRECL is
syntactically allowed, it has little or no effect.
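
For concreteness, a hedged JCL sketch (hypothetical names) of the two 
allocations whose limits are in question:

//SUBMIT   EXEC PGM=IEBGENER
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  DUMMY
//SYSUT1   DD  DSN=MY.JCL.LIB(TESTJOB),DISP=SHR
//* SYSOUT data set routed to the internal reader:
//SYSUT2   DD  SYSOUT=(A,INTRDR),RECFM=F,LRECL=80
//* instream SYSIN with an explicit LRECL (not referenced by IEBGENER;
//* shown only for the DD * form that is accepted syntactically but
//* apparently has little effect, as noted above):
//WIDEIN   DD  *,LRECL=255
...
/*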

At some point, max LRECL was probably 80, the keypunch width.  Some time
later it may have been 254 for JES3 (I tested it).  In 1995, OW10527
attempted to relax this limit for JES2.  It was such a failure that
IBM chose to regress it with OW16774.  OA08145 (NF, 2003?) mentions
"Support handling of SYSIN data with an LRECL > 254", but gives no
maximum.  What publication would this appear in?  Would there be an
announcement letter?

It seems there have been enough changes that the documents haven't
kept up and now contradict each other.

Thanks,
gil

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: Doc of RECFM, LRECL, BLKSIZE for INTRDR and SYSIN?

2012-01-21 Thread Mike Schwab
Probably however big you want to define the LRECL of the PDS you
submit the job from.
It used to be that you could not edit datasets with an LRECL over 251, so
that was the limit.
32760 is probably the limit nowadays.  I know in the mid 80s I did 255
LRECL from ROSCOE.

I needed a sequential dataset for a list of volumes, and could not
create one with an LRECL under 10 bytes.

On Sat, Jan 21, 2012 at 12:58 PM, Paul Gilmartin paulgboul...@aim.com wrote:
 I'm trying to find documentation on limits on RECFM, LRECL, and BLKSIZE
 for SYSIN data sets and SYSOUT data assigned to INTRDR.  I find some
 terse description in:

    Title: z/OS V1R13 DFSMS Using Data Sets
    Document Number: SC26-7410-11

    3.5.2 SYSIN Data Set

 Which states that the minimum LRECL for a SYSIN data set is 80.
 It gives no maximum.  32760?  Perhaps an RCF is in order.  If the
 information is not now available, which publication should supply
 it?

 But I get lost in control blocks.  I shouldn't need to be a
 systems programmer to get the information I need.  I find very
 little useful additional information in the JCL Reference, which
 mentions that LRECL is permitted on DD * statements, but
 gives no maximum.  And experiment shows that while LRECL is
 syntactically allowed, it has little or no effect.

 At some point, max LRECL was probably 80, the keypunch width.  Some time
 later it may have been 254 for JES3 (I tested it).  In 1995, OW10527
 attempted to relax this limit for JES2.  It was such a failure that
 IBM chose to regress it with OW16774.  OA08145 (NF, 2003?) mentions
 "Support handling of SYSIN data with an LRECL > 254", but gives no
 maximum.  What publication would this appear in?  Would there be an
 announcement letter?

 It seems there have been enough changes that the documents haven't
 kept up and now contradict each other.

 Thanks,
 gil

 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN



-- 
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


DFHSM: Backup tape lost due to errors

2012-01-21 Thread af dc
Hello,
suppose you have a 3592-JB DFHSM cart (native, not a VTS stacked
cart) that can't be read due to physical damage, and no tape backup
duplication. Several backups are unique, with no additional versions. How
do I correct the BCDS? Run AUDIT DATASETCONTROLS(BACKUP) with NOFIX (to
check the number of corrections) and then with FIX? This is z/OS V1.12.

Any hint is welcome.
Many thanks, A. Cecilio

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: IPLs and system maintenance was Re: PDSE

2012-01-21 Thread John McKown
IBM once owned the Stratus line, a competitor to Tandem, and called it
the System/88.

http://en.wikipedia.org/wiki/Stratus_Technologies



On Sat, 2012-01-21 at 14:19 -0400, Clark Morris wrote:
 On 21 Jan 2012 08:07:01 -0800, in bit.listserv.ibm-main you wrote:
 
 On 01/21/2012 07:54 AM, Peter Relson wrote:
  how does IBM suggest doing a compress on a Linklist lib that needs
  compressing, inquiring minds would love to know
 
  There is no suggestion. This is simply not an operation that is supported
  or can be supported in general.
 
  Peter Relson
  z/OS Core Technology Design
 
 
 So the only functionally-equivalent, officially-sanctioned way to 
 accomplish this goal is still to
 (1) create a new dataset with a different name and copy the data to it,
 (2) modify PARMLIB LNKLST defs to replace the old library in linklist 
 with the new at next IPL,
 (3) IPL.
 And if for some reason you really must have the original dataset name, 
 repeat the process to get back to the old name.
 
 All the other techniques that have been described here in the past to 
 achieve this and bypass or defer the need for an IPL either don't 
 guarantee the new library will be seen by all address spaces or carry 
 some risk.  While those of us who have been around long enough are 
 fairly certain of specific cases at our own installation where the risks 
 of alternative methods are small enough and acceptable, it is 
 understandable that IBM does not wish to endorse techniques whose 
 success depends on SysProg competence and judgement and also in many 
 cases upon the tacit cooperation of Murphy in keeping unrelated system 
 failures from occurring in a narrow transition window during which 
 libraries and PARMLIB might be in a state where successful IPL and 
 recovery from system failure would not be possible (without an independent 
 z/OS recovery system).
 
 This discussion reminds me of when I was in a shop that had both
 Tandem and an IBM mainframe.  The notice for a systems maintenance
 upgrade on the Tandem was that the operator would institute a simple
 procedure at a specific time with no outage.  I think at one time IBM
 owned a computer company (Sequent?) that claimed to be able to do the
 same thing.  I know that I was impressed by the Tandem capability.
 
 Clark Morris
 
 --
 For IBM-MAIN subscribe / signoff / archive access instructions,
 send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
-- 
John McKown
Maranatha! 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: IPLs and system maintenance was Re: PDSE

2012-01-21 Thread Anne Lynn Wheeler
joa...@swbell.net (John McKown) writes:
 IBM once owned the Stratus line, a competitor to Tandem, and called it
 the System/88.

 http://en.wikipedia.org/wiki/Stratus_Technologies

minor nit: *not owned* ... IBM provided an enormous amount of money to 
rebrand and sell it as the system/88. there is some folklore regarding just 
how many system/88s were actually installed ... about how some marketing 
teams would go in after IBM was bringing along a prospect and offer them an
un-rebranded flavor at a lower price.

i marketed ha/cmp against both system/88 and stratus in much of the
system/88 period ... past posts mentioning ha/cmp
http://www.garlic.com/~lynn/subtopic.html#hacmp

part of the marketing at the time was that stratus (and system/88) was
purely fault-tolerant hardware ... but required scheduled system downtime
and reboot for many types of software maintenance. For some customers
with a 5-nines availability requirement ... a century's worth of outage
budget could be blown with each annual scheduled maintenance outage.

ha/cmp didn't have equivalent individual system uptime ... but in lots of
environments, clustered operation masked any single system outage
... providing overall cluster availability much better than
5-nines. Individual scheduled system maintenance could be done with a
rolling outage of individual cluster members. Stratus responded that they
could configure for cluster operation ... but that negated the need (and
expense) for real fault-tolerant hardware (in all those scenarios where I
was able to demonstrate clustered fault masking and recovery).

Somewhat as a result, I got asked to do a section in the corporate
continuous availability strategy document ... but after both Rochester
(as/400) and POK (mainframe) whined that they couldn't meet the
objectives, my section was pulled.

-- 
virtualization experience starting Jan1968, online at home since Mar1970

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: DFHSM: Backup tape lost due to errors

2012-01-21 Thread retired mainframer
If the entire volume is unreadable, it would seem you need to use DELVOL to
tell SMS that everything that was supposed to be on that volume is lost.

If some of the files on the tape are usable, then FREEVOL may be a better
choice or possibly RECYCLE (either combined with the appropriate BDELETEs to
skip over the unreadable files).

AUDIT does not process the media.  It processes the various CDSs and
catalogs to ensure consistency but not necessarily accuracy.

As with all datasets, anything of which you have only one copy that is
unreadable means you have zero copies.
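
For the original question, a hedged sketch of the DFSMShsm commands involved, 
with hypothetical volser and data set names (check the exact syntax in the 
DFSMShsm Storage Administration books before running anything with FIX):

HSEND AUDIT DATASETCONTROLS(BACKUP) NOFIX ODS(HSM.AUDIT.REPORT)
HSEND BDELETE PROD.PAYROLL.MASTER VERSIONS(3)
HSEND DELVOL T12345 BACKUP(PURGE)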

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf
Of af dc
Sent: Saturday, January 21, 2012 11:29 AM
To: IBM-MAIN@bama.ua.edu
Subject: DFHSM: Backup tape lost due to errors

Hello,
suppose you have a 3592-JB DFHSM cart (native, not a VTS stacked
cart) that can't be read due to physical damage, and no tape backup
duplication. Several backups are unique, with no additional versions. How
do I correct the BCDS? Run AUDIT DATASETCONTROLS(BACKUP) with NOFIX (to
check the number of corrections) and then with FIX? This is z/OS V1.12.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: PDSE

2012-01-21 Thread retired mainframer
::-Original Message-
::From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On
::Behalf Of Joel C. Ewing
::Sent: Saturday, January 21, 2012 8:02 AM
::To: IBM-MAIN@bama.ua.edu
::Subject: Re: PDSE
::
::So the only functionally-equivalent, officially-sanctioned way to
::accomplish this goal is still to
::(1) create a new dataset with a different name and copy the data to it,
::(2) modify PARMLIB LNKLST defs to replace the old library in linklist
::with the new at next IPL,
::(3) IPL.
::And if for some reason you really must have the original dataset name,
::repeat the process to get back to the old name.

If you need the original DSN, after populating the new dataset, uncatalog
the original and rename the replacement (both actions not restricted by the
current enqueue).  This will eliminate the need for the second IPL.  And
after the IPL, you can rename the original dataset and delete it even though
the DSN is enqueued by LLA (assuming you have the correct RACF access).
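
A hedged sketch of that sequence using IEHPROGM, assuming hypothetical names 
and that the replacement library sits on a different, non-SMS volume (other 
tools such as ISPF 3.2/3.4 or IDCAMS can do the same):

//RENAME   EXEC PGM=IEHPROGM
//SYSPRINT DD  SYSOUT=*
//OLDVOL   DD  UNIT=3390,VOL=SER=WORK01,DISP=OLD
//NEWVOL   DD  UNIT=3390,VOL=SER=WORK02,DISP=OLD
//SYSIN    DD  *
  UNCATLG DSNAME=SYS2.APLIB
  RENAME  DSNAME=SYS2.APLIB.NEW,VOL=3390=WORK02,NEWNAME=SYS2.APLIB
  CATLG   DSNAME=SYS2.APLIB,VOL=3390=WORK02
/*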

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: PDSE

2012-01-21 Thread Shmuel Metz (Seymour J.)
In
capd5f5o8bcdv_4uouox4krsf+xgho0-rjjvp9fdfr_m31oy...@mail.gmail.com,
on 01/21/2012
   at 01:27 PM, John Gilmore johnwgilmore0...@gmail.com said:

LINKLST has come to be  used in situations remote from its original
narrow focus, which was to improve real and virtual program-fetch
performance from certain system datasets.

That wasn't its original purpose.
 
-- 
 Shmuel (Seymour J.) Metz, SysProg and JOAT
 ISO position; see http://patriot.net/~shmuel/resume/brief.html 
We don't care. We don't have to care, we're Congress.
(S877: The Shut up and Eat Your spam act of 2003)

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


DFSORT manual humour

2012-01-21 Thread R.S.

Quote from DFSORT manual:
---
For best performance, specify an emulated 3390-9 device (such as RAMAC) 
or another high-speed IBM disk device as the default, and avoid 
specifying a tape, virtual (VIO), or real 3390-9 devices as the default.

---

I have to admit the manual is a little bit obsolete - it is dated 
2009. However, I'm still marveling at the idea of high-speed RAMAC devices.




BTW, now seriously:
I can specify the number of dynamically allocated work datasets via DYNALOC, 
but the size of those datasets is controlled by DFSORT and the user cannot 
change it.

Is that true?
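
For reference, a hedged sketch of the run-time options in play (hypothetical 
job; the JCL-level form of the option is DYNALLOC, and as I understand it 
FILSZ only feeds DFSORT's own space estimate rather than setting the work 
data set size directly):

//SORT     EXEC PGM=SORT
//SYSOUT   DD  SYSOUT=*
//SORTIN   DD  DSN=MY.INPUT.DATA,DISP=SHR
//SORTOUT  DD  DSN=MY.OUTPUT.DATA,DISP=(NEW,CATLG),UNIT=SYSDA,
//             SPACE=(CYL,(100,50),RLSE)
//SYSIN    DD  *
  OPTION DYNALLOC=(SYSDA,4),FILSZ=E50000000
  SORT FIELDS=(1,8,CH,A)
/*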

--
Radoslaw Skorupka
Lodz, Poland




--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: DFSORT manual humour

2012-01-21 Thread Richard Pinion
Probably the IBM 9393 RAMAC, mid to late 1990s.

Richard and Vickie Pinion

--- r.skoru...@bremultibank.com.pl wrote:

From: R.S. r.skoru...@bremultibank.com.pl
To: IBM-MAIN@bama.ua.edu
Subject: DFSORT manual humour
Date: Sun, 22 Jan 2012 04:30:43 +0100

Quote from DFSORT manual:
---
For best performance, specify an emulated 3390-9 device (such as RAMAC) 
or another high-speed IBM disk device as the default, and avoid 
specifying a tape, virtual (VIO), or real 3390-9 devices as the default.
---

I have to admit the manual is a little bit obsolete - it is dated 
2009. However, I'm still marveling at the idea of high-speed RAMAC devices.



BTW, now seriously:
I can specify the number of dynamically allocated work datasets via DYNALOC, 
but the size of those datasets is controlled by DFSORT and the user cannot 
change it.
Is that true?

-- 
Radoslaw Skorupka
Lodz, Poland



--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN





--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: DFHSM: Backup tape lost due to errors

2012-01-21 Thread Joel C. Ewing

On 01/21/2012 04:06 PM, retired mainframer wrote:

If the entire volume is unreadable, it would seem you need to use DELVOL to
tell SMS that everything that was supposed to be on that volume is lost.

If some of the files on the tape are usable, then FREEVOL may be a better
choice or possibly RECYCLE (either combined with the appropriate BDELETEs to
skip over the unreadable files).

AUDIT does not process the media.  It processes the various CDSs and
catalogs to ensure consistency but not necessarily accuracy.

As with all datasets, anything of which you have only one copy that is
unreadable means you have zero copies.

-Original Message-
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf
Of af dc
Sent: Saturday, January 21, 2012 11:29 AM
To: IBM-MAIN@bama.ua.edu
Subject: DFHSM: Backup tape lost due to errors

Hello,
suppose you have a 3592-JB DFHSM cart (native, not a VTS stacked
cart) that can't be read due to physical damage, and no tape backup
duplication. Several backups are unique, with no additional versions. How
do I correct the BCDS? Run AUDIT DATASETCONTROLS(BACKUP) with NOFIX (to
check the number of corrections) and then with FIX? This is z/OS V1.12.


I've said it before, but will say it again:  modern tape media has such 
a large capacity that a single dfhsm cartridge can contain an incredibly 
large number of datasets.  It is almost inevitable that loss of a single 
dfhsm ML2 or Backup cartridge will impact something you care about or 
can't afford to lose (or are legally required to retain), and it is also 
inevitable that single cartridges will occasionally be physically 
damaged.  Over the long haul, you really can't afford NOT to duplex all 
dfhsm carts (and for that matter, non-dfhsm carts that contain data you 
can't afford to lose).


All it should take is the potential loss of one critical, irreplaceable 
dataset to justify to management the cost of the extra cartridges and 
tape drives required.  Tape duplexing in some form with off-site storage 
is also a requirement for any reasonable installation Disaster Recovery 
plan, so it ought to be cost-justified on those grounds alone, with a 
side benefit that DR duplex copies can also save your backside from 
single media disasters.  If management lacks the wisdom to support 
duplexing, file the recommendation away and bring it back out when the 
next inevitable media failure and massive data set loss occurs.


The duplex support in dfhsm makes duplexing ML2 and BACKUP carts 
trivially easy to do as long as you have enough drives.  Similar support 
should also be available in software/hardware solutions that stack 
multiple non-dfhsm virtual volumes on physical volumes (e.g., CA-VTape), 
where similar exposures exist and where duplexing in some form is also 
highly recommended.
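
For anyone wanting to turn it on, a hedged sketch of the ARCCMDxx SETSYS 
statement involved (check the DFSMShsm Storage Administration Guide for your 
release for the full operand list):

SETSYS DUPLEX(BACKUP(Y) MIGRATION(Y))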


--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN


Re: PDSE

2012-01-21 Thread Joel C. Ewing

On 01/21/2012 04:21 PM, retired mainframer wrote:

::-Original Message-
::From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On
::Behalf Of Joel C. Ewing
::Sent: Saturday, January 21, 2012 8:02 AM
::To: IBM-MAIN@bama.ua.edu
::Subject: Re: PDSE
::
::So the only functionally-equivalent, officially-sanctioned way to
::accomplish this goal is still to
::(1) create a new dataset with a different name and copy the data to it,
::(2) modify PARMLIB LNKLST defs to replace the old library in linklist
::with the new at next IPL,
::(3) IPL.
::And if for some reason you really must have the original dataset name,
::repeat the process to get back to the old name.

If you need the original DSN, after populating the new dataset, uncatalog
the original and rename the replacement (both actions not restricted by the
current enqueue).  This will eliminate the need for the second IPL.  And
after the IPL, you can rename the original dataset and delete it even though
the DSN is enqueued by LLA (assuming you have the correct RACF access).



This technique is only possible if it is acceptable for the replacement 
DSN to be on a different volume, AND the original DSN volume is not an 
SMS volume, but I too have used this in the past when system datasets 
were divided between two non-SMS volumes and volume residency didn't 
matter.


 One must also recognize that there is a slight risk here: that between 
the uncatalog step and the final rename there is a window during 
which the system is in a state where a successful IPL may not be 
possible should the system crash; so to not tempt fate, you don't want 
to do this during a storm when the UPS is down or elongate the window by 
allowing yourself to be interrupted in the middle of the process -- and 
having an alternative recovery system available for the unlikely 
worst-case scenario is goodness.  I was never hit by a system failure in 
the middle of one of these sequences, but I am convinced it is only 
because I was sufficiently paranoid and Murphy knew I had recovery 
alternatives at hand. :-)


If the object is to find, if possible, a procedure which passes through 
no window of risk where a system outage could leave you unable to IPL, 
then to preserve the original DS name I think you are stuck with two 
IPL's --  an unpleasant enough prospect that most SysProgs quickly learn 
alternatives (like the one above) that accept some tolerable level of risk.

--
Joel C. Ewing,Bentonville, AR   jcew...@acm.org 

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN