Re: ServerPac install
> About ZFS (not ServerPac)
>
> What about recycling ZFS with F OMVS,STOPPFS=ZFS and then replying with R
> to: nn BPXF032D FILESYSTYPE ZFS TERMINATED. REPLY 'R' WHEN READY TO
> RESTART. REPLY 'I' TO IGNORE. ?

Last time I tried that, it meant shutting down all of OMVS because the ROOT was a ZFS. And while you can theoretically shut down OMVS, too, there are several address spaces in the system privileged enough to prevent a shutdown of OMVS. I believe DB2 is among them.

Barbara

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
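As a sketch, the recycle described in the quote is a console dialog (the WTOR number nn varies from IPL to IPL):

```
F OMVS,STOPPFS=ZFS
*nn BPXF032D FILESYSTYPE ZFS TERMINATED. REPLY 'R' WHEN READY TO
RESTART. REPLY 'I' TO IGNORE.
R nn,R
```

As the reply text warns, replying I abandons the restart; and as noted above, if the root file system itself is a ZFS, stopping the PFS effectively takes OMVS down with it.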
Re: ServerPac install
> Updated my RACF data set profile
> for Z220.**, IPL'ed to cycle ZFS, and it worked fine.

Just as an FYI: you can avoid an IPL to cycle ZFS for RACF DATASET reasons by using SETROPTS GENERIC(DATASET) REFRESH. I don't remember which book I found that in, but it is heavily sprinkled with warnings NOT to do that on a regular basis. Refreshing class DATASET should be left for emergencies like this one, when the permission is for an essentially non-recyclable address space. I am guessing that you used a test system for install where an IPL didn't matter.

Barbara
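A sketch of the sequence, assuming the profile change from the quoted post (the group name ZFSADMIN is hypothetical):

```
PERMIT 'Z220.**' CLASS(DATASET) ID(ZFSADMIN) ACCESS(ALTER)
SETROPTS GENERIC(DATASET) REFRESH
```

The REFRESH makes the in-storage generic DATASET profiles pick up the change without an IPL; per the warnings mentioned above, reserve it for emergencies.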
Re: How to Display Data from a Dataspace using IPCS ?
> SLIP
> SET,IF,PVTMOD=(MEASMCMN,0ADA,0ADC),DSPNAME=('MEASMCMN'.*),JOBNAME=MEASMCMN,ID=M,END
> The address in the data space I am looking for was:
> Access register: 01010039 Register: 00025238
> I did not know the name of the data space. DSPNAME(IRR1) is not in the
> dump, but I did find ASID(X'0009') DSPNAME(IEAM003F) in the dump and looked
> inside... I found the MDB!

What data space does 01010039 translate to? You will need to dump that data space on the SLIP command (or a DUMP command, as the case may be), otherwise IPCS will tell you 'storage not available'. Using option 4 and ld in front of your dump, you can see which data spaces actually got dumped. There are always some (system) data spaces that get a snippet dumped (as relevant to the time of error). Be aware that the storage LIST command (you cited it correctly) does not address a data space using the ALET; it uses the data space name.

Barbara
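A hedged sketch of the IPCS side: LIST addresses a data space by name, never by ALET. The address comes from the register value quoted above; the length is arbitrary:

```
LIST 25238. ASID(X'0009') DSPNAME(IEAM003F) LENGTH(X'100')
```

If the named data space was not included on the SLIP or DUMP command, this is where IPCS answers 'storage not available'.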
Re: IPCS Magicians (was: Smaller Private Area in DR)
> I know, but how do you go back to where you were? Say you're following a
> chain of linked addresses and you want to go back. Typing in stuff to
> remember the addresses would be a pain. Web browsers give you all that for
> free.

As far as I am concerned, one works differently in the z/OS world. I use the back button a lot on windoze, and a lot of internet sites (IBM included) make it hard for me to use it at all. Sometimes it gets perverted to have other meanings (provided I allow javascript on the page, which I mostly don't), and often I get told that information needs to be resent. And if all of the pages displayed have the same name, I have a hard time remembering what my 'starting page' was, so to speak. And I will not patiently click on back and back and back and... you get my meaning.

For IPCS, I learned early to start out with an address and assign a symbol to it. I still do this (p is always the PSW for me), even if I don't know whether I will need to go back to that address. Same for save areas - the p gets a number assigned so I know how far back I am. I do the same for linkage stack entries (split screen is your friend here, too). And chains of addresses I display using the RUNCHAIN command, possibly with another command interspersed to give me formatted output. No need to go back and forth.

> Well that's odd because back in the days when I worked on Java for z/OS, I
> wrote the code to extract a java heapdump from an sdump. Way out of date
> but you can still find references to "java -cp svcdump.jar
> com.ibm.zebedee.commands.Convert" here:
> https://publib.boulder.ibm.com/httpserv/cookbook/Operating_Systems-zOS.html
> The modern way to do that is using jdmpview, see
> https://www-01.ibm.com/support/knowledgecenter/SSYKE2_7.0.0/com.ibm.java.lnx.70.doc/diag/tools/dump_viewer_dtfjview/dump_viewer_commands.html

It was the Java development in Hursley that told me there is no tool, about 5 years ago. Maybe your tool was internal use only. Either I had not used the correct docs when I searched for it, or it just wasn't there at the time. Fortunately these days I don't have to deal with Java on z/OS anymore - other than to install it. But thanks for the link.

> Oh, and besides not liking IPCS, the other reason I wrote sdump analysis
> software in Java was that about 15 years ago someone had written a Rexx
> script to extract useful info from a dump using IPCS but in some cases it
> would take about 24 hours to run!

Who said that Rexx script was well written? And how is it any different from Java on z/OS? Java applications were always CPU hogs back when I had to deal with them. I am still pretty sure that having your tool analyze a standalone dump (say, an RSM problem with large real storage amounts in the LPAR) would take forever, if the tool can even deal with an SADUMP. More than one support person doesn't have a clue how to read an SADUMP properly, starting with determining what the error ASIDs are (when all CPUs are in a no-work wait).

And now I'm really off my soap box. :-)

Barbara
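A hedged sketch of the symbol-and-runchain style described above (the address, ASID, link offset, chain count and lengths are all hypothetical; RUNCHAIN sets the symbol X to each element it visits, so an EXEC subcommand can format every entry in one pass):

```
EQUATE P 7FDE1038. ASID(X'0041')
RUNCHAIN ADDRESS(P) LINK(X'0C') CHAIN(50) EXEC((LIST X LENGTH(X'60')))
```

With the chain walked this way, there is nothing to "go back" to: every element is formatted in the output, and P still marks the starting point.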
Re: IPCS Magicians (was: Smaller Private Area in DR)
> Plus when you're displaying register contents or stack traces, the
> addresses are displayed as links.

IPCS has been doing that since I started working with it. Use %, ? and (forgot the 64bit-address character) when displaying active storage. Equate some address with a symbol, and you can list it using the symbolic name. IPCS automatically gives you a lot of symbols, generated when the dump is initialized. I should mention that even Fault Analyzer does links when you use the interactive panels. FA is geared towards application programmers, though.

A tool to more easily read a Java dump is a good thing. Back when we had a lot of problems with our problem being queued from component to component because nobody had a clue, I asked IBM for something to help analyze a Java dump from inside an sdump. I was told in no uncertain terms that nothing like this exists. I don't think that the tool described by David is much help when analyzing a system problem, though. Knowledge of what goes on in z/OS and a good tool (IPCS) to ask a dump good questions are what's required, IMO.

Barbara
Re: IPCS Magicians (was: Smaller Private Area in DR)
> :>If someone showed interest in becoming such a fabled magician, what
> direction would you point said someone to?
>
> Start with simple SYSMDUMPs and work your way up.
>
> Learn the trace table.

Very good advice. The system trace table can be more interesting than any thriller you could watch/read. Become familiar with how a process appears in the system trace table, for instance what happens when a TCB gets attached and then dispatched for the first time. Learn how to tell if you're executing in xmem mode, on what processor, in which addressing mode.

Become very familiar with the 'Diagnosis' books. The 'Reference' is invaluable for the current system release, and 'Tools and Service Aids' tells you a lot about SADUMPs, dumps and c/traces. The 'component reference' part of the reference manual shows quite a few of the reports that IPCS can produce. Of course, you need to learn which report shows which areas.

Get your hands on any SHARE presentation that deals with problem determination and with explanations of how z/OS works. You need to learn how to ask a dump the right question, and you can only do that if you have an inkling of what happens when you see certain messages.

And finally: practise, practise, practise. Write a small Assembler program and then debug it using IPCS (and not an interactive debugger). Become familiar with save areas and how to find them in the dump, including when some code suddenly tells you F1SA for a pointer.

Unfortunately I am not aware of any classes where you can learn IPCS. I learned on the job, watching an old-timer using IPCS in command mode (not using the analysis panels). And I had several years of practice, including code reading when I was IBM level 2 for the BCP.

Barbara
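As a starting point for reading the trace table, a sketch (the ASID is hypothetical; TIME(LOCAL) makes the timestamps readable):

```
IP SYSTRACE ALL TIME(LOCAL)
IP SYSTRACE ASID(X'0041') TIME(LOCAL)
```

The first form shows all work on all processors; the second narrows the view to one address space, which is usually where you start matching ATTACH, DSP and SVC entries to the process you are following.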
Re: Smaller Private Area in DR
> I wonder how many other people routinely ensure they have a usable dump
> available back at the office after any test day.
> Barbara maybe ... ;-)

It was usually a fight to get a dump saved, and in some cases to get the dump taken in the first place, especially since I am the only one in the installation who can actually *read* the dump afterwards. No dumps got taken if I wasn't there to demand them. In my last job, the boss's boss eventually even gave the OK to take a standalone dump of an AE machine during the day because he *knew* I would find something to get us closer to a solution. Took a few years to get there, though. And never mind that *taking* the sadump should not be the problem; getting the operators to find the docs and act upon them took at least twice as long as the actual sadump did.

I believe using IPCS and reading dumps is a dying art practised only by a few magicians these days. A shame, really.

Barbara
Re: vary ,console with an IEF238D outstanding.
> - I try to vary a 3390 device ONLINE that was previously OFFLINE. This works
> without any problems and I can access the associated volume immediately.

That means that the SYSIEFSD ENQ had a minor of VARYDEV, right? You may want to search for old APARs with keywords SYSIEFSD, Q4 and VARYDEV.

> - I try to vary the same 3390 device OFFLINE, and I am met with: IEF524I
> , VOLUME xx PENDING OFFLINE. (I reckon that the cause of this could
> lie somewhere else altogether). D GRS,RES=(SYSIEFSD,*) does not list my user
> as requesting/owning any ENQ, so I guess it is some other resource that is
> needed.

I would consider this typical z/OS behaviour - as long as *anything* is allocated to a UCB, that UCB will not go offline. Instead of checking for an ENQ, you should have checked the actual UCB - it probably still had a count of allocated jobs/ASIDs larger than zero. Also, IIRC, setting a UCB offline (or online, for that matter) just flips the bit in the UCB control block - it does not really mean that you can access the volume (after a V online). I haven't tested this recently, but I seem to remember it is possible to vary online a device that has no paths.

> So I'm not sure if I can follow you completely, Skip and Don, because I can
> still vary other 3390 devices online, but not 3270 devices to console. From
> another perspective it's only normal that I can vary 3390s online, because
> that's exactly what needs to happen to make our (erroneous) job continue
> processing. But that shouldn't necessarily break the varying of consoles.
> Unless I'm completely missing something, which is absolutely possible since I
> don't (yet?) have any knowledge of the internals.
> Like I said, this doesn't break the OS, but it's interesting behavior
> nonetheless.

Again a guess on my part: I believe the SYSIEFSD ENQ is needed exclusively because, to vary a device to be a console, the UCB needs to get pinned. My (1.13) console is still pinned, so console restructure probably hasn't changed that part.

You check this by going into IPCS browse, selecting 'active' as the 'dump' source and then issuing 'ip listu ' with the UCB number. Field UCBSTAT has bit UCBALOC (x'80'), and that tells the system if anything is allocated to that UCB. UCBASID would show the ASID, according to the UCBALOC description (for my console it shows x'', though). And the listu formatter tells you if the UCB is pinned, in clear text.

Barbara
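Putting the two checks above side by side as a sketch (the device number 0A80 is hypothetical; the first is a console command, the second an IPCS subcommand issued against the 'active' source):

```
D GRS,RES=(SYSIEFSD,*)
ip listu 0A80
```

The GRS display shows contention on the allocation resource; the LISTUCB output shows UCBSTAT/UCBALOC, the allocated-ASID field and the pinned state, which together explain a PENDING OFFLINE.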
Re: vary ,console with an IEF238D outstanding.
> IEE799D VARY CONSOLE DELAYED - REPLY RETRY OR CANCEL.
>
> A VARY CONSOLE command requested that a console be placed online or offline.
> The system could not process the command due to other processing in the
> system such as:
> - Another VARY CONSOLE command
> - Device allocation in progress

Just guessing here, but I think there is ENQ contention, probably SYSIEFSD Q4 or VARYDEV or both. Can you still check?

Barbara
Re: How is CAMASTER started?
> CAMASTER is not started out of ICHALTSP in the case where RACF is the
> security product.

That leaves me with my original question: how is CAMASTER started? As I said before, ICHALTSP is certainly in LNKLST (since we will run that LPAR with either RACF, ACF2 or TSS, depending on the testcase - we are still converting our RACF data bases to something usable under ACF2 or TSS), but right now we still run with RACF and CAMASTER still starts. How? That CNZ exit?

Barbara
Re: How is CAMASTER started?
> I'd say "sort of". But only when you are running ACF2 or Top Secret.

We plan to run with either TSS or ACF2 in that LPAR. Currently we still use RACF there. So we already have the (ACF2) library in linklist, but don't actually run an alternate security product yet. Which makes me think that the IBM code just searches for an agreed-upon module name (ICHALTSP) and, if found, gives control to that module. That routine in turn will start CAMASTER, is my guess.

> Another "early" opportunity to start an address space is by an exit
> routine associated with the CNZ_MSIEXIT exit. z/OS does not have any
> control over what happens in this exit.

Thanks again, Peter!

Barbara
Re: How is CAMASTER started?
> - ICHALTSP is an interface made available to the owner of the alternate
> security product being used on this system, as a means of starting that
> alternate security product in the same "window" when RACF is started,
> i.e., before started tasks and jobs can start.
>
> FWIW, "ALTSP" does indeed stand for ALTernate Security Product.

Thanks Peter, you've saved me from searching for the string ICHALTSP in all IBM modules. I figured that this was the mechanism to get CAMASTER up and running, since a true API *requires* the caller to be in control first in order to call the API. So CA (mis)uses this interface/agreement to get themselves a trusted address space for *all* of their products, not just ACF2 and TSS, which (according to the CA website) were not even the first exploiters of CAMASTER.

> On systems that run an unmodified SAF (as supplied by IBM), all address spaces
> that start during NIP are initially TRUSTED and none has a user ID, because
> there are no security services available to assign anything else that early in
> the system's life. They also only have limited services available for their
> use. Later, after the security services become available during MSI, some of
> those early address spaces may choose to transition into full-service address
> spaces, and if so they would acquire proper security identities, and possibly
> lose their TRUSTED status.

Thanks Walt, for clarifying this. As far as I am concerned, just about *every* address space should have an associated userid, but most definitely a vendor's address space! I had noticed that the IBM docs on which address spaces *need* to have a userid assigned are a bit opaque, back when I introduced the * profile in class STARTED with a userid without any rights on my ADCD RACF data base; so, being cautious, I assigned a userid to just about every address space (with the exception of *master*). I also routinely show IRR812I, so I know now that *MASTER*, PCAUTH, RASP, TRACE, GRS, SMSPDSE, CONSOLE and ALLOCAS are the only address spaces that don't get a userid assigned in STARTED.

Barbara
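The catch-all STARTED profile described above would look something like this sketch (the userid STCDFLT and group STCGROUP are hypothetical; TRACE(YES) is what makes RACF issue IRR812I whenever the generic profile is used):

```
RDEFINE STARTED ** STDATA(USER(STCDFLT) GROUP(STCGROUP) TRACE(YES))
SETROPTS RACLIST(STARTED) REFRESH
```

Any started task not covered by a more specific profile then runs under the deliberately powerless default userid, and the IRR812I messages show which tasks still need their own STARTED entry.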
Re: How is CAMASTER started?
Scott,

> We made our case to IBM, they agreed and provided a licensed API.

Am I correct in assuming that the API provides IBM with a way to start an address space (via one of the IBM IRIMs used for initializing the system) by using a CA module in the ASCRE? So essentially the first CA module kicking everything else off gets called by IBM code? Why does CAMASTER make itself a TRUSTED address space, instead of using an assigned userid in STARTED?

Barbara
Re: How is CAMASTER started?
> There's no module named ICHALTSP in IBM libraries (LINKLIB, CSSLIB,
> MIGLIB,...). It's only in CAW0LINK. (ISPF -> DDLIST -> LINKLIST -> MEM
> ICHALTSP).

According to the IBM-MAIN post, ICHALTSP is an (IBM) IRIM that receives control in the master scheduler (but you're right, I cannot find it in my SMP/E environment either). I have found ICHALTSP in different CA libraries (Common Services and ACF2; haven't checked TSS yet). IKJEFXSR is not in LPA, but it is in the (IBM) LINKLIB and would be found there before any CA module could be found, so I still don't understand how and what CA intercepts to start CAMASTER. (Not having installed either CA Common Services nor ACF2 nor TSS, I still don't think that there was a job to relink anything into the IBM target libraries with an alias.)

Barbara
Re: How is CAMASTER started?
> Search IBM-MAIN archive (or google) for "TEC570878" or "IKJEFXSR"

Thanks Norbert. I can see that CAMASTER is started on this system, but I still don't know how this is actually done, since no libraries containing this frontend are loaded in LPA via LPALSTxx or command. Right now neither IKJEFXSR nor ICHALTSP can be found in active LPA:

D PROG,LPA,MOD=IKJEFXSR
CSV550I 06.26.56 LPA DISPLAY 748
IKJEFXSR WAS NOT FOUND IN THE LPA

D PROG,LPA,MOD=CCSEFXSR
CSV550I 06.27.19 LPA DISPLAY 750
CCSEFXSR WAS NOT FOUND IN THE LPA

D PROG,LPA,MOD=ICHALTSP
CSV550I 06.29.45 LPA DISPLAY 752
ICHALTSP WAS NOT FOUND IN THE LPA

All CA libraries are concatenated behind the IBM ones in linklist, so this does not really explain it.

Barbara
How is CAMASTER started?
Now that CAMASTER has become a mandatory address space, I am asking myself how that address space is actually started. The CA documentation makes it sound as if it were magic that starts it, but I don't believe in magic. There are knowledge base articles out there that say that starting CAMASTER at the SSI is too late (which makes sense, especially if a customer uses TSS or ACF2 for security). I would still like to know *how* it is started. Is IBM checking for the presence of certain modules in LPA and then starting it during MSI? Does anyone know?

What I also dislike is the fact that CA states that absolutely no security definitions are required, since the address space is TRUSTED. Sounds like they give themselves the TRUSTED attribute, since no entry in STARTED is required and my default userid without any rights for non-defined STCs was not taken. The address space has all kinds of privileges via SCHEDxx and is obviously APF authorized.

Barbara
Fw: Question for RDT system users - message issuance
> CNZ4200I CONSOLE L700 HAS FAILED. REASON=IOERR
> From that point forward, we did not get any IEF403I or IEF404I messages anymore

To answer my own question: the console has MONITOR-L set in CONSOLxx, and there was no SETCON MN,JOBNAMES=ON issued anywhere (this is an ADCD system, after all!). I haven't tested this yet, but it appears that not issuing the SETCON MN command is responsible for IEF403I/404I getting tied to the availability of a console.

Barbara
Question for RDT system users - message issuance
This is a question for those of you who run z/OS on an RDT system (presumably also those on a zPDT system), i.e. on emulated hardware. Yesterday we had to restart the remote viewer (VNC) I use to log in to Linux to start z/OS. Restarting VNC kills the console:

CNZ4200I CONSOLE L700 HAS FAILED. REASON=IOERR

From that point forward, we did not get any IEF403I or IEF404I messages anymore (which we noticed when we automatically scanned the hardcopy log during testing for exactly these messages). When I got the master console back this morning, both IEF403I and IEF404I miraculously started to reappear. I was/am completely unaware that there is a connection between the issuing of IEF403/404 and the availability of a console to z/OS.

Has anyone else seen this on an RDT (zPDT) system? (RDT 8.5 with z/OS 1.13, I haven't gotten around to upgrading *that* box to 9.1 yet.)

Barbara
Re: Notify for XMIT
> To clarify, Exit 13 used to be required to get notification because the
> default was NO. Now it's controlled by this keyword in the init deck:
>
> NJEDEF MAILMSG=YES
>
> Since YES is the default, unless you code MAILMSG=NO, you should be getting
> the notify. This from Knowledge Center regarding HASP549:

Thanks Skip, you have just solved the mystery (to me) of why we don't get the notify messages on the 2.1 (real) system, but we do get them on the 1.13 (ADCD) systems. Someone has explicitly set MAILMSG=NO on NJEDEF on that 2.1 system. I'll go and ask why. :-)

Barbara
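A quick way to check the live setting should be the JES2 display command (a sketch; I have not verified the exact output format):

```
$D NJEDEF,MAILMSG
```

That saves digging through the init deck to see whether MAILMSG=NO was coded.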
Re: Performance question - handling of max CPU % utilization
> Question: how do you performance guys and gals present those maximums? Or how
> do you prove that machines are heavily used? Do you use averages of those
> maximum CPU% utilization or what do you use? Do you combine all the LPARs and
> then work out the max? Any trending or statistical analysis methods to
> consider?

I used to use SMF type 70 records (PR/SM data) and compiled graphics (SAS, MXG) that show what each LPAR used in each interval. Being 'flatlined' at the top of the box certainly showed that the machine was overloaded, despite the individual LPARs still showing room; they could not get it because the physical processors were busy elsewhere. I had a separate graphic for zIIPs, as they had a different 100% - they were much faster than our GCPs.

Barbara
Re: DUMP parm recommendation
> My WebSphere guy asked for this:
> SDATA=(CSA,SQA,RGN,TRT,GRSQ,LPA,LSQA,SUM,NUC,PSA,SWA,ALLNUC)

Looks like the general setting that IBM support asks for, copied from one component to the other without any thought to content or validity.

> Advice? Thoughts?

Replace the ALLNUC with NUC, or ditch NUC altogether. Unless you are debugging a problem in RSM/SRM/DISPATCHER (components that really truly have their code in the nucleus), I don't see any reason to artificially increase dump sizes by using ALLNUC. After all, if you ask IPCS about a nucleus module, you will always get the actual csect name (and not just the lmod name, which would require either an AMBLIST of the lmod or the full module hopefully dumped; if not, support will still need the AMBLIST to determine the actual csect). And if any of those components writes a dump, they should have included what they need. Same for SLIP traps that support for those components hands out.

If you are running in a sysplex environment, I would add XESDATA and COUPLE, just in case.

Barbara
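Putting the advice above together, the adjusted parm for a sysplex would look something like this (a sketch, keeping NUC rather than ALLNUC):

```
SDATA=(CSA,SQA,RGN,TRT,GRSQ,LPA,LSQA,SUM,NUC,PSA,SWA,XESDATA,COUPLE)
```

That keeps the dump usable for a WebSphere problem without dragging the whole nucleus along, and adds the XES/XCF data that is cheap insurance in a sysplex.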
Re: Alter TRUST status on a certificate
> You misspelled WebSphere.
> Try this with a capital S and no space. Label must exactly match.
> racdcert CERTAUTH alter(label('WebSphereCA')) notrust

Donald hit the nail on the head. Teaches me to always copy over the label, no matter how short it is.

racdcert alter(label('WebSphereCA')) notrust certauth

did the trick. Now I can try what I originally attempted - export the thing to PKCS12B64, since it seems to be the only one with a private key, and apparently I cannot export to PKCS12 without the certificate having a private key (I got an error when I attempted that for another certificate). Not that the past almost two weeks of reading would have spelled that out in black and white, that I remember.

Thanks for lending the second, third, ... pair of eyes.

Barbara
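The export step described above would look something like this sketch (the output data set name and password are hypothetical; PKCS12B64 requires the certificate to have a private key, as noted):

```
RACDCERT CERTAUTH EXPORT(LABEL('WebSphereCA')) +
         DSN('MYUSER.WASCA.P12B64') +
         FORMAT(PKCS12B64) PASSWORD('secret')
```

The same label-must-match-exactly rule applies here as it did for the ALTER.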
Alter TRUST status on a certificate
All, I am new to this certificate stuff. I have inherited this certificate in my RACF data base (apparently the only one that has a private key somewhere, no ICSF in use, and I have all RACF privileges):

Label: WebSphereCA
Certificate ID: 2QiJmZmDhZmjgeaFguKXiIWZhcPB
Status: TRUST
Start Date: 2009/11/12 07:00:00
End Date: 2019/01/01 06:59:59
Serial Number: >00<
Issuer's Name: >CN=WAS CertAuth for Security Domain.OU=BBNBASE<
Subject's Name: >CN=WAS CertAuth for Security Domain.OU=BBNBASE<
Key Usage: CERTSIGN
Key Type: RSA
Key Size: 1024
Private Key: YES
Ring Associations: *** No rings associated ***

I want to change the trust status to NOTRUST, for which I currently don't see a way (RLIST DIGTCERT tells me it has application data=irrcerta):

racdcert alter(label('Websphere CA')) notrust
-> IRRD105I No certificate information was found for user myuserid.

racdcert alter(label('Websphere CA')) notrust id(irrcerta)
-> IRRD102I The user ID specified is not defined to RACF
(same for IBMUSER, which was the id it was installed under)

racdcert alter(label('Websphere CA')) notrust certauth
-> IRRD107I No matching certificate was found for this user.
(Is this irrcerta? If so, why isn't it found?)

racdcert alter(label('Websphere CA')) notrust site
-> IRRD105I No certificate information was found for user irrsitec.

How do I address this certificate?

Barbara
Re: Redbook on DFHSM and PDSEs Control datasets
> Do you think an old Dino like me could use it to set up DFHSM for the first
> time? I need to on our development environment.
>
> SYS1.SAMPLIB(ARCSTRST) is the HSM starter set and will get you going.

Having gone through this exercise from scratch on an ADCD system about 2 years ago, I can say that setting up HSM is not all that hard. Admittedly, I only set it up for migration (and not for backup). I also used the starter set and then went over all the parms there are to see if they apply to me. The problem with the starter set is that it references defined storage classes, storage groups and management classes, and it alludes to ACS routines, but it does not show the ACS routines, so *that* took me a while without a good example to start from.

Barbara
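For what it's worth, a minimal management class ACS routine sketch to pair with the starter set (all names are hypothetical: the TEST.** filter and the management class MIGONLY, which would be defined with migration attributes but no backup, matching the migration-only setup described above):

```
PROC MGMTCLAS
  FILTLIST TESTDSN INCLUDE(TEST.**)
  SELECT
    WHEN (&DSN = &TESTDSN)
      SET &MGMTCLAS = 'MIGONLY'
    OTHERWISE
      SET &MGMTCLAS = ''
  END
END
```

Data sets matching the filter get the migration-oriented class; everything else gets no management class and is left alone by HSM.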
Re: Setting up a new user for OS/390 and Z/OS
> Anyone have the JCL to set up a new user for TSO and other services for both
> OS/390 and z/OS?
>
> Getting bored using IBMUSER, and when trying to use TSO commands to do it I have
> made a bit of a mess, as it does not seem to work correctly (this could be
> me!), and no, I do not wish to use the other default names but a specific one
> or two.
>
> This is assuming that the procedure is the same for both :)

Sounds like you're running on an ADCD system with their (IMO) messed up RACF data base. I went through this exercise a while back - you want to think about your user structure in the first place and start designing what you want to do. Define groups that you give authority to, then hand out that authority to those groups. Then define new users and make their default group one of those new groups. Then you can clean up the STC ids (and whatever userids ADCD came with). I don't think that I have much left of the original ADCD setup (well, I still have *that* IBMUSER, and I still have their SYS1 group, but I got rid of all the UACCs and migrated to generic profiles and to using AUTOUID/AUTOGID - AIM stage 3).

These days, this is the job I use to define a new user on a z/OS 1.13:

//*
//*** ACHTUNG: CAPS OFF
//*
//PATHS    EXEC PGM=IKJEFT1B
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
 PROF MSGID WTPMSG
 MKDIR '/u/umnt/userid' MODE(7,5,5)
//DEFALIAS EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE ALIAS(NAME(USERID) RELATE(USERCAT.TSOUSER)) +
  CAT(CATALOG.MASTER)
//*
//DEFRACF  EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSLBC   DD DSN=SYS1.BRODCAST,DISP=SHR
//SYSTSPRT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSOUT   DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSTSIN  DD *
 AU USERID DATA('Name') DFLTGRP(SYS4) PASSWORD(SYS4) +
    OMVS(HOME('/u/umnt/userid') PROGRAM('/bin/sh') AUTOUID) +
    TSO(ACCTNUM(ACCT#) COMMAND(ISPF) PROC(ISPFPROC) SIZE(2048000))
 ADDSD 'USERID.**' UACC(NONE) OWNER(USERID)
 PE 'USERID.**' CLASS(DATASET) ID(USERID) ACCESS(ALTER)
 PE 'USERID.**' CLASS(DATASET) ID(SYS0) ACCESS(ALTER)
 PE 'USERID.**' CLASS(DATASET) ID(SYS2) ACCESS(READ)
 PE 'USERID.**' CLASS(DATASET) ID(SYS4) ACCESS(READ)
//

SYS0 is the sysprog group with special/operations/superuser authority, SYS2 is the general TSO user group, and SYS4 is the group for TSO users with limited rights.

Is anyone still interested in my cleanup jobs, to be submitted to the cbttape?

Barbara
MPF exit
I have just noticed a very strange behaviour of an MPF exit that is supposed to suppress messages from the hardcopy log. The exit does this:

OI CTXTRFB2,CTXTRDTM

We have an abundance of IEC161I messages that clutter up the hardcopy log (among other things). So I set the exit to suppress IEC161I while the joblog and the TSO user still get it. This was the result:

TSU00606 00100211 IEC161I 056-084,xx,ISPFPROC,ISPFPROC,IPCSDDIR,,,xx.DDIR, 533
533 00100211 IEC161I xx.DDIR.D,CATALOG.USER

Note that the message clearly says it 'bypassed hardcopy', but it didn't, otherwise I would not have seen it in the hardcopy log. (It normally gets issued with these flags in hclog: 0090 - not serviced by any exit, automation requested.)

Then I tried the same exit on:

0090 $HASP373 jobname STARTED - INIT 1 - CLASS A - SYS ADCD
0291 $HASP395 jobname ENDED

These messages got suppressed without a problem. Now comes the puzzling part: when I tried IEC161I again (same exit, no source code change, no LLA refresh, just another T MPF command to remove suppression of HASP373/395), IEC161I *also* got suppressed as it should have been all along.

The z/OS maintenance level is fairly recent, MFA is disabled, and we run NetView 5.4 at a fairly old maintenance level. No IEA364E had been issued during this IPL.

What am I missing here? Why did it not work all along?

Barbara
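For reference, the MPFLSTxx side of this setup would look something like the following sketch (the exit name MYMPFXIT is hypothetical; SUP(NO) leaves the message visible and hands it to the exit, which then sets CTXTRDTM to keep it out of hardcopy):

```
IEC161I,SUP(NO),USEREXIT(MYMPFXIT)
$HASP373,SUP(NO),USEREXIT(MYMPFXIT)
$HASP395,SUP(NO),USEREXIT(MYMPFXIT)
```

The T MPF=xx command activates the member, which is why toggling suppression of the HASP messages re-drove the whole list.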
Re: Knowledgecentre versus the library server
> From where and what did you use to d/l the whole set? I must have missed it.
> Actually I want to d/l the whole 2.1 set and place it somewhere shareable by
> all.

It is a one-time effort, and I download each pdf manually and sort them into the folders I want them in. I also rename them so I know what's what. I don't bother with indices. As I said, I usually know which book to look into in the first place.

I use pubcenter http://www-05.ibm.com/e-business/linkweb/publications/servlet/pbi.wss?CTY=DE and you probably don't want the CTY=DE at the end. (That's just the link I have.)

I usually ask for one book in a new release, look at it, and at the bottom there is "Click here for an overview of all BOFs and/or kits that contain this publication." After I have done that, I get a list with a publication number (SK3T-4276-29 for z/OS 1.13) and click on that again. Now I get "Click here for an overview of the publications that belong to this publication." Voila, all the books belonging to z/OS 1.13.

Barbara
Re: Dump Defaults (Was: SLIP IF Trap?)
> It would be easiest if you could send this dump somewhere
> that I can access. Like opening a PMR and sending the dump, if
> you have a license to do that. Or sending it to Dallas if you
> have an ISV relationship there.

Thanks Jim, I have done so.

Barbara
Re: Knowledgecentre versus the library server
> > I would rather download the PDFs; I think, personally, it's easier for me.
>
> Not me. I still prefer the .boo. In fact, most of the time, I
> still use the z/OS 1.13 books rather than the current doc
> unless I'm going to print part of it, or unless I know that I
> need something more current. In that case, I sometimes do
> a search with the 1.13 .boo, then find the corresponding
> part of the current pdf.

Normally I am not into 'me, too's, but in this case I agree. I have the full set of 1.13 .boo books on my laptop and use them primarily (it helps that I am still at 1.13 most of the time). I also have a set of 2.1 .pdfs downloaded on my laptop (and will do the same for 2.2), but only use them when I want to see if something new replaces what's in the 1.13 books. The pdfs take up *a lot* more space on my laptop and are much harder to search. It's a good thing that I normally know which book I want to look into, so I don't need indices and such.

I avoid KC whenever I can. Not only is it slow, it is awkward to navigate and it just 'requires' javascript. Also, it doesn't work at all using a proxy (like ixquick) for searching (with javascript turned off).

Barbara
Re: Dump Defaults (Was: SLIP IF Trap?)
> Dusting off this old thread:
> I set a slip trap that included the TCPIP address space in the address spaces
> to be dumped. TCPIPCS socket detail on that slip dump gives me:
>
> TCPIP Socket Analysis
> BLS18100I ASID(X'002B') 01F0_2280 not available
>
> The address is in hvcommon, according to rsmdata.
>
> How come a slip dump does not dump common storage above the bar despite both
> SQA and CSA being set?

Actually, I take that back. LD on the slip dump tells me that 01F0_2280.:01F0_22900FFF. was dumped, 'belonging' to one of the other address spaces that I had dumped (the primary ASID at the time the slip hit). Apparently the TCPIPCS formatter only works correctly when this HVCOMMON storage is attributed to the TCPIP address space, as it was in the dynamic dump I took later. In that case this HVCOMMON storage 'belonged' to x'2B' - TCPIP. Duh.

Is that IPCS or TCPIP at fault?

Barbara
Re: Vary OPEERLOG offline before IPLing the system?
> We use SYSLOG, not OPERLOG. And I see the NIP messages in it. It appears
> that they are buffered and written to SYSLOG once JES2 comes up and SYSLOG
> is available. Given our small size, I don't know if this buffer is a "wrap
> around" and so might lose messages if there are a lot of them before SYSLOG
> is initialized.

If you ever see a dump taken *before* JES is up, you can see all messages for the current IPL. Using IPCS verbx mtrace, they are formatted as being on the 'delayed issue queue' (if memory serves right). I believe this is the same queue that synchronous WTOs are buffered on until they can be shown in the regular hardcopy log.

Once JES comes up and the TCB in *MASTER* responsible for writing syslog connects to it, the messages are taken from the delayed issue queue and sent to syslog using the original time stamps. That's why they are timestamped *before* the message that tells you syslog was initialized.

Barbara
Re: Vary OPEERLOG offline before IPLing the system?
> OTOH, it's not clear what happens to OPERLOG messages after JES terminates.
> They're probably captured in a CF structure, but would we have access to that
> data after system shutdown?

Assuming that operlog is not varied off, operlog merrily goes on recording everything that goes on after JES terminates. Operlog stops recording only once it gets the notification that the system is about to be wait stated. On that notification, all buffers (from either CF or DASD only) are hardened for all log streams. In the case of a DASD-only logstream (and presumably monoplex) you will have access once the system comes back up. In a sysplex (with CF logstreams) you have access all along.

To the OP: Having seen enough asinine automation - are you sure that removing operlog right before JES wasn't done when operlog was first introduced, on the erroneous assumption that it would behave just like syslog and 'needs to be shut down'? I see no benefit in not having operlog until the very end. In fact, I insisted on activating DASD-only operlog even in the monoplexes to see what goes on after JES shuts down. That way I found the cause of some abend during Z EOD.

Barbara
Re: Top-Secret
> The above will "work" but not sure what the ramifications are, however, if
> you establish a TSO environment in a long-running "job" such as a started
> task.

The most obvious one is that you cannot stop (P) the STC that has a batch TSO environment. You will always have to cancel it, unless everything else running in that batch TSO asid has established a way to listen to operator commands.

And be very, very careful that you don't invoke *authorized* TSO commands from two tasks in parallel when they also deal with PDSE (as in: any library accessed in the address space is a PDSE data set). The result can range from PDSE latch contention, to an address space hang, to having to restart SMSPDSE, to ENQ contention on SYSIEFSD Q10 and good night, system. I was told that this can also happen if your installation runs VSAM RLS or any other IBM component that uses latches, but while I have seen the PDSE problems first and second hand, I have never seen it happen for any other component.

Barbara
Re: Full z/OS SPOOL
> [...] (I realize that probably just reflects the fact that it's defaulting):
> BUFDEF BELOWBUF=(LIMIT=39,WARN=80),EXTBUF=(LIMIT=200,WARN=80)
> Mind you, I'm taking the BELOWBUF values from what's currently set. Is 39 a
> plausible number?
>
> I'm reluctant to Just Try It because I know that if I get it wrong enough,
> z/OS won't come back up, and then we'll have to get IBM to fix it (and that
> takes turnaround time). But if y'all bless it, I'll give it a shot!

Here is what my $DBUFDEF shows:

$HASP840 BUFDEF BELOWBUF=(LIMIT=100,WARN=80,FREE=100),
$HASP840        EXTBUF=(LIMIT=500,WARN=80,FREE=500)

I don't really remember which definition it was that sent me into the checkpoint reconfig dialog (having been through it twice so far). The $TBUFDEF command will tell you immediately if you can just issue the command or if it cannot be accommodated.

As for the change to the init deck so the next IPL works correctly: I had copied the JES2 procedure into the first proclib searched in MSTJCL (USER.PROCLIB on my ADCD system), and I have pointed HASPPARM in that procedure to my own complete init deck (copied over from the active one) with the changes I have made. But since I also changed the spool volume(s) and the checkpoints and the complete setup when I did that, I was in for a cold start anyway. And I first tested this new setup as an alternate JES named JESX.

I have never set up a standalone editor on my system (and running under Linux, I cannot use VM to edit anything).

Barbara
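To spell out the two moving parts discussed above - the dynamic command and the matching init-deck statement - here is a sketch. The limit values are purely illustrative, not recommendations; check your own $DBUFDEF output first:

```
/* Dynamic change - JES2 rejects the $T outright if it cannot be accommodated */
$TBUFDEF,EXTBUF=(LIMIT=500,WARN=80)

/* Matching BUFDEF statement in the initialization deck for the next start */
BUFDEF BELOWBUF=(LIMIT=100,WARN=80),EXTBUF=(LIMIT=500,WARN=80)
```

The point is that both must agree: the $T change survives only until the next JES2 cold/warm start reads the init deck again.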
Re: Full z/OS SPOOL
> Ok. I must have missed it somewhere that the OP is perhaps using ADCD, but he
> said he is using VM to access his JES2? So, can you run ADCD as a guest under
> VM?

The OP said that he is running in Dallas. It is my understanding (having never worked on any of their systems) that Dallas provides ADCD systems to vendors, also early code for new releases. In the description I have seen of that early access, it was clearly an ADCD system. Like any z/OS system, it can run under VM.

> Automation is your friend (just careful not to purge sensitive jobs...). ;-)

ADCD systems don't have automation configured. Ours came with NetView, and I could see remnants of SA/390, but nothing was set up to actually work. In my "copious spare time", I have attempted to set up NetView (with the help of a former colleague); I just haven't gotten around to attempting a logon to it, much less to use it for actual automation on our system.

Barbara
Re: Full z/OS SPOOL
Elardus,

> Just curious, at what z/OS level are you? I'm asking because that APAR is
> somewhat old, but I'm sure that local fix also mentioned by Bob should help
> you out until you can fix the init deck.
>
> Are other LPARs using the same JES2? If so, you could try out the local fix
> or purge job entries?

Welcome to the wonderful world of ADCD systems! (Which is what I understand Dallas is using.) ADCD systems come with a teeny tiny spool, and most parms in the init deck are set so low that it just functions on a lightly loaded system. After the second or third real 100% spool shortage I substantially increased not only spool and related parms; I also had to go through checkpoint reconfig to accommodate the larger spool values. And ever since, I keep an eagle eye on any HASP050* message. And on spool usage.

Barbara
RACF password history was: AW: //STARTING JOB ...
> Check out the SETROPTS HISTORY and MINCHANGE options if you haven't already. Thanks, Tom! I did that and set history accordingly. No need for an exit, then! I would set MINCHANGE only if I see that someone tries to change the many passwords that are now kept to get to the (n+1)th password. Barbara -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
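For the archives, a sketch of the commands in question. The values are illustrative: HISTORY(10) keeps the last 10 passwords, MINCHANGE(1) would enforce a one-day minimum between password changes, and SETROPTS LIST verifies what is currently active:

```
SETROPTS PASSWORD(HISTORY(10))
SETROPTS PASSWORD(HISTORY(10) MINCHANGE(1))
SETROPTS LIST
```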
Re: Dataset PACK profile
> Just trying to understand the importance of setting PACK ON on the pds
> member profile.
>
> What will be the impact if it is set ON in parmlib members?

I once had a customer who had accidentally turned it on in MSTJCLxx. The IPL failed miserably. I imagine the same thing happens if you use PACK ON on any other parmlib member. LPALSTxx, PROGxx, BPXPRMxx and IEASYSxx will probably also cause the IPL to fail. Other parmlib members will 'only' generate error messages about garbage. You may not want to live with the (functional) results, though.

Barbara
Re: RECFM=VBS,LRECL=32767?
> 6 //DD7 DD UNIT=SYSALLDA,SPACE=(1,0),RECFM=VBS,LRECL=32767
> 6 IEF638I SPECIFIED NUMERIC EXCEEDS MAXIMUM ALLOWED IN THE LRECL
>   SUBPARAMETER OF THE DCB FIELD
> Why is this considered an error?
>
> In fact, 32761 is accepted; 32762 causes the error. On what rationale is
> this based?
> The same limits appear to apply to RECFM=VB.
> I haven't tried OPENing any such data set.

The explanation I came up with when I tested boundary conditions was that 32767 is the maximum allowed for fixed records, and that length was determined by DASD geometry (in the past). A variable length record always has a length field preceding the actual record data. And since this is blocked, you also need length for the block descriptor. These two make up the 6 bytes that you cannot specify for LRECL without exceeding geometry. I haven't tested (or if I did, I forgot the results) whether it makes a difference when you use RECFM=V(S). The layout is described somewhere in SC26-7410 Using Data Sets.

Barbara
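A rough sketch of the layout being described (the authoritative picture is in SC26-7410 Using Data Sets): each descriptor word is 4 bytes, with the record descriptor counted as part of LRECL and the block descriptor as part of BLKSIZE:

```
+-----+-----+----------+-----+----------+
| BDW | RDW | record 1 | RDW | record 2 | ...
+-----+-----+----------+-----+----------+

BDW = 4-byte block descriptor word  (counted in BLKSIZE)
RDW = 4-byte record descriptor word (counted in LRECL for RECFM=V/VB)
```

For RECFM=VBS, spanned records additionally carry segment descriptors, which is where the boundary arithmetic gets even murkier.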
Re: Setting up a sysplex and OMVS/zFS
> It looks like most of the datasets the system is complaining about are
> in their own user catalogs. I should be able to connect the user
> catalog to the master catalog on the zOS 1.13 side and be ok.
>
> Tho there is one that is cataloged in the z2.1's master catalog. I may
> have to just ignore the error message for that. I think that solves
> that problem.

I know I am a bit late on this, but do you really want to *share* the OMVS environment in an ADCD sysplex of two differing z/OS levels? Not sharing OMVS would take care of all error messages, and you already have the infrastructure duplicated for each release. Of course, if you plan to later migrate the 1.13 system in that plex to a 2.1 system using the same sysres's, forget I said anything.

Having recently renamed a system that is using the ADCD sysplex setup (making it shared HFS/zFS despite being a monoplex), we discovered the hard way how asinine the delivered symbolic links are, pointing back and forth in several symbolic substitutions. OMVS did come up (and TCPIP came up!), but since /tmp was not mounted correctly due to the &SYSNAME having gotten changed, we could not even set new paths in OMVS, because that requires /tmp to be available. I would not be surprised if you run into problems like this down the road with your semi-shared setup.

Admittedly, yours seems like a later version of my 1.13 ADCD version (in mine the root was still an HFS, which we were very grateful for, because we could just catalog it on another system and then set the links). Essentially, we ended up replacing the symbolic links in the root with hardcoded links and all was well again. Of course, this effectively stopped "OMVS sharing". But now I have an idea how to remove 'sysplex sharing' of OMVS in my monoplex and remove the BPX CDSs, since they are not needed.
That will also take care of all the complaining OMVS health checks that still don't take into account that ieasysxx says PLEXCFG=MONOPLEX, so nobody ever will automove anything away from my monoplex. (After 2 years, I am now almost where I want my system to be!) Barbara -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
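To illustrate the kind of indirection involved: in a shared file system root, the system-specific directories are reached through symbolic links that resolve a system symbol at run time, along these lines (illustrative listing, not output from any particular system):

```
ls -l /
  ... tmp -> $SYSNAME/tmp
  ... etc -> $SYSNAME/etc
  ... dev -> $SYSNAME/dev
```

Change the system name without a matching /sysname mount point underneath, and /tmp resolves to nowhere - which is exactly the failure described above.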
Re: Why is the "System Data Sets" book no longer deemed useful?
> Deleted: z/OS MVS System Data Set Definition, SA22-7629: Information about
> system data sets is available with the information sent with the z/OS install
> package. There is no replacement reference for MVS System Data Set
> Definition, except for references to cataloging. In that one instance, the
> reader should be referred to: z/OS DFSMS Managing Catalogs, SC26-7409.

I have seen an 'install package' exactly once in my almost 30 years of z/OS, but I have been asked to check any number of system data set allocations and make them better. Using an 'install package' (where exactly would all of that be documented?) is an excuse to drop a useful book.

And having gone through a recataloging exercise for a full system recently, I can state that the docs for indirect cataloging are mostly opaque. Everything is in there, no question about it, but it is described in such a way that I ran into all kinds of problems until I figured out how this actually should be set up in loadxx, ieasymxx and the catalog. Once I knew how to do it, I could see that it was stated there, but not before.

Barbara
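For reference, an indirect catalog entry is the kind of thing meant here. This is a sketch only (the data set name is just an example): VOLUMES(******) resolves to the IPL volume at reference time, and extended indirection through a system symbol such as &SYSR2 covers additional sysres volumes:

```
//RECAT    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DELETE SYS1.LINKLIB NOSCRATCH
  DEFINE NONVSAM(NAME(SYS1.LINKLIB) -
         DEVICETYPES(0000) -
         VOLUMES(******))
/*
```

The non-obvious part, as described above, is making this agree with what loadxx and ieasymxx declare - the catalog entry alone is not the whole story.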
Re: Why is the "System Data Sets" book no longer deemed useful?
> > When did it last exist? I have V1R10 docs here and I don't see it.
>
> I found one in the z/OS V1.1 bookshelf

SA22-7629-00 says First Edition, March 2001. I have either always copied it over in .boo format or it was contained in some sort of 'release DVD' when I downloaded the next release. There isn't a similar book in the pdf collection for 2.1 that I downloaded using the same steps.

Maybe IBM thinks we don't need System Data Set Definitions anymore? Or all system data sets are now defined using clicking and z/OSMF; no need for actual information anymore, since the system will know what to do? :-)

Barbara
Re: Big data - Google wants *all* of you
> Indeed. If a hospital uploads my genome code, will it be as anonymous, or
> will I be spammed (real medicine, quackery medicine, experimental medicine,
> etc) on what I have in my genome code?

Let's hope that I could veto having my personal data (mis)used that way! Do we even get told if a doctor uploads such data?

> But there is not a word on how safe the stored data will be! (cr)apple
> cloud has been cracked, google cloud is next.

Exactly.

Barbara
Re: RDz v9.0 and DIAGxx VSM USEZOSV1R9RULES(NO)
> Is anybody out there running the RSE server on z/OS (1.13) with
> USEZOSV1R9RULES(NO) successfully? Anybody have any other idea why today the
> developers cannot authenticate to the RSE server as they did before the most
> recent IPL of the DEV lpar?

We are running RSED 8.5 with USEZOSV1R9RULES(NO) successfully. Says she. Now that I think of it, we sometimes had problems connecting to JESMON, and the next day it worked again. Or it worked for one user but not for another. We haven't figured out why, and as an ADCD user, we cannot open PMRs with IBM (at least not for z/OS or anything that runs under z/OS).

I haven't started RDz 9.1 on this system yet, but I believe that it also ran successfully at 9.1 on the system that this one was cloned from (both a very old 1.13). At least authentication has never been a problem. I have heard, though, that 'things sometimes didn't work' at 9.1 and the next day they worked again without any intermediate changes on the other system, but I haven't been able to look into this or get more specific data on what 'things' did or didn't work. USEZOSV1R9RULES(NO) could certainly be a contributing factor to this.

As for the second question: What else was changed with that IPL?

Barbara
Re: Abend s0077
> It blows me away that LE has to take a perfectly good 0C1, 0C4 or 0C7 and
> convert it into a U4xxx code. Not only that, they have to obfuscate the
> registers.

I have learned (with a lot of scars) that the ZMCH (if you can find that in the ceedump) has the actual registers at time of abend, no matter what abend (or program check) it was. Unfortunately, no ceedump has a concept of other than key 8/9 storage, or of access registers or storage keys. So if you're dealing with a 'real' PIC4 (not an obvious one where the register in question is either zero or characters), you're out of luck unless you can reproduce the problem and get a 'real' dump, preferably using the TRAP(OFF) method.

Program checks are mostly indicated in the LE message that informs you of the user abend U4039. The number of the message is equal to the program check number (don't have an example handy here). Language Environment Run-Time Messages (SA22-7566) contains the LE user abend codes, and for U4039 it is of course the user's fault that the abend code is obfuscated: "The U4039 has been issued because the user has set the TERMTHDACT run-time option to request a system dump be generated when an unhandled condition of severity 2 or greater has been encountered, and Language Environment will terminate."

Program checks are handled by the LE ESPIE routines, which get control before slip, so there is no way other than some incantation of TRAP(OFF) to set a slip trap on an abend code 0Cx. Later the LE ESTAE gets control and re-issues the (now-handled) program check as a user abend. But between the original problem and the user abend a lot of processing has taken place.

I used to think that all I need to do is TRAP(OFF) and set a slip trap to get a dump, then use "verbx ledata 'ceedump'" to get the calling sequence up to the error. Turns out that that will sometimes be different from a ceedump as formatted by LE, though.
> Why would it NOT be better to have a User Return Code that explains the single
> 0C1 or 0C4 or 0C7 value. How is that less useful?

Answering only for myself: Because the 'explanation' that LE comes up with is wrong in 99% of all cases, and it is really hard to see the storage as it was when the abend occurred, due to too many things having happened since the original problem.

Barbara
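Since the TRAP(OFF) method comes up repeatedly in this thread, here is a sketch of what is meant. The jobname and completion code are illustrative, and note the caveat: TRAP(OFF) disables LE's ESPIE/ESTAE condition handling entirely, so it is for diagnosis runs only:

```
//* LE run-time options for the failing step:
//CEEOPTS  DD *
  TRAP(OFF)
/*
```

With LE no longer intercepting the program check, a slip trap on the original completion code can fire:

```
SLIP SET,COMP=0C4,JOBNAME=MYJOB,ACTION=SVCD,ID=LE01,END
```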
Re: Abend s0077
> I was so surprised by this that I had to go look. A spot check of
> several z/OS messages books shows that many (perhaps most) messages
> include "Module: [module name]" just as they always did. Maybe you were
> "lucky" enough to have a tendency to work on exceptions, or on some
> component or components for which the detecting modules were not documented.

My guess is that I was unlucky enough to get all the exceptions. My spot check: IDA018 has Source: DFSMSdfp (which is evident from the prefix alone and hence redundant). IXZ0108E only has Source: JES2 (not that I ever worked on JES topics). IXGH006I has Source: n/a and is a logger health checker message (as indicated by the prefix). Seems to me that a lot of the new components don't bother with that information anymore, while the 'old stuff' didn't get deleted from the books.

> For system abends, there is a cross-reference table (Chapter 4 in the
> 2.1 System Codes book) that provides detecting modules. I can't easily
> tell whether all are represented there, but it's a fairly long table.
> For Abend077, though, it's clear from reading the message that a
> perhaps-significant number of modules are involved, and that someone
> chose not to document all of them in Chapter 4.

Abend026 is missing altogether in that table. And it has the issuing module encoded in the reason code, *if* you have the appropriate mapping macro. I did a lot of XCF/XES stuff way back when.

Barbara
Re: Abend s0077
> In my experience, most of the time the first reader of a "contact
> systems programmer" explanation IS a systems programmer. I always found
> it highly irritating when that was the only explanation for an error
> message. Even a broad general description of what went wrong can
> sometimes be helpful in isolating whether it is an application mis-use
> of the operating system or an actual problem with system software that
> needs to be resolved.

I remember quite clearly how exasperated I always was (back when I was IBM level 2) when the books said 'contact IBM', and I was IBM and I had no clue how to find out what module had even *issued* the message, much less what the reason code meant. Right around that time the powers that be also decided not to publish anymore what module issued a message, so there went that crutch. And for the people out in the backwaters there was no cross reference, either. In some cases no retain search even gave a starting point, and I ended up sending such a problem straight to Poughkeepsie with the question where to find more information. Somehow I don't think the situation has become any better.

Barbara
Re: AW: Re: MEMLIMIT not honoured by DB2 utility job
Peter,

> I have found plenty of places where the discussion is about DB2's DBM1 and
> IRLM address spaces. Those ignore any MEMLIMIT setting and set this limit to
> values defined in DB2.
>
> I could not find anything related to the utility program DSNX9WLM regarding
> MEMLIMIT. Waiting for an answer from my DB2 colleagues.

I don't think that this is written down anywhere. All you need to do is dump the running job and check the value of memlimit in the RSM control block (I believe it was an RSM control block). You'll find that DB2 overwrites whatever the installation specifies with what DB2 wants, by the simple expedient of being APF authorized. They just go and put their own value into the RSM control block, effectively overwriting the usual controls.

Check the archives; I seem to have a dim memory that we discussed this here and I got bashed when I objected to such a practise. In my case it was GRS (they do the same), I think, back in z/OS 1.2 or 1.4 days.

Just look at the MEMLIMIT column in SDSF DA, and you'll see exactly which address spaces have adopted this practise. (In our case, it was even more evident, because I had limited *everybody* to 6GB MEMLIMIT in IEFUSI/SMF, for the simple reason that the system didn't have enough real storage to back any more.)

Barbara
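For contrast, here is where MEMLIMIT is normally set when nobody stores over the control block. The value and names are illustrative:

```
//* Per step (MEMLIMIT= is also accepted on the JOB statement):
//STEP1   EXEC PGM=MYPGM,MEMLIMIT=6G
```

The installation-wide default comes from the MEMLIMIT parameter in SMFPRMxx, and the IEFUSI exit can set it per address space - which is what the 6GB cap mentioned above was.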
Re: Dump Defaults (Was: SLIP IF Trap?)
> I can tell you the intent. Each allocation of above-2G storage fits into
> one of 5 "should I include it in the dump?" categories --
> - like region (default for private)
> - like LSQA
> - like CSA (default for common)
> - like SQA
> - do not dump automatically
>
> The intent is to dump "like x" when 'x' is included. Things that are "do
> not dump automatically" are expected to be included explicitly (e.g.,
> LIST64, SUMLIST64) in a dump, when needed.

Thanks Peter, that certainly explains it.

Barbara
Re: Dump Defaults (Was: SLIP IF Trap?)
> CD SET,SDUMP=(RGN,CSA,LPA),ADD,MAXSPACE=5000M
>
> ... probably inherited from something someone did before the last
> millennium.
>
> Does anyone have any other (21st-century) recommendations for improving
> this?

I have set this:

CD SET,SDUMP=(ALLPSA,CSA,SQA,GRSQ,LPA,NUC,RGN,SUM,SWA,TRT)

In our installation there is no need to increase maxspace; the dumps I'm interested in all fit the 500MB limit (well, except when someone initialized a 1GB data space to hex zeroes :-) ). We're a monoplex, so XESDATA is something I don't care about, either. Given that I cannot open z/OS problems to IBM anyway, there would probably be no need to even include NUC and ALLPSA.

Now what I still don't quite 'get' is how to dump everything needed above the bar automatically. RGN does not seem to cover that, so any program experiencing a problem would need to explicitly specify ranges of storage above the bar in its own dump invocation. Or am I missing something?

Barbara
Re: SLIP IF Trap?
> FWIW, I have a feeling that many (most?) customers don't analyze SVC
> dumps often if ever; they have no opinion one way or the other about the
> necessity of RGN. :( IBM or an ISV will likely provide an SDATA= string
> for any dump they request, so the defaults really don't matter to them...

Yes, and every support person within IBM has learned to copy an SDATA string that contains ALLNUC, usually for problems that have nothing whatsoever to do with nucleus modules, unnecessarily increasing the time it takes to write the dump and the dump size itself. I had learned to ignore any SDATA setting that IBM gave out and instead use my own settings (which do not include ALLNUC). And it was usually me that pointed out to IBM that yes, the data *are* in the dump (when IBM support told me that the dump wasn't worth anything because 'data are not in the dump').

So every sysprog should have enough dump reading skills to at least ascertain when someone is giving them bullsh.. (Now I dream on.)

Barbara
Re: SL SA and range parameter
> On what z/OS version?

1.13.

> As documented in MVS System Commands (v1.12):
> "RANGE is not valid for error event traps. RANGE cannot be specified on an
> ACTION=IGNORE storage alteration PER trap."
> I believe this could be the reason for IEE739I.

Since this was a PER trap, not an error event trap, I didn't worry. I had overlooked the second part, though. So what I wanted to do (only monitoring certain words in a 2 page area) is really not supported. Pity.

Thanks for finding the reference, though. I had spent way more than an hour reading about the correct syntax, and the docs aren't exactly easy to read, anyway. I missed the little sentence.

Barbara
SL SA and range parameter
I have attempted to set a storage alteration PER trap. I was interested in several 4-byte areas distributed across maybe 2 pages. My intention was to disable SA monitoring for everything except those 4-byte areas, similar to the way it is described for IF traps:

SL SET,SA,D,ASIDSA=SA,JOBNAME=,RA=(1E92D2A0,1E92E4A8),A=TRACE,
   TD=(STD,REGS,1E92D2A0,+8,1E92D4A0,+8,1E92D6A0,+8),ID=BN1,E
IEE727I SLIP TRAP ID=BN1 SET

SL SET,SA,D,ASIDSA=SA,JOBNAME=,RA=(1E92D2A8,1E92D49F),A=IGNORE,
   ID=BN2,E
IEE739I RANGE PARAMETER IGNORED FOR SLIP ID=BN2

Am I getting this wrong, or is this actually not supported? Any hint will be appreciated.

Barbara

PS: I ended up SA-monitoring the full range and only tracing the fields I was interested in. I get a lot of false (uninteresting) entries that way.
Re: XCFAS holding OMVS and XCF CDSes
> You never add PCOUPLE. You promote from ACOUPLE to PCOUPLE via PSWITCH.

You do if you activate a CDS for the first time. There is nothing preventing you from specifying both PCOUPLE and ACOUPLE in the same command. Admittedly, I have never tried to switch both primary and alternate to a new name in the same command when this CDS type was already active.

Barbara
Re: multiple TSO Sessions (try this)
> > But I am fairly sure that, IN THIS CONTEXT, the CEA which was being > > mentioned in the thread was not Common Event Adapter, which has _nothing_ > > to do with multiple TSO logons that I can see, but was referred to z/OSMF > > and its ability to allow multiple TSO sessions for a single user. I guess > > we will need to revisit the USS wars from days of yore. > > CEA used by z/OSMF is Common Event Adapter. I believe CEA started out in z/OS 1.9 as a system address space, but I may be wrong. z/OSMF came with 1.11? In any case, the common event adapter CEA address space needs to run in full function mode (which it does nicely with the definitions delivered by IBM). And it needs a bit of RACF setup detailed somewhere in samplib, primarily for the z/OSMF 3270 function which uses the same interfaces. Documentation for these interfaces (commonly called "CEA launched TSO address space") is in chapter 5 of MVS Programming: Callable Services for High-Level Languages for release 1.13. The actual JSON format used for communication is only documented in the 2.1 books for ISPF. Essentially, the screen and the keyboard of a 3270 user are replaced by a USS message queue which is owned (or initiated) by the CEA address space. TSO and ISPF write their JSON format to that message queue, a client can retrieve the "screen" and send the "keystrokes" back to the USS message queue where it is read by TSO/ISPF. Barbara -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: multiple TSO Sessions (try this)
> Change ISPCFIGU. Use the command ISPCCONF. IIRC the default number of split > screens is 8. Thanks. Will do that, just to astonish my colleagues. :-) Yes, the default is 8. Barbara -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: multiple TSO Sessions (try this)
> You should bump the split screens up to 32. Much more impressive. :) I bite. How do I do that? Barbara -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: multiple TSO Sessions (try this)
> You're not even getting IKJ56425I? You must have setup your system to allow > that. No to both. At least not that I know of - this is an ADCD system. I do get IKJ56425I when I attempt to logon a second time from a terminal emulator. Note that the other 10 sessions do not have a VTAM ACB in the PROCSTEP (and don't count against that limit). And also note the logon procedure - these other 10 are CEA launched TSO address spaces. z/OSMF uses the same services for 3270 access. > Indeed, my TSO users sometimes come crying back to me 'where are my > responses/messages', just because they're too lazy to have unique console > names. Oh well... ;-) You also need a unique ISPF profile per TSO session because the console name is saved in the ISPF profile. So, as gil alluded to - if you share the ISPF profile, the console name from the session that logs off last wins. (I feel like I am reading 'Watching the clock' again - it doesn't matter what happens, it only matters what happens *last* when the quantum lock comes down). Barbara -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: multiple TSO Sessions (try this)
> Is there some actual technical reason why TSO cannot be made to allow one user ID to log in multiple times to TSO within a single LPAR?

Who says TSO does not allow one userid with several logins within a single LPAR?

JOBNAME  StepName ProcStep JobID    Owner   C Pos DP Real Paging  SIO CPU% ASID ASIDX
BARBARA  CEAPROCF          TSU07563 BARBARA   OUT FF 2105   0.00 0.00 0.00   96  0060
BARBARA  CEAPROCF          TSU07566 BARBARA   OUT FF 2110   0.00 0.00 0.00   68  0044
BARBARA  CEAPROCF          TSU07567 BARBARA   OUT FF 2110   0.00 0.00 0.00   30  001E
BARBARA  CEAPROCF          TSU07562 BARBARA   OUT FF 2109   0.00 0.00 0.00   75  004B
BARBARA  CEAPROCF          TSU07570 BARBARA   OUT FF 2115   0.00 0.00 0.00   92  005C
BARBARA  CEAPROCF          TSU07565 BARBARA   OUT FF 2109   0.00 0.00 0.00   93  005D
BARBARA  CEAPROCF          TSU07564 BARBARA   OUT FF 2108   0.00 0.00 0.00   91  005B
BARBARA  CEAPROCF          TSU07568 BARBARA   OUT FF 2109   0.00 0.00 0.00   78  004E
BARBARA  ISPFPROC SC0TCP31 TSU07560 BARBARA   IN  EE 2349   0.00 0.96 6.74   89  0059
BARBARA  CEAPROCF          TSU07561 BARBARA   OUT FF 2062   0.00 0.00 0.00   90  005A
BARBARA  CEAPROCF          TSU07569 BARBARA   OUT FF 2115   0.00 0.00 0.00   94  005E

We only have one LPAR (one system), and I am now logged in 11 times with the same TSO userid. We have made sure that each of those TSO sessions has its own ISPF profile data set. And each session can have up to 8 split screens. Be aware that TSO SEND won't really work; I don't remember if the message is sent everywhere or only to the address space with the lowest ASID number. Each of those ASIDs can be terminated externally by using the ASID qualifier if it doesn't terminate all by itself. Just canceling u=barbara will get you "who? which one?" If you go into SDSF and name all of those SDSF consoles the default (the TSO userid), then command responses won't necessarily come back to the console that issued the command. Barbara Nitz -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: Storage Tracking
Peter,

> Is there a tool or Rexx exec which can help me in identifying the Job consuming the High Virtual Shared Storage or Common Storage?

Check the archives. :-) This is Jim Mulder's answer when I asked the exact same question about two years ago:

> 6) I bemoan IBM's failure to give us a good means of figuring out who is using HVSHARE or HVCOMMON storage and how much storage-above-the-bar is actually *used*, i.e. backed. As far as I know, there still isn't any tracking done for HVCOMMON storage, no means of reporting about it. No way to know who is excessively using common storage above the bar. Same for HVSHARE. Unless you're named Jim Mulder and know where to look in a dump, us lowly customers cannot even check that in a dump. Am I mistaken in the reporting capabilities? Has that been fixed by now? Or is it another means of IBM trying to sell software service contracts to get that done only by IBM? Not to mention the frustration until you find someone who can actually *do* it.

IPCS has RSMDATA HVCOMMON. That at least tells you the owner of the memory objects. For HVCOMMON which is obtained via the IARST64 macro, the IARST64 macro says:

There is diagnostic support for 64-bit cell pools, created by IARST64, in IPCS via the CBFORMAT command. In order to locate the cell pool of interest, you need to follow the pointers from the HP1, to the HP2, to the CPHD.

For common storage, the HP1 is located in the ECVT. CBF ECVT will format the ECVT; then do a FIND on HP1. Extract the address of the HP1 from the ECVT, and CBF addrhp1 STR(HP1) will format the HP1. Each entry in the HP1 represents an attribute set (storage key, storage type (pageable, DREF, FIXED), and fetch protection (ON or OFF)). The output from this command will contain CBF commands for any connected HP2s. Select the CBF command of interest and run it to format the HP2. The HP2 consists of pointers to cell pool headers for different sizes. Choose the size of interest and select the command that will look like this: CBF addrcphd STR(IAXCPHD). This will format the cell pool header. To see details about all of the cells in the pool, use the EXIT option as follows: CBF addrcphd STR(IAXCPHD) EXIT.

For private storage, the HP1 is anchored in the STCB. The quickest way to locate the HP1 is to run the SUMMARY FORMAT command for the address space of interest. Locate the TCB that owns the storage of interest and then scroll down to the formatted STCB. The HP1 field contains the address of the HP1. From here, the processing is the same as described for common storage above. You can also use the EXIT option as follows: CBF addrhp1 STR(HP1) EXIT to produce a report that summarizes the storage usage under that HP1.

Barbara -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: Something to Think About - Optimal PDS Blocking
> SAdump never writes short blocks. When it wants to write a block > which is not full, it pads the block with dummy records (which IPCS > knows to ignore). Then I don't understand why the utility used produced a copy significantly smaller than the full dump. Once it was initialized on the full 63000 cyls I could see that sadump had successfully completed, so what chance of an I/O error is there? Any utility would not care about the content of a 'short' block padded to full length. The statistics at the end of the sadump messages were clearly showing that each of the volumes was between 30% and 35% full (they are mod27 and I forgot the real usage number). The copy was terminated after the third volume, is my guess. > SAdump does not support data sets whose stripe count is greater than 1. My bad for using the wrong terminology. Thanks for reminding us about the reasons for using COPYDUMP. Barbara ps: IBM was sent this sadump, too. It is 66,181,077,724 and has a 'nice' SYSIEFSD Q10 deadlock intermixed with PDSE latch contention. Since I never saw the full sadump, I was unable to determine why Q10 was held and not released, effectively preventing SMSPDSE1 restart. -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: Something to Think About - Optimal PDS Blocking
> I am somewhat at a loss to understand how some of the problems you are > detailing happened. The only way it could have would be with an ill behaving > user written program or process. If I remember this correctly (and I am on shaky ground here), sadump writes a 'special striping', understood fully only by IPCS. Mind you, sadump is a standalone application that does not use standard z/OS services because they are not available. Either here in old posts or even somewhere in the docs the behaviour I detailed is described, including the warning NOT to use standard utilities when copying a 'striped' (multi DASD volume) sadump. Suffice it to say, the full 63000cyls (from my recent customer) could only be copied when the customer used IPCS COPYDUMP. I was not told how they copied it when they sent me the 27000cyls. Just thought that a reminder about sadump was in order. After all, who would want to take an sadump only to be told by support that the necessary data are not in the dump? Barbara -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: Something to Think About - Optimal PDS Blocking
> Yes, however FBS stands for Fixed Block Standard, not Spanned. Exactly. And the last record in an FBS data set can be "short", i.e. less than lrecl. The short record denotes the end of the data set. And all the utility programs know it and stop processing once they reach the short record. That is all fine and well as long as we are not dealing with a multivolume data set. Think standalone dump written striped to - say - 5 volumes. Each volume has a data set in format FBS, but only one of the volumes can have a short record. SAdump knows that, and IPCS knows it, too. The utilities don't. So assume that you took a complete sadump to 5 volumes and the short record happens to be on the first volume. Then you use a utility (ICEGENER is my favourite) to copy somewhere else. You end up with a severely truncated sadump. One fifth, to be exact. IPCS will read the truncated dump to the best of its abilities, but you will get all kinds of 'storage not available' warnings when looking at the dump. Last time a customer sent me an sadump, it had 27000 cyls. I got all kinds of warnings and got lucky in that the sadump messages were clearly truncated and didn't show the 'successfully finished' message. It turned out that the wrong utility was used for copying, and the actual dump had 63000 cyls. Visible when IPCS COPYDUMP was used for copying. IPCS knows that a striped sadump can have the short record "earlier". Barbara -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
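The truncation mechanics described above can be sketched as a toy model (an assumption-laden simplification: volumes are just lists of block lengths, the block size is hypothetical, and real SADMP I/O is nothing this simple). The point is only the stopping rule: a standard utility treats the first short block as end-of-data, while COPYDUMP reads every volume to the end.

```python
BLKSIZE = 27998  # hypothetical full block length

def naive_copy(volumes):
    """Copy blocks until the first short block, like a standard utility
    (e.g. ICEGENER) processing an FBS data set."""
    copied = []
    for vol in volumes:
        for blk in vol:
            copied.append(blk)
            if blk < BLKSIZE:   # short block => utility assumes end of data
                return copied
    return copied

def copydump_style(volumes):
    """Copy every block on every volume, the way IPCS COPYDUMP handles a
    striped standalone dump."""
    return [blk for vol in volumes for blk in vol]

# Five volumes, four blocks each; SADMP legitimately left the short block
# on volume 1 of the stripe.
vols = [[BLKSIZE] * 3 + [100]] + [[BLKSIZE] * 4 for _ in range(4)]
print(len(naive_copy(vols)), len(copydump_style(vols)))  # → 4 20
```

With the short block on the first of five volumes, the naive copy keeps roughly one fifth of the dump, matching the post's "one fifth, to be exact".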
Re: ISPF 3.4 with HLQ *
> Doesn't something in this thread tend to refute Ze'ev Atlas's recent assertion: > > >Apparently z/OS is capable of finding the file without any manual > >assistance! ... Well, the devil is a squirrel (as we say in German). While an ADCD system is praised (by IBM) as the best thing for application development since it runs without the buyer having to have sysprog knowledge (and that it does), it is not exactly a shining example of how a z/OS system should be configured to satisfy best practices (the health checker coughs up at least 20 severe/error/warning findings when first started, and a few of them cannot easily be remedied). So in general, I would agree with Ze'ev, with the caveat of 'in a well configured system'. Barbara -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: ISPF 3.4 with HLQ *
> Use ISMF or something like that to search for all your 'other' dataset on all > of your catalogs. Just to make sure there is not another [unwanted] duplicate > on another volser. Just review your Linklist and APF that they are referring > to the correct dataset and/or volser where applicable. I have spent an inordinate amount of time already wading through the contents of the ADCD supplied disks. I have already cleaned up quite a few obsolete (empty) catalog entries. I am currently attempting to copy the SMP/E environment that comes with the ADCD setup so that I can install maintenance without risking an IPL and/or a non-IPLable system (can you say linklib goes into extents while running apply?). I have deleted about a thousand obsolete DDDEFs that were carried over from who knows when, using an official 1.13 program directory. While copying target libraries to a mod9 (and cleaning up their placement) I found out that quite a few of these target libraries are much, much bigger than they should be (as in 4000 tracks instead of 300). This indicates to me that they were carried over from the last millennium and never cleaned up. Which is actually funny, since the ADCD systems come packaged for mod3 sized volumes, and DASD space is at a premium on those mod3s! > >On the other hand, this catalog (USERCAT.Z113.PRODS) contains a lot of the > >same data sets as the master catalog. > Wasn't that user cat not an old master catalog? I have no clue. It comes with the ADCD setup. A few products come with their own HLQ (not SYS1), but nobody bothered to define an alias for those HLQs. There were also some empty aliases without any data sets behind them. My guess is that what I am seeing is a badly executed cloning job for this delivered system. This is kinda confirmed by the jcl that was apparently used to install - it uses fairly different volsers that are not available to this system. 
What I find much worse is that the USS file systems are called HFS and ZFS and they have no alias entry anywhere. Hence they're catalogued in master cat. And some of them are called ZFS as their HLQ, but they are actually HFSs. I don't think I have any chance to define an alias when there are already data sets with that HLQ in the master cat. And given that this is my favourite USS, I cannot even rename them, define the alias and copy them back (when they're VSAM, they need to be physically copied, rename won't work). > If you have say, XYZ.* on mastercatalog and also on usercatalog, then if you > don't have an alias, you would see the entries on mastercatalog. But if you > create an alias, you should see the entries on the usercatalog. But in your > case, you're referring to SYS1.* entries, so you're probably sitting with > duplicate entries. Yes, that explains the first data set I found while copying DLIBs. That one existed in the usercat and on the distribution volume with a SYS1 HLQ, which *should* have been catalogued in the master cat, not a user cat. The second DLIB data set doesn't have an alias entry in the master cat, so it couldn't be found using standard search order. Barbara -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: ISPF 3.4 with HLQ *
Tom, > When you wildcard the HLQ in ISPF 3.4, ISPF searches ALL catalogs, so > you may end up with hits that are not in the standard search order, such > as offline resvols, etc. That's why I submitted a requirement and ISPF > added the catalog name to 3.4 if you check that option. And a good thing you did that. I just found another DLIB data set that isn't found in the standard search order. I went to the 'total view' (the display catalog option is on by default) and found the catalog this stuff is in. I guess I always assumed that any data set only identified using the dsname would be found in the standard search order. This is apparently a wrong assumption. I am hoping that my catalog command for the other data set didn't harm anything. On the other hand, this catalog (USERCAT.Z113.PRODS) contains a lot of the same data sets as the master catalog. It appears to be connected, otherwise the alias pointing to it would not show me any data sets (if there is an alias at all). Time to go and read up on non-standard search order. :-( Thanks, Barbara -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
ISPF 3.4 with HLQ *
For years (forever) I thought that any 3.4 listing done using ISPF is done using the catalog search sequence (unless I deliberately specify a volume). I did not specify a volser and listed sys1.afom*. I got 4 hits. One was missing (sys1.afomhfs), that had just given me a JCL error 'data set not found'. I redid the search using *.afom*. I got 5 hits, including my missing data set. I went to the volser and listed everything on that volser. Yes, data set was there. listc on it resulted in 'data set not found' (some catalog error). I did not set anything special for ISPF options, but I am baffled that using HLQ * would get me an uncatalogued data set. Have I gotten that wrong all these years? Or is there another explanation? I ended up issuing a c in front of that entry, and now the data set is catalogued. Considering that this is a DLIB data set that does not have a volser in the DDDEF, I wonder how the ADCD people ever got this to be populated in the first place. (Well, I don't wonder, since the ADCD system doesn't have an SMPE environment usable for serious SMPE work, it was clearly copied together to fit on the mod3 volumes). Barbara -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: ISPF statistics
> Version is only updated by the VERSION edit command and by various > utilities (PDS, StarTool, 3.5, et al). Members with a VER.MOD level of > 01.99 remain that way. Should there be an AUTOVER ON command to increment > version to 02.01? The ISPF books are very clear on the level portion of vv.mm: it is incremented on edit, and once it reaches 99 it stays there and does not wrap (unless manually reset, of course). The ISPF books are equally unclear about the rules that would increment the vv portion. I haven't found a rule that says "vv will be incremented when…". I am looking for a rule that states when to increment version. Or is that up to each customer? Barbara -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
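The mm behaviour the post attributes to the ISPF books can be modelled in a few lines (a sketch of exactly that quoted rule and nothing more; note that nothing in it ever increments vv, which is precisely the open question):

```python
def save_member(vv, mm):
    """Model of ISPF statistics on an edit save: the modification level
    (mm) increments but caps at 99 and never wraps into the version (vv)."""
    return vv, min(mm + 1, 99)

print(save_member(1, 5))    # → (1, 6)
print(save_member(1, 99))   # → (1, 99)  stays at 99, vv untouched
```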
Re: ISPF statistics
> It's certainly possible to have DSORG=PS,RECFM=U data set that does not > contain load modules. I've used them. I believe it's also possible to > have DSORG=PO,DSNTYPE=PDS,RECFM=U with content other than load > modules, but ISPF refuses to recognize this fact. I don't know whether > DSORG=PO,DSNTYPE=LIBRARY,RECFM=U can have content other than > program objects. Yes, it can. I just ran my IEBDG job that allocated a PDS to allocate a PDSE instead, and it gets filled with text data by iebdg without a problem. ISPF does NOT support editing recfm=u data sets, which is clearly indicated in the ISPF books. ISPF statistics are *only* supported for fixed or variable record format. This is the root of my first question: Application developers apparently call 'ISPF statistics' what they see for recfm=u data sets (and most of them have never encountered a recfm=u data set that was NOT a load library). I believe the data shown by ISPF are stored in the directory entry itself mapped by IHAPDS owned by DFP. Unfortunately ISPF itself seems to call these directory data 'ISPF statistics', at least in the ISPF user guide(s). ISPF just *assumes* that any RECFM=U data set is a load module, when in fact it isn't. As far as I can see 'real' load libraries always have an lrecl of zero. Which ISPF does not test. When I use a RECFM=U data set in ISPF (with LRECL other than zero), I get 'browse substituted' on any edit attempt. > It has been announced that in 2.1 Rexx EXECIO will support RECFM=U. > When we get 2.1 I'll need to experiment to discover the restrictions. I believe what that means is that execio will make sure that the data in the directory entry IHAPDS are filled in correctly for a recfm=u data set, too. I do NOT believe that mixing of load modules and data members will be possible. >ISPF might use this information to format the headers of >the member list. No. 
For fixed and variable length records in a PO data set, the user field of the directory entry IHAPDS points to the ISPF statistics, which can be 30 or 40 bytes long. They are mapped by ISPDSTAT and owned by ISPF. As I said before, ISPF clearly states that it does not maintain PDF statistics for recfm=u data sets. I also found some references to a 'non-pdf form of statistics'. I'd dearly like to know who writes those (well, other than a dedicated program using STOW to put *something* into the user field). Does anyone have an answer for my second question: How does the version in the pdf statistics get incremented (other than by some form of the VERSION command)? Barbara -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
ISPF statistics
I seem unable to find answers to the following questions, so I am hoping the collective wisdom can help me. The ISPF books are fairly unclear about the usage of the name 'ISPF statistics'. In many contexts 'ISPF statistics' mean what's mapped by ISP.SISPMACS(ISPDSTAT), which only applies to fixed or variable length records, not to load modules. The ISPF user guides are the only books that also call the data shown for recfm=U data sets 'ISPF statistics'. My questions are: Do the RECFM=U statistics also have an 'official' mapping macro? Or does all of that come from SYS1.MODGEN(IHAPDS)? How does the version in the pdf statistics get incremented (other than by some form of the VERSION command)? Thanks and best regards, Barbara -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: DISPLAY TCPIP,,NETSTAT help
> D TCPIP,TCPDFLT,N,CO,CLI=VPS*,FORM=LONG,MAX=* > 3 OF 9 RECORDS DISPLAYED > Q). If my global is 100, and my max is "*" (or some large number), why does > SOCK show 7 of 7, and CO only shows 3 of 9? I'd like to see all 9 of 9. I > don't think I need "FORM=" to be able to use "MAX=". To see 9 of 9 you need to issue the command D TCPIP,TCPDFLT,N,ALLCONN,CLI=VPS*,FORM=LONG,MAX=* The way I understand it is that the 'missing' connections will 'soon' go away, anyway and have a status like TIMEWT. (But I don't really understand IP.) I have been told that it is described somewhere. Barbara Nitz -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: z/OS IPL Issue
> IBM is able to read them. They might request the SA dump if you report > the problem and you will be glad you took it. I disagree. Sort of. The customer had asked IBM "Why did the restart of SMSPDSE1 not complete?" and the customer had given IBM an sadump. IBM support said that they cannot answer that question and that the customer should reproduce the problem and take another set of docs (not an sadump) that PDSE support is familiar with. The question "Why did SMSPDSE1 restart not complete?" was easily answerable from the sadump. The restart did not complete because there was unresolved contention on SYSIEFSD Q10. That ENQ was held in an authorized multitasking batch TSO address space that uses PDSE data sets. It had been held since *before* the SMSPDSE1 address space was terminated, and the customer had been fighting PDSE latch contention with the freelatch command for quite a while before Q10 got irrevocably held. So, yes, if you know what you're doing, a properly handled sadump will answer almost any question. Heavy emphasis on the 'if you know what you're doing'. I just don't think that many IBM support groups have a grasp on how to read a dump, much less an sadump. And many of those who have the mechanics of IPCS down have no knowledge how to interpret the clues that the sa/dump gives them. There are a few dinosaurs left who do, but when Ed Jaffe and I are among the 'youngsters' in that group - it speaks volumes for the platform. Barbara -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: Command output not displayed on Console
> The command output shows INTERNAL. Are you issuing the command from the > internal reader or thru some strange product? Is there an automation product > that is suppressing the original command and re-issuing it with the options? The command was issued from STC09092, which he said is Control-O. My guess is that the command was issued using some Control-O interface (that I don't know), which in turn is using MGCRE *without* an explicitly defined EMCS console, instead using console id 0, which translates to INTERNAL. And Control-O apparently doesn't retrieve the responses to the command and display them properly. The question is whether the operators can see the command response when they use a true console and issue the command directly. Barbara -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: z/OS IPL Issue
> This kind of error rarely gets mentioned on IBM-MAIN because SAD of > early IPL failure is super fast and IPCS takes only seconds to init such > a dump. The problems are usually fixed immediately and folks move on to > more challenging issues. I agree completely. But if there isn't someone forceful around that insists on taking the sadump, in many installations the knowledge how to read a dump, much less an sadump is a lost art. So sadumps don't get taken because there's nobody there who knows how to read it. Barbara -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: z/OS IPL Issue
> I recommend that you do not define any devices as NIP consoles > in your IODF. Then Operating System Messages will always be used as > the NIP console. The problem is convincing the powers-that-be of that. I have the scars from attempting it. Here inertia hits - 'we have always used a "real console" for NIP, we want to keep it', and no amount of arguments helped me. And I wasn't even the one who couldn't see the actual error message since I wasn't around that night. All those refusing to rethink their definitions later were the ones that flailed around for hours (yes, literally hours) until hitting a solution *without* knowing what the real problem was. And even on a fast-rolling NIP console - an LPALST syntax error would have been readable before the wait state. Ask me how I know. Barbara ps: Thanks Jim for making me revisit your post about 'colourful language'! I got a good chuckle out of that. -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: Possible To Remove Both CFRM Couple datasets?
> We have (by oversight) an active base sysplex with CFRM couple datasets > defined in COUPLExx. As it is a base sysplex there is obviously no active > CFRM policy. Are you sure you don't have an active CFRM policy? The policy that was last active on those CFRM CDSs would have been activated, no matter if the CFs defined to it are there or not. You could try setxcf stop,policy,type=cfrm (caution - I have never tried it before). > Is there any way to remove the CFRM dataset definitions from the running > sysplex without a sysplex-wide IPL to re-drive COUPLExx once the definitions > have been removed? Not that I know. Even worse, even if you do a sysplex-wide IPL and have the CFRM CDSs physically deleted (so they're not there anymore) and removed from COUPLExx a D XCF will still show them as defined/active/available (*there*) because that information is kept in the sysplex CDS. At least this was the way it worked about 10 years ago when I tried to get rid of the unused CDSs in the D XCF display. I haven't deliberately retested it. Back then, I could only get rid of those displays by doing a sysplex wide true cold start with a freshly formatted sysplex CDSs. At about the same time I lost any awe I may have had at the prospect of a sysplex wide cold start. Barbara -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: Unit-count default with UNIT=group-name?
> If I code SPACE=(CYL,(2,2,0)),UNIT=SYSDA is that implicitly a request for > only one volume, or might z/OS give me one cylinder each on two volumes or a > secondary allocation on a second device (assuming, of course, that SYSDA > were multiple volumes, as it usually would be)? I just tried to get a data set to be multi-volume. The only way I was able to do that is when the first data set is defined with secondary extents *and* those extents were exhausted on the first volume. So I'd say you would get 2+2*16 extents on the first volume, assuming that it is defined *somewhere* that either the unit count is larger than one or a data class is silently assigned that allows multi-volume. Or your application forces FEOV and then (with application support) start writing on the next candidate volume. As far as I know, the only way to get a cylinder (or rather 2 from your example) on each volume before the extents are exhausted is when this is a striped data set. And I don't know (and haven't looked into it) how to define a striped data set. But I know that in my previous shop the system dumps were written striped, and it was something defined somewhere in SMS. Barbara -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
Re: SYSIN Symbol Resolution vs. LRECL
> No, it doesn't. The 255 byte limit for ISPF edit was removed with ISPF 4.1 > in 1994. > The limit was changed then to 32760 for fixed length records and 32756 for > variable length records. Now I am really confused. Wasn't it you who quoted from the ISPF help panels that the maximum record length for RECFM=V data sets is 32752? I had wondered about that and verified it: ISPF really stops at 32752. I even refilled the data set with IEBDG, and that (while I didn't count the characters, the pattern in my downloaded copy ended at the exact same letter) also ended at 32752. Strangely enough, the data set *is* defined as lrecl=32756, recfm=v, and IEBDG didn't have a problem with that. So why the loss of 4 more data bytes? I went back to the books (mostly 'Using Data Sets') trying to find where the maximum LRECL for a RECFM=V data set is mentioned, but all I found were descriptions of the RDW and BDW. Eventually I concluded that the 8-byte difference between recfm=V and recfm=F in terms of actual data is due to at least one BDW and at least one RDW, each 4 bytes long. And the RDW is counted in the lrecl. Is that right? Barbara -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
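If that reading is right, the arithmetic works out exactly to the numbers observed in the post (a plain arithmetic sketch, nothing z/OS-specific; the 4-byte BDW/RDW sizes are the standard DFSMS descriptor-word lengths):

```python
MAX_BLKSIZE = 32760      # maximum block size on z/OS DASD
BDW = 4                  # block descriptor word, once per block
RDW = 4                  # record descriptor word, once per record

# For RECFM=V the block must hold a BDW, and the LRECL includes the RDW:
max_lrecl_v = MAX_BLKSIZE - BDW        # 32756, the defined LRECL
max_user_data_v = max_lrecl_v - RDW    # 32752, where ISPF and IEBDG stopped

# For RECFM=F there are no descriptor words, hence the full 32760.
max_user_data_f = MAX_BLKSIZE

print(max_lrecl_v, max_user_data_v)    # → 32756 32752
```

So the "missing" 4 bytes are the RDW inside the LRECL, and the 8-byte gap to RECFM=F is one BDW plus one RDW.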
Re: IC988I messages issued by CA-SPOOL
> Does anyone have any suggestions on how to suppress these messages?

MPFEXIT1 START
MPFEXIT1 AMODE 31
MPFEXIT1 RMODE ANY
         USING MPFEXIT1,R12
         BAKR  R14,0
         LR    R12,R15
         B     WORK
         DC    C'MPFEXIT1-&SYSDATE-&SYSTIME'
WORK     L     R5,0(R1)             ADDRESS OF PARAMETER LIST
         USING CTXT,R5
         OI    CTXTRFB2,CTXTRNHC    force no hardcopy
         OI    CTXTERF3,CTXTESJL    suppress from joblog
         DROP  R5
         PR
         LTORG
         IEZVX100                   Installation Exit Parm List
R1       EQU   1
R5       EQU   5
R12      EQU   12
R14      EQU   14
R15      EQU   15
         END

Add a check for specific jobnames, otherwise all IC988I messages will get suppressed once you activate this MPF exit in MPFLSTxx.

Regards, Barbara
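To activate the exit, an MPFLSTxx entry along these lines would be needed (message ID taken from the subject; treat the exact operands as a sketch to verify against the Initialization and Tuning Reference):

```text
IC988I,SUP(NO),USEREXIT(MPFEXIT1)
```

SUP(NO) here because the exit itself sets the no-hardcopy and joblog-suppression bits rather than suppressing display outright.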
Re: Distance considerations fo a Parallel Sysplex with DB2 Data Sharing
> OTOH I would not entertain running a parallel sysplex over DWDM.

A former employer did. Distance about 22km, due to IBM's requirement, for parallel sysplex pricing, that at least one machine be in a different box. That distant lpar was (obviously) sharing ISGLOCK, and response times from ISGLOCK to the distant lpar were in the 200-400ms range, as opposed to (I believe) 20-40ms for response times on the same physical box. In any case, the difference was so huge that I was never able to show response times from all lpars in the sysplex for one structure in the same graphic and have the graphic be meaningful. And the GRS structure has the fastest response times in a parallel sysplex. If it is slow, other structures will be even slower.

> Meanwhile the consequences of disrupted sysplex communication are too
> grave, including wait stating the entire sysplex. Too great a price to pay
> for whatever advantage you're reaching for.

True. There was at least one of those, including a split sysplex on re-IPL of the distant lpar. Which resulted in taking down the (so far) unaffected (local) lpars. Which eventually resulted in an unscheduled sysplex-wide IPL. Didn't we just talk in another thread about human errors being far more of a risk than hardware failures? Combine both hardware failure and sysprog mistake, and you have an even greater mess. (It really upset someone when I pointed out his mistake from the remnants of the hardcopy log available, instead of opening an ETR to IBM.)

Barbara
Re: Issue with JES SPOOL
> But in no case did we have to cold start.

I second that. We have a 100% JES2 shortage of one kind or another about every 3 months. So far I have managed to get by without a cold start. The problem as I see it is reducing the shortage to the point that a logon becomes possible again. A number of display and purge commands have been named already, but in my experience the steps are:

1. Determine who holds a lot of the resource. In many cases it is an active address space, and you first want to get it to stop writing to spool, or it will write its buffers to the spool you just cleared, throwing you back into a 100% shortage. It may be impossible to cancel such an address space; I know that I have used the force command to stop one from further filling up spool.

2. Once you're sure no active address spaces are still writing like crazy, determine who holds a lot of spool. Then use the commands named before from the console, without resorting to SDSF.

3. Once you manage to get spool usage down to the point that you can get into TSO/SDSF again, you have better chances of saving output and clearing it.

4. Finally, prepare for the next shortage:
a) Set alerts for the HASP050 message, designed to notify you early enough that you can still log on.
b) Practise purging output via JES commands without the convenient use of SDSF. I consider that the biggest challenge, but I never was an operator.
c) Prepare a spare spool data set to add if necessary. My advice is to keep such a volume in reserve, but never to add it automatically. Be aware that there are cases when you cannot add a spool volume because adding it would require the very resource you're short of (I forgot which that was - JOEs?). Once you're out of your spool shortage after adding the spare, don't forget to remove the spare, so you have it again for the next shortage.
d) Set up a procedure for spool offload and practise unloading your spool. Presumably also practise reloading it.
Barbara
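From memory (so please verify against the JES2 commands book before the next emergency), the console command skeleton for the steps above looks something like this; the job range and volser are placeholders you would have to fill in:

```text
$D SPOOL                display spool volumes and utilization
$PO J1-9999             purge output for a range of jobs
$S SPOOL(volser)        add the prepared spare spool volume
$P SPOOL(volser)        drain the spare again after the shortage
```

The selection operands on $PO (queues, age, output disposition) are exactly the part worth practising while you are not in a shortage.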
Re: Health Check(s) remediation
> I have been asked to take on the task of remediating the health checks for
> sysplex. I have been given a list of some of the health checks that come up
> as exceptions on designated LPARs. Any suggestions?
> IXGLOGR_STRUCTUREFULL
> RRS_ARCHIVECFSTRUCTURE
> GDPS_CHECK_DASDMIH
> XCF_CF_MEMORY_UTILIZATION
> XCF_CF_PROCESSORS
> XCF_SFM_CFSTRHANGTIME
> XCF_SFM_SSUMLIMIT
> XCF_SIG_PATH_SEPARATION
> XCF_SYSSTATDET_PARTITIONING
> XCF_CF_STR_NONVOLATILE
> XCF_CF_STR_PREFLIST

Did you check the archives? Did you check 'Setting up a sysplex'?

Barbara Nitz
Re: Allocation test
> IEC999I IGC0002G,S000TBE,KAT30
> IRX0250E System abend code 0C4, reason code 0016.
> IRX0255E Abend in host command SELECT or address environment routine ISPEXEC.
> ***

FWIW. Not sure if that helps, but I guess you should look into defining and assigning a DATACLAS with valid space parms whenever a DSORG=PO data set is allocated with the invalid space parms I used. Or feel free to open a PMR with IBM, citing the doc and the obvious non-adherence to it. Sorry that I cannot help out more.

Barbara
Re: Allocation test
Wow, this has snowballed. To summarize: I am allocating a DSORG=PO data set with an explicit zero space value for directory blocks. This data set *is* SMS-managed. The ACS routines don't interfere with any DCB attributes, and there is no ALLOCxx anywhere within the parmlib concatenation. (And I came across this when I set up testcases for our own product.) The data set allocated will not show a consistent value of zero for the directory information. Instead, it takes whatever is written on the DASD where the data set is allocated as directory information. VSM UseZosV1R9Rules(YES) does not influence the outcome.

1. I used a valid allocation (DSORG=PO,SPACE=(TRK,(1,0,66000))). The data set ended up on volume SMS003.
2. Delete the data set, rerun the job using DSORG=PO,SPACE=(TRK,(1,0,0)). ISPF shows the data set on SMS003 as having 66000 directory blocks. (0 was specified, clearly an error.)
3. Delete the data set, rerun the job using DSORG=PO,SPACE=(TRK,(1,0,0)). ISPF shows the data set on SMS002, and I get "I/O error" in ISPF for the "i" line command.

I believe gil's hypothesis is correct: Since the specified number of directory blocks is 0, allocation does not format a directory and does not write the terminating EOF. Later, allocation sees that DSORG=PO and does not write an EOF (which would be real, not pseudo) at the beginning, lest it destroy the directory had it created one. Subsequent behavior depends on the residual content of the first allocated track.

Doug found this IBM statement in SC26-7407-07 DFSMS Implementing System-Managed Storage: "The system provides data integrity for newly allocated data sets that have not been written to. For these data sets, whether SMS managed or non-SMS managed, DFSMSdfp writes a physical end-of-file character at the beginning of the data set when space for the data set is initially allocated. This makes it unnecessary to OPEN data sets for the sole purpose of writing an EOF and to avoid reading old data if the data set is read immediately after being allocated." This is clearly not true in my test case above.

I agree with Gerhard on "In any case, IBM should be persuaded to either produce a JCL error or modify the directory build to write an EOF." I tend to the second solution: write the EOF in any case. (In setting up my testcases I have used IEBDG extensively, and quite a few allocations that succeeded nevertheless produced unusable data sets, so I don't think a JCL error is the way to go. But correct setting of the EOF marker should be required and should be APARable, since the manual clearly says that it should have happened.)

Doug also found this in SC26-7410-11 Using Data Sets: "To allocate a PDS, specify PDS in the DSNTYPE parameter and the number of directory blocks in the SPACE parameter, in either the JCL or the data class. You must specify the number of the directory blocks, or the allocation fails."

I reran the DSORG=PO,SPACE=(TRK,(1,0,0)) job explicitly specifying DSNTYPE=PDS (and with VSM UseZosV1R9Rules back to NO). Note that directory blocks (zero of them) are specified! This time the data set ended up on SMS001 but still shows 66000 directory entries. Then I deleted and reran; the data set ended up on SMS004 and I got an I/O error. Presumably that means we don't do erase-on-scratch.

If anyone wants to report this to IBM as a bug, feel free. I would do it, but our contract doesn't allow me to report bugs.

Barbara
Re: Allocation test
> IIRC the (pseudo) eof is only written for SMS managed PS datasets, so a PO
> dataset could well be allocated over old data which will then be readable.
> Can you force the problem PO dataset to another place by making sure the space
> for the 66000 dataset is still in use when the problem PO dataset is
> allocated.

We're dealing with SMS-managed data sets here. I used this job

// EXEC PGM=IEFBR14
//DD1 DD DISP=(,CATLG),DSN=ALLO020,
//     SPACE=(TRK,(1,0,66000)),RECFM=F,LRECL=20,DSORG=PO
//DD2 DD DISP=(,CATLG),DSN=ALLO021,
//     SPACE=(TRK,(1,0,0)),RECFM=F,LRECL=20,DSORG=PO

and got this:

Data Set Name . . . : ALLO021

General Data                            Current Allocation
 Management class . . : class            Allocated tracks  . : 1
 Storage class  . . . : class            Allocated extents . : 1
 Volume serial  . . . : SMS002           Maximum dir. blocks : 0 *
 Device type  . . . . : 3390
 Data class . . . . . : **None**
 Organization . . . . : PO              Current Utilization
 Record format  . . . : F                Used tracks . . . . : 0
 Record length  . . . : 20               Used extents  . . . : 0
 Block size . . . . . : 20               Used dir. blocks  . : 0 *
 1st extent tracks  . : 1                Number of members . : 0 *
 Secondary tracks . . : 0
 Data set name type . : PDS             Dates
                                         Creation date . . . : 2013/09/19
                                         Referenced date . . : ***None***
                                         Expiration date . . : ***None***

* Information is unavailable.

Note the 'Information is unavailable'. So this looks like the invalid data is gotten from DASD, not from some storage area in the initiator. After deleting ALLO020 and rerunning the job with only ALLO021, I again get the I/O error in ISPF when doing an "i" (and a severe error on edit).

Barbara
Re: Allocation test
> In my case it certainly is NOT by ACS-routines! I can only think of reuse of
> space with a (part of) member index.
>
> And this must in a production environment imply a security leak ?!

Having spent quite a bit of time recently with the different ways a DCB is populated: this is an invalid allocation. There are warnings liberally sprinkled through the books that it is the responsibility of the program opening the data set to make sure that all relevant information is consistent and available. As far as I know (but I may be wrong) I cannot change the directory space allocation after the data set was allocated (at least not without a lot more than average knowledge of internal data structures). That's why I would have expected either an allocation error or at least some sort of consistent setting for the space information.

I did specify DSORG=PO (which was honoured), but the allocation did not honour the 0 directory blocks that I had explicitly specified. Instead a number (66000) that wasn't specified in this job is used (the last successful space setting in this initiator, a blue-sky, random number), and that (in my opinion) is a bug.

I don't dare comment on this being a security leak. Wasn't there something in allocation that led to an explicit EOF being written in the case of a sequential data set, to prevent reading old data left on the tracks after such an allocation?

Barbara
Re: Allocation test
All,

> Be aware of the trap that Elardus hinted at: your allocation can be
> modified by ACS routines, possibly differently on different LPARs,
> caused by different variables passed to it.

Thanks for testing. On our system, I am the master of everything (SMS, allocation, you name it). There isn't anything defined in SMS to take over. All I am doing in SMS is send everything to an SMS-managed volume (with the exception of a few things that should be non-SMS). No data classes defined. No ALLOCxx anywhere in the parmlib concatenation, so we're taking all the defaults.

It is interesting that some of the newly allocated data sets (nothing but allocation via IEFBR14) already contain one or more members. In that case I would also suspect that something is being done by "helpful ACS routines". Oh, and to trick SMS, you could change the DSORG to POU (unmovable). On my system such a data set ends up on a storage-mounted volume, which is non-SMS, despite the ACS routine sending it to an SMS volume. As little interference from ACS as possible.

Today there wasn't any activity (other than my testing) on our z/OS 1.13 system. That's why I know that the last successful allocation for a PO data set via JCL had a directory size of 66000 (I ran that allocation, too!). And the next allocation took that blue-sky number and invalidly re-used it. I figure that all of these batch jobs ran in the same initiator, too.

I can request 66000 directory blocks with a one-track allocation, but 66000 directory blocks won't actually fit into only one track. Hence the I/O error in ISPF after successful allocation. That would also explain why some of you get the I/O error (like I did with the invalidly big directory size). Since I already knew that the allocation was invalid, I didn't even attempt to edit the newly allocated data set.

> Maybe a minor correction could be asked to IBM.
Yes, but I am not in a position to report any bug to IBM, no matter how valid the bug, since the RDT licence does not include 'bug fixing' support, much less 'bug reporting' support. And in case you're wondering: I stumbled across this trying to set up test cases for our own product. I did not intend to test IBM allocation! :-(

Barbara
Re: Allocation test
Thanks, Thomas. That confirms that there is a bug somewhere in allocation. The same job should result in the same allocation on two different lpars, I'd say, and it should not use a random number for the directory information. At the very least I would have expected the directory information to always be zero (making the data set unusable).

Barbara
Allocation test
Can someone please run this IEFBR14 job and tell me what the space allocation is (number of directory blocks) on your system?

// EXEC PGM=IEFBR14
//DD1 DD DISP=(,CATLG),DSN=TEST,
//     SPACE=(TRK,(1,0)),RECFM=F,LRECL=20,DSORG=PO

Note that I deliberately request DSORG=PO but do not provide a directory space number. In my case the job ends with rc=0, the data set is allocated, but ISPF gets an I/O error when I use an "i" line command in front of it. The data set is allocated as a PDS.

When I change the space allocation to (TRK,(1,0,0)), the job still ends with rc=0, the data set is still a PDS, but now it is allocated with 66000 "maximum directory blocks" displayed using the ISPF line command "i". In case you're wondering, the last PO data set allocated successfully using a batch job had 66000 directory blocks, and in my opinion that number is (invalidly) re-used for this batch job. Certainly the allocation behaviour is not consistent.

Can anyone confirm the same behaviour, or is this just another quirk on our system?

Thanks in advance, Barbara
Re: COBOL 5.1 Share presentations
Thanks for the direct link. Yes, I have now downloaded the presentation.

> https://share.confex.com/share/121/webprogram/Session13662.html

Barbara
COBOL 5.1 Share presentations
> >Tom,
> >
> >Could you share the SHARE presentations you have given on COBOL V5?
>
> I just sent them over, they should be live soon at:
> http://www-01.ibm.com/support/docview.wss?uid=swg21634215

'Soon' meaning that more than a week later these presentations are still not there.

Barbara
Re: TSO Delete in IKJTSOxx
> Barbara, keep in mind that the SAMPLIB IKJTSO reflects a "vanilla" system.
> Any Program Products may instruct you to update IKJTSO.

I know. But an ADCD system *is* a vanilla system. Supposedly. There certainly isn't any non-IBM product running on ours (other than our own, which does not require anything in AUTHCMD).

> http://www-01.ibm.com/support/docview.wss?uid=isg3S1000170
> II09867

That explains it. I have an STGADMIN.** profile with myself in the ALTER access list. So I would not see any authorization failures. Thanks for pointing it out.

Barbara
Re: TSO Delete in IKJTSOxx
> >I've run into this in the past when deleting GDG bases from ISPF 3.4. I get
> >an authorization failed message, and putting DELETE in IKJTSO00 and a
> >PARMLIB UPDATE(00) fixes it.
>
> Interesting. This is new for me or I forgot about it. :-)

I found this interesting, too, so I went and tried it out. I discovered first that the active IKJTSO member is not compliant with the one in SAMPLIB (typical for ADCD!), and that we had DELETE in it, too. I removed DELETE from the AUTHCMD section and still didn't get any error messages, RACF or otherwise, when I deleted an empty GDG base.

> Could you be kind to say what that message was? Was it a RACF message on the
> profile covering the GDG, Catalog or was it about
> STGADMIN.IGG.DELGDG.RECOVERY? Or something else?

Yes, that would be interesting, since I gave myself ALTER authority to just about everything (I am supposed to be both RACF and space admin).

Barbara Nitz

PS: Now I am checking what we actually have in IKJTSO as opposed to what we should have in there according to SYS1.SAMPLIB.
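For reference, the moving parts of that experiment are the AUTHCMD section of IKJTSOxx and the dynamic refresh. A sketch (excerpt only, with continuation; the member suffix 00 is an example):

```text
AUTHCMD NAMES(    /* authorized commands, excerpt */    +
        DELETE    DEL)
```

followed from a TSO session by PARMLIB UPDATE(00) to activate the change without an IPL, and PARMLIB LIST to verify what is actually in effect.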
Re: TSO Delete in IKJTSOxx
> >I am curious why sometimes I see DEL/DELETE as an authorized command in
> >IKJTSOxx and sometimes not.

The current (1.13) SYS1.SAMPLIB(IKJTSO00) does not contain del or delete in the AUTHCMD section (anymore). If it is still found in a productive IKJTSOxx member, my guess is that that is 'hysterically grown' and nobody has ever removed it. After a while, nobody dares to remove it because 'it might break' something.

Barbara Nitz
Re: PDS/E, Shared Dasd, and COBOL V5
> I had begun to think that my experience with PDSEs was somehow
> atypical, too lucky, because I had not encountered the grievous
> problems that featured in others' war stories.
>
> I therefore spent a long afternoon trying to reproduce and clear some
> of these problems. My experience was much like Mark's. The problems
> reported here did---some of them anyway---occur; but they were readily
> cleared away.

Use a (batch) TSO address space with a truly multitasking application. Have that application call an authorized command or program (iebcopy will suffice) on at least two processors/tcbs at the same time. Wait for the deadlock inside that address space on a PDSE latch that EVERY PDSE access in the system needs. Then try to clean up the resulting mess. No sysplex involved, single system only.

Barbara
Re: PDS/E, Shared Dasd, and COBOL V5
> >Thanks! BTW, with "huge", are we talking about >> 1 members ?
>
> Generally, yes - because of the way things are split between different major
> applications. I looked and saw one loadlib with about 20,000 members,
> but most are under 5K.

Are we also talking about PDSEs with LRECL=V in some flavour?

Barbara
Re: z/OS 2.1 and tools like COBOL 5.1, Fault Analyzer, Debug Tool, etc
Tom,

> >Very late to this, so sorry if my concerns have been answered earlier.
> >What about shops with a ring of monoplexes? The sysplex scope is each
> >individual monoplex - but the sharing boundary is the larger GRSplex.
> >Latch contention - particularly PDSE latches - is a PITA.
>
> It also says not to share PDSEs outside of the GRSplex, but this seems like it
> would work for you, since the GRSplex is your sharing boundary. It sounds
> like you can do the kind of sharing you need to with PDSEs - let me know!

Shane is talking about a ring of monoplexes that are working together in a GRSplex. As he said, the "sharing boundary" is each individual monoplex. Which means that NONE of the PDSEs can be shared safely outside of the individual monoplex, no matter how big the GRSplex is. IGDSMS applies to the monoplex boundary, too, not to GRSplex boundaries. PDSE just *assumes* that systems are always in a construct where GRSplex = sysplex, and in a ring of monoplexes that just isn't the case! This assumption has to do with the nature of PDSE communication inside a sysplex: PDSE uses XCF communication, and that by design only works inside the sysplex, aka the monoplex.

Barbara Nitz
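The scope assumption shows up directly in the IGDSMSxx option that governs this. A sketch (all other IGDSMSxx parameters omitted):

```text
PDSESHARING(EXTENDED)
```

EXTENDED sharing relies on XCF signalling, so whatever serialization it buys ends at the sysplex (here: monoplex) boundary, no matter how far the GRSplex reaches; NORMAL sharing is no safer across that boundary.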
Re: MVS ROUTE command is a bad influence on DB2 ERLY code
> The command char is registered with the subsystem definition - I suspect (but
> don't know) that when you look at the OPDATA between the 2 systems (run the
> command on each) you will see a difference.
>
> Why Route gets involved: each system is processing the command according to
> what is defined on the system that the command executes on. More than that I
> don't know.

The way I remember it: For each command typed in, first of all an IEEsomething module gets control whose function it is to determine who should be notified to actually attach the command processor for that specific command. That IEEsomething would know to attach the VARY ucb command processor in the master address space, for instance, or the control commands in the console address space, or to wake up the address space that is addressed by a modify command. It has a list of known (system) commands. If the first thing in the typed-in string is recognized as a command prefix, IEEsomething would know to address the address space that established the command prefix.

Now, a ROUTE command is one of many system commands known to IEEsomething. IIRC, it gets attached in the console address space, and all it does is send the command string (minus the ROUTE systemname) over to the system addressed in that command string. On the target system there is also an IEEsomething running. It gets the command and does the same deliberation (whom to send it to), just on the other system.

What I am fuzzy about is what happens when a command prefix is established (like in the DB2 case) but the actual DB2 system to execute it is down. I thought there would be an error message that the command cannot get executed. On the other hand, I was told that ERLY code can only be established via an IPL, NOT afterwards. The DB2 admins in my last installation were adamant about that. Has that changed?
Barbara Nitz
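As a concrete illustration of the two-hop processing described above (the system name is made up):

```text
RO SYS2,D T
```

The local console processing strips 'RO SYS2,' and ships 'D T' to SYS2, where that system's own IEEsomething decides whom to attach for it, including recognizing any command prefix (such as a DB2 prefix) registered over there.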
Re: Installing HSM or rather: DFHSM woes
> I have to ask. What's the harm of defining a couple 1 cylinder (or 2 track)
> OCDS, BCDS that will remain empty and unused?

'Business reasons'. I just didn't understand why something that looks to have correct syntax is getting a syntax error. Turns out that it *should* be correct. I will just shrug it off. But I will do my best to help Glenn if he wants to follow up on this.

Best regards, Barbara
Re: Installing HSM or rather: DFHSM woes
Glenn,

> Have you tried an HSM Modify command to issue it? The syntax seems correct,
> so it may be some typographical error in your parmlib.

13193 06:03:58.69 me       0290 F DFHSM,SETSYS TAPEMIGRATION(NONE)
13193 06:03:58.71 STC02475 0090 ARC0103I INVALID SETSYS PARAMETER TAPEMIGRATION
13193 06:03:58.71 STC02475 0080 ARC0100I SETSYS COMMAND COMPLETED

My WAG is that it works for you because you run with an OCDS and a BCDS. I don't. :-) As far as I can tell, the code just *assumes* that everybody has those data sets defined (opening them is done way before the parmlib content is checked), when it isn't *mandatory* to have them.

All I want to do with this parm is make it clear that I don't want to use TAPEMIGRATION (because we don't have tapes). If there is some other way to achieve that, I am perfectly happy to go that other way (short of defining an OCDS/BCDS). The problem (as I saw it) is that there are a lot of parms that are kind of interwoven, and having looked at all the parms for about two weeks, I still wasn't sure which the ultimate *I don't want this* parm is for the things I don't want to run, so I had to basically specify them all the way I want them (as far as that is possible).

Barbara
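For the record, the ARCCMDxx fragment in question amounts to something like this (only the tape-related lines; whether NOBACKUP is also appropriate depends on what else you run):

```text
SETSYS TAPEMIGRATION(NONE)
SETSYS NOBACKUP
```

The first line is exactly what comes back with ARC0103I on my system, apparently because no OCDS/BCDS exist when the parmlib content is parsed.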