Shai Hess and his product (was CA ESD files Options)
Not that I had any real interest, but your ignorance of the mainframe architecture would certainly eliminate any intention I could ever have of running your product. If you knew more about the design and what protects the mainframe, you would know why most (if not all) hacks wouldn't work in a properly configured system. Your lack of this knowledge scares me away from your product so fast I need the speed of the internet to get away.

Also, I think this has been said already, but could you please stop discussing your product. I really don't want all the vendors to start using this forum to showcase their wares. You as a software developer are attempting to be a vendor, so play by the same rules most of them play by (and yes, I can filter you out, but I don't want to filter out those that are replying to you, and without doing that I still have your noise coming into my system)!

I have nothing (much, anyway) against Windows, and I don't want to start the 'open system' definition war again, but Windows is not open unless using generic HBAs and related protocols and accessing FBA disk are your criteria. I am hard pressed to think of an environment the mainframe CAN'T play in (many aren't the best place, but that's different than CAN'T). Modern z/OS can do web services, connect to the internet, host websites, run C, Java, COBOL, Assembler, PL/I, Perl (and on and on), and it has an integrated UNIX component. What exactly can Windows do besides run Windows programs that z/OS CAN'T, again? Now if you use 'Open Source' as your criteria, z/OS is more open than Windows: I can't get to much source, but I can at least get to some of the source for exits and such. What source do you have for the Windows operating system?

One word of advice: it isn't a good practice to poke your customers in the eye, and it is even worse to show your ignorance of their environment when trying to get them to purchase your product.
Regards, Greg

shai hess [EMAIL PROTECTED] wrote:

> There is a BIG difference between a PC and an x86 server class machine. You seem to be unaware of the knowledge and attitude of your audience. Your recent comments have made me reconsider the minor interest I had in your product. I don't think I would touch it now with a ten foot pole.

Scot, you are right, PC is not a good name; better to say that my product uses MS .NET to implement the open system side. I find it easy to use the PC nickname to mean a cheap open host. But today .NET is supported in Windows and in Linux (I think also in UNIX). As I said, even a MF with Linux may be able to run my software. About your mention that you do not like my software, that is your right. Thanks, Shai

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
2105-800 to EMC DMX4 conversion
We are planning to move our zSeries disk from our 2105-800 to an EMC DMX4 disk array later this year. Currently the 2105 is mirrored, and that is probably what we are going to do on the DMX. I believe the 2105 also stripes the zSeries logical volumes across multiple little SCSI disks. The 2105 handled all of the logical volume placement within the array; that appears to be all user-configurable in the DMX, so I am curious about the various opinions on the hyper size in the DMX. The array will be used for distributed systems and zSeries, and the distributed work is already using a DMX1000 that will migrate, so I think they have some set hyper size values in mind for that work (but I think they are using about 5GB, which seems way too large for zSeries, or am I wrong?). Also, does this depend on 3390 emulated volume sizes? We are currently mostly mod-3 and mod-9 due to a DR contract that will eventually go away, so we may explore some small volumes for spool, CDS files, and other reserve-sensitive files, and some larger volumes for the larger databases and such. Any opinions you might have on this would also be welcome. You can respond on or off list, whichever you would prefer (if responding off list, I am not sure which email address my post will come from, so please use [EMAIL PROTECTED] for the reply address). I realize I asked for opinions, and although my question is regarding hyper size, I would be interested in any opinions/advice I can get regarding migrating from a 2105-800 to a DMX4. Thank you for your time and assistance, Greg
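To put a 5GB hyper size in perspective against emulated 3390 volume sizes, a quick back-of-the-envelope calculation helps. This is a Python sketch, not from the post; the geometry figures (56,664 bytes per track, 15 tracks per cylinder) and the mod-3/mod-9 cylinder counts (3,339 and 10,017) are the standard 3390 numbers:

```python
# Rough device capacity of emulated 3390 volumes, to compare against
# a proposed hyper size. Standard 3390 geometry: 56,664 bytes per
# track, 15 tracks per cylinder.
BYTES_PER_TRACK = 56_664
TRACKS_PER_CYL = 15

def capacity_gb(cylinders: int) -> float:
    """Device capacity in GB of a 3390 volume with the given cylinder count."""
    return cylinders * TRACKS_PER_CYL * BYTES_PER_TRACK / 10**9

mod3 = capacity_gb(3_339)    # 3390 mod-3: roughly 2.8 GB
mod9 = capacity_gb(10_017)   # 3390 mod-9: roughly 8.5 GB
```

So a 5GB hyper would be larger than an entire mod-3 volume but smaller than a mod-9, which may be why a default carried over from the distributed side doesn't map cleanly onto zSeries volume sizes.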
Re: RECFM=VB Binary upload problem
If you want to transfer a binary file from z/OS to z/OS with non-z/OS intermediate nodes, then I'd strongly suggest that you use QUOTE SITE STRU R to use record structure. This encodes the file so that record boundaries are kept. I've done this in the past to transfer files from z/OS to Windows and back to z/OS. The binary file is not readily usable on the intermediate nodes due to the encoding. -- I tried pulling the file down and pushing the file back up using a command window from M$ Windows, and it worked as indicated (the file matched once re-uploaded). I tried various incarnations from an MVS batch job and have had no luck at all. Is there a way to do this from an MVS batch job or TSO FTP (where I guess MVS is the client instead of the server)? If not, I guess I will have to continue using XMIT and/or TERSE where applicable, but for smaller files it would be nice to eliminate the extra step and be able to submit batch jobs from our MVS scheduling package. Thank you, Greg
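The underlying problem is that a plain binary transfer throws away the record boundaries of a RECFM=VB data set. One common workaround is to carry the record descriptor words (RDWs) along in the byte stream and reconstruct the records afterwards. Here is a minimal workstation-side sketch (Python, illustrative only, not the z/OS FTP client), assuming the standard RDW layout: a 2-byte big-endian length that counts the 4-byte RDW itself, followed by 2 reserved bytes:

```python
import struct

def split_vb_stream(data: bytes) -> list:
    """Split an RDW-prefixed byte stream back into variable-length
    records. Each record starts with a 4-byte RDW: a 2-byte big-endian
    length (which includes the RDW itself) and 2 reserved bytes."""
    records = []
    pos = 0
    while pos < len(data):
        (rec_len,) = struct.unpack_from(">H", data, pos)
        if rec_len < 4 or pos + rec_len > len(data):
            raise ValueError(f"bad RDW at offset {pos}")
        records.append(data[pos + 4:pos + rec_len])
        pos += rec_len
    return records

def join_vb_records(records) -> bytes:
    """Inverse operation: re-prefix each record with its RDW."""
    return b"".join(struct.pack(">HH", len(r) + 4, 0) + r for r in records)
```

Round-tripping through these two functions preserves record boundaries exactly, which is the same guarantee the record-structure (STRU R) transfer provides at the protocol level.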
Re: SETPROG LNKLST--problem or procedural error?
On Thu, 31 May 2007 09:15:09 -0400, Peter Relson [EMAIL PROTECTED] wrote:

> Most 106-F abends are related to extents. Activating a new LNKLST set will result in that LNKLST set having all the current extents in its DEB, so no 106-F would occur when using that LNKLST set unless the data set was subsequently extended. I hope you did not compress an in-use LNKLST data set.

Okay, I know I am going to feel stupid after this, but since I must be suffering some form of memory overlay/corruption, I would like to understand this correctly. In (a somewhat old) post: http://bama.ua.edu/cgi-bin/wa?A2=ind0402&L=IBM-MAIN&P=R4072&X=790D902F51A97A500F-&Y=greg9%40myway.com&m=91723 it was stated: 'Unless you have ensured that no address space is using a LNKLST set that contains a specific data set, you must not rename or delete the data set.' I also thought I read on here somewhere that data sets in the IPL-time linklist should never be deleted. I have always thought of a move like this as basically a 'copy' followed by 'delete' followed by 'catalog'. So I would have expected everyone to indicate not to do what the original poster did, but many very sharp minds seem to be fine with it. So my questions (I know, there are a bunch, so if you don't have time to explain, I understand): Did something change since 2004 making this okay now? How is the move not doing a 'delete' of a data set in the IPL linklist, or is it that this is permissible? And if permissible, does this mean it is okay to delete and reallocate a library on the same volume using the procedure described? If not, why not? How is a move less of a concern than compressing an in-use LNKLST data set and then doing an LLA refresh? I have in the past compressed vendor program libraries (with all jobs/STCs for the product stopped), then done an LLA refresh, then restarted the jobs/STCs without any problems. Anyway, I am certainly not trying to contradict anyone, just trying to fix an apparent memory overlay/corruption in my head.
Thank you for any (educational) response you care to provide (okay, any response and I take the lumps), Greg
DFSORT from z/OS 1.4 on LPAR running z/OS 1.7
Can I run the version of DFSORT that is installed on my z/OS 1.4 system on my z/OS 1.7 system via a JOBLIB in the batch jobs? If so, which libraries would I need to add to the JOBLIB (both JCL-invoked sort and program-invoked sort are used in the application, if that matters), and are there any APF requirements? For those curious why: we have been seeing inaccurate results when testing the batch cycle for one of our applications on z/OS 1.7. We know that sort is producing different results on 1.7 than on our z/OS 1.4 system (related to records with duplicate sort keys). Both are using the default of EQUALS=VLBLKSET, so we understand why, but we don't know how that could be causing the issues we are seeing. On one test of a JCL sort (a relatively small sort), the 1.7 results only match 1.4 if we override with EQUALS on the 1.4 system. Anyway, we would like to change only SORT instead of all the ServerPac-installed software and test the batch part of the application to see if this is the entire cause of our issues, or if perhaps we also have a test environment setup problem (or something else). Thank you in advance for your assistance, Greg
Re: DFSORT from z/OS 1.4 on LPAR running z/OS 1.7
On Thu, 22 Mar 2007 16:29:12 -0700, Frank Yaeger [EMAIL PROTECTED] wrote:

> It's not clear to me what you're seeing here, but ... The main difference between DFSORT R14 and z/OS DFSORT V1R5 is that V1R5 can use Memory Object Sorting, a completely new path. Perhaps what you're seeing is caused by the difference in using that new path? Have you tried turning off Memory Object Sorting to check that:
>
>   //DFSPARM DD *
>     OPTION MOSIZE=0
>   /*
>
> Have you opened a PMR? If so, what's the number?

I have not opened a PMR, since I cannot argue that DFSORT is working incorrectly given your reply to a previous post about a week ago and the documentation for the EQUALS parm. I try not to abuse the PMR process and only open one when I think the fault is with the code or I don't think I have another choice. Right now I don't think SORT is working incorrectly, just incompatibly with our application as the vendor delivered it and we implemented it.

What we are seeing is that z/OS 1.7 sort is working differently than 1.4 with respect to records with duplicate sort fields (I must admit, based on too little testing, but I can only do what I can do in the hours that I have). As I noted before, with one small JCL SORT test (31360 records, 28224000 bytes) run multiple times, the only way I get the same results out of the two systems is if I specify OPTION EQUALS in a DFSPARM DD on the 1.4 system or if I use a JOBLIB on the 1.7 system to point to the 1.4 libraries. I have tried this several times and received consistent results (with the overrides the output matches; without them it doesn't). I even tried specifying EQUALS and NOEQUALS on the 1.7 system, as well as the default, and all output files from the 1.7 system match each other consistently (EQUALS/NOEQUALS has no effect on 1.7 in this scenario, I expect because it is such a small sort). I am asking if overriding sort on the 1.7 system via a JOBLIB pointing to the 1.4 system libraries should work.
I realize that I have tried this already, but trying something on one small job is a lot different than setting up an entire batch cycle over the weekend that will run for several hours. I would like to know if this should work for my test before I put in my time and that of the other departments necessary to get this set up. My other option would be to IPL back to 1.4 for the test, but that has its own issues and would reverse everything, not just SORT, and I really wanted to isolate one thing at a time as best I could.

For some background on how this started: we used to be able to load our UAT DB2 tables from PROD image copies using DSN1COPY, copy over selected files from PROD that PROD was going to read into the nightly cycle, and then run the UAT cycle the next day and compare the results. The compare would match on everything except a couple of files that had date/timestamp information in the records, and if that section of the record was excluded, then even those would match. This worked back to at least OS/390 2.10 and possibly 2.8. I set things up to run this process just before migrating z/OS 1.7 into production (thinking it was only a formality and all would be good with the world), and this time there were many mismatches that appeared to be SORT sequence issues (mostly matching delete/insert pairs). Although these SORT sequence issues may be explainable and of no consequence to the business (we are still researching that), we have also had a couple of records process incorrectly (out of hundreds of thousands). So far we have not been able to explain this. It may be that a sporadic problem has hit us in two out of two of these big tests that for some reason did not hit production using the exact same data/programs (which seems unlikely), or it may be that the difference in the SORT is causing issues and we really need to increase our SORT fields to get the records into a predictable order, or it may be something else altogether.
I am grasping at straws and trying to eliminate just SORT by pointing to the old SORT via JOBLIB for a test this weekend. The hope is that we can isolate the cause down to the SORT differences, which will tell us where to focus. If the run this weekend using the 1.4 libraries on the 1.7 system matches the prior night's production, we will restore everything back to the beginning and try again with the 1.7 libraries (DFSORT 1.5, I believe) and see if the issues come back. If it doesn't match, then we will probably need to examine the setup again. I fully realize we may have multiple problems, so I am trying to find the easiest way to eliminate one thing at a time, and the known difference at the moment is SORT (yes, everything in the ServerPac is potentially different, but I gotta start somewhere). Even if this isn't causing both issues, eliminating it may make finding the other easier. Sorry for the long reply, and I hope you can answer my questions regarding pointing to the old sort.
DFSORT 1.4-1.5 EQUALS=VLBLKSET default
Would DFSORT at release 1.4 (z/OS 1.4 and, I think, OS/390 2.10) using EQUALS=VLBLKSET produce the same results run after run if the input data were the same, but produce different results when run under DFSORT 1.5 (all jobs ran with EQUALS=N)? I know what EQUALS= does; it's just that the results between two parallel runs always matched, even back to OS/390 2.10, and now they do not, and the difference is in the sequence of duplicate records. If the answer to the above is YES, is there some way to get 1.5 to produce the same results as I used to get under 1.4? (I don't know if the duplicate records were in input order or not under 1.4, so I don't know that EQUALS=YES would get me what I want, just that the two executions would always match before.) Thank you for your time and assistance, Greg
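For readers less familiar with what EQUALS controls: it asks the sort to behave "stably", i.e. to keep records with equal keys in their input order. Without it, any ordering of the duplicates is a valid result, so two releases (or two internal code paths, such as Memory Object Sorting) can legitimately differ. A small illustration of the concept in Python (not DFSORT itself):

```python
# Records: (sort key, payload marking input order).
records = [("A", 1), ("A", 2), ("B", 1), ("A", 3)]

# A stable sort (what EQUALS requests): ties keep their input order,
# so every run produces the same output for the same input.
stable = sorted(records, key=lambda r: r[0])

# Without EQUALS there is no such promise. This is one other ordering
# that is equally "correct" by key alone -- produced here by sorting
# descending and reversing, which leaves ties in reverse input order.
another_valid = sorted(records, key=lambda r: r[0], reverse=True)[::-1]
```

Both outputs are correctly sorted by key, yet they differ in the sequence of the duplicate "A" records. That is exactly the kind of run-to-run (or release-to-release) mismatch described above, and why EQUALS (at some performance cost) or a longer sort key is the only way to pin the order down.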
Re: Resource Link Alerts
You might also want to go to the 'problem solving' page (via the link just under the 'fixes' link along the left side of the window). If you look at the alert types listed, there is one listed on the 'problem solving' page that isn't listed on the 'fixes' page (at least it's that way under my ID). It's not just hardware related like the others discussed, and I hope most (if not all) of you already know about this alert type, but I sure didn't until recently, and I think we all should. HTH, Greg
Re: Did schools ever teach systems programming other than NIU was Re: help -- ignorant new boss
> My personal opinion is that the universities have bamboozled the business community into believing that they really train people for running data centers.

I don't know if it's the universities. Did you see the SearchDataCenter.com article on the best places to build a new data center?

http://searchdatacenter.techtarget.com/originalContent/0,289142,sid80_gci1238054,00.html
or http://tinyurl.com/2uq59x

(You may need to register to access the article; I am not sure. If so, my apologies.)

In it, The Boyd Company Inc. replied to 'It may be cheaper to hire someone in Sioux Falls, but is there any talent there to hire?' with: Boyd said yes, either through local university connections or younger people willing to relocate. And Dakota State University, for example, is only an hour outside of Sioux Falls and has several undergraduate and graduate degree programs in computer science and information systems. That makes for an ideal talent pool for Sioux Falls' employers to draw from.

They don't say what happens when there's a problem in your new $16+ million data center and all you are staffed with are people that learned all they know in a university. I have a son getting an MIS degree, and as much as I love him, I wouldn't want him to be the only one fixing my major problem until he gets quite a few years of real-world experience (gained from someone that has some). Greg
Re: Shop Zseries
Since you're not getting much, I will try to help. When I get the email from IBM, it takes me to a site that provides several links (it looks something like the following):

Download expires on 20 Feb 2007
Packing List for Service Order#        View Now (0.045 MB)
Service Overview for Order#            View Now (0.092 MB)
Install Instructions for Order#        View Now (0.021 MB)
Service Materials for Order#
  Download package (151 MB) directly to host using JCL job (0.008 MB)
  Download package (151 MB) directly to host using FTP (0.003 MB)
  Download to your workstation using IBM Download Director (151 MB)
  Download to your workstation using FTP
  Alternate - FTP to your workstation. Click here for details.

Once I have the order downloaded (maintaining the names and directory structure from the IBM site), I make the order available to my z/OS system by copying it to an SMB-mounted share in my z/OS HFS. On the web page I get, the last line says 'Alternate - FTP to your workstation', and it gives some other choices than SMB on how to make it available to your z/OS system. Pick one you like. I am pretty sure (but not positive) that the order must be placed in a UNIX file system. I seem to remember trying to figure out a way to do this with MVS data sets and gave up rather quickly (the SMB route was already established, so I increased the size of the HFS and remounted it). I then use the job provided in the 'Install Instructions for Order#' (see above, the 0.021 MB one) to SMP/E RECEIVE the order by executing program GIMUNZIP (you have to change the mount point on the SMPDIR DD to the location in your HFS where you put the package, and the directory structure must remain the same). Also, as is always true in UNIX, case matters, so watch the upper and lower case stuff.
Here is the job I get when I select that link (minus comments; the original wrapped badly, so it is unwrapped here):

//UNZIP    EXEC PGM=GIMUNZIP,PARM='HASH=NO'           === NOTE 1
//SYSUT3   DD UNIT=SYSALLDA,SPACE=(CYL,(50,10))
//SYSUT4   DD UNIT=SYSALLDA,SPACE=(CYL,(25,5))
//SMPOUT   DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SMPDIR   DD PATH='/u/smpe/smpnts/ORDPFX',           === NOTE 2
//            PATHDISP=KEEP
//SYSIN    DD DATA,DLM=$$
<GIMUNZIP>
 <ARCHDEF name="SMPPTFIN/S0001.SHOPZ.S7004159.SMPMCS.pax.Z"
          volume="volser"
          newname="hlq.S7004159.SMPMCS">
 </ARCHDEF>
 <ARCHDEF name="SMPHOLD/S0002.SHOPZ.S7004159.SMPHOLD.pax.Z"
          volume="volser"
          newname="hlq.S7004159.SMPHOLD">
 </ARCHDEF>
</GIMUNZIP>
$$
//REC      EXEC PGM=IKJEFT01,COND=(4,LT,UNZIP)
//SMPCSI   DD DISP=SHR,
//            DSN=your global csi data set name       === NOTE 5
//SMPPTS   DD DISP=OLD,
//            DSN=your SMPPTS data set name           === NOTE 6
//SMPOUT   DD SYSOUT=*
//SMPRPT   DD SYSOUT=*
//SMPLIST  DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SMPLOG   DD SYSOUT=*
//SMPLOGA  DD DUMMY
//SYSUT1   DD UNIT=SYSALLDA,SPACE=(CYL,(2,1)),DISP=(,DELETE)
//SYSUT2   DD UNIT=SYSALLDA,SPACE=(CYL,(2,1)),DISP=(,DELETE)
//SYSUT3   DD UNIT=SYSALLDA,SPACE=(CYL,(2,1)),DISP=(,DELETE)
//SYSUT4   DD UNIT=SYSALLDA,SPACE=(TRK,(2,1)),DISP=(,DELETE)
//SMPCNTL  DD DATA,DLM=$$
 SET BOUNDARY(GLOBAL)
Re: Shop Zseries
Oops, I just reread your post. My instructions are for receiving 'service'. I have no idea if the process for products is even close to the same. If not, ignore me. Sorry, Greg
JES2 WLM INIT's strange behavior
I brought up my sandbox LPAR in my basic sysplex for testing of z/OS 1.7. The other systems are z/OS 1.4. We use WLM-managed JES2 initiators, where the work is assigned to one of four service classes with varying velocity goals. When running batch jobs on my 1.7 system, jobs for two of my four service classes would never begin executing (they stayed in the input queue for well over a half hour) unless I changed the service class manually or did a $SJ command. I did not have this problem when the sandbox system was in its own monoplex, even though the WLM policies are very similar. I also had no problems on the other systems in the sysplex when submitting jobs. This was on a very low utilization day, so there was plenty of processor, although this LPAR has a very low weight. The goals for the service classes that worked correctly are lower than one of the service classes that didn't work and higher than the other. I remember this behavior from the past but haven't seen it in a long time, and I never did try to figure it out since it was on my sandbox. The last time I saw this was when I brought either an upgrade or service into my production system (can't remember which now); I re-IPLed and the problem went away, then I got busy and didn't track it down. Now that it has happened again in my sandbox, I would like to find out what is happening so I don't repeat the production experience. I have searched the archives and IBMLink, and either there isn't anything there or I didn't search on the correct thing. After finding some things on IBM-MAIN related to PIs, I looked at the workload screen in Omegamon II for MVS on the sandbox LPAR, and the PI for all of the above was listed as n/a (I believe I did eventually get a PI for the classes that worked after running some jobs, but not initially). I do not believe I ever saw a PI for the service classes that didn't work, even after forcing a few jobs to run.
The service class definitions are as follows:

Service Class JOBHI - Batch jobs HIGH          CPU Critical flag: NO
  #  Duration  Imp  Goal description
  1            4    Execution velocity of 40

Service Class JOBLONG - Batch jobs HIGH long running
                                               CPU Critical flag: NO
  #  Duration  Imp  Goal description
  1            4    Execution velocity of 30

Service Class JOBMED - Batch Jobs Med          CPU Critical flag: NO
  #  Duration  Imp  Goal description
  1            4    Execution velocity of 20

Service Class JOBLOW - Batch jobs low          CPU Critical flag: NO
  #  Duration  Imp  Goal description
  1            5    Execution velocity of 10

(JOBLONG and JOBMED jobs would run; JOBHI and JOBLOW jobs would not.)

I am sure I left out some necessary info and equally sure I included some useless garbage, but I don't really even know how to ask my question. I would just like to be prepared in case I see this on my PROD system. I don't like solutions of 'IPL and it MAY go away'. If anyone has seen this or has any ideas what is (or could be) causing this, I would appreciate anything I can get. Thank you for your time and assistance, Greg
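For anyone following along, the arithmetic behind velocity goals and the PI is simple, and it also explains the "n/a" PIs: with no samples there is nothing to compute. A sketch of the standard WLM formulas (not from the post):

```python
def execution_velocity(using: int, delay: int):
    """WLM execution velocity: the percentage of sampled states in
    which the work was using CPU/resources rather than delayed.
    Returns None when there are no samples (displayed as 'n/a')."""
    total = using + delay
    if total == 0:
        return None
    return 100.0 * using / total

def performance_index(goal_velocity: float, achieved):
    """For a velocity goal, PI = goal / achieved velocity.
    PI > 1 means the goal is being missed; PI <= 1 means it is met."""
    if achieved is None or achieved == 0:
        return None
    return goal_velocity / achieved
```

For example, a class with a velocity goal of 40 that achieves only 20 (equal using and four times the delay samples) has a PI of 2.0, i.e. it is badly missing its goal; a class with no samples at all shows no PI, which matches the n/a seen on the monitor screen before any jobs ran.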
Thank you all - Re: z/OS 1.7 SYNCHRES=YES z/OS 1.4 SYNCHRES=NO
Thank you all for your assistance and confirmation on this. The participants on this list are great and help so many! I really appreciate that so many out there take their own time to help others. Thanks again, Greg
z/OS 1.7 SYNCHRES=YES z/OS 1.4 SYNCHRES=NO
I have checked the manuals and the archives on this, and I am still unsure about mixing the YES and NO values for SYNCHRES in a GRSplex (a basic sysplex, if that matters any). I need to bring my TEST LPAR into our plex for testing. I would rather wait to change the value on the systems that are already in the sysplex until either our outage weekend next month or until each cuts over to 1.7, but I need to bring the TEST system up in the plex and test before I can migrate 1.7 to the next system. I see that I can change the value via the SETGRS command, and I guess I could always code the 1.7 system as SYNCHRES=NO for this phase, but from the way I interpret the FM it doesn't seem like it should be a concern. However, nothing I found gave a definitive answer. Does anyone know of any problems with having two z/OS 1.4 systems with SYNCHRES=NO and a z/OS 1.7 system with SYNCHRES=YES active in a basic sysplex? Has anyone run this way when migrating to 1.7 from 1.4? Thank you, Greg
Re: Non-SMP/E packaging
My choice would be to stick with SMP/E and not waste the time to create something else. I am not sure what is considered so difficult about SMP/E (from a customer perspective; I have never been a packager). The only time I find it difficult is when you stick something like Omegamon's CICAT in front of it. Those tools to simplify the install only seem to make it harder and take much longer (unless you go with blind faith or do it frequently enough to remember the idiosyncrasies). If there are customization steps in addition to SMP/E, just spell them out and let me do the customization. Now, of course, in some cases a full replacement process may be fast and easy, but it's more annoying and time consuming to have to know the peculiarities each vendor can come up with for installing/supporting their product(s). If the product is that simple, the SMP/E install should also be simple (again, I don't know what the packager has to go through to get it ready for me, but defining a few SMP/E data sets and then running RECEIVE/APPLY/ACCEPT from then on really isn't that big a deal). Remember, you asked for opinions, and just about everyone has one :) Greg
WLM type SPM entry for SYSSTC
I am updating our WLM policy to add report classes. The existing policy doesn't have any SPM entries in the subsystem type STC rules. From RTFM it appears that entries of type SPM should exist for SYSSTC and SYSTEM. It also appears they should be pretty early in the list, so I added them and then coded my TN and TNG entries and associated them with the desired service/report classes. After that, several address spaces that weren't previously in SYSSTC appeared there. I understand why this is, but have the following questions:

1. Should the STCs SMS, TNF, and VMCF really go in SYSSTC or in a user-defined class? They were previously in an importance 1, velocity goal 50% service class. I am thinking that SMS should go to SYSSTC (the SMSPDSE STC was already in SYSTEM) but perhaps not the others?

2. Should TRACE really go in SYSTEM instead of the previously mentioned importance 1, velocity goal 50% class? (This would seem correct, but I am not the one that built this back under 2.6 or 2.8, so I don't know why they put any of these tasks in the user-defined service class.)

3. In looking at examples, it seems that the SPM entry for SYSSTC should go pretty early in the list, but my results indicate I would need to move the SPM entry for SYSSTC almost to the bottom of my list to keep some of my STCs out of SYSSTC that don't seem to belong there (such as DB2 regions, I think MQ regions, some of the monitor address spaces, etc.). This doesn't seem to match what I have heard and seen in examples. Where is the best placement for this entry?

My system is rarely under heavy load, but the few times it has been, WLM seemed to control it pretty well, so I don't want to make it worse should it get loaded down in the future. Thank you all very much for your assistance, Greg
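Question 3 hinges on how classification rules are scanned. Simplifying a lot (real WLM subsystem rules are hierarchical and more involved, and the rule entries and class names below are made up for illustration), the top-down first-match behavior that makes SPM placement matter can be sketched like this:

```python
import fnmatch

# Hypothetical rule list: (qualifier type, pattern, service class).
# Rules are scanned top-down and the FIRST match wins, so an early
# SPM SYSSTC entry captures work before later TN rules are consulted.
rules = [
    ("SPM", "SYSSTC", "SYSSTC"),  # early SPM entry
    ("TN",  "DB2*",   "STCHI"),   # never reached for work SPM claims
]

def classify(attrs: dict, rules) -> str:
    """Toy first-match classifier over a flat rule list."""
    for qual_type, pattern, service_class in rules:
        if fnmatch.fnmatch(attrs.get(qual_type, ""), pattern):
            return service_class
    return "STCDEF"  # stand-in for the rule set's default class
```

With the SPM entry first, an address space that qualifies for SPM SYSSTC lands there even if a later TN rule would have matched its task name, which is consistent with the surprise described in question 3; moving the SPM entry down lets the TN/TNG rules match first.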
Re: FTP userid propagation
Charles, I am curious what security disaster exists with each of the users that will use this process having a userid.NETRC file with a UACC of NONE. If the problem is users with the OPERATIONS attribute being able to access the files: if they are all in a single group (or limited groups), give that group (or groups) an access of NONE, and even the users with the OPERATIONS attribute will not be able to access the NETRC files. I am not trying to be difficult, but we currently do something very much like this, and I can't see where it causes any security exposure. It is a little bit of a pain for the users to maintain the password in the NETRC file, but so far they are living with that. What did I miss, and what exposure do I now have? Thank you, Greg

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Barry Schwarz
Sent: Thursday, January 05, 2006 12:23 PM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: FTP userid propagation

What is the problem with a userid.NETRC with a UACC of NONE [and maybe an additional PE ID(*) ACC(NONE)]? Except for someone with OPERATIONS, everyone but the user should be locked out.

I don't think NETRC does the job because a local NETRC is a security disaster and a global NETRC file would only provide one userid and password for the remote machine -- my whole point is I want to propagate each individual user id.
Re: FTP userid propagation
Charles, thank you for your reply. It sounds like the individual NETRC files may not really be a security disaster but more of a maintenance disaster. I would agree; it is very inconvenient to require each user to update the NETRC file each time the password(s) on the remote system(s) change. Not perfect, but so far it's not causing that much pain here (at least not that I am aware of). I hope as you progress through your solution I see more details here on IBM-MAIN. Who knows, maybe I'll get a chance to change our process once you work out all the details and create the cookbook for me :) Regards, Greg

> Perhaps I am not understanding you. If you are saying give each user their own NETRC file with UACC(NONE), I think the objection would be the maintenance headache. Each user's password would have to be maintained once in RACF (two instances) and once in their NETRC.
Re: Time Change Problem
I would be surprised if Barbara does this for standard time changes. Either way, why do you? If for some reason you have to set GMT=LOCAL and run with no offset, then why not set up the new CDSs in advance and point to a different COUPLExx member during daylight savings (or standard, whichever)? This would save you the additional IPL, and if you are the one that must create/initialize the CDSs, it eliminates you from the off-shift picture. Mind you, this is only if you must change GMT for your time change; I personally run with GMT set to GMT and use an operator command to do the time change, then IPL with the appropriate CLOCKxx member next time we IPL (usually IPLing both systems at the next IPL just so the local time is in sync between the two). Good luck however you choose to do it, Greg
Re: auto reIPL
I like it. My shop could make use of a function like that if it worked in a basic sysplex (no CF, no need or $$ for one). Also, no unauthorized program interface desired :) Greg
Re: HELP - can't IPL ZOS
In case all the sharp guys have gone home, I will attempt to help you some. It looks like you have message areas defined, so the top part of the screen is full of non-scrollable (or is that non-deletable?) messages. You need to clear these; I prefer the 'put the cursor under the * and press Enter' approach myself. Once enough of the non-scrollable messages have been removed, the IEE159E should go away. I expect you will see the WTOR you need to reply to before then. Now, this isn't going to fix your JES2 checkpoint problem, but it should get you to the rest of your messages. Once you get through the IPL, I would suggest that you do a K E,D to delete all messages in the area, followed by a K A,NONE to make it one big area, but this is a preference kind of thing. Some probably like having the two areas so that they can get responses to their commands if the console is flooded. I hope this helps get you to the next step. Greg
Re: PTF/FMID correlation
Steve, another good place to get a list of the FMIDs installed on your system, with descriptions, is the Migration Assistant via the SMP/E panels. I don't think you even have to download the Software Information Base anymore, but I could be wrong. Either way, more information and the Software Information Base download can be found at: http://www-1.ibm.com/servers/eserver/zseries/zos/smpe/pma/ Greg