Re: Unusual FTP request.
In [EMAIL PROTECTED], on 03/06/2006 at 08:41 AM, Paul Gilmartin [EMAIL PROTECTED] said:
>What's a job scheduler?
Software that schedules jobs.
>Is it made of silicon or carbon?
No. It's made out of data.
>I thought that nowadays almost all jobs (barring those actually submitted on physical, necrodendritic cards) go through an INTRDR;
Or remote batch or a virtual card reader under VM. But the issue is not how the job is submitted but rather who submits it.
>Are programmers in a production environment likewise discouraged from using the TSO SUBMIT command
Only at well run shops.
>Must a human bureaucrat (job scheduler) sign off on each one?
No: see above.
>If the process is in fact automated, can't one job submit another through the automated sanctioned channel,
How does that conflict with what Chris wrote?
-- Shmuel (Seymour J.) Metz, SysProg and JOAT ISO position; see http://patriot.net/~shmuel/resume/brief.html We don't care. We don't have to care, we're Congress. (S877: The Shut up and Eat Your spam act of 2003) -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Unusual FTP request.
Hello Gil, In the production environments that I am involved in, most TSO users are not permitted to SUBMIT jobs. The scheduling people have TSO access to the job scheduler product, and maybe also to SDSF. Batch jobs have been created (developed) under a rigorous development regime that is managed by a source management product. The jobs have (supposedly) been tested and peer approved before they are promoted to the libraries in production. The job scheduler (CA-7 or the successor to OPC) is a product which manages job streams. Job submission is dependent on triggers... like time of day, successful completion of one or more jobs/jobsteps, or some other event. Submitting via the job scheduler also ensures the job is run under the correct userid, ensuring correct access to resources. The job scheduler monitors the progress of the job, highlighting if it takes longer than normal, and captures the aggregate return code / abend code upon completion. All nicely known and managed. There is no need to have a job submit (INTRDR) another job, as that introduces a wild card. One consequence is unhappy auditors. :-) In regard to John's comments regarding the additional documentation required, just as the JCL for the STEP exists, the relevant documentation describing that step should also exist. Extra work then is:
1. extract step and add jobcard.
2. remove step from old job.
3. extract step documentation to new document.
4. remove step documentation from old job document.
5. add new job to schedule, dependent on original job.
6. change existing dependencies from old job to new.
7. convince application teams that doing this once per ftp step will stop many 0dark30 phone calls.
On Mon, 6 Mar 2006 08:41:56 -0700, Paul Gilmartin [EMAIL PROTECTED] wrote: snip Hmmm. Clearly I work in a development environment, not a production environment, so I'm curious about protocols. What's a job scheduler? Is it made of silicon or carbon?
I thought that nowadays almost all jobs (barring those actually submitted on physical, necrodendritic cards) go through an INTRDR; it's simply a matter of how they get there. Are programmers in a production environment likewise discouraged from using the TSO SUBMIT command (which, AFAIK, also uses an INTRDR)? How do jobs get submitted? Must a human bureaucrat (job scheduler) sign off on each one? If the process is in fact automated, can't one job submit another through the automated sanctioned channel, as opposed to via INTRDR? -- gil -- StorageTek INFORMATION made POWERFUL Regards Bruce Hewson
Re: Unusual FTP request.
On Mon, 6 Mar 2006 16:43:01 -0700, Paul Gilmartin [EMAIL PROTECTED] wrote: I work in a development lab. Backups are automated by the meager scheduling facilities of HSM. That's all the admin I asked offered; I elected not to badger her for more, but she offered that our production shop uses more sophisticated commercial schedulers. What scheduled maintenance should I expect in a development lab? Perhaps scratching tramp data sets? Others? Not all that much is required for a system that is 100% development. It depends what you or capacity planning people might want to look at for planning purposes or perhaps to find out something that happened on the system at a previous date/time. Here are a few things I can think of: SMF data - May be dumped automatically via IEFU29 SMF exit after a switch or once a day via SMFDUMP program (see the archives), but are there any scheduled jobs to combine all the daily dumps into weekly (or monthly)? If there is a production LPAR on the same CEC, then perhaps the people that watch capacity only care about the total utilization of the LPAR, which is available in the other LPARs' SMF (RMF) data. SMF dump could go to a GDG via IEFU29 with no combining of data, or SMF can be set to not record at all (I had a P390 client that had their system set up that way to save cycles). SYSLOG - This is an easy one to handle with a JES2 automatic command or automation command at a regular time each day to issue the WRITELOG command and submit a job or start an STC to copy the syslog to a GDG. Another option: the log just gets spun when IPLing. Then it stays in the spool until someone feels like purging it. LOGREC - Collection and archival, or at least ZERO-ing the logrec data set / logstream. Easy to handle the same way as syslog. LOGREC could be set to ignore, or I suppose the clear job could just be submitted manually when someone sees the LOGREC FULL message.
I wouldn't want to set it to ignore on a Flex or P390 because it contains software errors as well as hardware errors. There can be many others, but they are not required. For example, JES2 automatic commands to purge spool output older than n number of days. If you don't have that, do people just wait until the spool gets full and start purging output, or do the operators just issue the command to do so, picking a number for N arbitrarily? It all depends on how many people really use your system. If you can just shout over a couple of cubicles and ask everyone to purge their output they don't need, that works also. Scratching temp data sets is another one. I can't remember if temp data sets are controlled via SMS on ADCD systems. But even if they are, there is a good chance there is junk on the temp work packs if the system has ever crashed. Find out where some of yours go and look at the volume via ISPF 3.4 and you may find lots of old temp data sets that need to be deleted. There are developers on this list that run Flex or P390s still where they are the only user of the system or there are only a few users. I guess they can tell you what regular maintenance they run, if any. Regards, Mark -- Mark Zelden Sr. Software and Systems Architect - z/OS Team Lead Zurich North America / Farmers Insurance Group mailto: [EMAIL PROTECTED] Systems Programming expert at http://expertanswercenter.com/ Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
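The syslog handling Mark describes can be driven by a JES2 automatic command; a rough sketch follows (the timer id and time are hypothetical, and the exact operands and quoting should be checked against the JES2 commands reference for your release):

```jcl
$TA,ID=SPINLOG,T=23.59,'$VS,''W'''
```

Here $TA sets up an automatic command, and $VS issues the MVS WRITELOG (W) command to spin the current syslog; a second automatic command, or an automation product, could then submit the job that copies the spun log to a GDG.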
Re: Unusual FTP request.
At 11:13 -0600 on 03/03/2006, McKown, John wrote about Re: Unusual FTP request.: Can the last step of the job that creates the data set simply submit via an internal reader a single-step job which performs the FTP, with its SYSOUT directed to the ftp team? They don't want to do this. They want the act of creating to automagically do the ftp for them with no other work on their part. All that is needed is to write a QuickDirty program that uses the RDJFCB macro to get access to the DSN associated with a DD name (which points at the created dataset). It then feeds a canned JCL stream through INTRDR (inserting the DSN as needed) which has a NOTIFY for the FTP Group. The FTP jobs run and the output goes to the FTP Group to review and rerun if needed. The JCL to execute the program is placed at the end of the create job stream in lieu of the current FTP step. Since the program reads the DSN and creates the submitted JCL on the fly, it meets your automagic requirement. The only thing that is needed is to write the program (which should be simple to do).
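A sketch of how the QuickDirty step might sit at the end of the create job (the program name, dataset names, and DD names are all hypothetical; the program itself would be assembler, since RDJFCB is an assembler macro):

```jcl
//* Last step of the creating job, in lieu of the old FTP step
//SUBFTP   EXEC PGM=QUIKFTP
//* The just-created dataset; the program issues RDJFCB against
//* this DD to recover the actual dataset name
//INFILE   DD DSN=PRODFTP.DAILY.EXTRACT,DISP=SHR
//* Canned JCL skeleton containing a DSN placeholder and a
//* NOTIFY= for the FTP group
//SKELJCL  DD DSN=PROD.FTP.SKELS(FTPJOB),DISP=SHR
//* Internal reader; the program writes the edited JCL here
//INTRDR   DD SYSOUT=(A,INTRDR)
```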
Re: Unusual FTP request.
On Tue, 7 Mar 2006 14:12:31 -0500, Robert A. Rosenberg [EMAIL PROTECTED] wrote: All that is needed is to write a QuickDirty program that uses the RDJFCB macro to get access to the DSN associated with a DD Name (which points at the created dataset). It then feeds a canned JCL stream through INTRDR (inserting the DSN as needed) which has a NOTIFY for the FTP Group. The FTP Jobs run and the output goes to the FTP Group to review and rerun if needed.
1) Do you really want to notify everyone in the group? It can't be just one; that person may be on vacation, sick, etc.
2) Who will look at that notify message at 3:00 am when it fails and impacts something like an online application that must be available at 7:00 am to meet SLA's?
3) At 8:00 am when people in that group come in and logon to TSO and 100 notify messages fly by, who will notice the one that had the bad return code?
Mark -- Mark Zelden Sr. Software and Systems Architect - z/OS Team Lead Zurich North America / Farmers Insurance Group mailto: [EMAIL PROTECTED] Systems Programming expert at http://expertanswercenter.com/ Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
Re: Unusual FTP request.
This is a common theme in our distribution process. Our job scheduler does all that for us. We use ESP. Our collective CA-7 memories had a half decade of rust, but we think it ought to be able to do what you need. HTH and good luck -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of McKown, John Sent: Friday, March 03, 2006 8:52 AM To: IBM-MAIN@BAMA.UA.EDU Subject: Unusual FTP request. What the programmer would like would be for the production job to simply create the dataset which is to be ftp'ed. All datasets which are to be ftp'ed are created with a specific, unique, high level qualifier. Whenever a dataset with this high level qualifier is created, something triggers a process (job, started task, other) which is passed the name of the dataset just created. This process then does some sort of look up on the name of the dataset just created and generates the appropriate ftp commands, which are somehow passed to an ftp processor. If the ftp processor has a problem, then the ftp team would be alerted that an ftp failed. The ftp team would be able to look at the ftp output and hopefully determine what failed, why, and then fix it. This would relieve the normal programmers from being called. These people: (1) don't have the authority on the ftp server to see what the problem might be, if the problem is there; (2) don't know how to determine if the problem is on the server or the z/OS side; (3) don't want to be responsible for ftp processing at all. -- John McKown Senior Systems Programmer UICI Insurance Center Information Technology
Re: Unusual FTP request.
John (Bruce), I've only been glancing over this topic as the posts flew by, so please excuse me if I've missed a vital consideration. Now that Bruce has suggested a separate job, I remembered the days I put together some very crude automation where each started task procedure had a job step which fed a card deck to the internal reader, INTRDR, in order to initiate the next activity - or something like that. (To make sense of the technique, you need to know that it required a job step which delayed execution of the following step: a program wrapped around the appropriate macro.) Isn't the internal reader trick the simpler way to implement this excellent idea? I dare say the COND code could be used in order to run the step with IEBGENER to the internal reader only when the file was known successfully to have been created. Chris Mason - Original Message - From: Bruce Hewson [EMAIL PROTECTED] Newsgroups: bit.listserv.ibm-main To: IBM-MAIN@BAMA.UA.EDU Sent: Monday, 06 March, 2006 4:25 AM Subject: Re: Unusual FTP request. Hello John, late response due to weekend. :-) My understanding is that application staff don't want to be called for FTP problems. And that application staff are getting called due to the jobname prefix. You have indicated that existing jobs use a STEP to trigger the FTP request, and that you are using CA-7. My simplistic response would be to solve the problem by turning each FTP STEP into a new job. Use CA-7 to handle the COND code checking currently being done in the JCL STEP. Each new FTP job would have a new FTP team specific jobname. The applications staff can convert the FTP step into an FTP job reasonably quickly. Coordination between application / scheduler / FTP teams would be required for FTP jobname allocation. Result is that existing step JCL can be used, minimising testing and implementation times. No need for any other product, external or in-house.
:-D Regards Bruce Hewson
Re: Unusual FTP request.
-Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Bruce Hewson Sent: Sunday, March 05, 2006 9:25 PM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: Unusual FTP request. Hello John, snip You have indicated that existing jobs use a STEP to trigger the FTP request, and that you are using CA-7. My simplistic response would be to solve the problem by turning each FTP STEP into a new job. Use CA-7 to handle the COND code checking currently being done in the JCL STEP. Each new FTP job would have a new FTP team specific jobname. The applications staff can convert the FTP step into an FTP job reasonably quickly. Coordination between application / scheduler / FTP teams would be required for FTP jobname allocation. Result is that existing step JCL can be used, minimising testing and implementation times. No need for any other product, external or in-house. :-D Regards Bruce Hewson Bruce, I'll try to float that. I am not sure, but the programmers may not really like it because they will be required to create the second job's JCL and documentation. But it would solve the who-to-call problem. I think. Maybe. I hope. -- John McKown Senior Systems Programmer UICI Insurance Center Information Technology This message (including any attachments) contains confidential information intended for a specific individual and purpose, and its content is protected by law. If you are not the intended recipient, you should delete this message and are hereby notified that any disclosure, copying, or distribution of this transmission, or taking any action based on it, is strictly prohibited.
Re: Unusual FTP request.
On Mon, 6 Mar 2006 13:03:11 +0100, Chris Mason [EMAIL PROTECTED] wrote: Isn't the internal reader trick the simpler way to implement this excellent idea? INTRDR submission is usually frowned upon in a production environment because the job scheduler usually can't track the job (nor trigger jobs afterwards / satisfy dependencies). I happen to agree with that. If you left it to the programmers (who control JCL changes at many shops), new jobs would be added all the time without being scheduled. Splitting it up is still a good idea, but it is more scheduling work. Still waiting to hear if an SMTP step based on the FTP RC could work, or why it wouldn't. Mark -- Mark Zelden Sr. Software and Systems Architect - z/OS Team Lead Zurich North America / Farmers Insurance Group mailto: [EMAIL PROTECTED] Systems Programming expert at http://expertanswercenter.com/ Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
Re: Unusual FTP request.
I know you can't, but tell them all that their NIMBY problem needs a rational solution, not a FYUSTM (obscene comment abbreviated) impossible solution. Shove the output of the FTP step into a dataset, REXX or whatever to evaluate, ship email if needed. Put the damn thing in a cataloged procedure and be done with it. -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of McKown, John Sent: Friday, March 03, 2006 11:13 AM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: Unusual FTP request. -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Dennis Trojak Sent: Friday, March 03, 2006 12:53 PM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: Unusual FTP request. How about a conditional step after the FTP that checks for a return code GT zero? We do that and send an e-mail via SMTP to the Production team along with any special instructions. Dennis.. Possible, but unlikely. The programmers are really wanting something so that they are not in the loop at all about ftp. That is, they don't want the responsibility to put the FTP step in the job, to check the RC and send mail, or anything else. They want the ftp team to set up all of that, including any userid/password requirements, maintaining the IP address of the server, maintaining the ftp statements, etc. From what I get, they want to say something like: I'm going to create dataset XYZ. It needs to be ftp'ed to server ABC, into subdirectory DEF, and given the name GHI. You figure out what needs to be done to get the ftp to work and set it up independently of my job. Then, when XYZ is created, the ftp automagically occurs without anything in their JCL. Similar to a dataset trigger in CA-7. I.e. they really want out of the business of data transfer beyond the initial put this dataset on that server and give it this name in this subdirectory. Should anything change after that (e.g.
the dataset should go to another server, the server file name or subdirectory should change), they don't even want to know about it. That would be the responsibility of the ftp team to update the ftp process (whatever it turns out to be). NFS/SMB has been mentioned in another post. I have done an NFS import of a UNIX subdirectory onto the z/OS system. It works quite well. However, the same problem occurs. If the job terminates trying to copy to the NFS/SMB share, the programmer would get called and they don't want to be. They would still want someone else to do the NFS/SMB copy function and be responsible for any problems with it. So, ftp or NFS/SMB, it is basically all the same to them. They don't want anything related to the copying in any process for which they are responsible. And they really don't want to set up a second job to do the ftp/NFS work either. I know that sounds like they are being lazy, but they have had such problems with this - again due mainly to server problems - that they are frustrated and just want OUT! It is one thing to get called about a problem you can fix. It is another thing to get calls for a problem that is outside your ability to fix or even diagnose properly. Well - off to the annual company meeting. Such fun. -- John McKown Senior Systems Programmer UICI Insurance Center Information Technology
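Dennis's conditional-step idea, sketched in JCL (dataset and member names are hypothetical; SYSOUT class B with the SMTP external writer is one common route into batch mail, but classes and writer names vary by shop):

```jcl
//* Runs only when the FTP step ends with a nonzero return code
//         IF (FTPSTEP.RC > 0) THEN
//NOTIFY   EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//* Canned SMTP commands: HELO, MAIL FROM, RCPT TO, DATA, body
//SYSUT1   DD DSN=PROD.FTP.MAIL(FTPFAIL),DISP=SHR
//* External writer feeding the SMTP started task
//SYSUT2   DD SYSOUT=(B,SMTP)
//         ENDIF
```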
Re: Unusual FTP request.
In a recent note, Mark Zelden said: Date: Mon, 6 Mar 2006 08:25:50 -0600 On Mon, 6 Mar 2006 13:03:11 +0100, Chris Mason [EMAIL PROTECTED] wrote: Isn't the internal reader trick the simpler way to implement this excellent idea? INTRDR submission is usually frowned upon in a production environment because the job scheduler usually can't track the job (nor trigger jobs afterwards / satisfy dependencies). I happen to agree with that. If you let the programmers (who control JCL changes at many shops) do that, new jobs would be added all the time without scheduling them. Hmmm. Clearly I work in a development environment, not a production environment, so I'm curious about protocols. What's a job scheduler? Is it made of silicon or carbon? I thought that nowadays almost all jobs (barring those actually submitted on physical, necrodendritic cards) go through an INTRDR; it's simply a matter of how they get there. Are programmers in a production environment likewise discouraged from using the TSO SUBMIT command (which, AFAIK, also uses an INTRDR)? How do jobs get submitted? Must a human bureaucrat (job scheduler) sign off on each one? If the process is in fact automated, can't one job submit another through the automated sanctioned channel, as opposed to via INTRDR? -- gil -- StorageTek INFORMATION made POWERFUL
Re: Unusual FTP request.
On Mon, 6 Mar 2006 08:41:56 -0700, Paul Gilmartin [EMAIL PROTECTED] wrote: Hmmm. Clearly I work in a development environment, not a production environment, so I'm curious about protocols. What's a job scheduler? Is it made of silicon or carbon? I meant job scheduling software. Are you nitpicking, or did you really not know what I was referring to? And yes, all production jobs usually are scheduled through the software. Of course rules are made to be broken and there are always exceptions. We have some end users that submit jobs that are considered production. Obviously they don't get the benefit of job scheduling software to automatically check return codes etc. and kick off subsequent jobs. But we do have ThruPut Manager that emulates the Mellon Bank JES2 mods that include /*BEFORE and /*AFTER, and people do make use of that. Regards, Mark -- Mark Zelden Sr. Software and Systems Architect - z/OS Team Lead Zurich North America / Farmers Insurance Group mailto: [EMAIL PROTECTED] Systems Programming expert at http://expertanswercenter.com/ Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
Re: Unusual FTP request.
On Mon, 6 Mar 2006 10:46:53 -0700, Paul Gilmartin [EMAIL PROTECTED] wrote: We might have something like that around for testing. I've never used it; I don't know anyone in my department who has. Perhaps some of our testers. Maybe not your department, but don't any of the z/OS systems you work on have maintenance jobs that are regularly scheduled? Is it all started with JES2 automatic commands? Or does an operator submit them and cross the jobs off a flowchart? Not far fetched, as one of my former small clients has no scheduling or restart/rerun software and still does it that way. There are some freebie ones out there including one from David Cole called SCHEDRUN that I have used before. http://www.colesoft.com/utilities.html ThruPut Manager that emulates the Mellon Bank JES2 mods that include /*BEFORE and /*AFTER, and people do make use of that. Is this JES2-wannabe-JES3? I know it included /*BEFORE jobname, /*AFTER jobname and /*WITH jobname, but there may have been more to the mods. From what I know, Mellon Bank used to require anyone who communicated with them via NJE to install the mods. You may be able to find more from the archives, or someone like Shmuel can probably give you additional details. Mark -- Mark Zelden Sr. Software and Systems Architect - z/OS Team Lead Zurich North America / Farmers Insurance Group mailto: [EMAIL PROTECTED] Systems Programming expert at http://expertanswercenter.com/ Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
Re: Unusual FTP request.
McKown, John wrote (to IBM-Main): [snip] programmers want a generic facility [snip] Using a dataset trigger requires updates to CA-7 to trigger a different job for each dataset (unless you know another way!). Also, every time a new dataset is created, they need a new ftp job and a new trigger to be entered. [snip] John: we use ESP from CyberMation but I expect that CA-7 will have similar capabilities. You can probably use wild-card patterns in the triggering name then use a CA-7 substitute variable (%ESPTRDSN for ESP) within the 'generic' JCL. With ESP, you can also trigger a Rexx-like procedure along with simple JCL submission that can create other 'user' variables, i.e. the destination server directory. I'd chat with your resident CA-7 guru. (Or the vendor would likely be pleased to help with a solution.) I currently have JCL that operates for all of our environments (Dev, QA, Prod, etc.) to dump/process an ISV log from Cics and it's triggered from a single ESP event. By parsing the triggering dataset name, i.e. the HLQ, in the ESP procedure, all sorts of JCL proc variables are set such as DB2 sub-sys name, Adabas DbId, StepLib HLQ, etc. 'course, you need naming rules that allow you to do this. -- signature = 6 lines follows -- Neil Duffee, Joe SysProg, U d'Ottawa, Ottawa, Ont, Canada telephone:1 613 562 5800 x4585 fax:1 613 562 5161 mailto:NDuffee of uOttawa.ca http://aix1.uottawa.ca/~nduffee How *do* you plan for something like that? Guardian Bob, Reboot For every action, there is an equal and opposite criticism. Systems Programming: Guilty, until proven innocent John Norgauer 2004
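Assuming the scheduler resolves its substitution variables before the JCL is submitted (which is how the post describes %ESPTRDSN), the 'generic' triggered job could be as small as this sketch (server, login, and remote names are made up):

```jcl
//FTPGEN   JOB (ACCT),'GENERIC FTP',CLASS=A,MSGCLASS=X
//* %ESPTRDSN is replaced by the scheduler with the name of the
//* dataset that fired the trigger, before the job reaches JES
//FTPSTEP  EXEC PGM=FTP,PARM='server.example.com (EXIT'
//SYSPRINT DD SYSOUT=*
//INPUT    DD *
ftpuser ftppass
PUT '%ESPTRDSN' remote.file.name
QUIT
/*
```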
Re: Unusual FTP request.
Is this JES2-wannabe-JES3? Possibly. But the Mellon Mods have been around for years. THRUPUT MGR has emulated them for over 10. It's a little ironic: I worked at a shop that had one JES3 and two JES2 sites. They got rid of JES3 due to support issues. The first thing they did was get THRUPUT MGR MSX/MSI to re-introduce the functionality to JES2 that they lost. - -teD I'm an enthusiastic proselytiser of the universal panacea I believe in!
Re: Unusual FTP request.
In a recent note, Mark Zelden said: Date: Mon, 6 Mar 2006 13:44:17 -0600 Maybe not your department, but don't any of the z/OS systems you work on have maintenance jobs that are regularly scheduled? Is it all started with JES2 automatic commands? Or does an operator submit them and cross the jobs off a flowchart? Not far fetched Damn! It's gotta be better than that (he muses). So I asked. I work in a development lab. Backups are automated by the meager scheduling facilities of HSM. That's all the admin I asked offered; I elected not to badger her for more, but she offered that our production shop uses more sophisticated commercial schedulers. What scheduled maintenance should I expect in a development lab? Perhaps scratching tramp data sets? Others? Thanks for motivating me toward enlightenment, gil -- StorageTek INFORMATION made POWERFUL
Re: Unusual FTP request.
-- snip -- Maybe not your department, but don't any of the z/OS systems you work on have maintenance jobs that are regularly scheduled? Is it all started with JES2 automatic commands? Or does an operator submit them and cross the jobs off a flowchart? Not far fetched Damn! It's gotta be better than that (he muses). So I asked. I work in a development lab. Backups are automated by the meager scheduling facilities of HSM. That's all the admin I asked offered; I elected not to badger her for more, but she offered that our production shop uses more sophisticated commercial schedulers. What scheduled maintenance should I expect in a development lab? Perhaps scratching tramp data sets? Others? -- snip -- Development lab. You probably produce a few abends (maybe even a few SVC dumps :-)). Do you care about SMF data, EREP, old jobs in JES2? Also, as you mentioned, cleaning up old datasets (you may not have a well defined SMS/HSM environment set up). In a development environment you can take care of all these kinds of regular (timed or event) tasks via other means. A production environment has a very different quality: accountability, clearly defined roles, structured and controlled batch runs that are tuned to their current environment, automatically created problem tickets, emails, pagers. John.
Re: Unusual FTP request.
Hello John, late response due to weekend. :-) My understanding is that application staff don't want to be called for FTP problems. And that application staff are getting called due to the jobname prefix. You have indicated that existing jobs use a STEP to trigger the FTP request, and that you are using CA-7. My simplistic response would be to solve the problem by turning each FTP STEP into a new job. Use CA-7 to handle the COND code checking currently being done in the JCL STEP. Each new FTP job would have a new FTP team specific jobname. The applications staff can convert the FTP step into an FTP job reasonably quickly. Coordination between application / scheduler / FTP teams would be required for FTP jobname allocation. Result is that existing step JCL can be used, minimising testing and implementation times. No need for any other product, external or in-house. :-D Regards Bruce Hewson
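Bruce's split might look like this (the jobname prefix, account, and names are all hypothetical): the step JCL moves unchanged into its own job under an FTP-team jobname, and CA-7 carries the dependency and the COND checking the step used to do inline.

```jcl
//* Standalone FTP job under an FTP-team jobname prefix, so the
//* FTP group, not the application team, gets the 0dark30 call;
//* CA-7 runs it on successful completion of the creating job
//FTPPAY01 JOB (ACCT),'FTP TEAM',CLASS=A,MSGCLASS=X
//* The former FTP step from the application job, copied as-is
//FTPSTEP  EXEC PGM=FTP,PARM='server.example.com (EXIT'
//SYSPRINT DD SYSOUT=*
//INPUT    DD *
ftpuser ftppass
PUT 'PROD.PAYROLL.EXTRACT' remote.file.name
QUIT
/*
```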
Re: Unusual FTP request.
How about the following:
1. Job A step 1 creates a file to transfer.
2. Job A step 2 updates a log identifying the file to transfer. The log should contain info about where to transfer the file, the status of the transfer, who to contact should the file need to be recreated, and other info as needed.
3. Job B reads the log and transfers files based on status. Job B can update the status as needed. Job B can transfer files itself or trigger the appropriate number of Job Cs. CA7 can trigger Job B after Job A or on some other timed schedule. If the log shows no files to transfer, then no action is taken and Job B ends.
4. If an external customer calls, the programmer or FTP staff can check the log and, based on the status and date transferred, inform the customer that the file was transferred, or change the status so the file is picked up by the next scheduled Job B.
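One possible fixed-column layout for the transfer log (every field and width here is made up; the point is only that Job B and the FTP staff read and update the same record):

```
Cols  1-44  dataset name to transfer
Cols 46-53  status: PENDING, SENT, or FAILED
Cols 55-62  date of last status change (YYYYMMDD)
Cols 64-71  target key, looked up for server / directory / name
Cols 73-80  contact id, should the file need to be recreated
```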
Re: Unusual FTP request.
John, I think you perhaps understood the SMB recommendation backwards. What I think we're suggesting -- well, I'm suggesting -- is to place the dataset on a z/OS SMB share and let the Windows servers fetch it as they wish, when they wish. Schedule deletion from the share every month, week, or whatever, but otherwise don't do much. The information is available -- come get it from Drive M. This would also probably vividly demonstrate to those who run them how often Windows servers fail, but that's a side benefit. :-) It's called pull instead of push (from the Windows server perspective). If they'd like push, ask for funding -- you need a pusher. PM4Data happens to be my favorite, and it's a lot less expensive than professional time. (Clients are free AFAIK.) All that said, here's something to keep in mind: every time I see FTP (or file transfer generally) in somebody's architecture I ask about it. It's often a sign of bad architecture. (Not always, but often.) One thing that's true for sure: if there's an online system on one end of the file transfer you've just made it batch. If you transfer a file from an online system the information is out-of-date the moment you do so. So that's the basic question I ask: why isn't the information available online, real-time? And that's my question here: why can't the Windows servers get online, real-time access to these data? Why are you copying files? There could be a perfectly reasonable explanation, but I'm just wondering. FTP should not be mistaken for integration: it's only file transfer. There's also the not-so-minor issue of information security -- copying simply multiplies your privacy protection challenge. These days Windows servers like to have Web services or ODBC for online access, depending on the source. Mainframes do both very, very well. - - - - - Timothy F. Sipples Consulting Enterprise Software Architect, z9/zSeries IBM Japan, Ltd.
E-Mail: [EMAIL PROTECTED]
Re: Unusual FTP request.
Ed Gould wrote: I have seen this same issue through the years. There is no perfect answer, IMO. FTP (and other like programs) run async. AFAIK, FTP is a synchronous operation.
Re: Unusual FTP request.
If your job scheduler can trigger on dataset creation then just set up a trigger and job for each FTP, with instructions to ops that the 'ftp team' is called if they fail. Or replace the ftp step in the creating job with a submit of the FTP job. Applications sets up the jobs, so it's no fault of yours if someone forgets or misses a file. The 'FTP team' handles the abends and can resubmit if necessary. Not a very elegant solution, but it works. -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of McKown, John Sent: Friday, March 03, 2006 9:52 AM To: IBM-MAIN@BAMA.UA.EDU Subject: [IBM-MAIN] Unusual FTP request. One of our applications people came up with a, uh, unusual request today. We use ftp to transfer data from the z/OS system to various, internal, ftp servers. Currently, this is done by adding an ftp step to the end of the job which creates the data set to transfer. This usually works. However, there have been cases where the ftp step ends with a bad return code due to various problems on the remote (server) side. Some examples would be: (1) the userid on the ftp server has been deleted/revoked; (2) the subdirectory on the server has been removed or has the wrong attributes (i.e. the ftp userid cannot create a file in that subdirectory); (3) the file to be transferred already exists on the server, but is owned by another userid and so cannot be replaced; (4) the IP address of the server has changed (very rare!). What the programmer would like would be for the production job to simply create the dataset which is to be ftp'ed. All datasets which are to be ftp'ed are created with a specific, unique, high level qualifier. Whenever a dataset with this high level qualifier is created, something triggers a process (job, started task, other) which is passed the name of the dataset just created.
This process then does some sort of look up on the name of the dataset just created and generates the appropriate ftp commands, which are somehow passed to an ftp processor. If the ftp processor has a problem, then the ftp team would be alerted that an ftp failed. The ftp team would be able to look at the ftp output and hopefully determine what failed, why, and then fix it. This would relieve the normal programmers from being called. These people: (1) don't have the authority on the ftp server to see what the problem might be, if the problem is there; (2) don't know how to determine if the problem is on the server or the z/OS side; (3) don't want to be responsible for ftp processing at all. Has anybody heard of any process which could do such a thing? There are two restrictions: (1) No money is budgeted for this; and (2) Tech Services doesn't want to be responsible for writing any code because we just don't have the time to support yet another application. There are only 3 of us to support z/OS, CICS, and all the vendor products. We are not developers (although two of us are fairly good HLASM programmers and have done development before). -- John McKown Senior Systems Programmer UICI Insurance Center Information Technology This message (including any attachments) contains confidential information intended for a specific individual and purpose, and its content is protected by law. If you are not the intended recipient, you should delete this message and are hereby notified that any disclosure, copying, or distribution of this transmission, or taking any action based on it, is strictly prohibited.
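The triggered process John describes -- look up the new dataset's name, drive the ftp client, alert the ftp team on failure -- could look roughly like this sketch. The trigger HLQ, the lookup/alert hooks, and the use of a command-line ftp client are all assumptions for illustration, not anything the shop has:

```python
import subprocess

TRIGGER_HLQ = "FTPOUT"   # hypothetical high level qualifier marking datasets to send

def handle_new_dataset(dsname, lookup, alert):
    """Called when a dataset with the trigger HLQ appears: look up its
    destination commands, drive the ftp client, alert the team on failure."""
    if not dsname.startswith(TRIGGER_HLQ + "."):
        return "ignored"
    commands = lookup(dsname)          # returns client commands, or None
    if commands is None:
        alert(f"no ftp routing entry for {dsname}")
        return "unrouted"
    # feed the generated commands to a command-line ftp client
    proc = subprocess.run(["ftp", "-n"], input=commands,
                          capture_output=True, text=True)
    if proc.returncode != 0:
        alert(f"ftp for {dsname} failed:\n{proc.stdout}{proc.stderr}")
        return "failed"
    return "sent"
```

On z/OS the trigger itself would come from the scheduler (a CA-7 dataset trigger) or a system exit; the sketch shows only the dispatch-and-alert logic that keeps the application programmers out of the loop.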
Re: Unusual FTP request.
This sort of process is very similar to those used by shops that have MQSeries (we do it here as well). Triggering is built into the product and, assuming you have some sort of monitoring product that can issue email/paging alerts, any problems encountered along the way can be trapped. As for using MQ for ftp'ing a file, there are several different ways to approach this, but it will involve code (in-house developed, or purchased). But, once it's set up, it's pretty simple to use.
Re: Unusual FTP request.
In a recent note, McKown, John said: Date: Fri, 3 Mar 2006 08:51:43 -0600 (4) the IP address of the server has changed (very rare!). And shouldn't matter even when it does happen. What the programmer would like would be for the production job to simply create the dataset which is to be ftp'ed. All datasets which are to be ftp'ed are created with a specific, unique, high level qualifier. Whenever a dataset with this high level qualifier is created, something triggers a process (job, started task, other) which is passed the name of the dataset just created. This process then does some sort of look up on the name of the dataset just created and generates the appropriate ftp commands, which are somehow passed to an ftp processor. If the ftp processor has a problem, then the ftp team would be alerted that an ftp failed. The ftp team would be able to look at the ftp output and hopefully determine what failed, why, and then fix it. This would relieve the normal programmers from being called. These people: (1) don't have the authority on the ftp server to see what the problem might be, if the problem is there; (2) don't know how to determine if the problem is on the server or the z/OS side; (3) don't want to be responsible for ftp processing at all. Can the last step of the job that creates the data set simply submit via an internal reader a single-step job which performs the FTP, with its SYSOUT directed to the ftp team? Perhaps an additional step which conditionally SMTPs or TRANSMITs a failure notice to the ftp team. Or, use MGCR to start the transfer task. Has anybody heard of any process which could do such a thing? There are two restrictions: (1) No money is budgeted for this; and (2) Tech Services doesn't want to be responsible for writing any code because we just don't have the time to support yet another application. There are only 3 of us to support z/OS, CICS, and all the vendor products.
We are not developers (although two of us are fairly good HLASM programmers and have done development before). How much latency can you tolerate? What about a crontab job that runs every few minutes and scans for files to transmit? Are you running one of the HTTPD family? The transfer could be performed by a cgi-bin application launched by an HTTP connection passing the data set name as query data. -- gil -- StorageTek INFORMATION made POWERFUL
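Gil's crontab idea -- a job that wakes up periodically and scans for files to transmit -- might look like this minimal sketch. The FTPOUT. prefix, the spool directory, and the pluggable transfer callback are hypothetical, standing in for the shop's HLQ convention:

```python
import time
from pathlib import Path

TRANSFER_PREFIX = "FTPOUT."   # hypothetical prefix flagging files to send

def scan_for_transfers(spool_dir, prefix=TRANSFER_PREFIX):
    """Return the files in spool_dir flagged for transfer by their prefix."""
    return sorted(p for p in Path(spool_dir).iterdir()
                  if p.is_file() and p.name.startswith(prefix))

def poll_loop(spool_dir, transfer, interval=300, cycles=None):
    """Every `interval` seconds, hand each flagged file to `transfer`.
    `cycles` bounds the loop so a scheduler (or a test) can drive it."""
    n = 0
    while cycles is None or n < cycles:
        for path in scan_for_transfers(spool_dir):
            transfer(path)
        n += 1
        if cycles is None or n < cycles:
            time.sleep(interval)
```

Under cron you would run one scan per invocation rather than looping; the loop form matches the every-30-minutes driver described later in the thread.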
Re: Unusual FTP request.
On Fri, 3 Mar 2006 08:51:43 -0600, McKown, John [EMAIL PROTECTED] wrote: One of our applications people came up with a, uh, unusual request today. We use ftp to transfer data from the z/OS system to various, internal, ftp servers. Currently, this is done by adding an ftp step to the end of the job which creates the data set to transfer. This usually works. However, there have been cases where the ftp step ends with a bad return code due to various problems on the remote (server) side. snip What the programmer would like would be for the production job to simply create the dataset which is to be ftp'ed. Whenever a dataset with this high level qualifier is created, something triggers a process (job, started task, other) which is passed the name of the dataset just created. This process then does some sort of look up on the name of the dataset just created and generates the appropriate ftp commands, which are somehow passed to an ftp processor. If the ftp processor has a problem, then the ftp team would be alerted that an ftp failed. The ftp team would be able to look at the ftp output and hopefully determine what failed, why, and then fix it. This would relieve the normal programmers from being called. snip Has anybody heard of any process which could do such a thing? There are two restrictions: (1) No money is budgeted for this; and (2) Tech Services doesn't want to be responsible for writing any code because we just don't have the time to support yet another application. snip I once did something similar at a shop, but the process was not FTP, it was NFS. There was an OS/2 box connected to the lan that had the HLQ NFS-mounted from the mainframe. I wrote REXX code on the OS/2 box that was started at boot time. The driver code included a SLEEP function that woke up every 30 minutes to do the transfers, which were nothing more than COPY commands since the MVS files were NFS mounted.
There was no failure processing however, but I guess space would have been the only issue. You could do what you describe with most scheduling packages. Let the application create the data set and the scheduling package then fires off a job based on the data set creation. If the job (FTP) fails, then I guess the schedulers / operators could contact the appropriate people based on documentation in the job or whatever. But... In your case, since people want to be notified anyway, why not tack on an SMTP step to send an email when you get a bad return code. Of course this requires pagers that work via email addresses. Some automation packages can dial pagers also, so that is another option if you add a step that can trigger automation via WTO etc. Then you don't have to wait for an operator or scheduler to notice the job has a problem and contact someone. Mark -- Mark Zelden Sr. Software and Systems Architect - z/OS Team Lead Zurich North America / Farmers Insurance Group mailto: [EMAIL PROTECTED] Systems Programming expert at http://expertanswercenter.com/ Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
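The notify-on-bad-return-code step Mark suggests can be sketched as follows. The addresses and the localhost SMTP server are placeholders, and a real shop would drive this from a COND-protected JCL step rather than from Python; the sketch only shows the "mail only when RC > 0" logic:

```python
import smtplib
from email.message import EmailMessage

def build_failure_notice(job, step, rc, to_addr):
    """Compose the alert mail for a failed FTP step (addresses hypothetical)."""
    msg = EmailMessage()
    msg["Subject"] = f"FTP step {step} of job {job} ended RC={rc}"
    msg["From"] = "batch-alerts@example.com"
    msg["To"] = to_addr
    msg.set_content(
        f"Job {job}, step {step} ended with return code {rc}.\n"
        "See the job's SYSOUT for the FTP server reply messages."
    )
    return msg

def notify_if_failed(job, step, rc, to_addr, smtp_host="localhost"):
    """Mirror of a COND-protected notify step: mail only when RC > 0."""
    if rc <= 0:
        return False
    with smtplib.SMTP(smtp_host) as conn:
        conn.send_message(build_failure_notice(job, step, rc, to_addr))
    return True
```

Routing the notice to the ftp team's address rather than a pager is purely a matter of what goes in `to_addr`.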
Re: Unusual FTP request.
-Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Porowski, Ken Sent: Friday, March 03, 2006 9:06 AM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: Unusual FTP request. If your job scheduler can trigger on dataset creation then just set up a trigger and job for each FTP, with instructions to ops that the 'ftp team' is called if they fail. Or replace the ftp step in the creating job with a submit of the FTP job. Applications sets up the jobs, so it's no fault of yours if someone forgets or misses a file. The 'FTP team' handles the abends and can resubmit if necessary. Not a very elegant solution, but it works. That's what I suggested to them. However, the programmers want a generic facility so that they don't need to bother telling anybody about a new FTP'ed file. Using a dataset trigger requires updates to CA-7 to trigger a different job for each dataset (unless you know another way!). Also, every time a new dataset is created, they need a new ftp job and a new trigger to be entered. They don't want to bother with this. They just want to have it so that they just create a dataset with the correct high level qualifier and somebody else takes care of the rest. -- John McKown Senior Systems Programmer UICI Insurance Center Information Technology
Re: Unusual FTP request.
-Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Richard Tsujimoto Sent: Friday, March 03, 2006 9:12 AM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: Unusual FTP request. This sort of process is very similar to those used by shops that have MQSeries (we do it here as well). Triggering is built into the product and, assuming you have some sort of monitoring product that can issue email/paging alerts, any problems encountered along the way can be trapped. As for using MQ for ftp'ing a file, there are several different ways to approach this, but it will involve code (in-house developed, or purchased). But, once it's set up, it's pretty simple to use. MQSeries costs money. We have none. Well, we as in the z/OS people. The Windows people are spending dollars as if they were centavos (Mexican pennies). But z/OS is too expensive to be strategic here, anymore. Yeah, my on-going whine. -- John McKown Senior Systems Programmer UICI Insurance Center Information Technology
Re: Unusual FTP request.
-Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Paul Gilmartin Sent: Friday, March 03, 2006 9:17 AM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: Unusual FTP request. In a recent note, McKown, John said: snip Can the last step of the job that creates the data set simply submit via an internal reader a single-step job which performs the FTP, with its SYSOUT directed to the ftp team? They don't want to do this. They want the act of creating to automagically do the ftp for them with no other work on their part. Perhaps an additional step which conditionally SMTPs or TRANSMITs a failure notice to the ftp team. Or, use MGCR to start the transfer task. Nope. I ain't gonna let no programmer nowhere near MGCR[E]. And we in Tech Services ain't gonna do it neither, 'cause we ain't got the time. snip How much latency can you tolerate? What about a crontab job that runs every few minutes and scans for files to transmit? Some latency is OK. I suggested a CA7 job to run hourly, but they didn't really like that. Please, no UNIX solutions. Everybody here, other than myself, thinks UNIX is one step lower than Windows. Are you running one of the HTTPD family? The transfer could be performed by a cgi-bin application launched by an HTTP connection passing the data set name as query data. An interesting idea, but still it lacks the automagic portion desired by the programming staff of no involvement on our part beyond the creation of the dataset. Nobody wants to write any code or JCL or anything to accomplish this. Makes it difficult, don't it? -- gil -- StorageTek INFORMATION made POWERFUL We are running the free HTTPD server. That CGI thing sounds interesting. However, the people making the request want to only create an appropriately named dataset. They want to do absolutely NOTHING else, just create the dataset. They don't want a subsequent job to do the ftp (either submitted by the creating job or triggered via CA-7). They want magic.
And they want it supported by anybody else, not us. I.e. they are sick of ftp and just don't want to be bothered with it any more, due to problems which are mostly on the Windows/server side. -- John McKown Senior Systems Programmer UICI Insurance Center Information Technology
Re: Unusual FTP request.
In a recent note, McKown, John said: Date: Fri, 3 Mar 2006 11:13:21 -0600 They don't want to do this. They want the act of creating to automagically do the ftp for them with no other work on their part. Sounds like NFS or SMB mounting on the target system. Nope. I ain't gonna let no programmer nowhere near MGCR[E]. And we in Tech Services ain't gonna do it neither, 'cause we ain't got the time. Not even for a minimal program in an authorized library to issue the MGCR? Some latency is OK. I suggested a CA7 job to run hourly, but they didn't really like that. Please, no UNIX solutions. Everybody here, other than myself, thinks UNIX is one step lower than Windows. There may be no correct UNIX solution in this case. But they shouldn't let bigotry influence them not to consider possibilities. An interesting idea, but still it lacks the automagic portion desired by the programming staff of no involvement on our part beyond the creation of the dataset. Nobody wants to write any code or JCL or anything to accomplish this. Makes it difficult, don't it? I'd say impossible. But an often-heard requirement: Make it work, but don't change anything. We are running the free HTTPD server. That CGI thing sounds interesting. However, the people making the request want to only create an appropriately named dataset. They want to do absolutely NOTHING else, just create the dataset. They don't want a subsequent job to do the ftp (either submitted by the creating job or triggered via CA-7). They want magic. And they want it supported by anybody else, not us. I.e. they are sick of ftp and just don't want to be bothered with it any more, due to problems which are mostly on the Windows/server side. Again, NFS or SMB. And remember Clarke's third law. -- gil -- StorageTek INFORMATION made POWERFUL
Re: Unusual FTP request.
In article [EMAIL PROTECTED], [EMAIL PROTECTED] (McKown, John) wrote: If the ftp processor has a problem, then the ftp team would be alerted that an ftp failed. The ftp team would be able to look at the ftp output and hopefully determine what failed, why, and then fix it. This would relieve the normal programmers from being called. These people: (1) don't have the authority on the ftp server to see what the problem might be, if the problem is there; (2) don't know how to determine if the problem is on the server or the z/OS side; (3) don't want to be responsible for ftp processing at all. How about just changing the documentation for the job to say that if it fails in the ftp step, to call the ftp team instead of the normal programmers?
Re: Unusual FTP request.
-Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Matt Simpson Sent: Friday, March 03, 2006 12:22 PM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: Unusual FTP request. In article [EMAIL PROTECTED], [EMAIL PROTECTED] (McKown, John) wrote: If the ftp processor has a problem, then the ftp team would be alerted that an ftp failed. The ftp team would be able to look at the ftp output and hopefully determine what failed, why, and then fix it. This would relieve the normal programmers from being called. These people: (1) don't have the authority on the ftp server to see what the problem might be, if the problem is there; (2) don't know how to determine if the problem is on the server or the z/OS side; (3) don't want to be responsible for ftp processing at all. How about just changing the documentation for the job to say that if it fails in the ftp step, to call the ftp team instead of the normal programmers? Hum, that is actually a fairly good idea. Now, to see if our Production Control people can implement it. The reason that I say that is that each support group's jobs start with a specific prefix. So Prod Control looks at the first 4 characters of the job which failed and calls that group. I don't think that they actually refer to any external documentation on the job to determine who to call. So the apps person would get called, look at the job, see that it was the ftp step, then ask that the ftp team be called instead. The apps people would like to avoid being woken up just to determine not my problem. That I can well understand! -- John McKown Senior Systems Programmer UICI Insurance Center Information Technology
Re: Unusual FTP request.
How about a conditional step after the FTP that checks for a return code GT zero? We do that and send an e-mail via SMTP to the Production team along with any special instructions. Dennis.. -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of McKown, John Sent: Friday, March 03, 2006 12:35 PM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: Unusual FTP request. -Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Matt Simpson Sent: Friday, March 03, 2006 12:22 PM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: Unusual FTP request. In article [EMAIL PROTECTED], [EMAIL PROTECTED] (McKown, John) wrote: If the ftp processor has a problem, then the ftp team would be alerted that an ftp failed. The ftp team would be able to look at the ftp output and hopefully determine what failed, why, and then fix it. This would relieve the normal programmers from being called. These people: (1) don't have the authority on the ftp server to see what the problem might be, if the problem is there; (2) don't know how to determine if the problem is on the server or the z/OS side; (3) don't want to be responsible for ftp processing at all. How about just changing the documentation for the job to say that if it fails in the ftp step, to call the ftp team instead of the normal programmers? Hum, that is actually a fairly good idea. Now, to see if our Production Control people can implement it. The reason that I say that is that each support group's jobs start with a specific prefix. So Prod Control looks at the first 4 characters of the job which failed and calls that group. I don't think that they actually refer to any external documentation on the job to determine who to call. So the apps person would get called, look at the job, see that it was the ftp step, then ask that the ftp team be called instead. The apps people would like to avoid being woken up just to determine not my problem. That I can well understand!
-- John McKown Senior Systems Programmer UICI Insurance Center Information Technology
Re: Unusual FTP request.
-Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Dennis Trojak Sent: Friday, March 03, 2006 12:53 PM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: Unusual FTP request. How about a conditional step after the FTP that checks return code GT zero. We do that and send an e-mail via SMTP to the Production team along with any special instructions. Dennis.. Possible, but unlikely. The programmers really want something so that they are not in the loop at all about ftp. That is, they don't want the responsibility to put the FTP step in the job, to check the RC and send mail, or anything else. They want the ftp team to set up all of that, including any userid/password requirements, maintaining the IP address of the server, maintaining the ftp statements, etc. From what I get, they want to say something like: I'm going to create dataset XYZ. It needs to be ftp'ed to server ABC, into subdirectory DEF, and given the name GHI. You figure out what needs to be done to get the ftp to work and set it up independently of my job. Then, when XYZ is created, the ftp automagically occurs without anything in their JCL. Similar to a dataset trigger in CA-7. I.e. they really want out of the business of data transfer beyond the initial put this dataset on that server and give it this name in this subdirectory. Should anything change after that (e.g. the dataset should go to another server, the server file name or subdirectory should change), they don't even want to know about it. That would be the responsibility of the ftp team to update the ftp process (whatever it turns out to be). NFS/SMB has been mentioned in another post. I have done an NFS import of a UNIX subdirectory onto the z/OS system. It works quite well. However, the same problem occurs. If the job terminates trying to copy to the NFS/SMB share, the programmer would get called and they don't want to be.
They would still want someone else to do the NFS/SMB copy function and be responsible for any problems with it. So, ftp or NFS/SMB, it is basically all the same to them. They don't want anything related to the copying in any process for which they are responsible. And they really don't want to set up a second job to do the ftp/NFS work either. I know that sounds like they are being lazy, but they have had such problems with this - again due mainly to server problems - that they are frustrated and just want OUT! It is one thing to get called about a problem you can fix. It is another thing to get calls for a problem that is outside your ability to fix or even diagnose properly. Well - off to the annual company meeting. Such fun. -- John McKown Senior Systems Programmer UICI Insurance Center Information Technology
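The "you figure out where it goes" arrangement John describes amounts to a routing table owned by the ftp team: dataset name in, client commands out. A toy sketch, reusing the XYZ/ABC/DEF/GHI names from the post as the (hypothetical) entry:

```python
# Hypothetical routing table keyed on dataset name; in practice it would
# live in a control dataset the ftp team maintains, not in the code.
ROUTES = {
    "XYZ": {"host": "ABC", "dir": "DEF", "name": "GHI"},
}

def ftp_commands(dataset, routes=ROUTES):
    """Generate client commands for one dataset; None if there is no
    routing entry, which is the ftp team's cue to add one."""
    dest = routes.get(dataset)
    if dest is None:
        return None
    return "\n".join([
        f"open {dest['host']}",
        f"cd {dest['dir']}",
        f"put '{dataset}' {dest['name']}",
        "quit",
    ])
```

The point of the table is exactly the hand-off John wants: when the server, subdirectory, or remote name changes, only the ftp team's table changes, and the application job never knows.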
Re: Unusual FTP request.
They just want to have it so that they just create a dataset with the correct high level and somebody else takes care of the rest I just want a $1,000,000 tax (and spouse) free. But, I'm not going to get it. They have the same chance. There are packages out there that do some (or all) of the requirements. But, they cost money and time (more money). IIRC you originally asked for a cost-free solution? HAH!!! - -teD I’m an enthusiastic proselytiser of the universal panacea I believe in!
Re: Unusual FTP request.
On Fri, 3 Mar 2006 12:34:37 -0600, McKown, John [EMAIL PROTECTED] wrote: Hum, that is actually a fairly good idea. Now, to see if our Production Control people can implement it. The reason that I say that is that each support group's jobs start with a specific prefix. So Prod Control looks at the first 4 characters of the job which failed and calls that group. I don't think that they actually refer to any external documentation on the job to determine who to call. So the apps person would get called, look at the job, see that it was the ftp step, then ask that the ftp team be called instead. The apps people would like to avoid being woken up just to determine not my problem. That I can well understand! Split the job in two then. The apps people who don't want to be called may be willing to do that work, as well as the scheduling work to set up the triggers. What about adding an SMTP step? Or did I miss your response? Mark -- Mark Zelden Sr. Software and Systems Architect - z/OS Team Lead Zurich North America / Farmers Insurance Group mailto: [EMAIL PROTECTED] Systems Programming expert at http://expertanswercenter.com/ Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
Re: Unusual FTP request.
Let's say you have 10 files that are FTPed. How would you identify the destination of the 10 files? Let's further assume that a month later a new FTP file is created. How would you determine the destination of the new file?

Selvius Turner

-Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of McKown, John Sent: Friday, March 03, 2006 6:52 AM To: IBM-MAIN@BAMA.UA.EDU Subject: Unusual FTP request.

One of our applications people came up with a, uh, unusual request today. We use ftp to transfer data from the z/OS system to various, internal, ftp servers. Currently, this is done by adding an ftp step to the end of the job which creates the data set to transfer. This usually works. However, there have been cases where the ftp step ends with a bad return code due to various problems on the remote (server) side. Some examples would be: (1) the userid on the ftp server has been deleted/revoked; (2) the subdirectory on the server has been removed or has the wrong attributes (i.e. the ftp userid cannot create a file in that subdirectory); (3) the file to be transferred already exists on the server, but is owned by another userid and so cannot be replaced; (4) the IP address of the server has changed (very rare!).

What the programmer would like would be for the production job to simply create the dataset which is to be ftp'ed. All datasets which are to be ftp'ed are created with a specific, unique, high level qualifier. Whenever a dataset with this high level qualifier is created, something triggers a process (job, started task, other) which is passed the name of the dataset just created. This process then does some sort of look-up on the name of the dataset just created and generates the appropriate ftp commands, which are somehow passed to an ftp processor. If the ftp processor has a problem, then the ftp team would be alerted that an ftp failed.
The ftp team would be able to look at the ftp output and hopefully determine what failed, why, and then fix it. This would relieve the normal programmers from being called. These people: (1) don't have the authority on the ftp server to see what the problem might be, if the problem is there; (2) don't know how to determine if the problem is on the server or the z/OS side; (3) don't want to be responsible for ftp processing at all. Has anybody heard of any process which could do such a thing? There are two restrictions: (1) No money is budgeted for this; and (2) Tech Services doesn't want to be responsible for writing any code because we just don't have the time to support yet another application. There are only 3 of us to support z/OS, CICS, and all the vendor products. We are not developers (although two of us are fairly good HLASM programmers and have done development before).

-- John McKown Senior Systems Programmer UICI Insurance Center Information Technology

This message (including any attachments) contains confidential information intended for a specific individual and purpose, and its content is protected by law. If you are not the intended recipient, you should delete this message and are hereby notified that any disclosure, copying, or distribution of this transmission, or taking any action based on it, is strictly prohibited.
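The "look up on the name of the dataset" step John describes (and Selvius's question about how each file's destination would be identified) comes down to a routing table owned by the ftp team: one entry per dataset-name pattern, and an unmatched name means the ftp team is alerted to add a route. A minimal sketch in Python; the patterns, server names, and directories below are entirely hypothetical, and the real thing would more likely be a Rexx exec or control file on the host.

```python
from fnmatch import fnmatchcase

# Hypothetical routing table, maintained by the ftp team rather than the
# application programmers.  Each entry maps a dataset-name pattern to a
# destination; a remote name of None means "derive it from the dataset".
ROUTES = [
    # (dataset pattern,     server,        remote dir,    remote name)
    ("FTPOUT.PAYROLL.*",    "abc.example", "/in/payroll", None),
    ("FTPOUT.CLAIMS.DAILY", "def.example", "/in/claims",  "daily.dat"),
]

def route_for(dsn):
    """Return (server, directory, remote_name) for a dataset, or None if
    no route exists -- the cue to alert the ftp team to add one."""
    for pattern, server, rdir, rname in ROUTES:
        if fnmatchcase(dsn, pattern):
            # Default the remote name to the last qualifier, lowercased.
            return server, rdir, rname or dsn.split(".")[-1].lower()
    return None
```

A new FTP file a month later then needs only one new table entry, with no change to any production job.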
Re: Unusual FTP request.
How about just changing the documentation for the job to say that if it fails in the ftp step, to call the ftp team instead of the normal programmers? The cheapest solution yet. As a matter of fact, our prod support team is also first-line support for FTP problems. About the only thing they cannot fix is if a source/target is on a machine they don't have access to. (Usually external partners.)

- -teD I'm an enthusiastic proselytiser of the universal panacea I believe in!
Re: Unusual FTP request.
-Original Message- From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Ted MacNEIL Sent: Thursday, March 02, 2006 6:00 PM To: IBM-MAIN@BAMA.UA.EDU Subject: Re: Unusual FTP request.

They just want to have it so that they just create a dataset with the correct high level and somebody else takes care of the rest I just want $1,000,000, tax (and spouse) free. But I'm not going to get it. They have the same chance. There are packages out there that do some (or all) of what is required. But they cost money and time (more money). ISTR you originally asked for a cost-free solution? HAH!!! - -teD

You put your finger on the stickler. And I agree that they are not going to get what they want. Mainly because NOBODY really wants the responsibility for this. It is one of those things that crosses functional barriers. The mainframers don't really want to deal with a server issue. And the server/Windows people don't want to be bothered with what they consider a mainframe issue. This is another of those finger-pointing exercises. Someday something will cause upper management to issue a fiat as to what is to be done. Last time this happened, Tech Services (me) got saddled with doing something. In the interim, everybody just complains. And my main reason for asking such a silly question is because one of the questions that I am usually asked is: "Did you ask about this on IBM-MAIN?" This forum is now considered a resource for possible ideas and solutions. And it is usually a good one, too!

-- John McKown Senior Systems Programmer UICI Insurance Center Information Technology
Re: Unusual FTP request.
Or SMTP to the administrators (PFCSK's) of the remote SERVER, cc'ing the network team. Say: "Our system, which is UP, can't send a file to your system. Your receipt of this email confirms that our system is up and can communicate with the network. Fix your system and notify production support to resubmit job x."

Possible, but unlikely. The programmers really want something so that they are not in the loop at all about ftp. That is, they don't want the responsibility to put the FTP step in the job, to check the RC and send mail, or anything else. They want the ftp team to set up all of that, including any userid/password requirements, maintaining the IP address of the server, maintaining the ftp statements, etc. From what I get, they want to say something like: "I'm going to create dataset XYZ. It needs to be ftp'ed to server ABC, into subdirectory DEF, and given the name GHI. You figure out what needs to be done to get the ftp to work and set it up independently of my job." Then, when XYZ is created, the ftp automagically occurs without anything in their JCL. Similar to a dataset trigger in CA-7. I.e. they really want out of the business of data transfer beyond the initial "put this dataset on that server and give it this name in this subdirectory." Should anything change after that (e.g. the dataset should go to another server, the server file name or subdirectory should change), they don't even want to know about it. That would be the responsibility of the ftp team to update the ftp process (whatever it turns out to be).

All that might be possible in a Rexx / batch job run periodically. If the FTP succeeds, rename the dataset to XYZ.SENT or delete it; then your Rexx won't trigger again. They fill the swamp with their files, you drain it with Rexx / FTP.
Re: Unusual FTP request.
John McKown wrote: ... because NOBODY really wants the responsibility for this. It is one of those things that crosses functional barriers. The mainframers don't really want to deal with a server issue. And the server/Windows people don't want to be bothered with what they consider a mainframe issue. This is another of those finger-pointing exercises. Someday something will cause upper management to issue a fiat as to what is to be done. Last time this happened, Tech Services (me) got saddled with doing something. In the interim, everybody just complains.

I guess until you solve the "us and them" issue, and with none to spare, you'll just get a bunch of whiners grin

I agree with someone's suggestion to use either MVS NFS or SMB, depending on what the receiving end can support. You provide the NFS server on the mainframe and export the HLQ to your clients. Your mainframe pgmrs create the data set under the exported HLQ; the clients see it on their side, copy and delete from their mountpoint, and it gets deleted on the mainframe. I can only attest to MVS NFS as a file server. Maybe someone else here has a similarly good experience with SMB. I know MVS NFS is available, if not already configured at your shop, at no new $$$s (only true if you work for free).
Re: Unusual FTP request.
On Mar 3, 2006, at 1:12 PM, McKown, John wrote: -SNIP-

NFS/SMB has been mentioned in another post. I have done an NFS import of a UNIX subdirectory onto the z/OS system. It works quite well. However, the same problem occurs. If the job terminates trying to copy to the NFS/SMB share, the programmer would get called, and they don't want to be. They would still want someone else to do the NFS/SMB copy function and be responsible for any problems with it. So, ftp or NFS/SMB, it is basically all the same to them. They don't want anything related to the copying in any process for which they are responsible. And they really don't want to set up a second job to do the ftp/NFS work either. I know that sounds like they are being lazy, but they have had such problems with this - again due mainly to server problems - that they are frustrated and just want OUT! It is one thing to get called about a problem you can fix. It is another thing to get calls for a problem that is outside your ability to fix or even diagnose properly. Well - off to the annual company meeting. Such fun.

John, I have seen this same issue through the years. There is no perfect answer, IMO. FTP (and other like programs) runs async. One OEM said their answer is to do a TSO SEND to X users when the transmission fails. The original FTP concept was not thought out for batch (IMO). This might be a way to get IBM to support something like this, if you want to wait for IBM to implement it (don't wait). There *MAY BE* a message that shows up on the console to indicate a failure; you might trigger off the message. I resisted this for political reasons, IIRC. ´d