In a recent note, McKown, John said:

> Date:         Fri, 3 Mar 2006 08:51:43 -0600
> 
> (5) the IP address of the server has changed (very rare!).
> 
And it shouldn't matter even when it does happen.

> What the programmer would like would be for the production job to simply
> create the dataset which is to be ftp'ed. All datasets which are to be
> ftp'ed are created with a specific, unique, high level qualifier.
> Whenever a dataset with this high level qualifier is created,
> "something" triggers a process (job, started task, other) which is
> passed the name of the dataset just created. This process then does some
> sort of "look up" on the name of the dataset just created and generates
> the appropriate ftp commands, which are "somehow" passed to an "ftp
> processor". If the "ftp processor" has a problem, then the "ftp team"
> would be alerted that an ftp failed. The "ftp team" would be able to
> look at the ftp output and hopefully determine what failed, why, and
> then fix it. This would relieve the normal programmers from being
> called. These people: (1) don't have the authority on the ftp server to
> see what the problem might be, if the problem is there; (2) don't know
> how to determine if the problem is on the server or the z/OS side; (3)
> don't want to be responsible for ftp processing at all.
> 
Can the last step of the job that creates the data set simply
submit via an internal reader a single-step job which performs
the FTP, with its SYSOUT directed to the "ftp team"?

Perhaps an additional step which conditionally SMTPs or TRANSMITs
a failure notice to the "ftp team".
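A minimal sketch of those two ideas, with all names (job, server, data set, SYSOUT routing, node) purely hypothetical: the production job's last step uses IEBGENER to copy a canned one-step FTP job to the internal reader; the submitted job runs the batch FTP client with the EXIT option so a failure sets a nonzero return code, and a COND-gated step then TRANSMITs a failure notice.

//* Last step of the production job: submit the FTP job via INTRDR
//SUBFTP   EXEC PGM=IEBGENER
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  DUMMY
//SYSUT2   DD  SYSOUT=(A,INTRDR)
//SYSUT1   DD  DATA,DLM=@@
//FTPJOB   JOB (ACCT),'FTP XFER',CLASS=A,MSGCLASS=X
//* EXIT makes the FTP client end with a nonzero RC on failure
//FTPSTEP  EXEC PGM=FTP,PARM='server.example.com (EXIT'
//* Use a SYSOUT class/DEST the "ftp team" reviews
//OUTPUT   DD  SYSOUT=X
//INPUT    DD  *
userid
password
put 'HLQ.OUTPUT.DATASET' remote.file
quit
/*
//* COND=(0,EQ,FTPSTEP): bypassed when FTPSTEP's RC is 0,
//* so this step runs only when the transfer failed
//NOTIFY   EXEC PGM=IKJEFT01,COND=(0,EQ,FTPSTEP)
//SYSTSPRT DD  SYSOUT=*
//SYSTSIN  DD  *
 TRANSMIT NODE.FTPTEAM DA('HLQ.FTP.FAILNOTE')
/*
@@
//

The DLM=@@ override keeps the inner job's // and /* lines from ending the instream data prematurely.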

Or, use MGCR to start the transfer task.

> Has anybody heard of any "process" which could do such a thing? There
> are two restrictions: (1) No money is budgeted for this; and (2) Tech
> Services doesn't want to be responsible for writing any code because we
> just don't have the time to support yet another "application". There are
> only 3 of us to support z/OS, CICS, and all the vendor products. We are
> not developers (although two of us are fairly good HLASM programmers and
> have done development before).
> 
How much latency can you tolerate?  What about a crontab job that
runs every few minutes and scans for files to transmit?

Are you running one of the HTTPD family?  The transfer could be
performed by a cgi-bin application launched by an HTTP connection
passing the data set name as query data.

-- gil
-- 
StorageTek
INFORMATION made POWERFUL

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html