Alan,

This is a problem not just for what I need done but for exit
processing in general.  If an exit routine is provided and documented
for use, then the stack needs to allow the server a known time window
to do its job.  As it is now, the window changes from one run to the
next: it can be anywhere from a few seconds to tens of seconds.  I am
not sure what it depends on, but I know it is not consistent.

It would be helpful if the server could be made aware of exit
processing so that it can allow the exit to finish properly.
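
For reference, a rough sketch of the SENDFILE hand-off Alan suggests
below, as a plain CMS REXX exec.  The LOGPROC user ID and the
'* LOG A' file pattern are placeholders rather than the real SMTP
names, and I have simplified the argument parsing; the real DTCPARMS
exit interface passes more than the single keyword tested here:

/* SMTPXIT EXEC - sketch of a DTCPARMS :Exit. routine.  On the  */
/* BEGIN call, hand the accumulated log files to a LOGPROC      */
/* service machine with SENDFILE instead of processing them     */
/* inline, so the server can start listening right away.        */
Parse Upper Arg action .
If action = 'BEGIN' Then Do
  'MAKEBUF'                     /* private program stack buffer */
  'LISTFILE * LOG A ( FIFO'     /* queue one fn ft fm per file  */
  If rc = 0 Then
    Do While Queued() > 0
      Parse Pull fn ft fm .
      'EXEC SENDFILE' fn ft fm 'TO LOGPROC'
      If rc = 0 Then 'ERASE' fn ft fm
    End
  'DROPBUF'
End
Exit 0

The point is that BEGIN returns quickly; the heavy lifting happens
in LOGPROC, outside the window that TCPIP is watching.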


On Wed, 29 Aug 2007 17:10:33 -0400 Alan Altmark said:
>On Wednesday, 08/29/2007 at 03:06 EDT, Aria Bamdad
><[EMAIL PROTECTED]> wrote:
>
>> I have a local exit defined in my DTCPARMS file for my SMTP servers.
>> When a server is started, the exit is called with the 'BEGIN' parameter.
>> During this time, the server will process the log files that exist on
>> its A-disk.  However, this may take some time, and at times the server
>> does not finish before TCPIP notices that the server is not listening
>> and force/autologs the server.  This messes up the log processing
>> routine!
>>
>> Is there a way to prevent TCPIP from restarting servers for a window of
>> time to allow server exit processing to complete?
>
>No, sorry.  I would suggest using the exit to swap disks and then signal
>another virtual machine that it can link, access, and process the logs,
>or maybe just SENDFILE them to another virtual machine.
>
>Alan Altmark
>z/VM Development
>IBM Endicott
>
