I think what he meant is this:

1. Access http://initiate-2-hour-process.cgi

This runs the script, which writes to an HTML file and
returns an HTTP header that redirects the browser to another
URL. The script forks into the background so that the parent returns
the HTTP redirect (if everything started fine) and the child runs
the long-term process, which writes to poll-the-html-file.html

2. The browser goes to the redirected URL:
http://poll-the-html-file.html

This is the file being generated by the 2-hour child process. Its
content can tell the browser to come back and re-fetch the file
every few seconds or minutes. In addition, its content might show
progress, such as the time of the last update and the number of
records processed so far.
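The two steps above might be sketched like so. This is a Python illustration only (the thread's context is Perl/CGI, and the paths, the URL, and the run_long_job generator here are all hypothetical):

```python
import os
import sys
import time

PROGRESS_PATH = "/var/www/html/poll-the-html-file.html"   # hypothetical path
PROGRESS_URL = "http://example.com/poll-the-html-file.html"  # hypothetical URL

def write_progress_html(path, records_done, refresh_secs=30):
    """Write a status page that tells the browser to re-fetch itself."""
    body = (
        "<html><head>"
        f'<meta http-equiv="refresh" content="{refresh_secs}">'
        "</head><body>"
        f"<p>Last update: {time.ctime()}</p>"
        f"<p>Records processed so far: {records_done}</p>"
        "</body></html>"
    )
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        f.write(body)
    os.replace(tmp, path)  # atomic, so the poller never sees a half-written file

def main():
    pid = os.fork()
    if pid > 0:
        # Parent: emit the redirect header and exit immediately, so the
        # web server can close the connection without waiting 2-3 hours.
        print("Status: 302 Found")
        print(f"Location: {PROGRESS_URL}")
        print()
        sys.exit(0)
    # Child: detach from the web server, then run the long job and
    # keep refreshing the status page as it goes.
    os.setsid()
    for done in run_long_job():   # hypothetical generator yielding a count
        write_progress_html(PROGRESS_PATH, done)

if __name__ == "__main__":
    main()
```

The atomic-rename detail matters here: since the browser fetches the file while the child is still writing updates, replacing it in one step avoids serving a truncated page.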

Does that make better sense?

Cheers,

--Amos

Louis wrote:
Hi Phil:

Thanks for the suggestion. I am looking at a way
to deal with this. But I want to see if I understood you here.

Source : the script that runs for 2-3 hrs;
Caller : initiates Source, polls every so often,
and updates, perhaps, a progress window that the user sees.

So to start off, Caller makes an LWP request to Source.
It sounds like, with this approach, Source has to
regularly send something back to Caller, and not wait till
the end (2-3 hrs).

But when you say "polls the server every so often", what do you mean
here?  The Caller, if the server is in suEXEC mode, does not know process
ids etc. (not su here). Can you please clarify?

I will look further into this as well.

Thanks

Louis.

-----Original Message-----
From: Phil Scarratt [mailto:[EMAIL PROTECTED]
Sent: Friday, 15 October 2004 16:37
To: Louis
Cc: [EMAIL PROTECTED]
Subject: Re: [SLUG] Script Via browser Gives code=SERVER_RESPONSE_CLOSE!



Louis wrote:

The script can also be executed from the command line. It can take
anywhere between 2 and 3 hrs to complete.

So does this mean that I cannot get it to run via browser?

What about passing the call via LWP? If the browser times out, would the
URL called via LWP still run anyway?


Louis.



Not sure if this is still an issue, but I doubt you'd get a web
server/browser combination to control a script that runs for 2-3 hrs.
What you'd be better off doing is something like modifying the script to
output to a plain text file (for example) and then initiating it in the
background from a web-based page. The returned page from the initiating
page then polls the server every 5, 10, or whatever minutes to see if it
has finished, and displays any results. Did that make any sense? Not sure
if this will do what you want....
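The client side of Phil's suggestion, polling until the output file reports completion, could look roughly like this. Again a Python sketch only (the thread mentions Perl's LWP); the status URL and the "DONE" marker are assumptions, and the fetch function is injectable so the loop can be exercised without a live server:

```python
import time
import urllib.request

def http_fetch(url):
    """Fetch the current contents of the status file over HTTP."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", "replace")

def poll_until_done(url, fetch=http_fetch, interval_secs=300, marker="DONE"):
    """Re-fetch the status file every interval_secs until it contains
    the completion marker, then return its final contents."""
    while True:
        text = fetch(url)
        if marker in text:
            return text
        time.sleep(interval_secs)
```

In a browser-based setup the same effect is usually had with a meta refresh in the returned page, as described earlier in the thread; a standalone poller like this is only needed if the "Caller" is itself a script.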


Fil



--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
