Hi Phil:

Thanks for the suggestion. I am looking at a way to deal with this,
but first I want to check that I have understood you correctly.

Source : the script that runs for 2-3 hrs.
Caller : initiates Source, polls it every so often, and updates something
like a progress window that the user sees.

So to start off, Caller makes an LWP request to Source. It sounds like,
with this approach, Source has to regularly send something back to
Caller rather than waiting until the end (2-3 hrs).
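
To make sure we are picturing the same thing, here is a rough sketch of
that first step on the Caller side (untested; the URL is made up, and I
am assuming the CGI on the Source end backgrounds the real work and
answers straight away):

use strict;
use warnings;
use LWP::UserAgent;

# Caller's initial request to Source. A short timeout should be fine,
# because Source is expected to answer immediately, not after 2-3 hrs.
my $ua    = LWP::UserAgent->new(timeout => 30);
my $reply = $ua->get('http://ourserver/cgi-bin/start_source.cgi');  # made-up URL

if ($reply->is_success) {
    print "Source started: ", $reply->decoded_content, "\n";
} else {
    die "could not start Source: ", $reply->status_line, "\n";
}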

But when you say "polls the server every so often", what do you mean
exactly? If the server is running in suEXEC mode, the Caller does not
know the process ids etc. (we are not running as su here). Can you
please clarify?
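
Just so I am asking the right question: my current guess is that
"polling" only means re-fetching some progress file (or status URL)
that Source keeps writing, so no process ids would be needed at all.
Something like this, perhaps (names and URL made up):

use strict;
use warnings;
use LWP::UserAgent;

# One "poll": look at whatever Source has written so far.
# No process ids involved -- only the file Source keeps appending to.
my $ua  = LWP::UserAgent->new(timeout => 15);
my $res = $ua->get('http://ourserver/source_progress.txt');  # made-up URL

if (!$res->is_success) {
    print "no progress yet (", $res->status_line, ")\n";
} elsif ($res->decoded_content =~ /^DONE$/m) {
    print "Source has finished\n";   # assuming Source prints DONE at the end
} else {
    print "still running\n";
}

Is that roughly it?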

I will look further into this as well.

Thanks

Louis.

-----Original Message-----
From: Phil Scarratt [mailto:[EMAIL PROTECTED]]
Sent: Friday, 15 October 2004 16:37
To: Louis
Cc: [EMAIL PROTECTED]
Subject: Re: [SLUG] Script Via browser Gives code=SERVER_RESPONSE_CLOSE !


Louis wrote:
> The script can also be executed from the command line. It can take
> anywhere between 2 and 3 hrs to complete.
>
> So does this mean that I cannot get it to run via browser?
>
> What about passing the call via LWP? If the browser times out, would
> the URL called via LWP still run anyway?
>
> Louis.
>

Not sure if this is still an issue, but I doubt you'd get a web
server/browser combination to control a script that runs for 2-3 hrs.
What you'd be better off doing is something like modifying the script to
output to a plain text file (for example) and then initiating it in the
background from a web-based page. The page returned by that initiating
request then polls the server every 5, 10, or however many minutes to
see if it has finished and displays any results. Did that make any
sense? Not sure if this will do what you want....
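
Something roughly along these lines is what I had in mind (completely
untested, and the script name, paths and timings are only placeholders):

#!/usr/bin/perl
# start_source.cgi -- untested sketch. Puts the long script in the
# background with its output going to a plain text file, then returns
# straight away so the browser never waits 2-3 hrs.
use strict;
use warnings;

my $log = '/tmp/source_progress.txt';   # placeholder path

system("/usr/local/bin/long_script.pl > $log 2>&1 < /dev/null &");

print "Content-type: text/html\n\n";
print qq{<html><body>Script started.\n},
      qq{<a href="/cgi-bin/status.cgi">Watch its progress here</a>\n},
      qq{</body></html>\n};

#!/usr/bin/perl
# status.cgi -- the "polling" page: reloads itself every 5 minutes and
# shows whatever the script has written to the text file so far.
use strict;
use warnings;

my $log = '/tmp/source_progress.txt';   # same placeholder path

print "Content-type: text/html\n\n";
print qq{<html><head><meta http-equiv="refresh" content="300"></head>\n},
      qq{<body><pre>\n};
if (open my $fh, '<', $log) {
    print while <$fh>;
    close $fh;
} else {
    print "No output yet.\n";
}
print qq{</pre></body></html>\n};

The important bit is that no web request ever waits on the script
itself; the browser only ever fetches a page that reads the text file.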


Fil


-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
