A job queue is also better because it prevents uncontrolled forking and excessive numbers of "dead" web connections hanging around; it simply queues requests until resources are available. You may find that handling many of these jobs in parallel eats up all your processor/memory resources, whereas with queuing you can limit the number of processes running in parallel. (And if your site gets bigger, you may be able to hand off some of the long-running work to a cluster of machines.)
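
To make that concrete, the worker side of such a queue can be tiny - this is only a sketch, with Parallel::ForkManager capping the concurrency, and claim_next_job()/run_slow_backend() as made-up stand-ins for your real queue storage (a jobs table, TheSchwartz, Minion, ...) and your slow backend call:

#!/usr/bin/perl
# Queue worker sketch: drain pending jobs with a hard cap on parallelism.
use strict;
use warnings;
use Parallel::ForkManager;

# Made-up stand-ins for real queue storage and the expensive backend call.
my @pending = map { "job-$_" } 1 .. 10;
sub claim_next_job   { shift @pending }
sub run_slow_backend { my ($job) = @_; sleep 3; warn "$job done\n" }

my $pm = Parallel::ForkManager->new(4);    # never more than 4 jobs at once

while (defined(my $job = claim_next_job())) {
    $pm->start and next;                   # parent: go claim the next job
    run_slow_backend($job);                # child: do the slow work
    $pm->finish;                           # child exits here
}
$pm->wait_all_children;

The web request then only has to insert a job row and return straight away, so the Apache children never pile up.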

On 4/21/2016 3:25 PM, Perrin Harkins wrote:
On Thu, Apr 21, 2016 at 9:48 AM, Iosif Fettich <ifett...@netsoft.ro> wrote:

    I'm afraid that won't fit, actually. It's not a typical Cleanup
    I'm after - I don't want to abandon the backend request I've
    already started just because the incoming original request gets
    closed. The cleanup handler could relaunch the slow backend
    request - but doing so, I'd pay for it twice.


You don't have to. You can just return immediately, and do all the work in the cleanup (or a job queue) while you let the client poll for status. It's a little extra work for simple requests, but it means all requests are handled the same and you never make extra requests to your expensive backend.
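
For instance, a minimal mod_perl 2 sketch of the cleanup variant could look like this (the job id scheme, the /status URL and run_slow_backend_and_store() are just placeholders, and the /status handler isn't shown):

package My::SlowThing;

use strict;
use warnings;

use Apache2::RequestRec  ();
use Apache2::RequestUtil ();                  # $r->push_handlers
use Apache2::RequestIO   ();                  # $r->print
use Apache2::Const -compile => qw(OK);

sub handler {
    my $r = shift;

    my $job_id = "$$-" . time();              # made-up job id scheme

    # Answer immediately and tell the client where to poll.
    $r->content_type('text/plain');
    $r->print("Accepted; poll /status?job=$job_id\n");

    # The cleanup handler runs after the response has been sent, so the
    # slow work no longer keeps the client waiting.
    $r->push_handlers(PerlCleanupHandler => sub {
        run_slow_backend_and_store($job_id);
        return Apache2::Const::OK;
    });

    return Apache2::Const::OK;
}

# Placeholder: call the slow backend and store the result wherever the
# /status handler will look for it.
sub run_slow_backend_and_store { sleep 30 }

1;

Note that the Apache child is still tied up while the cleanup runs, which is the argument for handing the work to a real job queue once traffic grows.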

If you're determined not to do polling from the client, your best bet is probably to fork immediately and do the work in the fork, while the original process polls to see whether it's done. You'd have to write the response to a database or somewhere else the original process can pick it up from. But forking from mod_perl is a pain and easy to mess up, so I recommend one of the other approaches.
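
For completeness, a rough outline of that fork-and-poll approach under mod_perl 2 might look like this - only a sketch: a temp file stands in for the database, run_slow_backend() is made up, and it skips over the usual forking headaches such as inherited sockets and database handles:

package My::ForkThing;

use strict;
use warnings;

use POSIX ();
use Apache2::RequestRec ();
use Apache2::RequestIO  ();
use Apache2::Const -compile => qw(OK SERVER_ERROR);

sub handler {
    my $r = shift;

    my $result_file = "/tmp/slow-job-$$-" . time();   # visible to both processes

    my $pid = fork();
    return Apache2::Const::SERVER_ERROR unless defined $pid;

    if ($pid == 0) {
        # Child: keeps running even if the client disconnects.
        my $answer = run_slow_backend();              # made-up slow call
        if (open my $fh, '>', "$result_file.tmp") {
            print {$fh} $answer;
            close $fh;
            rename "$result_file.tmp", $result_file;  # atomic "done" marker
        }
        POSIX::_exit(0);                              # skip Apache's teardown code
    }

    # Parent: hold the original request open until the result appears.
    sleep 1 until -e $result_file;
    waitpid $pid, 0;                                  # reap the child, no zombie

    open my $fh, '<', $result_file or return Apache2::Const::SERVER_ERROR;
    local $/;
    $r->content_type('text/plain');
    $r->print(scalar <$fh>);
    close $fh;
    unlink $result_file;

    return Apache2::Const::OK;
}

# Made-up stand-in for the expensive backend request.
sub run_slow_backend { sleep 5; return "backend finished\n" }

1;

The parent still occupies an Apache child for the whole wait, so this only protects the backend work from client disconnects; it doesn't free up the server.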

- Perrin




