On 9 Aug 2006 at 10:35, Also Sprach Matt Sergeant:

> On 9-Aug-06, at 9:47 AM, [EMAIL PROTECTED] wrote:
> 
> > The server sits there listening for a request. This request
> > has to go to a db, retrieve lots of data, generate the xml etc
> > and eventually pass back html. Let's say that the retrieval of data
> > takes about 5 mins.
> > If 100 users put in a request at the same time, and the server blocks to
> > service the first user, how does it service the other users without
> > forking or passing them on to other daemons or threads?
> > (I have never used threads and have little idea about them)
> 
> So rather than answer this, I'll throw the question back to you - how  
> do you currently cope with a system that expects 100 concurrent users  
> requesting pages that take 5 minutes to generate?

The only similar thing I have is a daemon that sits and blocks on a 
pipe, waiting for requests from a process that monitors directories.
So that I don't lose requests (assuming I could; I'm not sure it's 
possible), as soon as I get a request I fork and let the child handle 
it without waiting for it, while the parent goes back to listening on 
the pipe.
When there are a lot of requests it slows the system down 
somewhat :) I wrote it about 3 years ago when I was learning Perl, so 
I am more than happy to explore better methods than perlIO teaches.
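For what it's worth, the fork-per-request pattern above can be sketched
roughly like this (names like the request list are stand-ins for the real
pipe reads; the child's "work" is just a short sleep). One detail worth
adding over a naive fork loop is reaping finished children with
waitpid/WNOHANG so they don't linger as zombies while the parent keeps
listening:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX ':sys_wait_h';

# Stand-ins for requests arriving on the pipe.
my @requests = (1 .. 3);

my @kids;
for my $req (@requests) {
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # Child: do the slow work (here, a 0.1s nap) and exit.
        select undef, undef, undef, 0.1;
        exit 0;
    }
    # Parent: note the child and immediately go back for the next request.
    push @kids, $pid;
}

# Reap children as they finish, without blocking on any one of them.
my $reaped = 0;
while ($reaped < @kids) {
    my $pid = waitpid(-1, WNOHANG);
    if ($pid > 0) { $reaped++ }
    else          { select undef, undef, undef, 0.05 }
}
print "reaped $reaped children\n";
```

In a long-running daemon you'd typically put the waitpid loop in a
$SIG{CHLD} handler instead, so reaping happens while the parent is
blocked on the pipe.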

I presume the way to go would be to have the Danga stuff handle the
request in a listener and have it pass the request on to one of N other 
daemons, much as Apache does now. Probably.
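A rough sketch of that listener, using Danga::Socket's event loop to
accept connections and round-robin them to pre-forked workers. This is
only an outline under assumptions: spawn_workers() is a hypothetical
helper that would return one pipe per worker, and printing the client's
fileno down a pipe is a placeholder; actually handing a socket to
another process needs real fd passing (SCM_RIGHTS, e.g. via the
IO::FDPass module):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Danga::Socket;
use IO::Socket::INET;

my $listen = IO::Socket::INET->new(
    LocalPort => 8080,
    Listen    => 128,
    ReuseAddr => 1,
    Blocking  => 0,
) or die "listen: $!";

# Hypothetical: fork N workers, each reading requests from its own pipe.
my @workers = @{ spawn_workers(4) };
my $next = 0;

# Watch the listening socket; the callback fires when a client connects.
Danga::Socket->AddOtherFds(fileno($listen) => sub {
    my $client = $listen->accept or return;
    # Round-robin the connection to one of the N workers.
    my $w = $workers[ $next++ % @workers ];
    # Placeholder hand-off; real code would pass the fd itself
    # (SCM_RIGHTS / IO::FDPass), not just its number.
    print {$w} fileno($client), "\n";
});

Danga::Socket->EventLoop;
```

That keeps the listener non-blocking no matter how long each request
takes, since the 5-minute work happens entirely in the workers.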

John

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]