Supun Kamburugamuva wrote:
Hi all,

I'm writing a httpd module. This module is a content generator.

When a request comes to this module, the module has to forward it to a remote
machine. In that sense, this module makes httpd act like a proxy server.

Talking to a remote machine can take some time, and if I use the default
threading model this app won't scale. When httpd delivers the request to
the module, I want to use my own thread pool to handle the request. After
handling the request, I want to send the response back to the actual client
through the Apache server.

Is this model supported by httpd? If so, can you point me to some
documentation where I can learn more about it?


My knowledge about this is almost 2 years old.


At my earlier job we faced a very similar problem, and the only good solution at that point was to extend the (then) experimental event MPM to suit our needs. This new MPM made httpd asynchronous on a module-by-module basis; that code is owned by a corporation and is proprietary.


1. The basic principle was to start up a "polling" thread in addition to the listeners and workers. The "polling" thread uses epoll() (or a suitable variant) to monitor i/o on client/server socket pairs (a rough sketch of the pattern follows after step 3).


2. When a request comes in that needs to be handled asynchronously, the module calls an API that pushes the "request object" + context data (the client socket and the socket connected to the remote machine) to the epoll set of the polling thread. The worker thread is now free to handle a new request.


3. The polling thread waits for i/o to occur on one of the sockets in the epoll set. Depending on the i/o, it updates some state info and pushes the socket back into the "worker queue". A free worker then picks up this connection and processes it as required, including running the protocol stack, which might end up running a module that could push the socket back to the polling thread.
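
To make the mechanics a bit more concrete, here is a rough standalone sketch of that polling-thread pattern in plain POSIX C (epoll + pthreads). It is not the actual MPM code: async_conn_t, poller_init(), push_to_poller() and worker_queue_push() are made-up names for illustration, and no httpd/APR internals are used.

/*
 * Rough sketch of the polling-thread pattern (not the real mpm code).
 * async_conn_t, poller_init(), push_to_poller() and worker_queue_push()
 * are hypothetical names; plain POSIX epoll + pthreads only.
 */
#include <pthread.h>
#include <string.h>
#include <sys/epoll.h>

typedef struct async_conn_t {
    int   client_fd;   /* socket to the original client              */
    int   backend_fd;  /* socket to the remote machine               */
    void *request;     /* opaque "request object" + context data     */
    int   state;       /* protocol state, updated as i/o completes   */
} async_conn_t;

static int epfd = -1;  /* epoll set shared by workers and the poller */

/* Hypothetical worker queue: a free worker thread pops connections
 * from here and runs the protocol stack on them.  Stubbed out.      */
static void worker_queue_push(async_conn_t *conn)
{
    (void)conn;        /* real code: enqueue + signal a condition var */
}

/* The extra "polling" thread started alongside listeners and workers. */
static void *polling_thread(void *arg)
{
    struct epoll_event events[64];
    (void)arg;

    for (;;) {
        int n = epoll_wait(epfd, events, 64, -1);
        for (int i = 0; i < n; i++) {
            async_conn_t *conn = events[i].data.ptr;
            /* i/o is now possible: note the state change and hand the
             * connection back to a free worker                       */
            conn->state++;
            epoll_ctl(epfd, EPOLL_CTL_DEL, conn->backend_fd, NULL);
            worker_queue_push(conn);
        }
    }
    return NULL;
}

/* Called by a worker (e.g. from the module) to hand a connection over
 * to the polling thread; the worker is then free for a new request.  */
int push_to_poller(async_conn_t *conn)
{
    struct epoll_event ev;
    memset(&ev, 0, sizeof(ev));
    ev.events   = EPOLLIN | EPOLLONESHOT;  /* fire once per handoff   */
    ev.data.ptr = conn;
    /* the backend socket is watched here; the client socket could be
     * registered the same way when waiting on the client side        */
    return epoll_ctl(epfd, EPOLL_CTL_ADD, conn->backend_fd, &ev);
}

/* Create the epoll set and start the polling thread. */
int poller_init(void)
{
    pthread_t tid;
    epfd = epoll_create1(0);
    if (epfd < 0)
        return -1;
    return pthread_create(&tid, NULL, polling_thread, NULL);
}

Using EPOLLONESHOT means a registered socket fires at most once before it is re-armed or removed, which keeps the polling thread and the workers from ever operating on the same connection at the same time.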


This implementation required changes to the HTTP protocol handling code, some apr_pool handling code, and a significant rework of the listener and worker code.

The implementation scaled rather well, with just 5 worker threads handling 400 concurrent long-running requests.

srp
--
http://saju.net.in
