It's a problem of storing backlog requests: the request you sent is
still being processed, but you already have a new request to send.

You have 3 options:
1) Wait until the previous request returns.
2) Open a new connection and send the new request through it.
3) Use an asynchronous interface where you send requests as soon as
you get them, and then match replies back to the original requestor
by request_id (see the sketch right after this list).
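A minimal sketch of option 3, assuming a hypothetical backend protocol
that echoes a request_id in every reply (FastCGI's requestId field plays
the same role when the daemon actually supports multiplexing). The names
and wire format here are illustrative only, not Cherokee or php-fpm API:

    import asyncio
    import itertools

    class MultiplexedBackend:
        def __init__(self, reader, writer):
            # One asyncio stream pair to the backend daemon.
            self.reader = reader
            self.writer = writer
            self.next_id = itertools.count(1)
            self.pending = {}   # request_id -> Future waiting for the reply

        async def request(self, payload: bytes) -> bytes:
            # Send immediately; no waiting for earlier requests to finish.
            req_id = next(self.next_id)
            fut = asyncio.get_running_loop().create_future()
            self.pending[req_id] = fut
            header = req_id.to_bytes(4, "big") + len(payload).to_bytes(4, "big")
            self.writer.write(header + payload)
            await self.writer.drain()
            return await fut    # resolved by reply_loop()

        async def reply_loop(self):
            # Replies may arrive in any order; the request_id routes each
            # one back to the coroutine that originally sent the request.
            while True:
                header = await self.reader.readexactly(8)
                req_id = int.from_bytes(header[:4], "big")
                length = int.from_bytes(header[4:8], "big")
                body = await self.reader.readexactly(length)
                fut = self.pending.pop(req_id, None)
                if fut is not None and not fut.done():
                    fut.set_result(body)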

I haven't checked the FastCGI API yet, but it seems Cherokee and the
others are using option 2, in fact opening lots and lots of connections
and pushing the backlog storage onto the system kernel and the FastCGI
daemon implementation. Linux holds the connections easily enough, but
at least the php-fpm and php-cgi daemons don't seem to like handling
thousands of outstanding connections.

The final solution should probably be option 3, but as a workaround,
opening the same number of connections as there are FastCGI
children/threads and then holding pending requests in the web server
itself or its module seems like a sensible thing to do and should bring
easy performance benefits. A rough sketch of that pool is below.
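A minimal sketch of that workaround, assuming N pre-opened backend
connections (one per FastCGI child) passed in as asyncio stream pairs;
the single-read reply handling is a simplification, not Cherokee's
actual module code:

    import asyncio

    class BoundedPool:
        def __init__(self, connections):
            # connections: list of (reader, writer) pairs, one per
            # FastCGI child/thread.
            self.idle = asyncio.Queue()
            for conn in connections:
                self.idle.put_nowait(conn)

        async def handle(self, payload: bytes) -> bytes:
            # If every connection is busy, this waits here: the backlog
            # lives in the web server instead of in thousands of extra
            # connections to the daemon.
            reader, writer = await self.idle.get()
            try:
                writer.write(payload)
                await writer.drain()
                return await reader.read(65536)  # one reply per request
            finally:
                self.idle.put_nowait((reader, writer))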

--
 silvio

On Sun, May 30, 2010 at 8:21 PM, kevin beckford <[email protected]> wrote:
>> Is there a limit on connections per UNIX socket ?
>> Is there a way to use a small amount of php connections and push
>> requests through them, waiting for them to free up instead of opening
>> them by thousands?
>
> Keep-Alive?  Or is this problem not at that level?
>