On 8/19/12 12:40 PM, Scott Reynolds wrote:
> On Fri, Aug 17, 2012 at 5:31 AM, Jeffrey Van Voorst
> <[email protected]> wrote:
> <snip>
>> Finally, this change breaks several handlers, and I don't
>> think that writers of handlers should be required to think about the number
>> of messages sent or msg buffer overruns.
> Interesting, what handlers does it break? Can you demonstrate the breakage?
>
> Seems it was introduced in these two commits:
> 1391ff8a1db3e948295910f842a781c7e9f2ec1d and
> 00ca2623ecbd77ee46227f1f3211b71a56b4e9fb and perhaps I am reading it
> wrong, but it makes delivery an async task, which strikes me as a good
> thing. It just loops and delivers messages when it is able to
> dequeue.
>> Best Regards,
>>
>> Jeff Van Voorst
For background, I am running uWSGI and mongrel2 on the same virtual 
machine that has 1 CPU and 4GB of RAM.

Using the default value of 16 for the maximum number of outstanding 
messages breaks uWSGI as a mongrel2 handler, especially when multiple 
uWSGI threads or processes are used, because uWSGI sends 4 messages per 
HTTP header.  I agree that this is relatively easy to fix by having 
uWSGI send all HTTP headers as one message, or by increasing the 
maximum number of outstanding messages.  However, a fixed limit on 
outstanding messages should be clearly documented and explicitly 
described in chapter 5 of the Mongrel2 manual as something handler 
writers need to keep in mind.
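
To make the failure mode concrete, here is a toy model in plain Python 
(this is not Mongrel2's actual ring implementation, and the 8-worker 
count is a hypothetical I picked for illustration): with a 16-slot ring 
and each response contributing 4 header messages, a burst from several 
workers overruns the ring before anything can be drained.

```python
from collections import deque

RING_SIZE = 16  # Mongrel2's default limit on outstanding messages


class MessageRing:
    """Toy model of a bounded outgoing-message ring: an enqueue counts
    as an overrun when the ring is already full and nothing has been
    drained yet."""

    def __init__(self, size):
        self.size = size
        self.ring = deque()
        self.overruns = 0

    def enqueue(self, msg):
        if len(self.ring) >= self.size:
            self.overruns += 1  # message lost: buffer overrun
        else:
            self.ring.append(msg)


ring = MessageRing(RING_SIZE)
# 8 concurrent workers (hypothetical), each sending 4 header messages
# per response, all landing before the ring is drained:
for worker in range(8):
    for part in range(4):
        ring.enqueue((worker, part))

print(ring.overruns)  # 32 messages against 16 slots -> 16 overruns
```

The same burst sent as one batched header message per worker (8 
messages total) would fit comfortably in the 16 slots, which is why 
batching is the easy fix on the uWSGI side.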

The other easy way to break this message ring is with a streaming 
application.  Two common examples: the first is sending a text file 
line by line (a number of frameworks use this as an example, e.g. 
Flask); the second is streaming a large tar.gz file from uWSGI (with 
the default of 64k chunks).  When I used 128 for the message ring 
size, the transfer always failed (overran the buffer), and with a ring 
size of 1024 the message buffer was still overrun fairly often.  Note 
that this happens with only a single user; in my opinion the issue 
will be even more problematic with concurrent users.
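
Back-of-the-envelope arithmetic shows why streaming overruns the ring 
so easily (the 200 MB file size here is a hypothetical I chose; the 
64k chunk size is uWSGI's default mentioned above):

```python
FILE_SIZE = 200 * 1024 * 1024   # hypothetical 200 MB tar.gz
CHUNK = 64 * 1024               # uWSGI's default 64k response chunks

# ceiling division: number of messages the stream generates
chunks = -(-FILE_SIZE // CHUNK)
print(chunks)  # 3200 messages

for ring_size in (128, 1024):
    # True means the stream alone can fill the ring if the handler
    # produces chunks faster than Mongrel2 drains them
    print(ring_size, chunks > ring_size)
```

So a single download produces thousands of messages; unless delivery 
keeps pace with the producer, any fixed ring size this small will 
eventually overrun.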

--Jeff
