I have a bug involving some Rack middleware that determines certain PUT/POST 
error responses without needing to read the request body.  On Linux, this 
causes a TCP reset when Unicorn closes the socket with unread data in the 
input buffer.  The reset preempts delivery of the written-but-still-buffered 
response, so the client receives a bare RST and no response.  If I inject some 
read-buffer-draining code just before the close, the problem goes away.
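Not from the original code, but here is a minimal sketch of the kind of 
drain-before-close workaround described above, assuming Ruby 1.9+'s 
nonblocking socket API (the method name `drain_and_close` and the 16 KiB 
chunk size are my own, for illustration only):

```ruby
require 'socket'

# Read and discard any unconsumed request data before closing, so the
# kernel does not find unread bytes in the receive buffer at close(2)
# time (which, on Linux TCP, triggers an RST instead of a normal FIN).
def drain_and_close(socket)
  loop do
    data = socket.recv_nonblock(16_384)
    break if data.empty?  # peer sent FIN; nothing left to drain
  end
rescue IO::WaitReadable
  # Receive buffer is empty right now; stop draining rather than block.
rescue EOFError, Errno::ECONNRESET
  # Client already finished or went away.
ensure
  socket.close
end
```

Note this only drains what has already arrived; a client still streaming a 
large body can race the close, so it reduces rather than eliminates the 
window for an RST.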

The RST behavior is obviously undesirable, but my question is: which level of 
the stack is responsible for making sure it doesn't happen?  It strikes me 
that Unicorn is in the best position to do it efficiently and reliably, but 
anything higher up the stack could also do it.

Thoughts?

-john

_______________________________________________
Unicorn mailing list - [email protected]
http://rubyforge.org/mailman/listinfo/mongrel-unicorn
Do not quote signatures (like this one) or top post when replying