On 2011-01-07 09:47:12 -0800, Timothy Farrell said:

However, I'm requesting that servers _optionally_ provide environ['wsgi.executor'] as a futures executor that applications can use for the purpose of doing something after the response is fully sent to the client. This feature request is designed to be concurrency-methodology agnostic.

Done.  (In terms of implementation, not updating PEP 444.)  :3

The Marrow server now implements a thread pool executor using the concurrent.futures module (or the equivalent futures package from PyPI). The following are the commits; the changes will look bigger than they are because several previously nested blocks of code were moved into separate functions for use as callbacks. 100% unit test coverage is maintained (without errors), an example application has been added, and the benchmark suite updated to allow the thread count to be configured.

        http://bit.ly/gUL33v
        http://bit.ly/gyVlgQ
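
For a sense of the shape of the change, here is a simplified sketch of a server sharing one thread pool with its applications through the WSGI environ. This is not the actual Marrow code (see the commits above for that), and the class and method names are placeholders:

    # Simplified sketch; not the actual Marrow implementation.
    # concurrent.futures is stdlib in Python 3.2+; the 'futures' package
    # on PyPI provides it for earlier versions.
    from concurrent import futures


    class ThreadedWSGIHandler(object):
        def __init__(self, application, threads=30):
            self.application = application
            self.executor = futures.ThreadPoolExecutor(max_workers=threads)

        def dispatch(self, environ, start_response):
            # Expose the executor so the application can schedule deferred work.
            environ['wsgi.executor'] = self.executor

            # Run the application on the pool as well; a callback delivers the
            # response once the future completes.
            future = self.executor.submit(self.application, environ, start_response)
            future.add_done_callback(self._deliver)
            return future

        def _deliver(self, future):
            body = future.result()
            # ... hand the body off to the I/O layer to write to the client ...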

Testing this yourself requires Git checkouts of the threaded branches of marrow.server and marrow.server.http, and likely the latest marrow.io from Git as well:

        https://github.com/pulp/marrow.io
        https://github.com/pulp/marrow.server/tree/threaded
        https://github.com/pulp/marrow.server.http/tree/threaded

This update has not been tested under Python 3.x yet; I'll do that shortly and push any fixes, though I doubt there will be any.

On 2011-01-08 03:26:28 -0800, Alice Bevan–McGregor said in the [PEP 444] Future- and Generator-Based Async Idea thread:

As a side note, I'll be adding threading support to the server... but I suspect the overhead will outweigh the benefit for speedy applications.

I was, surprisingly, quite wrong in this prediction. The following is the output of a pair of C25 (25 concurrent requests) benchmarks, the first not threaded, the other with 30 threads (enough that no request would have to wait for a free thread).

        https://gist.github.com/770893

The difference is a loss of 60 requests per second out of 3280. Note that the implementation I've devised can pass the concurrent.futures executor to the WSGI application (and, in fact, does), fulfilling the requirements of this discussion. :D
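
In practice, an application can pull that executor out of the environ and hand off work that shouldn't hold up the response. A minimal sketch, written against the classic WSGI callable signature for brevity; the application and the deferred record_hit task are hypothetical:

    # Hypothetical application using the proposed environ['wsgi.executor'];
    # the executor is assumed to follow the concurrent.futures.Executor API.
    import time


    def record_hit(path, started):
        # Deferred work (slow logging, analytics, etc.) kept out of the
        # request/response cycle.
        print("Served %s in %.3f seconds" % (path, time.time() - started))


    def application(environ, start_response):
        started = time.time()
        start_response('200 OK', [('Content-Type', 'text/plain')])

        executor = environ.get('wsgi.executor')
        if executor is not None:
            # Submit the deferred task to the server's shared thread pool.
            executor.submit(record_hit, environ.get('PATH_INFO', '/'), started)

        return [b'Hello, world!\n']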

The use of callbacks internally to the HTTP protocol makes a huge difference in overhead, I guess.

        - Alice.

