The long-lived connection problem has little to do with async IO. Async
IO makes sure that you don't waste a thread reading from and writing to
a potentially slow client. The problem we are discussing has to do with
content generation.
The cherrypy WSGI server (which is really fast btw :) ) has part of the
request handling built into WorkThread.run():

    response = request.wsgi_app(request.environ,
                                request.start_response)
    for line in response:
        request.write(line)
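Roughly, the refactor I mean looks like this (a sketch, not CherryPy's
actual code -- handle_request and the fake request object are names I
made up for illustration): the write loop becomes a generator, and an
empty string from the app is the signal to suspend.

```python
def handle_request(request):
    """Sketch: the worker loop as a generator, so a request that is
    not ready can be suspended between chunks instead of blocking a
    thread."""
    response = request.wsgi_app(request.environ,
                                request.start_response)
    for line in response:
        if line:
            request.write(line)
        else:
            # The WSGI app yielded an empty string: nothing to send
            # yet, so hand control back to the scheduler.
            yield


# Minimal demo with a fake request object.
class FakeRequest(object):
    def __init__(self, app):
        self.wsgi_app = app
        self.environ = {}
        self.start_response = lambda status, headers: None
        self.out = []

    def write(self, data):
        self.out.append(data)


def demo_app(environ, start_response):
    start_response('200 OK', [])
    yield 'hello'
    yield ''        # not ready yet -- scheduler should suspend us here
    yield 'world'


req = FakeRequest(demo_app)
for _ in handle_request(req):   # each iteration is one suspension point
    pass                        # a real scheduler would re-queue here
```

After driving the generator to completion, req.out holds only the
non-empty chunks ('hello' and 'world').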
I have refactored the request handling code into a generator-based
coroutine such that WSGI apps that yield empty strings can be stopped
and resumed at will. My first attempt used a simple ScheduledQueue to
exponentially delay the moment when the task is put back into the
queue. This however generates a lot of queue activity just to poll the
pending WSGI app. It handles up to 300 pending connections with just
two threads, but after that it starts hogging the CPU due to queue
activity.
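To make the approach concrete, here is a minimal sketch of such a
polling scheduler (my own toy version, not the actual ScheduledQueue
from my code): each suspended task is re-queued with an exponentially
growing delay, and a worker pops whichever task wakes up first. You can
see why the queue churns -- every poll costs a pop and a push.

```python
import heapq
import time


class ScheduledQueue(object):
    """Toy priority queue of (wakeup time, task) pairs."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so tasks are never compared

    def empty(self):
        return not self._heap

    def put(self, task, delay):
        heapq.heappush(self._heap,
                       (time.time() + delay, self._seq, task, delay))
        self._seq += 1

    def get(self):
        when, _, task, delay = heapq.heappop(self._heap)
        time.sleep(max(0.0, when - time.time()))
        return task, delay


def run(queue, max_delay=1.0):
    """Poll each coroutine once per wakeup; back off exponentially."""
    while not queue.empty():
        task, delay = queue.get()
        try:
            next(task)                  # poll the pending WSGI app once
        except StopIteration:
            continue                    # request finished, drop it
        queue.put(task, min(delay * 2, max_delay))


# Demo: a task that needs three polls before it completes.
done = []

def slow_task(n):
    for _ in range(n):
        yield              # "not ready yet"
    done.append(n)

q = ScheduledQueue()
q.put(slow_task(3), 0.001)
run(q)
```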
So, I'm looking into microthreads to solve the queue activity problem.
The problem however is that a poll to the WSGI app could return data
which needs to be written to the socket. I don't see any clear way to
inject this data into the coroutine. PEP 342 will solve this issue, but
it has to wait until Python 2.5. So maybe we need a different polling
mechanism for WSGI? There is a thread on the Twisted mailing list
called "[Web-SIG] WSGI woes" where an async WSGI extension API was
suggested. It would imply an env['paused'] flag which the server could
use to schedule the request.
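For reference, this is the kind of thing PEP 342's send() enables (a
toy illustration, not real WSGI code): the scheduler can push data back
into a suspended coroutine, so a poll that produced socket data has
somewhere to put it.

```python
def wsgi_like_app():
    """Toy coroutine: suspends with an empty yield, then receives
    data injected by the scheduler via send() (PEP 342, Python 2.5+)."""
    data = yield ''              # suspend; value arrives when resumed
    yield 'got: ' + data


app = wsgi_like_app()
first = next(app)                # run to the first yield -> ''
result = app.send('payload')     # resume, injecting the data
```

Without send() there is no channel back into the generator, which is
exactly the gap I ran into.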
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups
"TurboGears" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/turbogears
-~----------~----~----~----~------~----~------~--~---