> TurboGears would be a terrible choice.  Python does not do well on threads
> and has been known to lock up solid when executing a fork() out of a
> thread.  Also, unless you feel your webserver should use very little of
> your computer's resources, the threaded approach of TurboGears may not give
> you what you want.  Python folk made a design decision way back to
> implement a Global Interpreter Lock, which means only one thread runs at a
> time in any process; even if you have 100 threads and 32 processor cores,
> only one thread will be running on one processor.  So while TurboGears has
> a very short learning curve, it is not really for production performance.

The standard way to deploy TurboGears on high-performance sites is to
run several multi-threaded TurboGears instances on the same box and
load-balance them behind Apache, nginx, or some other reverse proxy
server.
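
For example, the nginx side might look roughly like this -- the ports,
pool name, and server_name are just placeholders I made up, nothing
TurboGears-specific:

    # pool of local TurboGears instances (ports are assumptions)
    upstream turbogears_pool {
        server 127.0.0.1:8080;
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            # hand each request to whichever instance nginx picks from the pool
            proxy_pass http://turbogears_pool;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }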

This configuration scales quite well: you get more concurrency per
process than you would with a single-threaded Python web server
(particularly because a lot of time is spent in network or database IO,
during which other threads are free to run), and you end up with less
memory overhead for the same number of concurrent connections.

Of course, this takes a little bit of work, but you can set up a
multi-process TurboGears deployment in a few minutes.
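
Roughly, assuming a quickstarted project called "myapp" and one config
file per instance (the names and ports below are made up), it's little
more than:

    # each config file sets a different port, e.g. in prod1.cfg:
    #   server.socket_port = 8080
    # and 8081, 8082 in prod2.cfg / prod3.cfg
    python start-myapp.py prod1.cfg &
    python start-myapp.py prod2.cfg &
    python start-myapp.py prod3.cfg &

Then point nginx (or Apache) at those ports, as in the sketch above.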

--Your friendly neighborhood TurboGears man
(he does things only a turbogear can...?)
