> - Am I correct and Waitress is limited by Python threading?
> - What's the recommended modern multiprocess-enabled web server to do more 
> scalable Pyramid hosting?

Apologies for a "pop up" visit here; I have been AWOL forever, but I think I do 
know an answer to at least the first question.

If you choose to stick with Waitress, and you are indeed bottlenecked by the 
GIL, and you have a single-host, single-instance deployment, you could change 
that deployment in a way that looks a lot like a solution for running on 
multiple hosts.  For
example, if you now run a single Waitress instance on TCP port 8080, you could 
spin one up on 8081, one on 8082, etc.  It's the same scaling solution as might 
exist across multiple hosts, except just on one host.
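One way to sketch that multi-instance setup in Python. This is only an illustration, not anything from the thread: the `myapp.make_app` factory, the instance count, and the port range are all assumptions.

```python
import multiprocessing

BASE_PORT = 8080
NUM_INSTANCES = 4  # assumption: roughly one instance per CPU core

PORTS = [BASE_PORT + i for i in range(NUM_INSTANCES)]  # 8080, 8081, 8082, 8083

def run_instance(port):
    # Imported inside the child so each process builds its own app state.
    from waitress import serve
    from myapp import make_app  # hypothetical Pyramid app factory
    serve(make_app(), host="127.0.0.1", port=port)

def launch_all():
    """Start one full Waitress instance (its own interpreter, its own
    GIL) listening on each port."""
    procs = [multiprocessing.Process(target=run_instance, args=(p,))
             for p in PORTS]
    for proc in procs:
        proc.start()
    return procs
```

Each process gets its own GIL; the threads inside each Waitress instance still share one, so this buys you CPU parallelism across instances, not within one.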

Then you'd need some sort of proxy that is willing to round-robin requests (or 
whatever scheduling you want) to each of those instances.
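Real deployments usually put nginx or HAProxy in front to do this, but the round-robin part itself is tiny. A toy sketch in Python, with the backend addresses assumed to match the 8080/8081/8082 example above:

```python
import itertools

# Assumed local backends, one per Waitress instance.
BACKENDS = [
    "http://127.0.0.1:8080",
    "http://127.0.0.1:8081",
    "http://127.0.0.1:8082",
]

_pool = itertools.cycle(BACKENDS)

def pick_backend():
    """Return the next backend in strict rotation; a proxy would
    forward the incoming request to this address."""
    return next(_pool)
```

Swapping in a different scheduling policy (least-connections, weighted, etc.) just means replacing `pick_backend`.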

I don't know what the current state of the world is wrt new coolness in WSGI 
servers.  That said, the set of changes to your codebase required to 
support a multiprocessing model rather than a multithreading WSGI model are the 
same regardless of which server you choose, so I'd suggest just trying it 
first with Waitress, finding the stuff that doesn't work or is awkward, and 
fixing it.  If you later decide to use another WSGI server, that work won't be 
wasted.
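A concrete example of the kind of change involved: any module-level state that request threads share inside one Waitress process stops being shared once there are several processes. A minimal sketch of the pattern that breaks:

```python
import threading

# Works fine under Waitress's thread pool: every request thread in one
# process sees the same counter.
HITS = {"count": 0}
_lock = threading.Lock()

def record_hit():
    with _lock:
        HITS["count"] += 1

# Ten "request threads" all land their increments in this process's HITS...
threads = [threading.Thread(target=record_hit) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# ...but a second Waitress instance would hold its own HITS starting at
# zero.  State like this has to move out to shared storage (a database,
# Redis, memcached, etc.) before the multi-instance model gives correct
# answers.
```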

- C

-- 
You received this message because you are subscribed to the Google Groups 
"pylons-discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to pylons-discuss+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/pylons-discuss/csDkUS3S6QOoTSaM_GHMW79WyUSfTNuII1VZbodxArXwNcvk1-EV129AWhm8_kUEIGbUz83NLwfQ7g2J5X7OzBZhdX6uooKHcWmTFZnh07c%3D%40plope.com.
