Chris Angelico <ros...@gmail.com>:

> Okay, but how do you handle two simultaneous requests going through
> the processing that you see above? You *MUST* separate them onto two
> transactions, otherwise one will commit half of the other's work. (Or
> are you forgetting Databasing 101 - a transaction should be a logical
> unit of work?) And since you can't, with most databases, have two
> transactions on one connection, that means you need a separate
> connection for each request. Given that the advantages of asyncio
> include the ability to scale to arbitrary numbers of connections, it's
> not really a good idea to then say "oh but you need that many
> concurrent database connections". Most systems can probably handle a
> few thousand threads without a problem, but a few million is going to
> cause major issues; but most databases start getting inefficient at a
> few thousand concurrent sessions.
I will do whatever I have to. Pooling transaction contexts
("connections") is probably necessary. The point is that no task should
ever block. I deal with analogous situations all the time; in fact, I'm
dealing with one as we speak.

> Alright. I'm throwing down the gauntlet. Write me a purely nonblocking
> web site concept that can handle a million concurrent connections,
> where each one requires one query against the database, and one in a
> hundred of them require five queries which happen atomically. I can do
> it with a thread pool and blocking database queries, and by matching
> the thread pool size and the database concurrent connection limit, I
> can manage memory usage fairly easily; how do you do it efficiently
> with pure async I/O?

Sorry, I'm going to pass. That doesn't look like a 5-liner.


Marko
--
https://mail.python.org/mailman/listinfo/python-list
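[Editor's note: a minimal sketch of the pooled-transaction-context idea
discussed above. `FakeConnection` is a hypothetical stand-in for a real
database session (no actual database is involved); the point is only the
shape: a fixed-size pool bounds concurrent DB sessions while arbitrarily
many async tasks share it, each running its queries on one connection,
and tasks await the pool rather than blocking a thread.]

```python
import asyncio

class FakeConnection:
    """Hypothetical stand-in for a DB session; counts queries it runs."""
    def __init__(self, conn_id):
        self.conn_id = conn_id
        self.queries = 0

    async def execute(self, sql):
        await asyncio.sleep(0)       # pretend to wait on the database
        self.queries += 1

async def handle_request(pool, n):
    conn = await pool.get()          # suspends the task, not a thread
    try:
        # One query per request; every 100th request needs five queries,
        # which with a real driver would run inside one BEGIN/COMMIT.
        for _ in range(5 if n % 100 == 0 else 1):
            await conn.execute("SELECT ...")
    finally:
        pool.put_nowait(conn)        # hand the connection back

async def main(num_requests=1000, pool_size=20):
    # 1000 concurrent requests share only 20 "connections".
    pool = asyncio.Queue()
    conns = [FakeConnection(i) for i in range(pool_size)]
    for c in conns:
        pool.put_nowait(c)
    await asyncio.gather(*(handle_request(pool, n)
                           for n in range(num_requests)))
    return sum(c.queries for c in conns)

total = asyncio.run(main())
print(total)  # 990 one-query requests + 10 five-query requests = 1040
```

With a real async driver the queue of connections would be the driver's
own pool and the five-query case a real transaction, but the concurrency
shape is the same: the database session count stays at `pool_size`
regardless of how many requests are in flight.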