Bryan Olson <fakeaddr...@nowhere.org> wrote:

>
>Where does this come up? Suppose that to take advantage of multi-core 
>processors, our server runs as four processes, each with a single thread 
>that responds to events via select(). Clients all connect to the same 
>server port, so the socket listening on that port is shared by all four 
>processes. A perfectly reasonable architecture (though with many more 
>processes the simple implementation suffers the "thundering herd problem").


Which is why it is common for real-world servers to serialize the
select()/accept() code, usually via a file lock or a semaphore.
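
For instance, with four pre-forked workers sharing one listening
socket, a minimal sketch of the file-lock approach (my own
illustration; the lock-file path and port number are arbitrary) could
look like this:

    # Serialize select()/accept() across pre-forked workers with an
    # fcntl file lock, so only one process at a time waits for a new
    # connection and the others sleep on the lock instead.
    import fcntl, os, select, socket

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(('', 8888))
    listener.listen(128)

    for _ in range(3):            # parent + three children = four workers
        if os.fork() == 0:
            break

    # Open the lock file after fork(): flock() locks belong to the open
    # file description, so each worker needs its own descriptor.
    lockfile = open('/tmp/accept.lock', 'w')

    while True:
        fcntl.flock(lockfile, fcntl.LOCK_EX)   # one worker at a time
        try:
            select.select([listener], [], [])  # wait for a pending connection
            conn, addr = listener.accept()
        finally:
            fcntl.flock(lockfile, fcntl.LOCK_UN)
        conn.sendall(('hello from pid %d\n' % os.getpid()).encode())
        conn.close()

A semaphore (e.g. a multiprocessing.Lock created before the fork)
drops in the same way; the only requirement is that the lock is
shared by all the worker processes.
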
-srp 
-- 
http://saju.net.in

>
>Two of our processes may be waiting on select() when a new connection
>comes in. The select() call returns in both processes, showing the
>socket ready for read, so both call accept() to complete the connection.
>The O.S. ensures that accept() [and recv()] are atomic, so one process
>gets the new connection; what happens in the other depends on whether we
>use a blocking or non-blocking socket, and clearly we want non-blocking.
>
>
>-- 
>--Bryan
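
And on the blocking vs. non-blocking point: with a non-blocking
listener, the process that loses the race simply gets EWOULDBLOCK (or
EAGAIN) back from accept() and returns to select(), instead of
hanging until the next connection arrives. A rough sketch of that
handling (my own code, not Bryan's; listener is a listening socket
such as the one set up above):

    import errno, select

    def serve(listener):
        listener.setblocking(False)
        while True:
            select.select([listener], [], [])  # readable: a connection may be pending
            try:
                conn, addr = listener.accept()
            except OSError as e:
                # Another process already accept()ed this connection.
                if e.errno in (errno.EWOULDBLOCK, errno.EAGAIN):
                    continue
                raise
            # ...handle the new connection...
            conn.close()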




