I'm trying to put a network facade around an API that looks like this:

    // once per process
    initialize();

    // many of these, all from potentially different threads. As many as 1000 threads.
    operation();

    // once per process
    finalize();

My ZMQ solution was to start up a central thread in initialize() listening on an inproc ROUTER socket, and then each operation() became a zmq_socket, zmq_connect, zmq_send, zmq_recv, zmq_close sequence. (This is all on Windows, for now.)
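Roughly, each operation() does something like this on the client side (a trimmed sketch with error handling omitted; g_context and the "inproc://facade" endpoint name are just placeholders for what the real code uses):

    #include <zmq.h>

    extern void *g_context;   /* created once by initialize() */

    void operation(void)
    {
        /* one short-lived REQ socket per call, talking to the central ROUTER thread */
        void *sock = zmq_socket(g_context, ZMQ_REQ);
        zmq_connect(sock, "inproc://facade");

        zmq_send(sock, "request", 7, 0);

        char reply[256];
        zmq_recv(sock, reply, sizeof(reply), 0);

        zmq_close(sock);      /* this is where we assumed the socket count drops */
    }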
But we've noticed that when we start to scale this up, we start to run out of sockets. That doesn't make a lot of sense - we shouldn't exceed ZMQ_MAX_SOCKETS, since we never have more than 1000 client threads. (Also, inproc appears to use a socket on Windows, which was surprising.)

Sample code that reproduces the problem is here: http://pastebin.com/qA8mZiDR

A colleague of mine took a quick look at the ZMQ source, and it appears that socket teardown is asynchronous, so zmq_close doesn't necessarily decrement the current socket count right away. That would explain why we're getting errors when we try to do this.

We've experimented with using a mutex-protected pool of client sockets (a rough sketch is at the end of this mail), and that appears to work, but the ZMQ docs have me a little wary of sharing sockets amongst threads. Does anyone have any recommendations or experience implementing this sort of thing? Is there a setting that increases the rate of socket reaping, or a way to block until socket handles become available?

(Unfortunately, I am not able to change the API, so there's no good way to, say, force a per-thread initialization process on the client. I'm avoiding using TLS or something like that until I know I have to.)

Thanks,
Mark

--
Mark Wright
[email protected]
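P.S. For reference, the socket pool experiment looks roughly like this. It's only a sketch - the names, the fixed POOL_SIZE, and the semaphore-based blocking checkout are made up for this mail, and initialize() is assumed to have created the REQ sockets and called InitializeCriticalSection() / CreateSemaphore(NULL, POOL_SIZE, POOL_SIZE, NULL):

    #include <windows.h>
    #include <zmq.h>

    #define POOL_SIZE 64

    static void *g_pool[POOL_SIZE];      /* connected REQ sockets, created in initialize() */
    static int g_free[POOL_SIZE];        /* 1 = available, 0 = checked out */
    static CRITICAL_SECTION g_pool_lock;
    static HANDLE g_pool_sem;            /* counts free sockets so callers can block */

    static void *checkout_socket(void)
    {
        void *sock = NULL;
        WaitForSingleObject(g_pool_sem, INFINITE);   /* block until a socket is free */
        EnterCriticalSection(&g_pool_lock);
        for (int i = 0; i < POOL_SIZE; i++) {
            if (g_free[i]) { g_free[i] = 0; sock = g_pool[i]; break; }
        }
        LeaveCriticalSection(&g_pool_lock);
        return sock;
    }

    static void checkin_socket(void *sock)
    {
        EnterCriticalSection(&g_pool_lock);
        for (int i = 0; i < POOL_SIZE; i++) {
            if (g_pool[i] == sock) { g_free[i] = 1; break; }
        }
        LeaveCriticalSection(&g_pool_lock);
        ReleaseSemaphore(g_pool_sem, 1, NULL);
    }

Each operation() then does checkout_socket(), zmq_send/zmq_recv, checkin_socket() instead of creating and closing a socket every time. The lock/unlock handoff means only one thread touches a given socket at a time, which is what we're hoping keeps us on the right side of the "don't share sockets between threads" guidance.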
