Hi,
I'm running a distributed simulation in which one server, for every message
it processes, opens a TCP socket to each of its registered clients, writes
the message, and then closes the socket.
With hundreds (maybe thousands) of messages per second being processed by
this server, will it ever reach a point where it seems to block while trying
to create new sockets to deliver the new messages? If so, why?
(A simple answer will suffice.)
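For concreteness, here is roughly the pattern as I understand it. This is
only my own sketch, not the actual code; the function name, address handling,
and error handling are mine:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* One connect/write/close per client per message (my assumption of
 * what the server does). */
static int send_one(const char *ip, unsigned short port,
                    const void *msg, size_t len)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);  /* new socket every time */
    if (fd < 0)
        return -1;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    write(fd, msg, len);
    close(fd);  /* the side that closes first leaves the connection
                   lingering in TIME_WAIT for a while */
    return 0;
}

So the question is whether, at that message rate, this kind of per-message
connect/write/close can ever make the sending side stall.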
The reason I ask is that the people who developed it seem a little
reluctant to acknowledge that this may be the cause of the stuttering.
None of this stuff is mine or my responsibility so please
don't tell me that it's bad design or anything like that.
I'm just curious if that might be the problem.
Thanks,
Tuan