On 17 Nov 2004, at 18:09, David Mundie wrote:

We have a Java XML-RPC server running under Windows that talks to both Java and C++ clients. We run out of sockets after about 5,000 connections have been made. When I do a netstat, I see the sockets sitting in TIME_WAIT and CLOSE_WAIT.
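
To get those counts I'm running something along these lines on the server machine (find /c just counts the matching lines):

    netstat -an | find /c "TIME_WAIT"
    netstat -an | find /c "CLOSE_WAIT"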

The only message on this topic I found is from March of this year:

It could be that you are seeing an exhaustion of the number of sockets available. If the clients connect, do a transaction, and then disconnect, you will be recycling a lot of sockets. A socket is not immediately reusable; it has to sit in the TIME_WAIT state for a period before it becomes free for reuse.

Check the setting for the duration of TIME_WAIT; it should be about 60 seconds.

Increase the number of sockets available.

Check that your clients are closing the sockets properly after use.

Setting the TIME_WAIT value in the registry to 30 seconds seemed to leave more of the sockets in CLOSE_WAIT instead of TIME_WAIT, but didn't prevent socket exhaustion.
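
For reference, the value I changed is (if I have the name right) the TcpTimedWaitDelay DWORD, which gives the TIME_WAIT hold time in seconds and defaults to 240 as far as I can tell:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
        TcpTimedWaitDelay = 30    (REG_DWORD, seconds)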


I know these are newbie questions, but could someone explain two things to me:

(a) What does it mean, "check that your clients are closing the sockets properly after use"? Isn't it the responsibility of the XML-RPC library to close the sockets? I wouldn't even know *how* to fail to close the sockets, since I'm not opening them - all I do is call the "execute" method. But surely this couldn't be a bug in the library, could it?

(b) Where does one go to increase the number of sockets available? Is this done in the registry? the xml-rpc library? someplace else?


Hi David!

Are you running your server on a Windows workstation version or a Windows server version? I seem to remember that Microsoft set some limits on the number of sockets that can be supported on the Workstation flavour of their products.

When an XML-RPC transaction completes either end can close the connection. In practice both ends will try to close the connection at more or less the same time. There are three ways this can play out:

1/ the client closes the connection first
2/ the server closes the connection first
3/ the server and client close the connection at exactly the same time

The server will enter the TIME_WAIT state for cases 2 and 3; for case 1 it will enter CLOSE_WAIT. So seeing a mixture of these two states on the server is to be expected. In TIME_WAIT the server discards all the packets arriving on this port; this is to ensure that the next user of the port doesn't get confused by delayed packets from the old connection. In CLOSE_WAIT the server has acknowledged the client's closing packet and is waiting for its own application to close its end of the connection. Once that happens it sends a FIN packet, enters the LAST_ACK state and waits for the acknowledgement packet (or a time-out if the packet is lost), after which the socket is closed and the port can be reused.
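
To put some rough numbers on it (assuming the usual Windows defaults of an ephemeral port range of 1025-5000 and a TIME_WAIT hold time of 240 seconds), each transaction ties up one port on the connecting machine until the old connection has completely gone away:

    usable ports          ~ 5000 - 1025 = 3975
    TIME_WAIT hold time   = 240 seconds
    sustainable rate      ~ 3975 / 240 = about 16 new connections per second

Run faster than that for long enough and you hit the wall after roughly 4,000-5,000 connections, which is about what you are seeing.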

I would not expect sockets to get stuck in either of these states. TIME_WAIT and LAST_ACK are cleared by time-outs even if packets are lost or the other end of the connection misbehaves, and CLOSE_WAIT clears as soon as the application closes its end of the socket.

It is possible that some or all of your clients are using the HTTP keep-alive option. This is useful if the client expects to do several transactions at a time: it means that it does not have to create a new TCP connection for every transaction. The problem is that it makes the socket unavailable for reuse for a considerable period of time.
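
If you want to rule keep-alive out, one thing you can try (this is just a sketch using the plain java.net classes rather than your XML-RPC library's own API - the class name, URL and method name are made up) is to turn keep-alive off for the JVM and ask explicitly for the connection to be closed after each call:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class NoKeepAliveCall {
        public static void main(String[] args) throws Exception {
            // Disable HTTP keep-alive for all of java.net's URL connections
            // (must be set before the first connection is made).
            System.setProperty("http.keepAlive", "false");

            // Endpoint and method name are made up for the example.
            URL url = new URL("http://localhost:8080/RPC2");
            String call = "<?xml version=\"1.0\"?>"
                    + "<methodCall><methodName>example.ping</methodName>"
                    + "<params/></methodCall>";

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "text/xml");
            // Belt and braces: ask explicitly for the connection to be
            // closed once this transaction is finished.
            conn.setRequestProperty("Connection", "close");
            conn.setDoOutput(true);

            OutputStream out = conn.getOutputStream();
            out.write(call.getBytes("UTF-8"));
            out.close();

            System.out.println("HTTP status: " + conn.getResponseCode());
            conn.disconnect();
        }
    }

If your C++ clients are doing their own HTTP, the same Connection: close header applies there too.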

The first thing to do is to look at the version of Windows you are using as a server. If it's a workstation version you need to change it to a server version.


John Wilson
The Wilson Partnership
http://www.wilson.co.uk


