If you set SoTimeout, the socket will be closed for you once it has seen no activity 
for that long. On the MessageContext you can call setTimeout to achieve this. 
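For example, a minimal sketch in Axis 1.x style (the endpoint URL and operation name 
are placeholders, not from this thread):

    import org.apache.axis.client.Call;
    import org.apache.axis.client.Service;

    public class TimeoutExample {
        public static void main(String[] args) throws Exception {
            Service service = new Service();
            Call call = (Call) service.createCall();
            call.setTargetEndpointAddress(
                "http://localhost:8080/axis/services/MyService");
            call.setOperationName("echo");
            // The Call timeout should end up on the MessageContext, where the
            // HTTP transport can apply it to the socket as SoTimeout.
            call.setTimeout(new Integer(30 * 1000)); // milliseconds
            System.out.println(call.invoke(new Object[] { "hello" }));
        }
    }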

But if it works for you the way you describe, then it is good. 

My only concern with your solution is this: if in the future the Axis client starts 
pooling sockets, or if in RPC the socket is always kept open, then you might be forced 
to touch your code again. setTimeout, on the other hand, is an interface call on 
MessageContext and should always stay valid (in terms of keeping the socket tunnel 
open, the underlying implementation can choose to send a heartbeat to maintain the 
connection), so no matter which Axis client implementation version you run, you won't 
need to touch your code.


-----Original Message-----
From: Matteo Tamburini [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 05, 2003 9:37 AM
To: [EMAIL PROTECTED]
Subject: RE: Too many CLOSE_WAIT socket connections


I got the solution!

The problem is not related to a timeout parameter. I think that each open
socket should be closed...
but, looking through the sources, I can't find where the Axis client-side
library closes the socket used to talk to the Axis server.

If a process doesn't tell the operating system that it is no longer using a
socket, the OS doesn't release it. This means that if the client loops many
times to repeatedly call a web service, each call opens a new socket without
freeing the previous one... Because of this, after a short while the client
reaches the maximum number of sockets the OS will give it (maybe there's a
parameter for this, I don't know exactly, but it's about 1000 on my Linux
box). At that point, using netstat you can see lots of "zombie" sockets in
the CLOSE_WAIT state!

Here is my workaround. 
In org.apache.axis.transport.http.HTTPSender a new socket is opened (using
the SocketFactory) for each call.invoke(...) issued, but each socket opened
in this way is never closed (sock.close() is never called). So, I saved the
socket obtained from the factory in a property of the messageContext object
(reachable both from HTTPSender and from the Call object). This way I can
reach the socket and close it using:
call.getMessageContext().getSocket().close();
immediately after the call.invoke(...) in my client.
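For reference, a rough sketch of the client-side loop, assuming a patched
HTTPSender that stores the socket under a custom MessageContext property
(the property key "my.socket" below is a made-up name; the getSocket()
accessor above is likewise not part of stock Axis):

    import java.net.Socket;
    import org.apache.axis.MessageContext;
    import org.apache.axis.client.Call;
    import org.apache.axis.client.Service;

    public class CloseSocketClient {
        public static void main(String[] args) throws Exception {
            Service service = new Service();
            Call call = (Call) service.createCall();
            call.setTargetEndpointAddress(
                "http://localhost:8080/axis/services/MyService");
            call.setOperationName("echo");

            for (int i = 0; i < 10000; i++) {
                Object result = call.invoke(new Object[] { "hello" });

                // Close the socket the patched HTTPSender stashed in the
                // context, so the OS releases it instead of leaving it in
                // CLOSE_WAIT.
                MessageContext ctx = call.getMessageContext();
                Socket sock = (Socket) ctx.getProperty("my.socket");
                if (sock != null) {
                    sock.close();
                }
            }
        }
    }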

Now, with netstat I can see that the number of open sockets never grows and
never reaches the OS limit: each socket is correctly returned to the OS,
which never refuses to give a new one to my process.

I don't know if this is an Axis bug, but I'm quite sure mine is a
"bad-but-working" bugfix.

What do you think?

Bye,
Matteo 

> -----Original Message-----
> From: Wang, Pengyu [IT] [mailto:[EMAIL PROTECTED] 
> Sent: Monday, November 3, 2003 3:54 PM
> To: '[EMAIL PROTECTED]'
> Subject: RE: Too many CLOSE_WAIT socket connections
> 
> By default the java.net HttpURLConnection uses HTTP/1.1, which 
> keeps the connection alive. This will give you CLOSE_WAIT, since 
> on the client side you are not closing the socket (keep-alive), 
> and on the server side it takes some time to figure out that you 
> are no longer using the connection before it closes it (after a 
> long time). The client is then stuck in CLOSE_WAIT because it 
> waits for another state transition before giving up (I don't 
> remember which one now; I'd have to pick up my TCP/IP book).
> The best way to observe this is to use TCPMon and see whether 
> you are sending the Keep-Alive header.
> 
> 
> This is specifically true for the Apache web server, since I had 
> to deal with a similar issue on an embedded C++ Apache server 
> before. The way I got around it was to tell the java.net package 
> not to use the keep-alive header and to set SoTimeout to a lower 
> threshold. Another parameter is SO_LINGER, but I didn't see any 
> obvious effect once the above two had been set.
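
A minimal sketch of the java.net tuning described in the quoted message
(the URL is a placeholder):

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class NoKeepAlive {
        public static void main(String[] args) throws Exception {
            // Tell java.net not to reuse connections (disables HTTP keep-alive).
            System.setProperty("http.keepAlive", "false");

            URL url = new URL("http://localhost:8080/axis/services/MyService");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            // Also ask the server to close the connection after the response.
            conn.setRequestProperty("Connection", "close");
            conn.connect();
            System.out.println(conn.getResponseCode());
            conn.disconnect();
        }
    }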
