Hi all,

sorry for the late reply.

I configured hbase-site.xml like this:

  <property>
    <name>dfs.client.socketcache.capacity</name>
    <value>0</value>
  </property>
  <property>
    <name>dfs.datanode.socket.reuse.keepalive</name>
    <value>0</value>
  </property>

and restarted the HBase master and all region servers.
I can still see the same behavior: each snapshot creates
new CLOSE_WAIT sockets which stay there until the HBase master is restarted.
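
For reference, this is how I count them (<hmaster-pid> stands for the
PID of the HMaster process):

  lsof -p <hmaster-pid> | grep CLOSE_WAIT | wc -l

The count grows by 20-30 with every snapshot and only drops after a
master restart.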

Is there any other setting I can try?
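
For example, would the expiry setting from your mail below look like this
in hbase-site.xml (with the 900 ms value you quoted)?

  <property>
    <name>dfs.client.socketcache.expiryMsec</name>
    <value>900</value>
  </property>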

Upgrading is not possible at the moment.

Regards Hansi

> Sent: Sunday, April 20, 2014 at 02:05
> From: Stack <st...@duboce.net>
> To: Hbase-User <user@hbase.apache.org>
> Subject: Re: taking snapshot's creates to many TCP CLOSE_WAIT handles on the
> hbase master server
>
> On Thu, Apr 17, 2014 at 9:50 PM, Stack <st...@duboce.net> wrote:
> 
> > On Thu, Apr 17, 2014 at 6:51 AM, Hansi Klose <hansi.kl...@web.de> wrote:
> >
> >> Hi,
> >>
> >> we use a script to take snapshots on a regular basis and delete old
> >> ones.
> >>
> >> We noticed that the web interface of the HBase master was no longer
> >> working because of too many open files.
> >>
> >> The master reached its open-file limit of 32768.
> >>
> >> When I ran lsof I saw that there were a lot of TCP CLOSE_WAIT handles
> >> open with the region servers as the target.
> >>
> >> On the region server there is just one connection to the HBase master.
> >>
> >> I can see that the count of CLOSE_WAIT handles grows each time
> >> I take a snapshot. When I delete one, nothing changes.
> >> Each time I take a snapshot there are 20-30 new CLOSE_WAIT handles.
> >>
> >> Why does the master not close the handles? Is there a timeout
> >> parameter we can use?
> >>
> >> We use hbase 0.94.2-cdh4.2.0.
> >>
> >
> > Does
> > https://issues.apache.org/jira/browse/HBASE-9393?jql=text%20~%20%22CLOSE_WAIT%22
> > help? In particular, what happens if you up the socket cache as suggested
> > at the end of the issue?
> >
> HDFS-4911 may help (the CLOSE_WAIT is against local/remote DN, right?). Or,
> quoting one of our lads off an internal issue: "You could get most of the
> same benefit of HDFS-4911...by setting dfs.client.socketcache.expiryMsec to
> 900 in your HBase client configuration. The goal is that the client should
> not hang on to sockets longer than the DataNode does...."
> 
> Or, can you upgrade?
> 
> Thanks,
> 
> St.Ack
> 
