Hi everybody,

I saw that you added some advice about the Hadoop settings to use when one
hits the max xceivers limit, in the troubleshooting section of the wiki.

On this topic, I recently posted a question to the hadoop-core user mailing
list about the 'xcievers' thread behaviour, since I still have to increase
their number as my HBase table grows, in order to avoid hitting the limit
at startup time. As a result my JVM uses a lot of virtual memory (with a
500 MB heap, 1100 threads allocate 2 GB of virtual memory), which
eventually leads to swapping and failure.

Here is the link to my post, with a graph showing the number of threads the
datanode creates when I start HBase:
http://www.nabble.com/xceiverCount-limit-reason-td21349807.html#a21352818

You can see that all the threads are created at HBase startup time and, if
the timeout (dfs.datanode.socket.write.timeout) is set, they all end with a
timeout failure.

The question for HBase is: why are the connections to Hadoop kept open (and
the threads as well)? Does it happen only in my case?
I think Slava has the same problem, but I don't think everybody does, since
otherwise no cluster could run without disabling the timeout parameter
dfs.datanode.socket.write.timeout.
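
For reference, this is roughly what I have ended up with in my Hadoop
config (hadoop-site.xml, or hdfs-site.xml depending on the version) while
experimenting; the values are just what I'm running right now, not a
recommendation:

  <!-- Raise the per-datanode xceiver thread limit (the default is 256,
       which I kept hitting as the table grew). -->
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>2048</value>
  </property>

  <!-- Setting the write timeout to 0 disables it; with the default
       value, all the idle threads end with a timeout failure. -->
  <property>
    <name>dfs.datanode.socket.write.timeout</name>
    <value>0</value>
  </property>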

Has anybody else made these observations?
Thanks

Jean-Adrien

