Hey Jun:

For 0.19.x, do not set the *dfs.datanode.socket.write.timeout* to zero as we suggest for hadoop 0.18.x. Leave it at its default of 8 minutes. On timeout, in 0.19.0, the client will reestablish the connection (See HADOOP-3831).

If you have lots of activity against HDFS, up *dfs.datanode.handler.count* as suggested in the troubleshooting page. To be safe, also up *dfs.datanode.max.xcievers*; set it to 1024 or even more.
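For reference, both of those go in hadoop-site.xml on the datanodes (restart the datanodes after changing them). The handler count below is only an illustrative value, not a recommendation:

```xml
<!-- hadoop-site.xml on each datanode; values are a sketch, tune for your load -->
<property>
  <name>dfs.datanode.handler.count</name>
  <!-- default is 3; raise it when datanodes are busy serving hbase -->
  <value>10</value>
</property>
<property>
  <name>dfs.datanode.max.xcievers</name>
  <!-- the misspelling is the actual key name; default is 256 -->
  <value>1024</value>
</property>
```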

We're waiting with much anticipation on hadoop 0.19.1. It will address many of the issues we've been seeing of late -- see HBASE-1151 for a list -- as well as HADOOP-4379.

For your purposes, presuming HIndex, you might want to experiment with the block cache, especially if you can give your JVMs a larger-than-default heap.
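Block caching is set per column family. A sketch from the hbase shell, assuming the BLOCKCACHE attribute is available in your 0.19 shell (check `help 'create'` in your install):

```
create 'mytable', {NAME => 'myfamily', BLOCKCACHE => true}
```

For the heap, set HBASE_HEAPSIZE (in MB) in conf/hbase-env.sh before starting the cluster.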

Good on you Jun,
St.Ack


Jun Rao wrote:
Hi,

We want to try hbase 0.19.0. Do we still need to set any of the HDFS
configuration parameters listed in troubleshooting? Thanks,

Jun
IBM Almaden Research Center
K55/B1, 650 Harry Road, San Jose, CA  95120-6099
