I am having some GC pauses (70 secs) but I don't think this could cause a
480-second timeout. And it's even more weird when it happens from one datanode to
ITSELF.
> Socket is ready for receiving, but the client closed abnormally; that is
> generally why you got this error.
What would "abnormally" mean in this case?
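For reference, one way to confirm how long the pauses really are is to enable GC logging on the DataNode JVM. This is only a sketch, assuming the options go through HADOOP_DATANODE_OPTS in hadoop-env.sh and with a made-up log path; adjust for your install:

  # in hadoop-env.sh (location and variable assumed; adjust for your install)
  export HADOOP_DATANODE_OPTS="$HADOOP_DATANODE_OPTS -verbose:gc \
    -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
    -Xloggc:/var/log/hadoop/datanode-gc.log"

  # after restarting the datanode, look at the full GC lines (usually the longest pauses)
  grep "Full GC" /var/log/hadoop/datanode-gc.log | tail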
xcievers: 4096 is enough, and I don't think you pasted the full stack trace of the
exception.
Socket is ready for receiving, but the client closed abnormally; that is
generally why you got this error.
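If you can, grab the whole trace from the datanode log rather than just the first line, something along these lines (the log path is just a guess, adjust it to your install):

  # pull the full stack traces for the most recent DataXceiver errors
  grep -A 30 "DataXceiver error processing" \
    /var/log/hadoop-hdfs/hadoop-hdfs-datanode-*.log | tail -80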
On Mon, Mar 11, 2013 at 2:33 AM, Pablo Musa wrote:
> This variable was already set:
>
> dfs.datanode.max.xcievers
>
This variable was already set:

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
  <final>true</final>
</property>

Should I increase it more?
The same error keeps happening every 5-8 minutes on the datanode 172.17.2.18.
2013-03-10 15:26:42,818 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode:
PSLBHDN002:50010:DataXceiver error processing
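Is there an easy way to see how close I actually get to the 4096 limit? I was thinking of just counting connections on the data transfer port (50010, as in the log line above), something like:

  # rough count of open data-transfer connections involving port 50010
  netstat -tan | grep ":50010 " | grep ESTABLISHED | wc -l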
Hi,
If all of the open-file limits (for the hbase and hdfs users) are set to
more than 30K, please change dfs.datanode.max.xcievers to more than
the value below.

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>2096</value>
  <description>PRIVATE CONFIG VARIABLE</description>
</property>

Try to increase this one and tune it to the HBase usage.
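To double-check what value the daemons actually pick up after the change, something like the following works (assuming a Hadoop 2.x style hdfs CLI that supports getconf -confKey; on older releases just check hdfs-site.xml on each datanode):

  # prints the value the local configuration resolves for this key
  hdfs getconf -confKey dfs.datanode.max.xcievers

The datanodes need a restart before the new value takes effect.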
I am also having this issue and tried a lot of solutions, but could not
solve it.
]# ulimit -n    ** run as both root and hdfs (the datanode user)
32768
]# cat /proc/sys/fs/file-nr
2080    0    8047008
]# lsof | wc -l
5157
Sometimes this issue happens from one node to the same node :(
I also think t
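Maybe it is also worth checking the running DataNode process itself, since the shell ulimit is not necessarily what the daemon got at startup. Roughly (pid lookup via jps is just one way to find it):

  DN_PID=$(jps | awk '/DataNode/ {print $1}')   # datanode pid
  grep "open files" /proc/$DN_PID/limits        # limit the daemon is really running with
  ls /proc/$DN_PID/fd | wc -l                   # file descriptors it currently holds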
Hi Varun
I believe it is not a ulimit issue.
/etc/security/limits.conf
# End of file
* - nofile 100
* - nproc 100
Please guide me, guys, I want to fix this. Share your thoughts on the DataXceiver
error.
Did I learn something today? If not, I wasted it.
Hi Dhana,
Increase the ulimit for all the datanodes.
If you are starting the service as the hadoop user, increase the ulimit value for
the hadoop user.
Make the changes in the following file:
*/etc/security/limits.conf*
Example:-
*hadoop soft nofile 35000*
*hadoop hard nofile 35000*
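After editing the file, log in again as the hadoop user (or restart the service) and confirm the new limit is really in effect, for example:

  # should print 35000 once the new limits are picked up
  su - hadoop -c 'ulimit -n'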