xcievers: 4096 is enough, and I don't think you pasted the full stack trace
of the exception.
The socket is ready for receiving, but the client closed abnormally, so
that is generally why you get this error.
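To paste the full stack trace next time, something like the following works. It is a minimal sketch: it demonstrates `grep -A` on a one-line excerpt taken from this thread; in practice you would point grep at the real datanode log under your install's log directory (that path varies and is an assumption here).

```shell
# Demo file holding the single ERROR line quoted in this thread;
# in practice grep your real datanode log, e.g. something like
#   grep -A 20 "DataXceiver error" $HADOOP_LOG_DIR/hadoop-*-datanode-*.log
# (log path/name is an assumption -- adjust to your installation).
printf '%s\n' '2013-03-10 15:26:42,818 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: PSLBHDN002:50010:DataXceiver error processing' > excerpt.log

# -A 20 prints the 20 lines following each match, which normally
# captures the whole Java stack trace under the ERROR line.
grep -A 20 "DataXceiver error" excerpt.log
```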
On Mon, Mar 11, 2013 at 2:33 AM, Pablo Musa wrote:
> This variable was already set:
>
> dfs.datanode.max.xcievers
>
Thanks, sir. I think I must read it through completely one time; that is
the only way. Then I should come back here.
--
Cheers,
Mayur
On Mon, Mar 11, 2013 at 6:05 AM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:
> Hi Mayur,
>
> What do you mean by "which configuration mode"? Are you still looking
Hi Mayur,
What do you mean by "which configuration mode"? Are you still looking
at "pseudo-distributed" vs "fully-distributed"?
I think you should take a look at "Hadoop: The Definitive Guide". That
will give you many examples and details that you will find useful...
JM
2013/3/10 Mayur Patil :
This variable was already set (in hdfs-site.xml):

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
  <final>true</final>
</property>
Should I increase it more?
The same error is happening every 5-8 minutes in datanode 172.17.2.18.
2013-03-10 15:26:42,818 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode:
PSLBHDN002:50010:DataXceiver error processin
What is the version of Hadoop?
Sent from phone
On Mar 7, 2013, at 11:53 AM, Daning Wang wrote:
> We have a Hive query processing zipped CSV files. The query was scanning 10
> days (partitioned by date); the data for each day is around 130 GB. The
> problem is not consistent, since if you run it again,
Please send some related real-time Hive queries.
The problem I can see in your log file is: no available free map slots for
the job.
I think you have to increase the block size to reduce the number of map
tasks, because you are passing big data as input.
The ideal approach is to first increase:
1) block size,
2) map-side sort buffer,
3) JVM reuse, etc.
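The three tunings above could be sketched as Hadoop 1.x configuration, roughly as below. The property names are the 1.x ones (dfs.block.size, io.sort.mb, mapred.job.reuse.jvm.num.tasks); the values are illustrative assumptions, not recommendations from this thread.

```xml
<!-- Sketch only: dfs.block.size goes in hdfs-site.xml, the other two in
     mapred-site.xml. Values here are examples, tune them for your cluster. -->
<property>
  <name>dfs.block.size</name>
  <value>268435456</value> <!-- 256 MB: larger splits, so fewer map tasks -->
</property>
<property>
  <name>io.sort.mb</name>
  <value>256</value> <!-- map-side sort buffer, in MB -->
</property>
<property>
  <name>mapred.job.reuse.jvm.num.tasks</name>
  <value>-1</value> <!-- -1 = reuse one JVM for unlimited tasks of a job -->
</property>
```

Note that raising the HDFS block size only affects files written after the change; existing files keep their old block size unless rewritten.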