Sorry, forgot about that.
This is the whole datanode log since the last startup:
http://pastebin.com/DAN6tQJY
The Hadoop version is 2.6.0; I installed it from the tarball.
It is a two-node cluster, with one node acting as both master and slave and the other as a pure slave. I have already tested this with dfs.replication set to 1 and to 3.
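For completeness, the replication factor was set in hdfs-site.xml along these lines (just a sketch of the standard property; 3 is one of the two values I tested):

```xml
<!-- hdfs-site.xml: replication factor, tested at both 1 and 3 -->
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
```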
And your translation is correct.
Thanks
Am 17.07.2015 um 18:15 schrieb Ted Yu:
bq. IOException: Die Verbindung wurde vom Kommunikationspartner
zurückgesetzt
Looks like the above means 'The connection was reset by the
communication partner'
Which Hadoop release do you use?
Can you pastebin more of the datanode log?
Thanks
On Fri, Jul 17, 2015 at 9:11 AM, marius <m.die0...@googlemail.com> wrote:
Hi,
when I tried to run some jobs on my Hadoop cluster, I found the
following error in my datanode logs (the German means "connection
reset by peer"):
2015-07-17 16:33:45,671 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: BlockSender.sendChunks() exception:
java.io.IOException: Die Verbindung wurde vom Kommunikationspartner zurückgesetzt
        at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
        at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:443)
        at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:575)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:223)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:559)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:728)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:496)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
        at java.lang.Thread.run(Thread.java:745)
I already googled this but could not find anything.
This appears several times, then the error vanishes and the job
proceeds normally; the job does not fail. It happens on various
nodes. I already formatted my namenode, but that did not fix it.
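In case it helps, this is roughly how I counted how often the error occurs (just a sketch; the sample file below stands in for the real log under $HADOOP_HOME/logs):

```shell
# Sketch: count occurrences of the sendChunks error in a datanode log.
# "sample-datanode.log" is a stand-in for the real log file.
log=sample-datanode.log
cat > "$log" <<'EOF'
2015-07-17 16:33:45,671 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: BlockSender.sendChunks() exception:
2015-07-17 16:35:02,118 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: unrelated line
2015-07-17 16:41:12,004 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: BlockSender.sendChunks() exception:
EOF
grep -c "BlockSender.sendChunks() exception" "$log"   # prints 2
rm -f "$log"
```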
Thanks and greetings
Marius