Which Hadoop release are you using?

Have you run fsck?
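
For example, something along these lines (the exact command depends on your release; older releases use "hadoop fsck" instead of "hdfs fsck"):

    hdfs fsck / -files -blocks -locations

That should show whether the blocks in the failing pipelines are reported as healthy or as corrupt / missing replicas.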

Cheers

On Oct 14, 2014, at 2:31 AM, sunww <spe...@outlook.com> wrote:

> Hi
>     I'm using HBase with about 20 regionservers. One regionserver quickly failed 
> to write to most of the datanodes, which eventually caused that regionserver to 
> die, while the other regionservers are fine. 
> 
> The logs look like this:
>     
> java.io.IOException: Bad response ERROR for block 
> BP-165080589-132.228.248.11-1371617709677:blk_5069077415583579127_39339217 
> from datanode 132.228.248.20:50010
>       at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:681)
> 2014-10-13 09:23:01,227 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery 
> for block 
> BP-165080589-132.228.248.11-1371617709677:blk_5069077415583579127_39339217 in 
> pipeline 132.228.248.17:50010, 132.228.248.20:50010, 132.228.248.41:50010: 
> bad datanode 132.228.248.20:50010
> 2014-10-13 09:23:32,021 WARN org.apache.hadoop.hdfs.DFSClient: 
> DFSOutputStream ResponseProcessor exception  for block 
> BP-165080589-132.228.248.11-1371617709677:blk_5069077415583579127_39339415
> java.io.IOException: Bad response ERROR for block 
> BP-165080589-132.228.248.11-1371617709677:blk_5069077415583579127_39339415 
> from datanode 132.228.248.41:50010
>       at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:681)
>     
>     then several "firstBadLink" errors:
>     2014-10-13 09:23:33,390 INFO org.apache.hadoop.hdfs.DFSClient: Exception 
> in createBlockOutputStream
> java.io.IOException: Bad connect ack with firstBadLink as 132.228.248.18:50010
>       at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1090)
>     
>     
>     then several "Failed to add a datanode" errors:
>     2014-10-13 09:23:44,331 WARN org.apache.hadoop.hdfs.DFSClient: Error 
> while syncing
> java.io.IOException: Failed to add a datanode.  User may turn off this 
> feature by setting dfs.client.block.write.replace-datanode-on-failure.policy 
> in configuration, where the current policy is DEFAULT.  (Nodes: 
> current=[132.228.248.17:50010, 132.228.248.35:50010], 
> original=[132.228.248.17:50010, 132.228.248.35:50010])
> 
>     The full log is at http://paste2.org/xfn16jm2
>     
>     Any suggestion will be appreciated. Thanks.
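
Regarding the "Failed to add a datanode" part: as the exception itself says, that behaviour is controlled by the dfs.client.block.write.replace-datanode-on-failure.* client settings. As a rough sketch, in hdfs-site.xml they look something like this (the values shown are just the defaults, not a recommendation):

    <property>
      <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
      <value>true</value>
    </property>
    <property>
      <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
      <value>DEFAULT</value>
    </property>

Relaxing the policy only works around the symptom, though; with ERROR acks coming back from several different datanodes I'd look at the health of the datanodes and the network first.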
