Peter, I believe that this will help you:

https://ccp.cloudera.com/display/CDHDOC/CDH3+Deployment+on+a+Cluster

DataNode Configuration

By default, the failure of a single dfs.data.dir will cause the HDFS
DataNode process to shut down, which results in the NameNode scheduling
additional replicas for each block that is present on the DataNode. This
causes needless replications of blocks that reside on disks that have not
failed.

To prevent this, you can configure DataNodes to tolerate the failure
of dfs.data.dir directories; use the
dfs.datanode.failed.volumes.tolerated parameter in hdfs-site.xml. For
example, if the value for this parameter is 3, the DataNode will only
shut down after four or more data directories have failed. This value
is also respected on DataNode startup; in this example the DataNode
will start up as long as no more than three directories have failed.
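
For example, a minimal hdfs-site.xml entry might look like the sketch
below (the value of 1 here is only an illustration; choose a value
smaller than the number of dfs.data.dir disks on each node):

  <property>
    <name>dfs.datanode.failed.volumes.tolerated</name>
    <value>1</value>
    <!-- Number of dfs.data.dir volumes allowed to fail before the
         DataNode shuts itself down; the default is 0. -->
  </property>

Note that the DataNode needs to be restarted to pick up the change.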

   ~ Minh

On Thu, Jun 21, 2012 at 11:57 AM, Peter Naudus <pnau...@dataraker.com> wrote:

> Hello All,
>
> We are currently running CDH3u2, which has the HDFS-457 patch applied
> (http://archive.cloudera.com/cdh/3/hadoop-0.20.2+923.256.releasenotes.html)
>
> On one of our servers we had 1 out of 4 disks (mounted on /sdc) go bad
> and become unresponsive to I/O.
>
> In the log, I saw: "DataNode.handleDiskError: Keep Running: false". Is
> there a config setting we can use so that the datanode won't die on
> disk failure?
>
> Here is the log from the offending data node:
>
> <snip>
> ...
> 2012-06-21 12:47:11,705 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode.handleDiskError:
> Keep Running: false
> 2012-06-21 12:47:11,705 WARN org.apache.hadoop.ipc.Client: interrupted
> waiting to send params to server
> java.lang.InterruptedException
> at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
> at java.util.concurrent.FutureTask.get(FutureTask.java:83)
> at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:790)
> at org.apache.hadoop.ipc.Client.call(Client.java:1080)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
> at $Proxy4.errorReport(Unknown Source)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.notifyNamenode(DataNode.java:1139)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.handleDiskError(DataNode.java:834)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskError(DataNode.java:821)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.checkDiskError(DataNode.java:809)
> at
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:478)
> at
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:534)
> at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:417)
> at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:122)
> 2012-06-21 12:47:11,706 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is shutting down.
> DataNode failed volumes:/sdc/hadoop/data/current;
> 2012-06-21 12:47:11,706 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in receiveBlock
> for block blk_756090714282851526_834256 java.io.IOException: Read-only file
> system
> 2012-06-21 12:47:11,706 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder
> blk_756090714282851526_834256 1 : Thread is interrupted.
> 2012-06-21 12:47:11,706 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock
> blk_756090714282851526_834256 received exception java.io.IOException:
> Interrupted receiveBlock
> 2012-06-21 12:47:11,706 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 1 for
> block blk_756090714282851526_834256 terminating
> 2012-06-21 12:47:11,706 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(xxx.xxx.xxx.129:50010,
> storageID=DS-1885735740-10.7.10.47-50010-1321113234606, infoPort=50075,
> ipcPort=50020):DataXceiver
> java.io.IOException: Interrupted receiveBlock
> at
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:579)
> at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:417)
> at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:122)
> 2012-06-21 12:47:11,707 WARN
> org.apache.hadoop.hdfs.server.protocol.InterDatanodeProtocol: Failed to
> getBlockMetaDataInfo for block (=blk_756090714282851526_834256) from
> datanode (=xxx.xxx.xxx.129:50010)
> java.io.IOException: Block blk_756090714282851526_834256 does not exist in
> volumeMap.
> at
> org.apache.hadoop.hdfs.server.datanode.FSDataset.validateBlockMetadata(FSDataset.java:1727)
> at
> org.apache.hadoop.hdfs.server.datanode.FSDataset.startBlockRecovery(FSDataset.java:2004)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.startBlockRecovery(DataNode.java:1678)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1792)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 2012-06-21 12:47:11,707 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 5 on 50020, call recoverBlock(blk_756090714282851526_834256, false,
> [Lorg.apache.hadoop.hdfs.protocol.DatanodeInfo;@39a37e71) from
> xxx.xxx.xxx.128:43690: error: java.io.IOException: All datanodes failed:
> block=blk_756090714282851526_834256, datanodeids=[xxx.xxx.xxx.129:50010]
> java.io.IOException: All datanodes failed:
> block=blk_756090714282851526_834256, datanodeids=[xxx.xxx.xxx.129:50010]
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1848)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 2012-06-21 12:47:11,708 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_-6374010588609388072_831932 at file
> /sdb/hadoop/data/current/subdir25/blk_-6374010588609388072
> 2012-06-21 12:47:11,708 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_7923514174580759037_831929 at file
> /sde/hadoop/data/current/subdir46/blk_7923514174580759037
> 2012-06-21 12:47:11,708 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_-4267381526325117149_831936 at file
> /sdd/hadoop/data/current/subdir14/blk_-4267381526325117149
> 2012-06-21 12:47:11,708 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
> blk_-8884355619222786632_832052 file
> /sde/hadoop/data/current/subdir38/blk_-8884355619222786632 for deletion
> 2012-06-21 12:47:11,708 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 1 for
> block blk_-7835975243412854302_834276 terminating
> 2012-06-21 12:47:11,709 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 1 for
> block blk_-3161121865501595468_834276 terminating
> 2012-06-21 12:47:11,709 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
> blk_-8584676450860125008_832052 file
> /sdb/hadoop/data/current/subdir55/blk_-8584676450860125008 for deletion
> 2012-06-21 12:47:11,709 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Unexpected error trying to
> delete block blk_-8106636971608299082_832052. BlockInfo not found in
> volumeMap.
> 2012-06-21 12:47:11,709 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Unexpected error trying to
> delete block blk_-4816772976912946934_832052. BlockInfo not found in
> volumeMap.
> 2012-06-21 12:47:11,709 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
> /xxx.xxx.xxx.129:50010, dest: /xxx.xxx.xxx.122:59554, bytes: 54390528, op:
> HDFS_READ, cliID: DFSClient_1194150529, offset: 0, srvID:
> DS-1885735740-10.7.10.47-50010-1321113234606, blockid:
> blk_-5716027214525849548_832039, duration: 216435192000
> 2012-06-21 12:47:11,709 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Unexpected error trying to
> delete block blk_-4622508447364766680_832052. BlockInfo not found in
> volumeMap.
> 2012-06-21 12:47:11,710 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
> blk_-3865621336973617340_832052 file
> /sdd/hadoop/data/current/subdir38/blk_-3865621336973617340 for deletion
> 2012-06-21 12:47:11,710 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
> blk_-2961805311037266571_832052 file
> /sdd/hadoop/data/current/subdir38/blk_-2961805311037266571 for deletion
> 2012-06-21 12:47:11,710 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Unexpected error trying to
> delete block blk_-816586314561202668_832052. BlockInfo not found in
> volumeMap.
> 2012-06-21 12:47:11,710 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Unexpected error trying to
> delete block blk_-383234823758984044_832293. BlockInfo not found in
> volumeMap.
> 2012-06-21 12:47:11,710 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Unexpected error trying to
> delete block blk_-157669042669884002_832052. BlockInfo not found in
> volumeMap.
> 2012-06-21 12:47:11,710 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
> blk_43286769639614986_832052 file
> /sdb/hadoop/data/current/subdir55/blk_43286769639614986 for deletion
> 2012-06-21 12:47:11,710 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
> blk_786008731282663486_831903 file
> /sdb/hadoop/data/current/subdir50/blk_786008731282663486 for deletion
> 2012-06-21 12:47:11,710 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
> blk_1776179598985630190_832052 file
> /sdb/hadoop/data/current/subdir55/blk_1776179598985630190 for deletion
> 2012-06-21 12:47:11,710 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
> blk_2252288897780561733_832052 file
> /sde/hadoop/data/current/subdir38/blk_2252288897780561733 for deletion
> 2012-06-21 12:47:11,710 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
> blk_3044161429430832282_832052 file
> /sde/hadoop/data/current/subdir38/blk_3044161429430832282 for deletion
> 2012-06-21 12:47:11,710 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
> blk_3287435341129001299_832052 file
> /sde/hadoop/data/current/subdir38/blk_3287435341129001299 for deletion
> 2012-06-21 12:47:11,710 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Unexpected error trying to
> delete block blk_3501579761720041228_832052. BlockInfo not found in
> volumeMap.
> 2012-06-21 12:47:11,710 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
> blk_3593596671483663656_832052 file
> /sdd/hadoop/data/current/subdir38/blk_3593596671483663656 for deletion
> 2012-06-21 12:47:11,710 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
> blk_4250025208057886173_831905 file
> /sdd/hadoop/data/current/subdir39/blk_4250025208057886173 for deletion
> 2012-06-21 12:47:11,710 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
> blk_6537090144252885125_832052 file
> /sdb/hadoop/data/current/subdir55/blk_6537090144252885125 for deletion
> 2012-06-21 12:47:11,710 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Unexpected error trying to
> delete block blk_6629563683646635480_831912. BlockInfo not found in
> volumeMap.
> 2012-06-21 12:47:11,710 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
> blk_6769371904360405394_832052 file
> /sdb/hadoop/data/current/subdir55/blk_6769371904360405394 for deletion
> 2012-06-21 12:47:11,710 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
> blk_6809384844247772372_832052 file
> /sdd/hadoop/data/current/subdir38/blk_6809384844247772372 for deletion
> 2012-06-21 12:47:11,711 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
> blk_6845075627500840603_832052 file
> /sdb/hadoop/data/current/subdir55/blk_6845075627500840603 for deletion
> 2012-06-21 12:47:11,711 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
> blk_6961320273620079218_832615 file
> /sdd/hadoop/data/current/subdir0/subdir5/blk_6961320273620079218 for
> deletion
> 2012-06-21 12:47:11,711 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Unexpected error trying to
> delete block blk_6988979608041206044_831907. BlockInfo not found in
> volumeMap.
> 2012-06-21 12:47:11,711 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Unexpected error trying to
> delete block blk_7489965239571967939_833347. BlockInfo not found in
> volumeMap.
> 2012-06-21 12:47:11,711 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
> blk_7557243874021667270_832052 file
> /sde/hadoop/data/current/subdir38/blk_7557243874021667270 for deletion
> 2012-06-21 12:47:11,711 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
> blk_7656867602436824177_832052 file
> /sdd/hadoop/data/current/subdir38/blk_7656867602436824177 for deletion
> 2012-06-21 12:47:11,711 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Unexpected error trying to
> delete block blk_7704114148357173132_832052. BlockInfo not found in
> volumeMap.
> 2012-06-21 12:47:11,711 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
> blk_7937346445259125479_832052 file
> /sdb/hadoop/data/current/subdir55/blk_7937346445259125479 for deletion
> 2012-06-21 12:47:11,711 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
> blk_8038107898837112724_831903 file
> /sde/hadoop/data/current/subdir4/blk_8038107898837112724 for deletion
> 2012-06-21 12:47:11,711 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Scheduling block
> blk_8485563042743902581_832052 file
> /sde/hadoop/data/current/subdir38/blk_8485563042743902581 for deletion
> 2012-06-21 12:47:11,714 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Client calls
> recoverBlock(block=blk_756090714282851526_834256,
> targets=[xxx.xxx.xxx.129:50010])
> 2012-06-21 12:47:11,714 WARN
> org.apache.hadoop.hdfs.server.protocol.InterDatanodeProtocol: Failed to
> getBlockMetaDataInfo for block (=blk_756090714282851526_834256) from
> datanode (=xxx.xxx.xxx.129:50010)
> java.io.IOException: Block blk_756090714282851526_834256 does not exist in
> volumeMap.
> at
> org.apache.hadoop.hdfs.server.datanode.FSDataset.validateBlockMetadata(FSDataset.java:1727)
> at
> org.apache.hadoop.hdfs.server.datanode.FSDataset.startBlockRecovery(FSDataset.java:2004)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.startBlockRecovery(DataNode.java:1678)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1792)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 2012-06-21 12:47:11,715 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 50020, call recoverBlock(blk_756090714282851526_834256, false,
> [Lorg.apache.hadoop.hdfs.protocol.DatanodeInfo;@730087fa) from
> xxx.xxx.xxx.128:44143: error: java.io.IOException: All datanodes failed:
> block=blk_756090714282851526_834256, datanodeids=[xxx.xxx.xxx.129:50010]
> java.io.IOException: All datanodes failed:
> block=blk_756090714282851526_834256, datanodeids=[xxx.xxx.xxx.129:50010]
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1848)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 2012-06-21 12:47:11,718 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Error processing datanode
> Command
> java.io.IOException: Error in deleting blocks.
> at
> org.apache.hadoop.hdfs.server.datanode.FSDataset.invalidate(FSDataset.java:1850)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.processCommand(DataNode.java:1070)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.processCommand(DataNode.java:1032)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:889)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1415)
> at java.lang.Thread.run(Thread.java:662)
> 2012-06-21 12:47:11,721 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Client calls
> recoverBlock(block=blk_756090714282851526_834256,
> targets=[xxx.xxx.xxx.129:50010])
> 2012-06-21 12:47:11,722 WARN
> org.apache.hadoop.hdfs.server.protocol.InterDatanodeProtocol: Failed to
> getBlockMetaDataInfo for block (=blk_756090714282851526_834256) from
> datanode (=xxx.xxx.xxx.129:50010)
> java.io.IOException: Block blk_756090714282851526_834256 does not exist in
> volumeMap.
> at
> org.apache.hadoop.hdfs.server.datanode.FSDataset.validateBlockMetadata(FSDataset.java:1727)
> at
> org.apache.hadoop.hdfs.server.datanode.FSDataset.startBlockRecovery(FSDataset.java:2004)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.startBlockRecovery(DataNode.java:1678)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1792)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 2012-06-21 12:47:11,722 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 6 on 50020, call recoverBlock(blk_756090714282851526_834256, false,
> [Lorg.apache.hadoop.hdfs.protocol.DatanodeInfo;@216aa6f4) from
> xxx.xxx.xxx.128:44144: error: java.io.IOException: All datanodes failed:
> block=blk_756090714282851526_834256, datanodeids=[xxx.xxx.xxx.129:50010]
> java.io.IOException: All datanodes failed:
> block=blk_756090714282851526_834256, datanodeids=[xxx.xxx.xxx.129:50010]
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1848)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 2012-06-21 12:47:11,725 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
> /xxx.xxx.xxx.129:50010, dest: /xxx.xxx.xxx.127:35049, bytes: 0, op:
> HDFS_READ, cliID: DFSClient_1521303574, offset: 0, srvID:
> DS-1885735740-10.7.10.47-50010-1321113234606, blockid:
> blk_-3865621336973617340_832052, duration: 16921000
> 2012-06-21 12:47:11,725 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(xxx.xxx.xxx.129:50010,
> storageID=DS-1885735740-10.7.10.47-50010-1321113234606, infoPort=50075,
> ipcPort=50020):Got exception while serving blk_-3865621336973617340_832052
> to /xxx.xxx.xxx.127:
> java.io.IOException: Block blk_-3865621336973617340_832052 is not valid.
> at
> org.apache.hadoop.hdfs.server.datanode.FSDataset.getBlockFile(FSDataset.java:1050)
> at
> org.apache.hadoop.hdfs.server.datanode.FSDataset.getLength(FSDataset.java:1013)
> at
> org.apache.hadoop.hdfs.server.datanode.BlockSender$MemoizedBlock.hasBlockChanged(BlockSender.java:507)
> at
> org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:316)
> at
> org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:436)
> at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:214)
> at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:114)
>
> 2012-06-21 12:47:11,725 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(xxx.xxx.xxx.129:50010,
> storageID=DS-1885735740-10.7.10.47-50010-1321113234606, infoPort=50075,
> ipcPort=50020):DataXceiver
> java.io.IOException: Block blk_-3865621336973617340_832052 is not valid.
> at
> org.apache.hadoop.hdfs.server.datanode.FSDataset.getBlockFile(FSDataset.java:1050)
> at
> org.apache.hadoop.hdfs.server.datanode.FSDataset.getLength(FSDataset.java:1013)
> at
> org.apache.hadoop.hdfs.server.datanode.BlockSender$MemoizedBlock.hasBlockChanged(BlockSender.java:507)
> at
> org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:316)
> at
> org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:436)
> at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:214)
> at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:114)
> 2012-06-21 12:47:11,728 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Client calls
> recoverBlock(block=blk_756090714282851526_834256,
> targets=[xxx.xxx.xxx.129:50010])
> 2012-06-21 12:47:11,729 WARN
> org.apache.hadoop.hdfs.server.protocol.InterDatanodeProtocol: Failed to
> getBlockMetaDataInfo for block (=blk_756090714282851526_834256) from
> datanode (=xxx.xxx.xxx.129:50010)
> java.io.IOException: Block blk_756090714282851526_834256 does not exist in
> volumeMap.
> at
> org.apache.hadoop.hdfs.server.datanode.FSDataset.validateBlockMetadata(FSDataset.java:1727)
> at
> org.apache.hadoop.hdfs.server.datanode.FSDataset.startBlockRecovery(FSDataset.java:2004)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.startBlockRecovery(DataNode.java:1678)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1792)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 2012-06-21 12:47:11,729 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 3 on 50020, call recoverBlock(blk_756090714282851526_834256, false,
> [Lorg.apache.hadoop.hdfs.protocol.DatanodeInfo;@56eea928) from
> xxx.xxx.xxx.128:44145: error: java.io.IOException: All datanodes failed:
> block=blk_756090714282851526_834256, datanodeids=[xxx.xxx.xxx.129:50010]
> java.io.IOException: All datanodes failed:
> block=blk_756090714282851526_834256, datanodeids=[xxx.xxx.xxx.129:50010]
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1848)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 2012-06-21 12:47:11,730 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 5 on 50020, call startBlockRecovery(blk_756090714282851526_834256)
> from xxx.xxx.xxx.128:44146: error: java.io.IOException: Block
> blk_756090714282851526_834256 does not exist in volumeMap.
> java.io.IOException: Block blk_756090714282851526_834256 does not exist in
> volumeMap.
> at
> org.apache.hadoop.hdfs.server.datanode.FSDataset.validateBlockMetadata(FSDataset.java:1727)
> at
> org.apache.hadoop.hdfs.server.datanode.FSDataset.startBlockRecovery(FSDataset.java:2004)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.startBlockRecovery(DataNode.java:1678)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 2012-06-21 12:47:11,732 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_-8884355619222786632_832052 at file
> /sde/hadoop/data/current/subdir38/blk_-8884355619222786632
> 2012-06-21 12:47:11,735 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Client calls
> recoverBlock(block=blk_756090714282851526_834256,
> targets=[xxx.xxx.xxx.129:50010])
> 2012-06-21 12:47:11,735 WARN
> org.apache.hadoop.hdfs.server.protocol.InterDatanodeProtocol: Failed to
> getBlockMetaDataInfo for block (=blk_756090714282851526_834256) from
> datanode (=xxx.xxx.xxx.129:50010)
> java.io.IOException: Block blk_756090714282851526_834256 does not exist in
> volumeMap.
> at
> org.apache.hadoop.hdfs.server.datanode.FSDataset.validateBlockMetadata(FSDataset.java:1727)
> at
> org.apache.hadoop.hdfs.server.datanode.FSDataset.startBlockRecovery(FSDataset.java:2004)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.startBlockRecovery(DataNode.java:1678)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1792)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
> at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 2012-06-21 12:47:11,735 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 50020, call recoverBlock(blk_756090714282851526_834256, false,
> [Lorg.apache.hadoop.hdfs.protocol.DatanodeInfo;@443cbaf4) from
> xxx.xxx.xxx.128:44147: error: java.io.IOException: All datanodes failed:
> block=blk_756090714282851526_834256, datanodeids=[xxx.xxx.xxx.129:50010]
> java.io.IOException: All datanodes failed:
> block=blk_756090714282851526_834256, datanodeids=[xxx.xxx.xxx.129:50010]
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1848)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
> at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 2012-06-21 12:47:11,742 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Client calls
> recoverBlock(block=blk_756090714282851526_834256,
> targets=[xxx.xxx.xxx.129:50010])
> 2012-06-21 12:47:11,742 WARN
> org.apache.hadoop.hdfs.server.protocol.InterDatanodeProtocol: Failed to
> getBlockMetaDataInfo for block (=blk_756090714282851526_834256) from
> datanode (=xxx.xxx.xxx.129:50010)
> java.io.IOException: Block blk_756090714282851526_834256 does not exist in
> volumeMap.
> at
> org.apache.hadoop.hdfs.server.datanode.FSDataset.validateBlockMetadata(FSDataset.java:1727)
> at
> org.apache.hadoop.hdfs.server.datanode.FSDataset.startBlockRecovery(FSDataset.java:2004)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.startBlockRecovery(DataNode.java:1678)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1792)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
> at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 2012-06-21 12:47:11,742 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 6 on 50020, call recoverBlock(blk_756090714282851526_834256, false,
> [Lorg.apache.hadoop.hdfs.protocol.DatanodeInfo;@7d11e003) from
> xxx.xxx.xxx.128:44148: error: java.io.IOException: All datanodes failed:
> block=blk_756090714282851526_834256, datanodeids=[xxx.xxx.xxx.129:50010]
> java.io.IOException: All datanodes failed:
> block=blk_756090714282851526_834256, datanodeids=[xxx.xxx.xxx.129:50010]
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1848)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.recoverBlock(DataNode.java:1950)
> at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
> 2012-06-21 12:47:11,751 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
> /xxx.xxx.xxx.129:50010, dest: /xxx.xxx.xxx.126:40705, bytes: 0, op:
> HDFS_READ, cliID: DFSClient_721146172, offset: 0, srvID:
> DS-1885735740-10.7.10.47-50010-1321113234606, blockid:
> blk_-2066938416069838376_831911, duration: 42320000
> 2012-06-21 12:47:11,762 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_2252288897780561733_832052 at file
> /sde/hadoop/data/current/subdir38/blk_2252288897780561733
> 2012-06-21 12:47:11,773 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
> /xxx.xxx.xxx.129:50010, dest: /xxx.xxx.xxx.121:40247, bytes: 472, op:
> HDFS_READ, cliID: DFSClient_984919487, offset: 26843136, srvID:
> DS-1885735740-10.7.10.47-50010-1321113234606, blockid:
> blk_-8032084076070692561_832831, duration: 65014000
> 2012-06-21 12:47:11,773 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_3044161429430832282_832052 at file
> /sde/hadoop/data/current/subdir38/blk_3044161429430832282
> 2012-06-21 12:47:11,775 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_-1371895831080126517_831934 at file
> /sdd/hadoop/data/current/subdir14/blk_-1371895831080126517
> 2012-06-21 12:47:11,784 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_3287435341129001299_832052 at file
> /sde/hadoop/data/current/subdir38/blk_3287435341129001299
> 2012-06-21 12:47:11,784 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
> /xxx.xxx.xxx.129:50010, dest: /xxx.xxx.xxx.121:40211, bytes: 243, op:
> HDFS_READ, cliID: DFSClient_984919487, offset: 41398272, srvID:
> DS-1885735740-10.7.10.47-50010-1321113234606, blockid:
> blk_-1123609166779808332_412549, duration: 75723000
> 2012-06-21 12:47:11,787 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_-859811438759109122_831932 at file
> /sdd/hadoop/data/current/subdir14/blk_-859811438759109122
> 2012-06-21 12:47:11,791 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_-5229586271242997282_831934 at file
> /sdb/hadoop/data/current/subdir25/blk_-5229586271242997282
> 2012-06-21 12:47:11,795 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_4948741162064721048_831934 at file
> /sdd/hadoop/data/current/subdir14/blk_4948741162064721048
> 2012-06-21 12:47:11,796 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_7557243874021667270_832052 at file
> /sde/hadoop/data/current/subdir38/blk_7557243874021667270
> 2012-06-21 12:47:11,811 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_-3865621336973617340_832052 at file
> /sdd/hadoop/data/current/subdir38/blk_-3865621336973617340
> 2012-06-21 12:47:11,817 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_-2961805311037266571_832052 at file
> /sdd/hadoop/data/current/subdir38/blk_-2961805311037266571
> 2012-06-21 12:47:11,818 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_-2341194009427527750_831936 at file
> /sdb/hadoop/data/current/subdir43/blk_-2341194009427527750
> 2012-06-21 12:47:11,819 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_8038107898837112724_831903 at file
> /sde/hadoop/data/current/subdir4/blk_8038107898837112724
> 2012-06-21 12:47:11,823 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_3593596671483663656_832052 at file
> /sdd/hadoop/data/current/subdir38/blk_3593596671483663656
> 2012-06-21 12:47:11,827 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_8485563042743902581_832052 at file
> /sde/hadoop/data/current/subdir38/blk_8485563042743902581
> 2012-06-21 12:47:11,836 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_4250025208057886173_831905 at file
> /sdd/hadoop/data/current/subdir39/blk_4250025208057886173
> 2012-06-21 12:47:11,841 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_7692470437968994895_831932 at file
> /sdb/hadoop/data/current/subdir25/blk_7692470437968994895
> 2012-06-21 12:47:11,841 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_6809384844247772372_832052 at file
> /sdd/hadoop/data/current/subdir38/blk_6809384844247772372
> 2012-06-21 12:47:11,856 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_6961320273620079218_832615 at file
> /sdd/hadoop/data/current/subdir0/subdir5/blk_6961320273620079218
> 2012-06-21 12:47:11,865 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_7656867602436824177_832052 at file
> /sdd/hadoop/data/current/subdir38/blk_7656867602436824177
> 2012-06-21 12:47:11,882 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_-8584676450860125008_832052 at file
> /sdb/hadoop/data/current/subdir55/blk_-8584676450860125008
> 2012-06-21 12:47:11,889 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_43286769639614986_832052 at file
> /sdb/hadoop/data/current/subdir55/blk_43286769639614986
> 2012-06-21 12:47:11,897 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_786008731282663486_831903 at file
> /sdb/hadoop/data/current/subdir50/blk_786008731282663486
> 2012-06-21 12:47:11,902 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_1776179598985630190_832052 at file
> /sdb/hadoop/data/current/subdir55/blk_1776179598985630190
> 2012-06-21 12:47:11,915 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_6537090144252885125_832052 at file
> /sdb/hadoop/data/current/subdir55/blk_6537090144252885125
> 2012-06-21 12:47:11,930 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_6769371904360405394_832052 at file
> /sdb/hadoop/data/current/subdir55/blk_6769371904360405394
> 2012-06-21 12:47:11,932 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_6845075627500840603_832052 at file
> /sdb/hadoop/data/current/subdir55/blk_6845075627500840603
> 2012-06-21 12:47:11,936 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Deleted block
> blk_7937346445259125479_832052 at file
> /sdb/hadoop/data/current/subdir55/blk_7937346445259125479
> 2012-06-21 12:47:14,265 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(xxx.xxx.xxx.129:50010,
> storageID=DS-1885735740-10.7.10.47-50010-1321113234606, infoPort=50075,
> ipcPort=50020):Finishing DataNode in:
> FSDataset{dirpath='/sdb/hadoop/data/current,/sdd/hadoop/data/current,/sde/hadoop/data/current'}
> 2012-06-21 12:47:14,330 INFO org.mortbay.log: Stopped
> SelectChannelConnector@0.0.0.0:50075
> 2012-06-21 12:47:14,460 INFO org.apache.hadoop.ipc.Server: Stopping server
> on 50020
> 2012-06-21 12:47:14,460 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 50020: exiting
> 2012-06-21 12:47:14,460 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 50020: exiting
> 2012-06-21 12:47:14,460 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 4 on 50020: exiting
> 2012-06-21 12:47:14,460 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 50020: exiting
> 2012-06-21 12:47:14,460 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 5 on 50020: exiting
> 2012-06-21 12:47:14,460 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 6 on 50020: exiting
> 2012-06-21 12:47:14,460 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 3 on 50020: exiting
> 2012-06-21 12:47:14,460 INFO org.apache.hadoop.ipc.Server: Stopping IPC
> Server listener on 50020
> 2012-06-21 12:47:14,460 INFO org.apache.hadoop.ipc.Server: Stopping IPC
> Server Responder
> 2012-06-21 12:47:14,461 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to
> exit, active threads is 9
> 2012-06-21 12:47:14,460 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 7 on 50020: exiting
> 2012-06-21 12:47:14,461 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock
> blk_-1992844506960452554_834267 received exception java.io.IOException:
> Interrupted receiveBlock
> 2012-06-21 12:47:14,461 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(xxx.xxx.xxx.129:50010,
> storageID=DS-1885735740-10.7.10.47-50010-1321113234606, infoPort=50075,
> ipcPort=50020):DataXceiver
> java.io.IOException: Interrupted receiveBlock
> at
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:579)
> at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:417)
> at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:122)
> 2012-06-21 12:47:14,461 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(xxx.xxx.xxx.129:50010,
> storageID=DS-1885735740-10.7.10.47-50010-1321113234606, infoPort=50075,
> ipcPort=50020):DataXceiverServer: IOException due
> to:java.net.SocketException: Socket closed
> at java.net.PlainSocketImpl.socketAccept(Native Method)
> at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:408)
> at java.net.ServerSocket.implAccept(ServerSocket.java:462)
> at java.net.ServerSocket.accept(ServerSocket.java:430)
> at
> org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:129)
> at java.lang.Thread.run(Thread.java:662)
>
> 2012-06-21 12:47:14,461 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting DataXceiverServer
> 2012-06-21 12:47:14,461 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder
> blk_-1992844506960452554_834267 1 : Thread is interrupted.
> 2012-06-21 12:47:14,464 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 1 for
> block blk_-1992844506960452554_834267 terminating
> 2012-06-21 12:47:14,464 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in receiveBlock
> for block blk_-3161121865501595468_834276 java.net.SocketException: Socket
> closed
> 2012-06-21 12:47:14,464 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in receiveBlock
> for block blk_-7835975243412854302_834276 java.net.SocketException: Socket
> closed
> 2012-06-21 12:47:14,464 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock
> blk_-3161121865501595468_834276 received exception
> java.net.SocketException: Socket closed
> 2012-06-21 12:47:14,464 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock
> blk_-7835975243412854302_834276 received exception
> java.net.SocketException: Socket closed
> 2012-06-21 12:47:14,464 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
> /xxx.xxx.xxx.129:50010, dest: /xxx.xxx.xxx.124:56765, bytes: 31934208, op:
> HDFS_READ, cliID: DFSClient_270924412, offset: 0, srvID:
> DS-1885735740-10.7.10.47-50010-1321113234606, blockid:
> blk_-1870007771180646254_832050, duration: 2711384000
> 2012-06-21 12:47:14,464 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(xxx.xxx.xxx.129:50010,
> storageID=DS-1885735740-10.7.10.47-50010-1321113234606, infoPort=50075,
> ipcPort=50020):DataXceiver
> java.net.SocketException: Socket closed
> at java.net.SocketInputStream.socketRead0(Native Method)
> at java.net.SocketInputStream.read(SocketInputStream.java:129)
> at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
> at java.io.DataInputStream.read(DataInputStream.java:132)
> at
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:267)
> at
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:314)
> at
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:378)
> at
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:534)
> at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:417)
> at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:122)
> 2012-06-21 12:47:14,464 ERROR
> org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration(xxx.xxx.xxx.129:50010,
> storageID=DS-1885735740-10.7.10.47-50010-1321113234606, infoPort=50075,
> ipcPort=50020):DataXceiver
> java.net.SocketException: Socket closed
> at java.net.SocketInputStream.socketRead0(Native Method)
> at java.net.SocketInputStream.read(SocketInputStream.java:129)
> at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
> at java.io.DataInputStream.read(DataInputStream.java:132)
> at
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:267)
> at
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:314)
> at
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:378)
> at
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:534)
> at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:417)
> at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:122)
> 2012-06-21 12:47:14,464 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
> /xxx.xxx.xxx.129:50010, dest: /xxx.xxx.xxx.125:56816, bytes: 33651456, op:
> HDFS_READ, cliID: DFSClient_-1387885588, offset: 0, srvID:
> DS-1885735740-10.7.10.47-50010-1321113234606, blockid:
> blk_4110763901171842902_831894, duration: 2747733000
> 2012-06-21 12:47:14,464 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src:
> /xxx.xxx.xxx.129:50010, dest: /xxx.xxx.xxx.122:57634, bytes: 1172352, op:
> HDFS_READ, cliID: DFSClient_1194150529, offset: 0, srvID:
> DS-1885735740-10.7.10.47-50010-1321113234606, blockid:
> blk_-2765810887078658676_833474, duration: 287962443000
> 2012-06-21 12:47:15,463 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to
> exit, active threads is 0
> 2012-06-21 12:47:19,712 INFO
> org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification
> succeeded for blk_3404275465438906670_803784
> 2012-06-21 12:47:19,713 INFO
> org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Exiting
> DataBlockScanner thread.
> 2012-06-21 12:47:19,713 INFO
> org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: Shutting
> down all async disk service threads...
> 2012-06-21 12:47:19,713 INFO
> org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: All async
> disk service threads have been shut down.
> 2012-06-21 12:47:19,715 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
> 2012-06-21 12:47:19,716 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at
> xxxxxpnb009.dataraker.net/xxx.xxx.xxx.129
> ************************************************************/
> </snip>
>
>
> --
>
> Sincerely,
>
>     ~Peter
> Peter Naudus
> DataRaker
> Cell: 917.689.8451
> Work: 703.639.4010
> Email: pnau...@dataraker.com
