Thanks everyone for the help. I believe the error was caused by an increase in non-DFS disk usage on the datanode (the "Non DFS Used" figure in the dfsadmin report), not by HDFS itself running out of configured capacity.
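For anyone hitting this later, a rough sketch of how to spot it (the /data path below is only an example; substitute whatever dfs.datanode.data.dir points to on your datanodes):

  # per-datanode DFS vs. non-DFS usage, as reported by the NameNode
  hdfs dfsadmin -report | grep -E 'Name:|DFS Used:|Non DFS Used:|DFS Remaining:'

  # on a suspect datanode, check what outside HDFS is filling the same partition
  df -h /data
  du -sh /data/* | sort -h | tail

Non DFS Used is roughly Configured Capacity minus DFS Used minus DFS Remaining, so anything else written to the partition holding the datanode's data directories (logs, temp files, local copies of input data, etc.) shows up there.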
Here is the "hdfs dfsadmin -report" output for the same datanode before and after the problem started. Non DFS Used grew from 3.84 GB to 56.38 GB, leaving only 5.73 GB of DFS Remaining:

Name: 172.31.11.49:50010
Decommission Status : Normal
Configured Capacity: 74033672192 (68.95 GB)
DFS Used: 6814511104 (6.35 GB)
Non DFS Used: 4128133120 (3.84 GB)
DFS Remaining: 63091027968 (58.76 GB)
DFS Used%: 9.20%
DFS Remaining%: 85.22%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1

Name: 172.31.11.49:50010
Decommission Status : Normal
Configured Capacity: 74033672192 (68.95 GB)
DFS Used: 7336382464 (6.83 GB)
Non DFS Used: 60541867008 (56.38 GB)
DFS Remaining: 6155422720 (5.73 GB)
DFS Used%: 9.91%
DFS Remaining%: 8.31%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 847

--
Madhav Sharan

On Mon, Jul 25, 2016 at 10:00 AM, Gagan Brahmi <gaganbra...@gmail.com> wrote:

> There can be several reasons why you see this error.
>
> The most common ones are:
>
> Disk space on the datanodes - as mentioned earlier in the thread.
> Inconsistent datanodes - you can try restarting HDFS, which should clean them up.
> A bad or unresponsive datanode.
> A negative 'Block Size' in hdfs-site.xml.
> Network communication issues.
>
> Regards,
> Gagan Brahmi
>
> On Mon, Jul 25, 2016 at 9:14 AM, Gabriel Balan <gabriel.ba...@oracle.com> wrote:
>
>> Hi
>>
>> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File
>> /user/pts/output/OTSOutput/_temporary/1/_temporary/attempt_1463_0008_m_000018_2/video.mp4.of.txt
>> could only be replicated to 0 nodes instead of minReplication (=1). *There
>> are 9 datanode(s) running* and no node(s) are excluded in this operation.
>>
>> Could it be that there is no more space left (for HDFS) on the hosts running the data nodes?
>>
>> Try running "hdfs dfsadmin -report"
>>
>> hth
>>
>> Gabriel Balan
>>
>> On 7/24/2016 7:53 PM, Madhav Sharan wrote:
>>
>> Hi hadoop users,
>>
>> We are running a mapreduce job with 10 nodes. Each map task processes a
>> video and generates a .txt file as output. We are getting a DataStreamer
>> exception saying the file could only be replicated to 0 nodes instead of
>> minReplication (=1). This is the output file we expect after a successful run.
>>
>> Any help is appreciated. We checked that HDFS has space available and all
>> nodes are responding.
>>
>> Full trace -
>>
>> 2016-07-24 21:55:08,343 WARN [Thread-214] org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception
>>
>> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File
>> /user/pts/output/OTSOutput/_temporary/1/_temporary/attempt_1463_0008_m_000018_2/video.mp4.of.txt
>> could only be replicated to 0 nodes instead of minReplication (=1). There
>> are 9 datanode(s) running and no node(s) are excluded in this operation.
>>
>> at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1547)
>> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107)
>> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031)
>> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:724)
>> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
>> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>> at java.security.AccessController.doPrivileged(Native Method)
>> at javax.security.auth.Subject.doAs(Subject.java:415)
>> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>>
>> at org.apache.hadoop.ipc.Client.call(Client.java:1475)
>> at org.apache.hadoop.ipc.Client.call(Client.java:1412)
>> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
>> at com.sun.proxy.$Proxy12.addBlock(Unknown Source)
>> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
>> at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:606)
>> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
>> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>> at com.sun.proxy.$Proxy13.addBlock(Unknown Source)
>> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1459)
>> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1255)
>> at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449)
>>
>> --
>> Madhav Sharan
>>
>> --
>> The statements and opinions expressed here are my own and do not necessarily
>> represent those of Oracle Corporation.