[jira] [Commented] (HDFS-14735) File could only be replicated to 0 nodes instead of minReplication (=1)

2019-08-19 Thread Tatyana Alexeyev (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16910854#comment-16910854
 ] 

Tatyana Alexeyev commented on HDFS-14735:
-

Hello Chen, I have enabled the debugger...

What is the next step?

Thanks,
tanya

> File could only be replicated to 0 nodes instead of minReplication (=1)
> ---
>
> Key: HDFS-14735
> URL: https://issues.apache.org/jira/browse/HDFS-14735
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Tatyana Alexeyev
>Priority: Major
>
> Hello, I have an intermittent error when running my EMR Hadoop cluster:
> "Error: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
> /user/sphdadm/_sqoop/00501bd7b05e4182b5006b9d51 
> bafb7f_f405b2f3/_temporary/1/_temporary/attempt_1565136887564_20057_m_00_0/part-m-0.snappy
>  could only be replicated to 0 nodes instead of minReplication (=1). There 
> are 5 datanode(s) running and no node(s) are excluded in this operation."
> I am running Hadoop version:
> [sphdadm@ip-10-6-15-108 hadoop]$ hadoop version
> Hadoop 2.8.5-amzn-4
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14735) File could only be replicated to 0 nodes instead of minReplication (=1)

2019-08-16 Thread Tatyana Alexeyev (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908812#comment-16908812
 ] 

Tatyana Alexeyev commented on HDFS-14735:
-

There are some errors in the datanode log file:

2019-08-16 03:52:05,696 INFO org.apache.hadoop.hdfs.server.datanode.DataNode 
(BP-56322450-10.6.14.101-1565836229790 heartbeating to 
ip-10-6-14-101.us-east-2.compute.internal/10.6.14.101:8020): 
DatanodeRegistration(10.6.13.226:50010, 
datanodeUuid=4a8a4e5b-604d-4a8d-96b7-246ccf4d9baf, infoPort=50075, 
infoSecurePort=0, ipcPort=50020, 
storageInfo=lv=-57;cid=CID-39f69814-6f95-4195-8272-37b6d2166de4;nsid=950102054;c=1565836229790)
 Starting thread to transfer 
BP-56322450-10.6.14.101-1565836229790:blk_1073973783_232959 to 10.6.14.73:50010 
10.6.13.248:50010

2019-08-16 03:52:05,696 INFO org.apache.hadoop.hdfs.server.datanode.DataNode 
(BP-56322450-10.6.14.101-1565836229790 heartbeating to 
ip-10-6-14-101.us-east-2.compute.internal/10.6.14.101:8020): 
DatanodeRegistration(10.6.13.226:50010, 
datanodeUuid=4a8a4e5b-604d-4a8d-96b7-246ccf4d9baf, infoPort=50075, 
infoSecurePort=0, ipcPort=50020, 
storageInfo=lv=-57;cid=CID-39f69814-6f95-4195-8272-37b6d2166de4;nsid=950102054;c=1565836229790)
 Starting thread to transfer 
BP-56322450-10.6.14.101-1565836229790:blk_1073973798_232974 to 
10.6.13.248:50010 10.6.14.73:50010

2019-08-16 03:52:05,696 INFO org.apache.hadoop.hdfs.server.datanode.DataNode 
(org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@32ac48b6):
 DataTransfer, at ip-10-6-13-226.us-east-2.compute.internal:50010: Transmitted 
BP-56322450-10.6.14.101-1565836229790:blk_1073973727_232903 (numBytes=60530) to 
/10.6.13.248:50010

2019-08-16 03:52:05,696 INFO org.apache.hadoop.hdfs.server.datanode.DataNode 
(org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@57370a18):
 DataTransfer, at ip-10-6-13-226.us-east-2.compute.internal:50010: Transmitted 
BP-56322450-10.6.14.101-1565836229790:blk_1073973701_232877 (numBytes=36427) to 
/10.6.14.73:50010

2019-08-16 03:52:05,696 INFO org.apache.hadoop.hdfs.server.datanode.DataNode 
(org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer@766e0d79):
 DataTransfer, at ip-10-6-13-226.us-east-2.compute.internal:50010: Transmitted 
BP-56322450-10.6.14.101-1565836229790:blk_1073973741_232917 (numBytes=36427) to 
/10.6.14.73:50010

2019-08-16 03:52:05,697 WARN org.apache.hadoop.hdfs.server.datanode.DataNode 
(BP-56322450-10.6.14.101-1565836229790 heartbeating to 
ip-10-6-14-101.us-east-2.compute.internal/10.6.14.101:8020): Can't replicate 
block BP-56322450-10.6.14.101-1565836229790:blk_1073973750_232926 because 
on-disk length 402868 is shorter than NameNode recorded length 
9223372036854775807
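
A note on the number in that WARN line: 9223372036854775807 is exactly Java's Long.MAX_VALUE (2**63 - 1), the sentinel HDFS uses when a block's final length is not yet known (for example, a replica still under construction), rather than a real recorded file length. This is a reading of the log, not a confirmed diagnosis; the sketch below only verifies the arithmetic:

```python
# 9223372036854775807, the "NameNode recorded length" in the WARN above, is
# not a real file length: it equals Java's Long.MAX_VALUE, the sentinel HDFS
# uses when a block's final length is not yet known (e.g. the replica is
# still under construction). Interpretation of the log, not a confirmed fix.
JAVA_LONG_MAX = 2**63 - 1              # 9223372036854775807

recorded_length = 9223372036854775807  # value from the WARN line
on_disk_length = 402868                # value from the WARN line

print(recorded_length == JAVA_LONG_MAX)  # True: it is the sentinel
print(on_disk_length < recorded_length)  # True for any real replica length
```

If that reading is right, the warning itself does not prove data loss; it says the replica cannot be used as a replication source while the block is still open.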

There are some replication-related messages in the NameNode log:

2019-08-16 04:00:00,141 INFO org.apache.hadoop.hdfs.StateChange (IPC Server 
handler 30 on 8020): DIR* completeFile: 
/tmp/hadoop-yarn/staging/sphdadm/.staging/job_1565836275738_5267/job.xml is 
closed by DFSClient_NONMAPREDUCE_-1718547537_1

2019-08-16 04:00:00,152 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory 
(IPC Server handler 1 on 8020): Increasing replication from 1 to 4 for 
/tmp/hadoop-yarn/staging/sphdadm/.staging/job_1565836275738_5268/libjars/parquet-encoding-1.6.0.jar




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-3333) java.io.IOException: File /user/root/lwr/test31.txt could only be replicated to 0 nodes instead of minReplication (=1). There are 3 datanode(s) running and 3 node(s) are excluded in this operation.

2019-08-15 Thread Tatyana Alexeyev (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-3333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908533#comment-16908533
 ] 

Tatyana Alexeyev commented on HDFS-3333:


Hello, Can you please explain how to open the port to the datanode?

Thanks,

Tanya
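
"Opening the port" here usually means allowing TCP traffic to the DataNode's data-transfer port (50010, the Hadoop 2.x default seen in this thread) through the OS firewall or, on EMR, the security group. As a minimal sketch (the host IPs are taken from the dfsadmin -report quoted in this thread; substitute your own), a plain TCP probe tells you whether the port is reachable at all:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# DataNode addresses from the dfsadmin -report in this thread; 50010 is the
# default data-transfer port in Hadoop 2.x. Replace with your own hosts.
for host in ("10.18.40.154", "10.18.40.102", "10.18.52.55"):
    print(host, can_connect(host, 50010))
```

A False result from the client machine but True from the NameNode host would point at a firewall or security-group rule rather than the DataNode itself.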

> java.io.IOException: File /user/root/lwr/test31.txt could only be replicated 
> to 0 nodes instead of minReplication (=1).  There are 3 datanode(s) running 
> and 3 node(s) are excluded in this operation.
> --
>
> Key: HDFS-3333
> URL: https://issues.apache.org/jira/browse/HDFS-3333
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 0.23.1, 2.0.0-alpha
> Environment: namenode:1 (IP:10.18.40.154)
> datanode:3 (IP:10.18.40.154,10.18.40.102,10.18.52.55)
> HOST-10-18-40-154:/home/APril20/install/hadoop/namenode/bin # ./hadoop 
> dfsadmin -report
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
> Configured Capacity: 129238446080 (120.36 GB)
> Present Capacity: 51742765056 (48.19 GB)
> DFS Remaining: 49548591104 (46.15 GB)
> DFS Used: 2194173952 (2.04 GB)
> DFS Used%: 4.24%
> Under replicated blocks: 14831
> Blocks with corrupt replicas: 1
> Missing blocks: 100
> -
> Datanodes available: 3 (3 total, 0 dead)
> Live datanodes:
> Name: 10.18.40.102:50010 (10.18.40.102)
> Hostname: linux.site
> Decommission Status : Normal
> Configured Capacity: 22765834240 (21.2 GB)
> DFS Used: 634748928 (605.34 MB)
> Non DFS Used: 1762299904 (1.64 GB)
> DFS Remaining: 20368785408 (18.97 GB)
> DFS Used%: 2.79%
> DFS Remaining%: 89.47%
> Last contact: Fri Apr 27 10:35:57 IST 2012
> Name: 10.18.40.154:50010 (HOST-10-18-40-154)
> Hostname: HOST-10-18-40-154
> Decommission Status : Normal
> Configured Capacity: 23259897856 (21.66 GB)
> DFS Used: 812396544 (774.76 MB)
> Non DFS Used: 8297279488 (7.73 GB)
> DFS Remaining: 14150221824 (13.18 GB)
> DFS Used%: 3.49%
> DFS Remaining%: 60.84%
> Last contact: Fri Apr 27 10:35:58 IST 2012
> Name: 10.18.52.55:50010 (10.18.52.55)
> Hostname: HOST-10-18-52-55
> Decommission Status : Normal
> Configured Capacity: 83212713984 (77.5 GB)
> DFS Used: 747028480 (712.42 MB)
> Non DFS Used: 67436101632 (62.8 GB)
> DFS Remaining: 15029583872 (14 GB)
> DFS Used%: 0.9%
> DFS Remaining%: 18.06%
> Last contact: Fri Apr 27 10:35:58 IST 2012
>Reporter: liaowenrui
>Priority: Major
>   Original Estimate: 0.2h
>  Remaining Estimate: 0.2h
>
> log4j:WARN No appenders could be found for logger 
> (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
> log4j:WARN Please initialize the log4j system properly.
> java.io.IOException: File /user/root/lwr/test31.txt could only be replicated 
> to 0 nodes instead of minReplication (=1).  There are 3 datanode(s) running 
> and 3 node(s) are excluded in this operation.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1259)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1916)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:472)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:292)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:42602)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:428)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:905)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1684)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1205)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1682)
> i:4284
>   at org.apache.hadoop.ipc.Client.call(Client.java:1159)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:185)
>   at $Proxy9.addBlock(Unknown Source)
>   at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> 

[jira] [Commented] (HDFS-14735) File could only be replicated to 0 nodes instead of minReplication (=1)

2019-08-15 Thread Tatyana Alexeyev (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908531#comment-16908531
 ] 

Tatyana Alexeyev commented on HDFS-14735:
-

Can you please let me know how to enable debugger?
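
There is no interactive debugger involved here; what usually gets asked for on this error is DEBUG-level logging for the block placement decision, so the NameNode records why each DataNode was rejected. One common way (a sketch against the stock Hadoop 2.x log4j setup; verify the logger names against your distribution) is to add lines like these to log4j.properties and restart, or set them at runtime with `hadoop daemonlog -setlevel <host:port> <logger> DEBUG`:

```properties
# NameNode log4j.properties: log why chooseTarget excludes each DataNode
log4j.logger.org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy=DEBUG
# Optional: more detail from the namesystem and DataNode sides
log4j.logger.org.apache.hadoop.hdfs.StateChange=DEBUG
log4j.logger.org.apache.hadoop.hdfs.server.datanode.DataNode=DEBUG
```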







[jira] [Commented] (HDFS-14735) File could only be replicated to 0 nodes instead of minReplication (=1)

2019-08-15 Thread Tatyana Alexeyev (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908454#comment-16908454
 ] 

Tatyana Alexeyev commented on HDFS-14735:
-

bash-4.2$ hdfs dfsadmin -report
Configured Capacity: 1630750064640 (1.48 TB)
Present Capacity: 1591627264267 (1.45 TB)
DFS Remaining: 1398788173824 (1.27 TB)
DFS Used: 192839090443 (179.60 GB)
DFS Used%: 12.12%
Under replicated blocks: 318
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 48

-
Live datanodes (3):

Name: 10.6.13.226:50010 (ip-10-6-13-226.us-east-2.compute.internal)
Hostname: ip-10-6-13-226.us-east-2.compute.internal
Decommission Status : Normal
Configured Capacity: 543583354880 (506.25 GB)
DFS Used: 62489766754 (58.20 GB)
Non DFS Used: 13059818654 (12.16 GB)
DFS Remaining: 468033769472 (435.89 GB)
DFS Used%: 11.50%
DFS Remaining%: 86.10%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 3
Last contact: Thu Aug 15 20:33:25 UTC 2019


Name: 10.6.13.248:50010 (ip-10-6-13-248.us-east-2.compute.internal)
Hostname: ip-10-6-13-248.us-east-2.compute.internal
Decommission Status : Normal
Configured Capacity: 543583354880 (506.25 GB)
DFS Used: 66380404936 (61.82 GB)
Non DFS Used: 13126735672 (12.23 GB)
DFS Remaining: 464076214272 (432.20 GB)
DFS Used%: 12.21%
DFS Remaining%: 85.37%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 3
Last contact: Thu Aug 15 20:33:25 UTC 2019


Name: 10.6.14.73:50010 (ip-10-6-14-73.us-east-2.compute.internal)
Hostname: ip-10-6-14-73.us-east-2.compute.internal
Decommission Status : Normal
Configured Capacity: 543583354880 (506.25 GB)
DFS Used: 63968918753 (59.58 GB)
Non DFS Used: 12936246047 (12.05 GB)
DFS Remaining: 466678190080 (434.63 GB)
DFS Used%: 11.77%
DFS Remaining%: 85.85%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 4
Last contact: Thu Aug 15 20:33:25 UTC 2019







[jira] [Commented] (HDFS-14735) File could only be replicated to 0 nodes instead of minReplication (=1)

2019-08-15 Thread Tatyana Alexeyev (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908453#comment-16908453
 ] 

Tatyana Alexeyev commented on HDFS-14735:
-

Can you please let me know how to enable debugger?







[jira] [Commented] (HDFS-14735) File could only be replicated to 0 nodes instead of minReplication (=1)

2019-08-15 Thread Tatyana Alexeyev (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16908452#comment-16908452
 ] 

Tatyana Alexeyev commented on HDFS-14735:
-

I have enough free space:

[root@ip-10-6-14-101 hadoop]# hdfs dfsadmin -report
Configured Capacity: 1630750064640 (1.48 TB)
Present Capacity: 1594786349372 (1.45 TB)
DFS Remaining: 1401552084992 (1.27 TB)
DFS Used: 193234264380 (179.96 GB)
DFS Used%: 12.12%
Under replicated blocks: 613
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 134

-
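
Since this and the later reports all show plenty of space, the raw numbers are worth checking rather than eyeballing. A small sketch (the parse_report helper is hypothetical, written for the field layout of the report above):

```python
import re

def parse_report(text: str) -> dict:
    """Extract the raw byte counters from `hdfs dfsadmin -report` output."""
    fields = {}
    for key in ("Configured Capacity", "Present Capacity",
                "DFS Remaining", "DFS Used"):
        m = re.search(rf"^{re.escape(key)}: (\d+)", text, re.MULTILINE)
        if m:
            fields[key] = int(m.group(1))
    return fields

# First four lines of the report above.
sample = """\
Configured Capacity: 1630750064640 (1.48 TB)
Present Capacity: 1594786349372 (1.45 TB)
DFS Remaining: 1401552084992 (1.27 TB)
DFS Used: 193234264380 (179.96 GB)
"""

r = parse_report(sample)
remaining_pct = 100 * r["DFS Remaining"] / r["Configured Capacity"]
print(f"{remaining_pct:.1f}% of configured capacity still free")
```

With roughly 86% of configured capacity free, raw space is unlikely to be the cause here; per-node Non DFS Used and reserved-space settings would be the next things to rule out.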








[jira] [Commented] (HDFS-14735) File could only be replicated to 0 nodes instead of minReplication (=1)

2019-08-14 Thread Tatyana Alexeyev (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907747#comment-16907747
 ] 

Tatyana Alexeyev commented on HDFS-14735:
-

This error happens intermittently during Sqoop and Pig operations...







[jira] [Created] (HDFS-14735) File could only be replicated to 0 nodes instead of minReplication (=1)

2019-08-14 Thread Tatyana Alexeyev (JIRA)
Tatyana Alexeyev created HDFS-14735:
---

 Summary: File could only be replicated to 0 nodes instead of 
minReplication (=1)
 Key: HDFS-14735
 URL: https://issues.apache.org/jira/browse/HDFS-14735
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Reporter: Tatyana Alexeyev


Hello, I have an intermittent error when running my EMR Hadoop cluster:

"Error: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
/user/sphdadm/_sqoop/00501bd7b05e4182b5006b9d51 
bafb7f_f405b2f3/_temporary/1/_temporary/attempt_1565136887564_20057_m_00_0/part-m-0.snappy
 could only be replicated to 0 nodes instead of minReplication (=1). There are 
5 datanode(s) running and no node(s) are excluded in this operation."

I am running Hadoop version:

[sphdadm@ip-10-6-15-108 hadoop]$ hadoop version

Hadoop 2.8.5-amzn-4

 


