[jira] [Updated] (HDFS-15461) TestDFSClientRetries#testGetFileChecksum fails intermittently

2020-10-30 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-15461:
-------------------------------
Fix Version/s: 3.2.2

Cherry-picked to branch-3.2.2 and verified locally. Thanks [~ahussein].

> TestDFSClientRetries#testGetFileChecksum fails intermittently
> -------------------------------------------------------------
>
> Key: HDFS-15461
> URL: https://issues.apache.org/jira/browse/HDFS-15461
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: dfsclient, test
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.2.3
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> {{TestDFSClientRetries.testGetFileChecksum}} fails intermittently on Hadoop
> trunk:
> {code:bash}
> [INFO] Running org.apache.hadoop.hdfs.TestGetFileChecksum
> [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 10.491 s <<< FAILURE! - in org.apache.hadoop.hdfs.TestGetFileChecksum
> [ERROR] testGetFileChecksum(org.apache.hadoop.hdfs.TestGetFileChecksum)  Time elapsed: 4.248 s  <<< ERROR!
> java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:52468,DS-e35b6720-8ac2-4e5e-98df-306985da6924,DISK], DatanodeInfoWithStorage[127.0.0.1:52472,DS-91ec34d5-3f0a-494e-aed6-b01fa0131d8a,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:52472,DS-91ec34d5-3f0a-494e-aed6-b01fa0131d8a,DISK], DatanodeInfoWithStorage[127.0.0.1:52468,DS-e35b6720-8ac2-4e5e-98df-306985da6924,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
>   at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1304)
>   at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1372)
>   at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1598)
>   at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1499)
>   at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1481)
>   at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:719)
> [INFO]
> [INFO] Results:
> [INFO]
> [ERROR] Errors:
> [ERROR]   TestGetFileChecksum.testGetFileChecksum » IO Failed to replace a bad datanode ...
> [INFO]
> [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0
> [INFO]
> [ERROR] There are test failures.
> {code}
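
The exception names the client-side setting that governs this behavior. As background only, here is a hedged sketch of the standard HDFS client keys mentioned in the message, not the committed fix for this JIRA; the wrapper class is hypothetical. A test running against a small MiniDFSCluster can relax the replacement policy so pipeline recovery continues on the surviving datanodes instead of requiring a spare node:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

// Hypothetical helper, for illustration only.
public class ReplaceDatanodePolicySketch {
  public static Configuration clientConf() {
    Configuration conf = new HdfsConfiguration();
    // Under the DEFAULT policy, pipeline recovery may require a replacement
    // datanode. In a MiniDFSCluster with as many datanodes as the replication
    // factor, there is no spare node to add, which yields the IOException above.
    // NEVER keeps writing on the remaining pipeline instead.
    conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
    // Alternatively, the replacement feature can be switched off altogether:
    // conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", false);
    return conf;
  }
}
{code}

NEVER trades write-pipeline durability for availability, so it is generally suited only to small clusters and unit tests; production clusters usually keep DEFAULT.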



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15461) TestDFSClientRetries#testGetFileChecksum fails intermittently

2020-10-27 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-15461:
---------------------------------
Component/s: test
 dfsclient
 Labels: pull-request-available  (was: pull-request-available test)

> TestDFSClientRetries#testGetFileChecksum fails intermittently
> -------------------------------------------------------------
>
> Key: HDFS-15461
> URL: https://issues.apache.org/jira/browse/HDFS-15461
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: dfsclient, test
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.2.3
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>



[jira] [Updated] (HDFS-15461) TestDFSClientRetries#testGetFileChecksum fails intermittently

2020-10-27 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-15461:
---------------------------------
Fix Version/s: 3.2.3
   3.4.0
   3.3.1
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Merged the PR into trunk, branch-3.3, and branch-3.2. Thank you [~ahussein] for
the contribution, and thanks to [~ayushtkn] and [~elgoiri] for the reviews.

> TestDFSClientRetries#testGetFileChecksum fails intermittently
> -------------------------------------------------------------
>
> Key: HDFS-15461
> URL: https://issues.apache.org/jira/browse/HDFS-15461
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available, test
> Fix For: 3.3.1, 3.4.0, 3.2.3
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>



[jira] [Updated] (HDFS-15461) TestDFSClientRetries#testGetFileChecksum fails intermittently

2020-10-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15461:
----------------------------------
Labels: pull-request-available test  (was: test)

> TestDFSClientRetries#testGetFileChecksum fails intermittently
> -------------------------------------------------------------
>
> Key: HDFS-15461
> URL: https://issues.apache.org/jira/browse/HDFS-15461
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available, test
>  Time Spent: 10m
>  Remaining Estimate: 0h
>



[jira] [Updated] (HDFS-15461) TestDFSClientRetries#testGetFileChecksum fails intermittently

2020-10-22 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HDFS-15461:
---------------------------------
Status: Patch Available  (was: Open)

> TestDFSClientRetries#testGetFileChecksum fails intermittently
> -------------------------------------------------------------
>
> Key: HDFS-15461
> URL: https://issues.apache.org/jira/browse/HDFS-15461
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: test
>



[jira] [Updated] (HDFS-15461) TestDFSClientRetries#testGetFileChecksum fails intermittently

2020-10-22 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HDFS-15461:
---------------------------------
Parent: HDFS-15646
Issue Type: Sub-task  (was: Bug)

> TestDFSClientRetries#testGetFileChecksum fails intermittently
> -------------------------------------------------------------
>
> Key: HDFS-15461
> URL: https://issues.apache.org/jira/browse/HDFS-15461
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Priority: Major
>  Labels: test
>