[
https://issues.apache.org/jira/browse/HDFS-14540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17272159#comment-17272159
]
Anton Kutuzov commented on HDFS-14540:
--------------------------------------
[~dustinday], [~ayushtkn], could I take this task?
I propose using the [awaitility|https://github.com/awaitility/awaitility]
library. Instead of code like
{code:java}
while (cluster.getFsDatasetTestUtils(1).getStoredReplicas(bpid1).hasNext()) {
  try {
    Thread.sleep(3000);
  } catch (Exception ignored) {
  }
}
{code}
we would have the following:
{code:java}
await()
    .atMost(5, MINUTES)
    .pollInterval(3, SECONDS)
    .until(() -> !cluster.getFsDatasetTestUtils(1).getStoredReplicas(bpid1).hasNext());
{code}
This removes the hand-written polling loops, and because the wait is bounded by {{atMost}}, a condition that never becomes true fails the test with a ConditionTimeoutException instead of hanging it.
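For the loop quoted in the issue description below, a minimal sketch of the same approach (assuming Awaitility 4.x is added as a test-scoped dependency) could look like this; names such as {{dn2StorageDir1}} and {{bpid1}} are taken from the existing test:
{code:java}
// Static imports needed at the top of the test class (sketch, assuming Awaitility 4.x):
import static java.util.concurrent.TimeUnit.MINUTES;
import static java.util.concurrent.TimeUnit.SECONDS;
import static org.awaitility.Awaitility.await;

// Wait till all blocks are deleted from dn2 for bpid1, but give up after 5 minutes,
// so a failed deletion surfaces as a ConditionTimeoutException instead of a hang.
await()
    .atMost(5, MINUTES)
    .pollInterval(3, SECONDS)
    .until(() ->
        MiniDFSCluster.getFinalizedDir(dn2StorageDir1, bpid1).list().length == 0
            && MiniDFSCluster.getFinalizedDir(dn2StorageDir2, bpid1).list().length == 0);
{code}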
> Block deletion failure causes an infinite polling in TestDeleteBlockPool
> ------------------------------------------------------------------------
>
> Key: HDFS-14540
> URL: https://issues.apache.org/jira/browse/HDFS-14540
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 0.23.0
> Reporter: John Doe
> Priority: Major
>
> In the testDeleteBlockPool function, when the file deletion fails, the
> while loop below keeps polling forever and the test hangs.
> {code:java}
> fs1.delete(new Path("/alpha"), true); //deletion failure
>
> // Wait till all blocks are deleted from the dn2 for bpid1.
> while ((MiniDFSCluster.getFinalizedDir(dn2StorageDir1, bpid1).list().length != 0)
>     || (MiniDFSCluster.getFinalizedDir(dn2StorageDir2, bpid1).list().length != 0)) {
>   try {
>     Thread.sleep(3000);
>   } catch (Exception ignored) {
>   }
> }
> {code}