[ 
https://issues.apache.org/jira/browse/HDFS-14747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16910965#comment-16910965
 ] 

Ayush Saxena commented on HDFS-14747:
-------------------------------------

Thanks [~xuzq_zander] for the patch, the fix LGTM.
Regarding the test.

{code:java}
boolean result = routerFs.getClient().isFileClosed(testPath);
+      assertFalse(result);
+    } catch (IOException e) {
+      // Maybe throw "Commit or complete block blk_xxx,"
+      // " whereas it is under recovery" when close.
+      assertTrue(e.getMessage().contains("whereas it is under recovery"));
+    } finally {
{code}

Our intention is to get false, not the IOE. So I guess instead of catching the 
exception you can wait until you get false, using GenericTestUtils.waitFor(...), 
and inside the check method ignore the IOE ("whereas it is under recovery").
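To illustrate the suggestion: the sketch below is a self-contained stand-in, not Hadoop code. The {{waitFor}} helper mimics the shape of GenericTestUtils.waitFor (check supplier, poll interval, timeout), and {{isFileClosed}} simulates a client that throws the transient recovery IOE a couple of times before reporting the file as open. The real test would call {{routerFs.getClient().isFileClosed(testPath)}} inside the supplier instead.

{code:java}
import java.util.concurrent.TimeoutException;
import java.util.function.Supplier;

public class WaitForExample {
  /** Minimal stand-in for GenericTestUtils.waitFor: polls until check.get() is true. */
  static void waitFor(Supplier<Boolean> check, long intervalMs, long timeoutMs)
      throws TimeoutException, InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      if (check.get()) {
        return;
      }
      Thread.sleep(intervalMs);
    }
    throw new TimeoutException("condition not met within " + timeoutMs + " ms");
  }

  /** Simulated client: throws the recovery IOE twice, then reports the file as open. */
  static int calls = 0;
  static boolean isFileClosed(String path) throws java.io.IOException {
    if (++calls <= 2) {
      throw new java.io.IOException(
          "Commit or complete block blk_1, whereas it is under recovery");
    }
    return false; // file is still open
  }

  public static void main(String[] args) throws Exception {
    waitFor(() -> {
      try {
        // Done once we observe isFileClosed == false.
        return !isFileClosed("/test");
      } catch (java.io.IOException e) {
        // Ignore the transient "under recovery" IOE and keep polling.
        return false;
      }
    }, 10, 10000);
    System.out.println("observed isFileClosed == false after " + calls + " calls");
  }
}
{code}

This way the test asserts the actual expected value (false) and treats the recovery exception as a retryable condition rather than an accepted outcome.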

Regarding merging both, I am not sure whether it is required; both ways 
should be OK (unless there turn out to be plenty like these!). 
[~elgoiri], any opinions?

> RBF: IsFileClosed should return false when the file is open in multiple 
> destinations
> -------------------------------------------------------------------------------------
>
>                 Key: HDFS-14747
>                 URL: https://issues.apache.org/jira/browse/HDFS-14747
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: rbf
>            Reporter: xuzq
>            Assignee: xuzq
>            Priority: Major
>         Attachments: HDFS-14747-trunk-001.patch
>
>
> *IsFileClosed* should return false when the file is open or being written in 
> multiple destinations.
> Like this:
> A mount point has multiple destinations (ns0 and ns1).
> The file exists in ns0 but is being written; ns1 does not have this file.
> In this case *IsFileClosed* should return false instead of throwing 
> FileNotFoundException.
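The semantics described in the quoted report can be sketched as a small aggregation rule, false if any destination still has the file open, FileNotFoundException only when no destination has it. This is a hypothetical stand-alone model, not the actual Router code; the {{State}} enum and map of per-nameservice results are assumptions for illustration.

{code:java}
import java.io.FileNotFoundException;
import java.util.Map;

public class RouterIsFileClosedSketch {
  /** Per-nameservice result for the file: CLOSED, OPEN, or NOT_FOUND. */
  enum State { CLOSED, OPEN, NOT_FOUND }

  /**
   * Aggregate isFileClosed over a mount point's destinations:
   * any OPEN copy means false; throw FileNotFoundException only
   * when no destination has the file at all.
   */
  static boolean isFileClosed(Map<String, State> byNameservice)
      throws FileNotFoundException {
    boolean found = false;
    for (State s : byNameservice.values()) {
      if (s == State.OPEN) {
        return false; // still being written somewhere
      }
      if (s == State.CLOSED) {
        found = true;
      }
    }
    if (!found) {
      throw new FileNotFoundException("file not found in any destination");
    }
    return true;
  }

  public static void main(String[] args) throws FileNotFoundException {
    // The reported case: ns0 has the file open, ns1 does not have it at all.
    boolean result = isFileClosed(Map.of("ns0", State.OPEN, "ns1", State.NOT_FOUND));
    System.out.println("isFileClosed = " + result); // false: the ns0 copy is still open
  }
}
{code}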



--
This message was sent by Atlassian Jira
(v8.3.2#803003)
