[ https://issues.apache.org/jira/browse/HDFS-15112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17015229#comment-17015229 ]

Íñigo Goiri commented on HDFS-15112:
------------------------------------

Thanks [~ayushtkn] for the comments.

{quote}
The test expects unavailable RouterRpcClient.isUnavailableException(ioe) but 
the thrown is NoNamenodeException
{quote}
Yes, I messed up copying from the internal to the external branch.
The good news is that this shows the unit test is doing its job.

{quote}
Secondly, it seems Jenkins didn't complain about this, but the test failed 
locally for me because of it. I think we should have 
refreshRoutersCaches(routers); after creating the mount entry, since we are 
using random routers first for the mount entry and then for the filesystem.
{quote}
I changed the {{createMountTableEntry()}} to call all routers.
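To illustrate why the refresh matters, here is a minimal plain-Java sketch (stand-in classes, not the actual RBF {{Router}}/{{MountTable}} types): the test picks a random router for each operation, so if only one router's mount-table cache is refreshed, a different router may not yet see the new entry. Refreshing every router after creating the entry avoids the stale-cache miss.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

// Hypothetical stand-ins for a Router with a local mount-table cache and a
// shared State Store; names are illustrative only.
public class MountCacheRefreshSketch {

  static class Router {
    final Set<String> mountTableCache = new HashSet<>();

    // Reload this router's local cache from the shared store.
    void refreshCache(Set<String> stateStore) {
      mountTableCache.clear();
      mountTableCache.addAll(stateStore);
    }

    boolean knowsMountPoint(String path) {
      return mountTableCache.contains(path);
    }
  }

  // Authoritative mount entries, shared by all routers.
  static final Set<String> STATE_STORE = new HashSet<>();

  // Create a mount entry and refresh the caches of *all* routers,
  // not just the one that handled the creation.
  static void createMountTableEntry(List<Router> routers, String path) {
    STATE_STORE.add(path);
    for (Router r : routers) {
      r.refreshCache(STATE_STORE);
    }
  }

  public static void main(String[] args) {
    List<Router> routers = new ArrayList<>();
    for (int i = 0; i < 4; i++) {
      routers.add(new Router());
    }

    createMountTableEntry(routers, "/hashall");

    // A randomly picked router (as the test does) now sees the entry.
    Router random = routers.get(new Random().nextInt(routers.size()));
    System.out.println(random.knowsMountPoint("/hashall")); // prints "true"
  }
}
```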

Fixes are in [^HDFS-15112.006.patch].

> RBF: Do not return FileNotFoundException when a subcluster is unavailable 
> --------------------------------------------------------------------------
>
>                 Key: HDFS-15112
>                 URL: https://issues.apache.org/jira/browse/HDFS-15112
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Íñigo Goiri
>            Assignee: Íñigo Goiri
>            Priority: Major
>         Attachments: HDFS-15112.000.patch, HDFS-15112.001.patch, 
> HDFS-15112.002.patch, HDFS-15112.004.patch, HDFS-15112.005.patch, 
> HDFS-15112.patch
>
>
> If we have a mount point using HASH_ALL across two subclusters and one of 
> them is down, we may return FileNotFoundException while the file is just in 
> the unavailable subcluster.
> We should not return FileNotFoundException but something that shows that the 
> subcluster is unavailable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
