[ 
https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14546107#comment-14546107
 ] 

Sean Busbey commented on HDFS-8332:
-----------------------------------

+1 for trunk/branch-3 only. A "Known Issue" note in the next set of branch-2 
release notes would be a nice-to-have as well.

{quote}
Also, I'd like to suggest that we change pre-commit to trigger 
hadoop-hdfs-httpfs tests automatically for all hadoop-hdfs patches. We've seen 
problems like this in the past. hadoop-hdfs-httpfs gets patched so infrequently 
that it's easy to miss it when a hadoop-hdfs change introduces a test failure. 
As a practical matter, we might not be able to add those tests until the 
current HDFS test runs get optimized.
{quote}

Leave a note on HADOOP-11929; [~aw] is already specifying there that hadoop-hdfs 
needs hadoop-common built with native bits. Not sure if expanding that to 
"during tests, always run this other module when this module changes" will be in 
scope there or need a new ticket.

> DFS client API calls should check filesystem closed
> ---------------------------------------------------
>
>                 Key: HDFS-8332
>                 URL: https://issues.apache.org/jira/browse/HDFS-8332
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Rakesh R
>            Assignee: Rakesh R
>             Fix For: 2.8.0
>
>         Attachments: HDFS-8332-000.patch, HDFS-8332-001.patch, 
> HDFS-8332-002-Branch-2.patch, HDFS-8332-002.patch, 
> HDFS-8332.001.branch-2.patch
>
>
> I could see that the {{listCacheDirectives()}} and {{listCachePools()}} APIs can 
> be called even after the filesystem is closed. Instead, these calls should do 
> {{checkOpen}} and throw:
> {code}
> java.io.IOException: Filesystem closed
>       at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:464)
> {code}
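The fix boils down to a guard-first pattern: every client-facing call verifies the client is still open before doing any work. A minimal sketch of that pattern (a hypothetical {{SketchClient}} class standing in for the real {{DFSClient}}; method names here mirror but are not the actual Hadoop API):

```java
import java.io.IOException;

// Hypothetical stand-in for DFSClient, illustrating the checkOpen guard.
class SketchClient {
    // Flipped to false by close(); volatile so all threads see the update.
    private volatile boolean clientRunning = true;

    void close() {
        clientRunning = false;
    }

    // Guard mirroring the checkOpen idea: reject calls on a closed client.
    private void checkOpen() throws IOException {
        if (!clientRunning) {
            throw new IOException("Filesystem closed");
        }
    }

    // Illustrative API method: the guard runs before any real work.
    String listCachePools() throws IOException {
        checkOpen();
        return "pools";
    }
}
```

With this in place, calling {{listCachePools()}} after {{close()}} fails fast with "Filesystem closed" instead of silently operating on a dead client.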



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
