[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14548171#comment-14548171 ]

Hudson commented on HDFS-8332:
------------------------------

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2147 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2147/])
Updating CHANGES.txt for moving entry of HDFS-8332 from branch-2 to trunk (umamahesh: rev 363c35541d4f9da4974f3e346cb397796173824c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt

> DFS client API calls should check filesystem closed
> ---------------------------------------------------
>
>                 Key: HDFS-8332
>                 URL: https://issues.apache.org/jira/browse/HDFS-8332
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Rakesh R
>            Assignee: Rakesh R
>             Fix For: 3.0.0
>
>         Attachments: HDFS-8332-000.patch, HDFS-8332-001.patch, HDFS-8332-002-Branch-2.patch, HDFS-8332-002.patch, HDFS-8332.001.branch-2.patch
>
> I could see {{listCacheDirectives()}} and {{listCachePools()}} APIs can be called even after the filesystem close. Instead these calls should do {{checkOpen}} and throw:
> {code}
> java.io.IOException: Filesystem closed
> 	at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:464)
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
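The fix the description asks for is a fail-fast guard at the top of each client API call. A minimal sketch of the pattern follows; the {{MiniDfsClient}} class, its {{clientRunning}} flag, and the stubbed {{listCachePools()}} body are hypothetical stand-ins for illustration, not the actual DFSClient internals:

```java
import java.io.IOException;
import java.util.Collections;
import java.util.List;

/**
 * Illustrative stand-in for the pattern this issue adds: every client
 * API call first verifies the client is still open.
 */
class MiniDfsClient {
    private volatile boolean clientRunning = true;

    /** Mirrors the idea of DFSClient#checkOpen: fail fast once closed. */
    private void checkOpen() throws IOException {
        if (!clientRunning) {
            throw new IOException("Filesystem closed");
        }
    }

    public List<String> listCachePools() throws IOException {
        checkOpen();                     // the guard this JIRA adds
        return Collections.emptyList();  // stand-in for the real RPC call
    }

    public void close() {
        clientRunning = false;
    }
}

public class CheckOpenDemo {
    public static void main(String[] args) {
        MiniDfsClient client = new MiniDfsClient();
        try {
            client.listCachePools();     // succeeds while open
            client.close();
            client.listCachePools();     // now fails fast
        } catch (IOException e) {
            System.out.println("after close: " + e.getMessage());
        }
    }
}
```

The guard costs one volatile read per call, which is why adding it uniformly across the API surface is cheap.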
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14548070#comment-14548070 ]

Hudson commented on HDFS-8332:
------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #189 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/189/])
Updating CHANGES.txt for moving entry of HDFS-8332 from branch-2 to trunk (umamahesh: rev 363c35541d4f9da4974f3e346cb397796173824c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14548061#comment-14548061 ]

Hudson commented on HDFS-8332:
------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk #2129 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2129/])
Updating CHANGES.txt for moving entry of HDFS-8332 from branch-2 to trunk (umamahesh: rev 363c35541d4f9da4974f3e346cb397796173824c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14547980#comment-14547980 ]

Hudson commented on HDFS-8332:
------------------------------

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #199 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/199/])
Updating CHANGES.txt for moving entry of HDFS-8332 from branch-2 to trunk (umamahesh: rev 363c35541d4f9da4974f3e346cb397796173824c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14548178#comment-14548178 ]

Chris Nauroth commented on HDFS-8332:
-------------------------------------

Thanks, Uma!
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549719#comment-14549719 ]

Rakesh R commented on HDFS-8332:
--------------------------------

Thanks [~umamaheswararao], [~busbey], [~vinayrpet], [~cnauroth] for the helpful discussions and resolving this.
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14547774#comment-14547774 ]

Uma Maheswara Rao G commented on HDFS-8332:
-------------------------------------------

I have just reverted this from branch-2 and moved the CHANGES.txt entry to trunk.
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14547783#comment-14547783 ]

Hudson commented on HDFS-8332:
------------------------------

SUCCESS: Integrated in Hadoop-trunk-Commit #7850 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/7850/])
Updating CHANGES.txt for moving entry of HDFS-8332 from branch-2 to trunk (umamahesh: rev 363c35541d4f9da4974f3e346cb397796173824c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14547852#comment-14547852 ]

Hudson commented on HDFS-8332:
------------------------------

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #200 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/200/])
Updating CHANGES.txt for moving entry of HDFS-8332 from branch-2 to trunk (umamahesh: rev 363c35541d4f9da4974f3e346cb397796173824c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14547856#comment-14547856 ]

Hudson commented on HDFS-8332:
------------------------------

FAILURE: Integrated in Hadoop-Yarn-trunk #931 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/931/])
Updating CHANGES.txt for moving entry of HDFS-8332 from branch-2 to trunk (umamahesh: rev 363c35541d4f9da4974f3e346cb397796173824c)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14547066#comment-14547066 ]

Rakesh R commented on HDFS-8332:
--------------------------------

OK, I got it. +1 to keep this change only in trunk/branch-3.
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545141#comment-14545141 ]

Sean Busbey commented on HDFS-8332:
-----------------------------------

Also, please release note this as an incompatible change in behavior.
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545162#comment-14545162 ]

Rakesh R commented on HDFS-8332:
--------------------------------

[~busbey] Do you have the test run logs? They would be very helpful for understanding the background and doing further analysis. Thanks!
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545336#comment-14545336 ]

Uma Maheswara Rao G commented on HDFS-8332:
-------------------------------------------

After correcting the usage, please find the test results:
{noformat}
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.apache.hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.553 sec - in org.apache.hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
Running org.apache.hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 74.453 sec - in org.apache.hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
Running org.apache.hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.924 sec - in org.apache.hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
Running org.apache.hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem
Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.806 sec - in org.apache.hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true

Results :

Tests run: 184, Failures: 0, Errors: 0, Skipped: 0
{noformat}
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545319#comment-14545319 ]

Uma Maheswara Rao G commented on HDFS-8332:
-------------------------------------------

Yeah, you are right Vinay. I had filed a jira for it: HDFS-8412. Just correcting the usage in the test should be fine.
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545312#comment-14545312 ]

Vinayakumar B commented on HDFS-8332:
-------------------------------------

Hi [~busbey], I understand that the tests started failing after the commit, but that doesn't mean this jira's change is incompatible. The failure was due to an error in the test, which was calling {{setReplication(..)}} even after {{fs.close()}}. Of course, it was passing before only because of this bug. :)

Below code is from BaseTestHttpFSWith.java:
{code}
private void testSetReplication() throws Exception {
  FileSystem fs = FileSystem.get(getProxiedFSConf());
  Path path = new Path(getProxiedFSTestDir(), "foo.txt");
  OutputStream os = fs.create(path);
  os.write(1);
  os.close();
  fs.close();
  fs.setReplication(path, (short) 2);

  fs = getHttpFSFileSystem();
  fs.setReplication(path, (short) 1);
  fs.close();

  fs = FileSystem.get(getProxiedFSConf());
  FileStatus status1 = fs.getFileStatus(path);
  fs.close();
  Assert.assertEquals(status1.getReplication(), (short) 1);
}
{code}
IMO, an incompatible change is only when the user's valid code fails, not when erroneous code fails after the change. Agree?
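The ordering point above can be illustrated with a toy model; {{FakeFs}} below is a hypothetical stand-in that models only the open/closed state, not HttpFS or real replication. With a {{checkOpen}}-style guard in place, the quoted test's "close, then use" order fails, while the corrected "use, then close" order passes:

```java
import java.io.IOException;

/** Hypothetical stand-in filesystem that models only open/closed state. */
class FakeFs {
    private boolean open = true;

    void setReplication(short replication) throws IOException {
        if (!open) {
            throw new IOException("Filesystem closed");
        }
        // real work would happen here
    }

    void close() {
        open = false;
    }
}

public class OrderingDemo {
    public static void main(String[] args) throws IOException {
        // Buggy ordering from the quoted test: close, then use.
        FakeFs buggy = new FakeFs();
        buggy.close();
        try {
            buggy.setReplication((short) 2);
        } catch (IOException e) {
            System.out.println("buggy order now fails: " + e.getMessage());
        }

        // Corrected ordering: use, then close.
        FakeFs fixed = new FakeFs();
        fixed.setReplication((short) 2);
        fixed.close();
        System.out.println("corrected order: ok");
    }
}
```

Before HDFS-8332 the buggy order happened to succeed, which is exactly why the test passed; the guard only turns already-incorrect usage into a visible error.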
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545373#comment-14545373 ]

Rakesh R commented on HDFS-8332:
--------------------------------

Thanks [~vinayrpet] and [~umamahesh] for finding out the root cause.
bq. IMO, incompatible change is only when the user's valid code fails. Not when error code fails after change.
True. +1 (non-binding) from me.
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545457#comment-14545457 ]

Sean Busbey commented on HDFS-8332:
-----------------------------------

Being incompatible and breaking some tests are two different problems. It's true that just because tests fail it does not mean a change is incompatible. However, this change is still incompatible.

* The [FileSystem specification doesn't say that all operations must fail after a close|http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/filesystem.html]
* Neither does the javadoc on FileSystem.close (though it does imply it)
* The [specification specifically says that HDFS' behavior is correct|http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/extending.html]

I agree that this change is good and one we should do. However, it *will* break some downstream user code that worked before. A good sign of this is that it broke some code maintained by the Hadoop community, ostensibly those most familiar with how HDFS works.

It's important that we properly document when we change things in a way that might break downstream users (whether or not they were doing the correct thing before) so that they can make appropriate adjustments before upgrading, especially when those changes are in a minor version.
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14546107#comment-14546107 ]

Sean Busbey commented on HDFS-8332:
-----------------------------------

+1 for trunk/branch-3 only. A Known Issue note in the next set of branch-2 release notes would be a nice-to-have as well.

{quote}
Also, I'd like to suggest that we change pre-commit to trigger hadoop-hdfs-httpfs tests automatically for all hadoop-hdfs patches. We've seen problems like this in the past. hadoop-hdfs-httpfs gets patched so infrequently that it's easy to miss it when a hadoop-hdfs change introduces a test failure. As a practical matter, we might not be able to add those tests until the current HDFS test runs get optimized.
{quote}

Leave a note on HADOOP-11929; [~aw] is already specifying that hadoop-hdfs needs to have hadoop-common built with native bits. Not sure whether expanding that to "always test this other module when this module changes" will be in scope there or will need a new ticket.
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545800#comment-14545800 ]

Chris Nauroth commented on HDFS-8332:
-------------------------------------

This is very strange. It appears that this only worked because the RPC proxy is still operable even after calling {{RPC#stopProxy}} inside {{DFSClient#closeConnectionToNamenode}}. This is not what I would have expected. I thought that this patch, by calling {{checkOpen}} consistently, just changed a failure to give a more descriptive error.

This is going to be a gray area for compatibility. Code that uses a {{FileSystem}} after closing it is incorrect code. Many operations already fail fast. We might be within the letter of the law for the compatibility policy by making this change, but there is an argument that callers could be dependent on the existing bug.

In this kind of situation, I like to consider if the risks outweigh the benefits. This change isn't an absolute requirement to fix a critical bug or ship a new feature. Considering that, I think a conservative approach would be to re-target this patch to trunk/3.0.0 and revert from branch-2. We can set the incompatible flag and enter a release note for 3.0.0 stating that callers who were dependent on the buggy behavior must fix their code when upgrading. What do others think of this?

Also, I'd like to suggest that we change pre-commit to trigger hadoop-hdfs-httpfs tests automatically for all hadoop-hdfs patches. We've seen problems like this in the past. hadoop-hdfs-httpfs gets patched so infrequently that it's easy to miss it when a hadoop-hdfs change introduces a test failure. As a practical matter, we might not be able to add those tests until the current HDFS test runs get optimized.
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14545850#comment-14545850 ]

Uma Maheswara Rao G commented on HDFS-8332:
-------------------------------------------

{quote}
In this kind of situation, I like to consider if the risks outweigh the benefits. This change isn't an absolute requirement to fix a critical bug or ship a new feature. Considering that, I think a conservative approach would be to re-target this patch to trunk/3.0.0 and revert from branch-2. We can set the incompatible flag and enter a release note for 3.0.0 stating that callers who were dependent on the buggy behavior must fix their code when upgrading. What do others think of this?
{quote}

Yes. The strange part here, I think, is that API calls keep working even after the stream is closed, so users may simply be using an already-closed fs for operations like {{setReplication}}; we noticed this from the test cases already. Since this is not such a critical issue, we can revert from branch-2. I am fine with that.

Even though the issue comes from wrong usage, some users might already have that wrong code in their apps, so upgrading would require a code change from those users. In this perspective we can mark this as an incompatible change. Let's revert from branch-2 and leave it in trunk. What do you think, Vinay/Rakesh?
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14533988#comment-14533988 ]

Uma Maheswara Rao G commented on HDFS-8332:
-------------------------------------------

+1, committing it.
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534054#comment-14534054 ]

Hudson commented on HDFS-8332:
------------------------------

FAILURE: Integrated in Hadoop-trunk-Commit #7771 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/7771/])
HDFS-8332. DFS client API calls should check filesystem closed. Contributed by Rakesh R. (umamahesh: rev e16f4b7f70b8675760cf5aaa471dfe29d48041e6)
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRollingUpgradeRollback.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDistributedFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534101#comment-14534101 ]

Rakesh R commented on HDFS-8332:
--------------------------------

Thank you [~umamaheswararao] for reviewing and committing the patch. Also, thank you [~ajisakaa] for the help.

    Fix For: 2.8.0
Attachments: HDFS-8332-000.patch, HDFS-8332-001.patch, HDFS-8332-002-Branch-2.patch, HDFS-8332-002.patch, HDFS-8332.001.branch-2.patch
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534127#comment-14534127 ]

Uma Maheswara Rao G commented on HDFS-8332:
-------------------------------------------

I missed your comment, [~ajisakaa]. I merged it to branch-2 as well. Thank you!
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534492#comment-14534492 ]

Hudson commented on HDFS-8332:
------------------------------

SUCCESS: Integrated in Hadoop-Yarn-trunk #921 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/921/])
HDFS-8332. DFS client API calls should check filesystem closed. Contributed by Rakesh R. (umamahesh: rev e16f4b7f70b8675760cf5aaa471dfe29d48041e6)
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534530#comment-14534530 ]

Hudson commented on HDFS-8332:
------------------------------

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #190 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/190/])
HDFS-8332. DFS client API calls should check filesystem closed. Contributed by Rakesh R. (umamahesh: rev e16f4b7f70b8675760cf5aaa471dfe29d48041e6)
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534688#comment-14534688 ]

Hudson commented on HDFS-8332:
------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk #2119 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2119/])
HDFS-8332. DFS client API calls should check filesystem closed. Contributed by Rakesh R. (umamahesh: rev e16f4b7f70b8675760cf5aaa471dfe29d48041e6)
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534789#comment-14534789 ]

Hudson commented on HDFS-8332:
------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #179 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/179/])
HDFS-8332. DFS client API calls should check filesystem closed. Contributed by Rakesh R. (umamahesh: rev e16f4b7f70b8675760cf5aaa471dfe29d48041e6)
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534824#comment-14534824 ]

Chris Nauroth commented on HDFS-8332:
-------------------------------------

Rakesh, thank you for the patch. Thanks also to Uma and Akira for finishing off the review and commit.
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534921#comment-14534921 ]

Hudson commented on HDFS-8332:
------------------------------

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2137 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2137/])
HDFS-8332. DFS client API calls should check filesystem closed. Contributed by Rakesh R. (umamahesh: rev e16f4b7f70b8675760cf5aaa471dfe29d48041e6)
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14534990#comment-14534990 ]

Hudson commented on HDFS-8332:
------------------------------

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #189 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/189/])
HDFS-8332. DFS client API calls should check filesystem closed. Contributed by Rakesh R. (umamahesh: rev e16f4b7f70b8675760cf5aaa471dfe29d48041e6)
[jira] [Commented] (HDFS-8332) DFS client API calls should check filesystem closed
[ https://issues.apache.org/jira/browse/HDFS-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14533877#comment-14533877 ]

Rakesh R commented on HDFS-8332:
--------------------------------

Jenkins complains about a few checkstyle issues, but those are unrelated to my patch. Kindly review. Thanks!

{code}
./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java:711:41: 'blocks' hides a field.
./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java:717: Line is longer than 80 characters (found 85).
./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java:1: File length is 3,218 lines (max allowed is 2,000).
./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java:1: File length is 3,241 lines (max allowed is 2,000).
{code}

     Labels: BB2015-05-RFC