[ https://issues.apache.org/jira/browse/HDFS-9491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15042448#comment-15042448 ]
Tony Wu commented on HDFS-9491:
-------------------------------

Hi [~eddyxu],

I just ran {{TestDecommission}} with the patch on JDK 8 on OS X, and the test passes without error. The {{TestDecommission}} failure is not related to the patch. The error log from Jenkins is below:

{code}
Tests run: 19, Failures: 0, Errors: 1, Skipped: 1, Time elapsed: 160.801 sec <<< FAILURE! - in org.apache.hadoop.hdfs.TestDecommission
testDecommissionWithOpenfile(org.apache.hadoop.hdfs.TestDecommission)  Time elapsed: 1.753 sec  <<< ERROR!
java.lang.RuntimeException: Error while running command to get file permissions : ExitCodeException exitCode=127: /bin/ls: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Permission denied
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:927)
	at org.apache.hadoop.util.Shell.run(Shell.java:838)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1117)
	at org.apache.hadoop.util.Shell.execCommand(Shell.java:1211)
	at org.apache.hadoop.util.Shell.execCommand(Shell.java:1193)
	at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1081)
	at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:702)
	at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:677)
	at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:155)
	at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:172)
	at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2459)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2501)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2484)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2376)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:844)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
	at org.apache.hadoop.hdfs.TestDecommission.startCluster(TestDecommission.java:334)
	at org.apache.hadoop.hdfs.TestDecommission.testDecommissionWithOpenfile(TestDecommission.java:830)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
	at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:742)
	at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:677)
	at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:155)
	at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:172)
	at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2459)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2501)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2484)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2376)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:844)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
	at org.apache.hadoop.hdfs.TestDecommission.startCluster(TestDecommission.java:334)
	at org.apache.hadoop.hdfs.TestDecommission.testDecommissionWithOpenfile(TestDecommission.java:830)
{code}

The test case in {{TestDecommission}} that uses the changed API is {{testDecommissionOnStandby}}, which passes in the Hadoop QA Jenkins run. There seems to be a system issue with the Jenkins servers.

Thanks,
Tony

> Tests should get the number of pending async deletions via FsDatasetTestUtils
> -----------------------------------------------------------------------------
>
>                 Key: HDFS-9491
>                 URL: https://issues.apache.org/jira/browse/HDFS-9491
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: test
>    Affects Versions: 2.7.1
>            Reporter: Tony Wu
>            Assignee: Tony Wu
>            Priority: Minor
>         Attachments: HDFS-9491.001.patch, HDFS-9491.002.patch
>
> A few unit tests use {{DataNodeTestUtils#getPendingAsyncDeletions}} to retrieve the number of pending async deletions. It internally calls {{FsDatasetTestUtil#getPendingAsyncDeletions}}:
> {code:java}
>   public static long getPendingAsyncDeletions(FsDatasetSpi<?> fsd) {
>     return ((FsDatasetImpl)fsd).asyncDiskService.countPendingDeletions();
>   }
> {code}
> This assumes {{FsDatasetImpl}} is (the only implementation of) {{FsDataset}}. However, {{FsDataset}} is pluggable and can have other implementations.
> We can abstract getting the number of pending async deletions in {{FsDatasetTestUtils}}.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
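The abstraction proposed in the issue description could look roughly like the sketch below. This is not the actual patch; it uses simplified stand-in types (the real {{FsDatasetSpi}} and {{FsDatasetTestUtils}} interfaces in Hadoop are much larger, and the class {{PendingDeletionsSketch}} is invented for illustration). The point is that the unsafe cast to {{FsDatasetImpl}} moves into one implementation-specific place, so tests only see the {{FsDatasetTestUtils}} interface and other {{FsDataset}} implementations can supply their own counting logic.

```java
// Stand-in for the pluggable DataNode storage SPI (simplified).
interface FsDatasetSpi {}

// Stand-in for the default implementation, which tracks pending
// deletions in its async disk service (modeled here as a plain field).
class FsDatasetImpl implements FsDatasetSpi {
    private final long pendingDeletions;

    FsDatasetImpl(long pendingDeletions) {
        this.pendingDeletions = pendingDeletions;
    }

    long countPendingDeletions() {
        return pendingDeletions;
    }
}

// Per-implementation test utilities: each FsDataset implementation
// backs this interface with its own internals, so tests never cast.
interface FsDatasetTestUtils {
    long getPendingAsyncDeletions();
}

class FsDatasetImplTestUtils implements FsDatasetTestUtils {
    private final FsDatasetImpl dataset;

    FsDatasetImplTestUtils(FsDatasetSpi fsd) {
        // The cast now lives in exactly one implementation-specific place.
        this.dataset = (FsDatasetImpl) fsd;
    }

    @Override
    public long getPendingAsyncDeletions() {
        return dataset.countPendingDeletions();
    }
}

public class PendingDeletionsSketch {
    public static void main(String[] args) {
        FsDatasetSpi fsd = new FsDatasetImpl(3);
        FsDatasetTestUtils utils = new FsDatasetImplTestUtils(fsd);
        System.out.println(utils.getPendingAsyncDeletions()); // prints 3
    }
}
```

A test would then ask its {{FsDatasetTestUtils}} instance for the count instead of reaching through {{FsDatasetImpl}} internals, which is what lets alternative dataset implementations keep the same tests working.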