[jira] [Created] (HDFS-11145) Implement getTrashRoot() for ViewFileSystem
Manoj Govindassamy created HDFS-11145:
-----------------------------------------

             Summary: Implement getTrashRoot() for ViewFileSystem
                 Key: HDFS-11145
                 URL: https://issues.apache.org/jira/browse/HDFS-11145
             Project: Hadoop HDFS
          Issue Type: Task
          Components: federation
    Affects Versions: 3.0.0-alpha1
            Reporter: Manoj Govindassamy
            Assignee: Manoj Govindassamy

ViewFileSystem does not yet have a custom implementation of FileSystem#getTrashRoot(Path), so irrespective of the Path passed in, it always returns the user-specific .Trash directory. ViewFileSystem should implement getTrashRoot(Path) and delegate the call to the respective mounted file system, which can then examine encryption zones or other criteria and return the proper trash directory.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
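The delegation described above can be modeled with a small, self-contained sketch. None of the class or method names below are Hadoop APIs; `MountedFs` and `ViewFsSketch` are invented for illustration. The idea is the one the issue asks for: resolve the longest matching mount point for the path, then ask that mounted file system for its trash root instead of unconditionally returning the user's .Trash.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model (not Hadoop code) of a mounted file system that knows its own
// trash root, which may be an encryption-zone-specific directory.
class MountedFs {
    final String mountPoint;   // e.g. "/user"
    final String trashRoot;    // what this mounted fs would return
    MountedFs(String mountPoint, String trashRoot) {
        this.mountPoint = mountPoint;
        this.trashRoot = trashRoot;
    }
}

// Toy model of the ViewFileSystem side: a mount table plus a
// getTrashRoot that delegates rather than answering itself.
class ViewFsSketch {
    private final List<MountedFs> mounts = new ArrayList<>();

    void addMount(MountedFs fs) { mounts.add(fs); }

    // Longest-prefix match, mirroring how a mount table resolves paths,
    // then delegation to the resolved mounted file system.
    String getTrashRoot(String path) {
        MountedFs best = null;
        for (MountedFs fs : mounts) {
            if (path.startsWith(fs.mountPoint)
                    && (best == null
                        || fs.mountPoint.length() > best.mountPoint.length())) {
                best = fs;
            }
        }
        if (best == null) {
            throw new IllegalArgumentException("not in mount table: " + path);
        }
        return best.trashRoot; // the mounted fs decides (EZ-aware, etc.)
    }
}
```

In real ViewFileSystem code the resolution step would go through the internal mount-table machinery rather than a hand-rolled prefix scan; the sketch only illustrates the delegation shape.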
[jira] [Created] (HDFS-11144) TestFileCreationDelete#testFileCreationDeleteParent fails with bind exception
Brahma Reddy Battula created HDFS-11144:
-------------------------------------------

             Summary: TestFileCreationDelete#testFileCreationDeleteParent fails with bind exception
                 Key: HDFS-11144
                 URL: https://issues.apache.org/jira/browse/HDFS-11144
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: test
            Reporter: Brahma Reddy Battula

{noformat}
java.net.BindException: Problem binding to [localhost:57908] java.net.BindException: Address already in use; For more details see:  http://wiki.apache.org/hadoop/BindException
	at sun.nio.ch.Net.bind0(Native Method)
	at sun.nio.ch.Net.bind(Net.java:433)
	at sun.nio.ch.Net.bind(Net.java:425)
	at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
	at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
	at org.apache.hadoop.ipc.Server.bind(Server.java:535)
	at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:919)
	at org.apache.hadoop.ipc.Server.<init>(Server.java:2667)
	at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:959)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:367)
	at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
	at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:801)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:434)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:796)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:723)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:916)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1633)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1263)
	at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1032)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:907)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:839)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:491)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:450)
	at org.apache.hadoop.hdfs.TestFileCreationDelete.testFileCreationDeleteParent(TestFileCreationDelete.java:77)
{noformat}

*Reference*
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/226/testReport/junit/org.apache.hadoop.hdfs/TestFileCreationDelete/testFileCreationDeleteParent/
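The usual remedy for this class of flakiness is to avoid hard-coded ports (here localhost:57908) and let the OS assign a free ephemeral port; MiniDFSCluster can typically be configured to do exactly that for the NameNode ports. A self-contained sketch of the underlying trick, with an invented class name:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.ServerSocket;

// Sketch: binding to port 0 asks the OS for any free ephemeral port,
// so the test never races another process for a fixed port number.
class EphemeralPortDemo {
    static int pickFreePort() {
        try (ServerSocket s = new ServerSocket(0)) { // 0 = OS-assigned port
            return s.getLocalPort();                 // the port actually bound
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Note there is still a small close-then-rebind race with this pattern; passing 0 directly to the server being started (rather than probing first) removes it entirely.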
[jira] [Resolved] (HDFS-10786) Erasure Coding: Add removeErasureCodingPolicy API
[ https://issues.apache.org/jira/browse/HDFS-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Wang resolved HDFS-10786.
--------------------------------
    Resolution: Duplicate

Closing since I think this is a dupe of HDFS-11072.

> Erasure Coding: Add removeErasureCodingPolicy API
> -------------------------------------------------
>
>                 Key: HDFS-10786
>                 URL: https://issues.apache.org/jira/browse/HDFS-10786
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Xinwei Qin
>              Labels: hdfs-ec-3.0-must-do
>
> HDFS-7859 added the addErasureCodingPolicy API for adding user-defined
> erasure coding policies, and as discussed in HDFS-7859, we should also add
> a removeErasureCodingPolicy API to support removing user-added erasure
> coding policies.
[jira] [Created] (HDFS-11143) start.sh doesn't return any error message even when the namenode is not up
Yufei Gu created HDFS-11143:
-------------------------------

             Summary: start.sh doesn't return any error message even when the namenode is not up
                 Key: HDFS-11143
                 URL: https://issues.apache.org/jira/browse/HDFS-11143
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Yufei Gu
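The issue reports that the start script exits quietly even when the NameNode never comes up. One hedged sketch of the kind of post-start liveness check a wrapper script could add (the host, port, and retry count are assumptions; NameNode RPC/HTTP ports vary by release and configuration):

```shell
# Sketch: poll a TCP port after running start.sh and fail loudly if
# nothing is listening. Uses bash's /dev/tcp pseudo-device.
wait_for_port() {
  host=$1; port=$2; retries=${3:-10}
  i=0
  while [ "$i" -lt "$retries" ]; do
    # Attempt a TCP connect; the subshell closes the fd on exit.
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0                      # something is listening
    fi
    i=$((i + 1)); sleep 1
  done
  echo "ERROR: nothing listening on $host:$port after $retries attempts" >&2
  return 1
}

# Example usage (port is an assumption, e.g. the NN web UI port):
#   ./start.sh && wait_for_port localhost 9870 30 || exit 1
```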
Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/156/

[Nov 14, 2016 3:09:39 PM] (brahma) HDFS-11135. The tests in TestBalancer run fails due to NPE. Contributed
[Nov 14, 2016 7:05:29 PM] (zhz) HDFS-10872. Add MutableRate metrics for FSNamesystemLock operations.
[Nov 14, 2016 8:20:50 PM] (jlowe) MAPREDUCE-6797. Job history server scans can become blocked on a single,
[Nov 15, 2016 3:38:10 AM] (liuml07) HADOOP-13810. Add a test to verify that Configuration handles &-encoded
[Nov 15, 2016 5:26:28 AM] (rohithsharmaks) YARN-5873. RM crashes with NPE if generic application history is
[Nov 15, 2016 5:28:25 AM] (rohithsharmaks) YARN-5874. RM -format-state-store and
[Nov 15, 2016 7:57:37 AM] (naganarasimha_gr) Reverted due to issue YARN-5765. Revert "YARN-5287.
[Nov 15, 2016 10:11:56 AM] (naganarasimha_gr) YARN-4355. NPE while processing localizer heartbeat. Contributed by

-1 overall

The following subsystems voted -1:
    compile unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc javac

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

   Failed junit tests:
      hadoop.crypto.key.kms.server.TestKMS
      hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
      hadoop.hdfs.TestFileAppend3
      hadoop.hdfs.web.TestWebHdfsTimeouts
      hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService
      hadoop.yarn.server.nodemanager.TestNodeManagerShutdown
      hadoop.yarn.server.timeline.TestRollingLevelDB
      hadoop.yarn.server.timeline.TestTimelineDataManager
      hadoop.yarn.server.timeline.TestLeveldbTimelineStore
      hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore
      hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore
      hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
      hadoop.yarn.server.timelineservice.storage.common.TestRowKeys
      hadoop.yarn.server.timelineservice.storage.common.TestKeyConverters
      hadoop.yarn.server.timelineservice.storage.common.TestSeparator
      hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore
      hadoop.yarn.server.resourcemanager.TestTokenClientRMService
      hadoop.yarn.server.resourcemanager.TestRMRestart
      hadoop.yarn.server.resourcemanager.TestResourceTrackerService
      hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
      hadoop.yarn.server.TestContainerManagerSecurity
      hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore
      hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient
      hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore
      hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageApps
      hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
      hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorageEntities
      hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun
      hadoop.yarn.server.timelineservice.storage.TestPhoenixOfflineAggregationWriterImpl
      hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
      hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity
      hadoop.yarn.applications.distributedshell.TestDistributedShell
      hadoop.mapred.TestShuffleHandler
      hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService
      hadoop.mapred.pipes.TestPipeApplication

   Timed out junit tests:
      org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache
      org.apache.hadoop.mapred.TestMRIntermediateDataEncryption

   compile:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/156/artifact/out/patch-compile-root.txt [168K]

   cc:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/156/artifact/out/patch-compile-root.txt [168K]

   javac:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/156/artifact/out/patch-compile-root.txt [168K]

   unit:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/156/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt [24K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/156/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [200K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/156/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [52K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/156/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt [52K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/156/artifact/out/patch-unit-hadoop-yarn-projec
[jira] [Created] (HDFS-11142) TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk
Yiqun Lin created HDFS-11142:
--------------------------------

             Summary: TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit fails in trunk
                 Key: HDFS-11142
                 URL: https://issues.apache.org/jira/browse/HDFS-11142
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Yiqun Lin
            Assignee: Yiqun Lin

The test {{TestLargeBlockReport#testBlockReportSucceedsWithLargerLengthLimit}} fails in trunk. Looking into this, it seems a long GC pause caused the DataNode to shut down unexpectedly during the large block report, after which an NPE was thrown in the test. The related output log:

{code}
2016-11-15 11:31:18,889 [DataNode: [[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1, [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2]] heartbeating to localhost/127.0.0.1:51450] INFO datanode.DataNode (BPServiceActor.java:blockReport(415)) - Successfully sent block report 0x2ae5dd91bec02273, containing 2 storage report(s), of which we sent 2. The reports had 0 total blocks and used 1 RPC(s). This took 0 msec to generate and 49 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2016-11-15 11:31:18,890 [DataNode: [[[DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data1, [DISK]file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2]] heartbeating to localhost/127.0.0.1:51450] INFO datanode.DataNode (BPOfferService.java:processCommandFromActive(696)) - Got finalize command for block pool BP-814229154-172.17.0.3-1479209475497
2016-11-15 11:31:24,026 [org.apache.hadoop.util.JvmPauseMonitor$Monitor@97e93f1] INFO util.JvmPauseMonitor (JvmPauseMonitor.java:run(205)) - Detected pause in JVM or host machine (eg GC): pause of approximately 4936ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=4194ms
GC pool 'PS Scavenge' had collection(s): count=1 time=765ms
2016-11-15 11:31:24,026 [org.apache.hadoop.util.JvmPauseMonitor$Monitor@5a4bef8] INFO util.JvmPauseMonitor (JvmPauseMonitor.java:run(205)) - Detected pause in JVM or host machine (eg GC): pause of approximately 4898ms
GC pool 'PS MarkSweep' had collection(s): count=1 time=4194ms
GC pool 'PS Scavenge' had collection(s): count=1 time=765ms
2016-11-15 11:31:24,114 [main] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(1943)) - Shutting down the Mini HDFS Cluster
2016-11-15 11:31:24,114 [main] INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdownDataNodes(1983)) - Shutting down DataNode 0
{code}

The stack info:
{code}
java.lang.NullPointerException: null
	at org.apache.hadoop.hdfs.server.datanode.TestLargeBlockReport.testBlockReportSucceedsWithLargerLengthLimit(TestLargeBlockReport.java:97)
{code}
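The JvmPauseMonitor lines in that log come from a simple technique: sleep for a fixed interval and treat a large overshoot as a JVM or host pause (e.g. GC). A minimal, self-contained sketch of that detection idea follows; `PauseProbe` and its method name are invented for illustration, not Hadoop's implementation.

```java
// Toy pause detector: if Thread.sleep(interval) takes much longer than
// the interval, the JVM (or host) was likely paused, e.g. by a stop-the-world GC.
class PauseProbe {
    static long overshootMillis(long intervalMs) {
        long start = System.nanoTime();
        try {
            Thread.sleep(intervalMs);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // preserve interrupt status
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        return Math.max(0, elapsedMs - intervalMs); // overshoot ≈ pause time
    }
}
```

A monitor thread running this in a loop and logging whenever the overshoot crosses a threshold is, in spirit, what produced the "Detected pause in JVM or host machine" messages above.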
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/226/

[Nov 14, 2016 3:09:39 PM] (brahma) HDFS-11135. The tests in TestBalancer run fails due to NPE. Contributed
[Nov 14, 2016 7:05:29 PM] (zhz) HDFS-10872. Add MutableRate metrics for FSNamesystemLock operations.
[Nov 14, 2016 8:20:50 PM] (jlowe) MAPREDUCE-6797. Job history server scans can become blocked on a single,
[Nov 15, 2016 3:38:10 AM] (liuml07) HADOOP-13810. Add a test to verify that Configuration handles &-encoded
[Nov 15, 2016 5:26:28 AM] (rohithsharmaks) YARN-5873. RM crashes with NPE if generic application history is
[Nov 15, 2016 5:28:25 AM] (rohithsharmaks) YARN-5874. RM -format-state-store and

-1 overall

The following subsystems voted -1:
    asflicense findbugs unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

   Failed junit tests:
      hadoop.ha.TestZKFailoverController
      hadoop.crypto.key.kms.server.TestKMS
      hadoop.hdfs.TestFileCreationDelete
      hadoop.hdfs.TestCrcCorruption
      hadoop.mapred.pipes.TestPipeApplication
      hadoop.yarn.server.resourcemanager.TestTokenClientRMService
      hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
      hadoop.yarn.server.TestContainerManagerSecurity

   cc:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/226/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/226/artifact/out/diff-compile-javac-root.txt [164K]

   checkstyle:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/226/artifact/out/diff-checkstyle-root.txt [16M]

   pylint:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/226/artifact/out/diff-patch-pylint.txt [20K]

   shellcheck:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/226/artifact/out/diff-patch-shellcheck.txt [28K]

   shelldocs:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/226/artifact/out/diff-patch-shelldocs.txt [16K]

   whitespace:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/226/artifact/out/whitespace-eol.txt [11M]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/226/artifact/out/whitespace-tabs.txt [1.3M]

   findbugs:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/226/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [4.0K]

   javadoc:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/226/artifact/out/diff-javadoc-javadoc-root.txt [2.2M]

   unit:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/226/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [124K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/226/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt [24K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/226/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [460K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/226/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [92K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/226/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt [124K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/226/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [72K]
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/226/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [316K]

   asflicense:
      https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/226/artifact/out/patch-asflicense-problems.txt [4.0K]

Powered by Apache Yetus 0.4.0-SNAPSHOT
http://yetus.apache.org
[jira] [Created] (HDFS-11141) [viewfs] Listfile gives complete Realm as User
Archana T created HDFS-11141:
--------------------------------

             Summary: [viewfs] Listfile gives complete Realm as User
                 Key: HDFS-11141
                 URL: https://issues.apache.org/jira/browse/HDFS-11141
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: federation
            Reporter: Archana T
            Priority: Minor

When defaultFS is configured as viewfs --
fs.defaultFS
viewfs://CLUSTER/

listing files shows the full realm as the user --
hdfs dfs -ls /
Found 2 items
-r-xr-xr-x   - {color:red} h...@hadoop.com {color} hadoop          0 2016-11-07 15:31 /Dir1
-r-xr-xr-x   - {color:red} h...@hadoop.com {color} hadoop          0 2016-11-07 15:31 /Dir2
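If the full principal is leaking through because no principal-shortening rule matches, one possible mitigation (a workaround sketch, not necessarily the fix for this issue) is an explicit hadoop.security.auth_to_local rule in core-site.xml. The realm below is an assumption based on the redacted example output; substitute the cluster's actual Kerberos realm.

```xml
<!-- Hypothetical sketch: map principals in the (assumed) HADOOP.COM realm
     to their short names so listings show e.g. "hdfs" instead of the
     full principal. Adjust the realm to match the deployment. -->
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[1:$1@$0](.*@HADOOP.COM)s/@.*//
    RULE:[2:$1@$0](.*@HADOOP.COM)s/@.*//
    DEFAULT
  </value>
</property>
```

Note that if plain hdfs:// paths already show the short name while only viewfs:// shows the full principal, the root cause is more likely in ViewFileSystem's handling of the user name than in the mapping rules.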