[jira] [Created] (HDDS-418) Use a TimeStamp based counter to track when to close openKeys
Anu Engineer created HDDS-418:
---------------------------------

             Summary: Use a TimeStamp based counter to track when to close openKeys
                 Key: HDDS-418
                 URL: https://issues.apache.org/jira/browse/HDDS-418
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
            Reporter: Anu Engineer

This patch proposes to close open keys that have been open for more than 7 days, using an efficient timestamp-based mechanism.
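For context, one common shape for such a mechanism is to bucket open keys by creation day, so the expiry scan only visits buckets older than the threshold rather than every open key. A minimal sketch follows; the class and method names are illustrative, not from any patch:

{code:java}
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentSkipListMap;

/**
 * Sketch of a timestamp-bucketed open-key tracker. Keys are grouped by the
 * epoch-day on which they were opened, so expiry touches only old buckets.
 */
public class OpenKeyTracker {
  private static final Duration EXPIRY = Duration.ofDays(7);
  private static final long SECONDS_PER_DAY = 86400;

  // epoch-day -> keys opened on that day; sorted, so old buckets come first.
  private final ConcurrentSkipListMap<Long, Set<String>> buckets =
      new ConcurrentSkipListMap<>();

  public void keyOpened(String keyName) {
    long day = Instant.now().getEpochSecond() / SECONDS_PER_DAY;
    buckets.computeIfAbsent(day, d -> ConcurrentHashMap.newKeySet())
        .add(keyName);
  }

  public void keyCommitted(String keyName) {
    // Linear in the number of day-buckets, which stays small; fine for a sketch.
    buckets.values().forEach(keys -> keys.remove(keyName));
  }

  /** Returns (and drops) all keys that have been open longer than EXPIRY. */
  public Set<String> expireOldKeys() {
    long cutoffDay =
        Instant.now().minus(EXPIRY).getEpochSecond() / SECONDS_PER_DAY;
    Set<String> expired = ConcurrentHashMap.newKeySet();
    // headMap(cutoffDay, true) is a live view of every bucket at or before
    // the cutoff; clearing it removes those buckets from the backing map.
    Map<Long, Set<String>> old = buckets.headMap(cutoffDay, true);
    old.values().forEach(expired::addAll);
    old.clear();
    return expired;
  }
}
{code}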
[jira] [Resolved] (HDDS-412) OzoneCLI: clean up ozone oz help messages
[ https://issues.apache.org/jira/browse/HDDS-412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anu Engineer resolved HDDS-412.
-------------------------------
    Resolution: Implemented

> OzoneCLI: clean up ozone oz help messages
> ------------------------------------------
>
>                 Key: HDDS-412
>                 URL: https://issues.apache.org/jira/browse/HDDS-412
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone CLI
>            Reporter: Xiaoyu Yao
>            Assignee: Dinesh Chitlangia
>            Priority: Major
>              Labels: newbie
>             Fix For: 0.3.0
>
> hdfs o3 should be ozone oz
>
> {code}
> -createVolume    creates a volume for the specified user.
>                  For example : hdfs o3 -createVolume -root -user
> {code}
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/888/

[Sep 5, 2018 4:53:42 AM] (xiao) HDFS-13812. Fix the inconsistent default refresh interval on Caching
[Sep 5, 2018 5:56:57 AM] (xyao) HDDS-268. Add SCM close container watcher. Contributed by Ajay Kumar.
[Sep 5, 2018 10:41:06 AM] (elek) HDDS-315. ozoneShell infoKey does not work for directories created as
[Sep 5, 2018 12:31:36 PM] (elek) HDDS-333. Create an Ozone Logo. Contributed by Priyanka Nagwekar.
[Sep 5, 2018 12:47:54 PM] (skumpf) YARN-8638. Allow linux container runtimes to be pluggable. Contributed
[Sep 5, 2018 1:05:33 PM] (nanda) HDDS-358. Use DBStore and TableStore for DeleteKeyService. Contributed
[Sep 5, 2018 3:33:27 PM] (yqlin) HDFS-13815. RBF: Add check to order command. Contributed by Ranith
[Sep 5, 2018 4:52:35 PM] (weichiu) HADOOP-15696. KMS performance regression due to too many open file
[Sep 5, 2018 5:50:25 PM] (gifuma) HADOOP-15707. Add IsActiveServlet to be used for Load Balancers.
[Sep 5, 2018 7:26:37 PM] (xyao) HDDS-303. Removing logic to identify containers to be closed from SCM.

-1 overall

The following subsystems voted -1:
    asflicense findbugs pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    XML :
        Parsing Error(s):
            hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

    FindBugs :
        module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
        Unread field: FSBasedSubmarineStorageImpl.java:[line 39]
        Found reliance on default encoding in org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component): new java.io.FileWriter(File) At YarnServiceJobSubmitter.java:[line 192]
        org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component) may fail to clean up java.io.Writer on checked exception; obligation to clean up resource created at YarnServiceJobSubmitter.java:[line 192] is not discharged
        org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String, int, String) concatenates strings using + in a loop At YarnServiceUtils.java:[line 72]

    Failed CTEST tests :
        test_test_libhdfs_threaded_hdfs_static
        test_libhdfs_threaded_hdfspp_test_shim_static

    Failed junit tests :
        hadoop.hdfs.client.impl.TestBlockReaderLocal
        hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
        hadoop.hdfs.web.TestWebHdfsTimeouts
        hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
        hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart

    cc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/888/artifact/out/diff-compile-cc-root.txt [4.0K]
    javac:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/888/artifact/out/diff-compile-javac-root.txt [328K]
    checkstyle:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/888/artifact/out/diff-checkstyle-root.txt [17M]
    pathlen:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/888/artifact/out/pathlen.txt [12K]
    pylint:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/888/artifact/out/diff-patch-pylint.txt [24K]
    shellcheck:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/888/artifact/out/diff-patch-shellcheck.txt [20K]
    shelldocs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/888/artifact/out/diff-patch-shelldocs.txt [16K]
    whitespace:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/888/artifact/out/whitespace-eol.txt [9.4M]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/888/artifact/out/whitespace-tabs.txt [1.1M]
    xml:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/888/artifact/out/xml.txt [4.0K]
    findbugs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/888/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine-warnings.html [12K]
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/

[Sep 2, 2018 8:05:52 AM] (bibinchundatt) YARN-8535. Fix DistributedShell unit tests. Contributed by Abhishek
[Sep 2, 2018 6:47:32 PM] (aengineer) HDDS-357. Use DBStore and TableStore for OzoneManager non-background

-1 overall

The following subsystems voted -1:
    asflicense findbugs pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    XML :
        Parsing Error(s):
            hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

    FindBugs :
        module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
        Unread field: FSBasedSubmarineStorageImpl.java:[line 39]
        Found reliance on default encoding in org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component): new java.io.FileWriter(File) At YarnServiceJobSubmitter.java:[line 192]
        org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component) may fail to clean up java.io.Writer on checked exception; obligation to clean up resource created at YarnServiceJobSubmitter.java:[line 192] is not discharged
        org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String, int, String) concatenates strings using + in a loop At YarnServiceUtils.java:[line 72]

    Failed CTEST tests :
        test_test_libhdfs_threaded_hdfs_static
        test_libhdfs_threaded_hdfspp_test_shim_static

    Failed junit tests :
        hadoop.security.TestRaceWhenRelogin
        hadoop.hdfs.TestLeaseRecovery2
        hadoop.hdfs.client.impl.TestBlockReaderLocal
        hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
        hadoop.hdfs.web.TestWebHdfsTimeouts
        hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
        hadoop.yarn.client.api.impl.TestAMRMProxy

    cc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/diff-compile-cc-root.txt [4.0K]
    javac:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/diff-compile-javac-root.txt [328K]
    checkstyle:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/diff-checkstyle-root.txt [17M]
    pathlen:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/pathlen.txt [12K]
    pylint:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/diff-patch-pylint.txt [24K]
    shellcheck:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/diff-patch-shellcheck.txt [20K]
    shelldocs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/diff-patch-shelldocs.txt [16K]
    whitespace:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/whitespace-eol.txt [9.4M]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/whitespace-tabs.txt [1.1M]
    xml:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/xml.txt [4.0K]
    findbugs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine-warnings.html [12K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/branch-findbugs-hadoop-hdds_client.txt [40K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt [52K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/branch-findbugs-hadoop-hdds_framework.txt [8.0K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt [52K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/branch-findbugs-hadoop-hdds_tools.txt [24K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/885/artifact/out/branch-findbugs-hadoop-ozone_client.txt [4.0K]
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/

[Sep 3, 2018 6:56:34 AM] (msingh) HDDS-263. Add retries in Ozone Client to handle BlockNotCommitted
[Sep 3, 2018 8:58:31 AM] (vinayakumarb) HDFS-13867. RBF: Add validation for max arguments for Router admin ls,
[Sep 3, 2018 9:07:57 AM] (vinayakumarb) HDFS-13774. EC: 'hdfs ec -getPolicy' is not retrieving policy details
[Sep 3, 2018 11:32:55 AM] (elek) HDDS-336. Print out container location information for a specific ozone
[Sep 3, 2018 2:44:45 PM] (nanda) HDDS-343. Containers are stuck in closing state in scm. Contributed by

-1 overall

The following subsystems voted -1:
    asflicense findbugs pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    XML :
        Parsing Error(s):
            hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

    FindBugs :
        module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
        Unread field: FSBasedSubmarineStorageImpl.java:[line 39]
        Found reliance on default encoding in org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component): new java.io.FileWriter(File) At YarnServiceJobSubmitter.java:[line 192]
        org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component) may fail to clean up java.io.Writer on checked exception; obligation to clean up resource created at YarnServiceJobSubmitter.java:[line 192] is not discharged
        org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String, int, String) concatenates strings using + in a loop At YarnServiceUtils.java:[line 72]

    Failed CTEST tests :
        test_test_libhdfs_threaded_hdfs_static
        test_libhdfs_threaded_hdfspp_test_shim_static

    Failed junit tests :
        hadoop.hdfs.TestLeaseRecovery2
        hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
        hadoop.hdfs.web.TestWebHdfsTimeouts
        hadoop.yarn.client.api.impl.TestAMRMClient
        hadoop.yarn.sls.TestSLSRunner

    cc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/diff-compile-cc-root.txt [4.0K]
    javac:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/diff-compile-javac-root.txt [328K]
    checkstyle:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/diff-checkstyle-root.txt [17M]
    pathlen:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/pathlen.txt [12K]
    pylint:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/diff-patch-pylint.txt [24K]
    shellcheck:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/diff-patch-shellcheck.txt [20K]
    shelldocs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/diff-patch-shelldocs.txt [16K]
    whitespace:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/whitespace-eol.txt [9.4M]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/whitespace-tabs.txt [1.1M]
    xml:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/xml.txt [4.0K]
    findbugs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine-warnings.html [12K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/branch-findbugs-hadoop-hdds_client.txt [36K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt [52K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/branch-findbugs-hadoop-hdds_framework.txt [8.0K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt [52K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/886/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
[jira] [Created] (HDDS-417) Ambiguous error message when using genconf tool
Dinesh Chitlangia created HDDS-417:
--------------------------------------

             Summary: Ambiguous error message when using genconf tool
                 Key: HDDS-417
                 URL: https://issues.apache.org/jira/browse/HDDS-417
             Project: Hadoop Distributed Data Store
          Issue Type: Improvement
          Components: Tools
            Reporter: Dinesh Chitlangia
            Assignee: Dinesh Chitlangia
             Fix For: 0.2.1

When using the genconf tool and specifying the output path as a file name, an ambiguous error message is thrown:

{noformat}
aengineer@alpha ~/t/o/bin> ./ozone genconf -output /Users/aengineer/ozone-site.xml
Invalid path or insufficient permission
{noformat}
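One way to disambiguate is to check each failure mode separately before writing. A minimal sketch of such validation, with names that are illustrative rather than the actual genconf code:

{code:java}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Illustrative only: distinguishing the failure modes that the single
// "Invalid path or insufficient permission" message conflates.
public class GenconfPathCheck {
  static void validateOutputDir(String arg) {
    Path path = Paths.get(arg);
    if (!Files.exists(path)) {
      throw new IllegalArgumentException("Path does not exist: " + arg);
    }
    if (!Files.isDirectory(path)) {
      throw new IllegalArgumentException(
          "Path points to a file, expected a directory: " + arg);
    }
    if (!Files.isWritable(path)) {
      throw new IllegalArgumentException(
          "Insufficient permission to write to: " + arg);
    }
  }

  public static void main(String[] args) {
    validateOutputDir(args.length > 0 ? args[0] : "/tmp");
  }
}
{code}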
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/

[Sep 4, 2018 5:37:37 AM] (xiao) HDFS-13885. Add debug logs in dfsclient around decrypting EDEK.
[Sep 4, 2018 3:46:12 PM] (stevel) HADOOP-10219. ipc.Client.setupIOstreams() needs to check for
[Sep 4, 2018 6:11:50 PM] (nanda) HDDS-75. Support for CopyContainer. Contributed by Elek, Marton.
[Sep 4, 2018 6:41:07 PM] (nanda) HDDS-98. Adding Ozone Manager Audit Log. Contributed by Dinesh
[Sep 4, 2018 7:17:17 PM] (inigoiri) HDFS-13857. RBF: Choose to enable the default nameservice to read/write
[Sep 4, 2018 9:57:54 PM] (hanishakoneru) HDDS-369. Remove the containers of a dead node from the container state
[Sep 4, 2018 11:27:31 PM] (aengineer) HDDS-396. Remove openContainers.db from SCM. Contributed by Dinesh
[Sep 5, 2018 12:10:44 AM] (szetszwo) HDDS-383. Ozone Client should discard preallocated blocks from closed

-1 overall

The following subsystems voted -1:
    asflicense findbugs pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    XML :
        Parsing Error(s):
            hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

    FindBugs :
        module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
        Unread field: FSBasedSubmarineStorageImpl.java:[line 39]
        Found reliance on default encoding in org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component): new java.io.FileWriter(File) At YarnServiceJobSubmitter.java:[line 192]
        org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component) may fail to clean up java.io.Writer on checked exception; obligation to clean up resource created at YarnServiceJobSubmitter.java:[line 192] is not discharged
        org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String, int, String) concatenates strings using + in a loop At YarnServiceUtils.java:[line 72]

    Failed CTEST tests :
        test_test_libhdfs_threaded_hdfs_static
        test_libhdfs_threaded_hdfspp_test_shim_static

    Failed junit tests :
        hadoop.hdfs.TestLeaseRecovery2
        hadoop.hdfs.client.impl.TestBlockReaderLocal
        hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart
        hadoop.yarn.server.resourcemanager.scheduler.constraint.TestPlacementProcessor
        hadoop.yarn.service.TestServiceAM

    cc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/diff-compile-cc-root.txt [4.0K]
    javac:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/diff-compile-javac-root.txt [328K]
    checkstyle:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/diff-checkstyle-root.txt [17M]
    pathlen:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/pathlen.txt [12K]
    pylint:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/diff-patch-pylint.txt [24K]
    shellcheck:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/diff-patch-shellcheck.txt [20K]
    shelldocs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/diff-patch-shelldocs.txt [16K]
    whitespace:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/whitespace-eol.txt [9.4M]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/whitespace-tabs.txt [1.1M]
    xml:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/xml.txt [4.0K]
    findbugs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine-warnings.html [12K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/branch-findbugs-hadoop-hdds_client.txt [68K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/887/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt [60K]
[jira] [Created] (HDDS-416) Fix bug in ChunkInputStreamEntry
Lokesh Jain created HDDS-416:
--------------------------------

             Summary: Fix bug in ChunkInputStreamEntry
                 Key: HDDS-416
                 URL: https://issues.apache.org/jira/browse/HDDS-416
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
          Components: Ozone Client
            Reporter: Lokesh Jain
            Assignee: Lokesh Jain
             Fix For: 0.2.1

ChunkInputStreamEntry maintains a currentPosition field. This field is redundant and can be replaced by getPos().
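A sketch of the shape of this cleanup (class and field names here are illustrative, not the actual Ozone client code): the wrapped stream already tracks its own offset, so the entry can delegate instead of duplicating the state.

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.Seekable;

// Hedged sketch: the entry kept its own currentPosition counter alongside a
// wrapped stream that already knows its offset. Delegating to the wrapped
// stream removes the duplicated, drift-prone state.
class StreamEntrySketch {
  private final Seekable stream; // the wrapped chunk stream tracks position
  // private long currentPosition;  <-- redundant field removed

  StreamEntrySketch(Seekable stream) {
    this.stream = stream;
  }

  long getPos() throws IOException {
    // One source of truth: no counter to keep in sync on read/seek paths.
    return stream.getPos();
  }
}
{code}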
[jira] [Created] (HDDS-415) bin/ozone om with incorrect argument first logs all the STARTUP_MSG
Namit Maheshwari created HDDS-415:
-------------------------------------

             Summary: bin/ozone om with incorrect argument first logs all the STARTUP_MSG
                 Key: HDDS-415
                 URL: https://issues.apache.org/jira/browse/HDDS-415
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
            Reporter: Namit Maheshwari

When bin/ozone om is invoked with an incorrect argument, it first logs the entire STARTUP_MSG before reporting the error:

{code:java}
➜ ozone-0.2.1-SNAPSHOT bin/ozone om -hgfj
2018-09-07 12:56:12,391 [main] INFO - STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting OzoneManager
STARTUP_MSG:   host = HW11469.local/10.22.16.67
STARTUP_MSG:   args = [-hgfj]
STARTUP_MSG:   version = 3.2.0-SNAPSHOT
STARTUP_MSG:   classpath =
{code}
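The usual fix is to validate the arguments before the startup banner is emitted. An illustrative sketch (not the actual OzoneManager code; the option list is a placeholder):

{code:java}
import java.util.Arrays;
import java.util.List;

// Illustrative sketch: validate CLI arguments before printing the startup
// banner, so an unknown flag yields only a usage message.
public class OmArgsSketch {
  private static final List<String> VALID = Arrays.asList("--help", "--init");

  public static void main(String[] args) {
    for (String arg : args) {
      if (!VALID.contains(arg)) {
        System.err.println("Unknown argument: " + arg);
        System.err.println("Usage: ozone om [--help | --init]");
        System.exit(1); // exit before any STARTUP_MSG logging happens
      }
    }
    // Only a valid invocation reaches the banner and real initialization.
    System.out.println("STARTUP_MSG: Starting OzoneManager");
  }
}
{code}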
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/

[Sep 6, 2018 3:53:21 AM] (vrushali) HADOOP-15657 Registering MutableQuantiles via Metric annotation.
[Sep 6, 2018 11:16:54 AM] (elek) HDDS-404. Implement toString() in OmKeyLocationInfo. Contributed by
[Sep 6, 2018 7:13:29 PM] (jlowe) MAPREDUCE-7131. Job History Server has race condition where it moves
[Sep 6, 2018 7:44:08 PM] (aengineer) HDDS-405. User/volume mapping is not cleaned up during the deletion of
[Sep 6, 2018 9:35:07 PM] (szetszwo) HDDS-297. Add pipeline actions in Ozone. Contributed by Mukul Kumar
[Sep 6, 2018 9:48:00 PM] (gifuma) HDFS-13695. Move logging to slf4j in HDFS package. Contributed by Ian
[Sep 6, 2018 10:09:21 PM] (aengineer) HDDS-406. Enable acceptace test of the putKey for rpc protocol.
[Sep 6, 2018 11:47:54 PM] (inigoiri) HDFS-13836. RBF: Handle mount table znode with null value. Contributed
[Sep 6, 2018 11:58:15 PM] (xyao) HDDS-397. Handle deletion for keys with no blocks. Contributed by Lokesh

-1 overall

The following subsystems voted -1:
    asflicense findbugs pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    XML :
        Parsing Error(s):
            hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

    FindBugs :
        module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine
        Unread field: FSBasedSubmarineStorageImpl.java:[line 39]
        Found reliance on default encoding in org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component): new java.io.FileWriter(File) At YarnServiceJobSubmitter.java:[line 192]
        org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceJobSubmitter.generateCommandLaunchScript(RunJobParameters, TaskType, Component) may fail to clean up java.io.Writer on checked exception; obligation to clean up resource created at YarnServiceJobSubmitter.java:[line 192] is not discharged
        org.apache.hadoop.yarn.submarine.runtimes.yarnservice.YarnServiceUtils.getComponentArrayJson(String, int, String) concatenates strings using + in a loop At YarnServiceUtils.java:[line 72]

    Failed CTEST tests :
        test_test_libhdfs_threaded_hdfs_static
        test_libhdfs_threaded_hdfspp_test_shim_static

    Failed junit tests :
        hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy
        hadoop.hdfs.TestLeaseRecovery2
        hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure
        hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher
        hadoop.yarn.client.api.impl.TestAMRMProxy

    cc:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/artifact/out/diff-compile-cc-root.txt [4.0K]
    javac:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/artifact/out/diff-compile-javac-root.txt [304K]
    checkstyle:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/artifact/out/diff-checkstyle-root.txt [17M]
    pathlen:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/artifact/out/pathlen.txt [12K]
    pylint:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/artifact/out/diff-patch-pylint.txt [24K]
    shellcheck:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/artifact/out/diff-patch-shellcheck.txt [20K]
    shelldocs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/artifact/out/diff-patch-shelldocs.txt [16K]
    whitespace:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/artifact/out/whitespace-eol.txt [9.4M]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/artifact/out/whitespace-tabs.txt [1.1M]
    xml:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/artifact/out/xml.txt [4.0K]
    findbugs:
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine-warnings.html [12K]
        https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/889/artifact/out/branch-findbugs-hadoop-hdds_client.txt [68K]
[jira] [Created] (HDDS-414) sbin/stop-all.sh does not stop Ozone daemons
Namit Maheshwari created HDDS-414:
-------------------------------------

             Summary: sbin/stop-all.sh does not stop Ozone daemons
                 Key: HDDS-414
                 URL: https://issues.apache.org/jira/browse/HDDS-414
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
            Reporter: Namit Maheshwari

sbin/stop-all.sh does not stop the Ozone daemons. Please see below:

{code:java}
➜ ozone-0.2.1-SNAPSHOT jps
8896 Jps
8224 HddsDatanodeService
8162 OzoneManager
7701 StorageContainerManager
➜ ozone-0.2.1-SNAPSHOT pwd
/tmp/ozone-0.2.1-SNAPSHOT
➜ ozone-0.2.1-SNAPSHOT sbin/stop-all.sh
WARNING: Stopping all Apache Hadoop daemons as nmaheshwari in 10 seconds.
WARNING: Use CTRL-C to abort.
Stopping namenodes on [localhost]
localhost: ssh: connect to host localhost port 22: Connection refused
Stopping datanodes
localhost: ssh: connect to host localhost port 22: Connection refused
Stopping secondary namenodes [HW11469.local]
HW11469.local: ssh: connect to host hw11469.local port 22: Connection refused
2018-09-07 12:38:49,044 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
➜ ozone-0.2.1-SNAPSHOT jps
8224 HddsDatanodeService
8162 OzoneManager
7701 StorageContainerManager
9150 Jps
➜ ozone-0.2.1-SNAPSHOT
{code}

The Ozone daemon processes are not stopped even after sbin/stop-all.sh finishes executing.
[jira] [Created] (HDDS-413) Ozone freon help needs the Scm and OM running
Namit Maheshwari created HDDS-413:
-------------------------------------

             Summary: Ozone freon help needs the Scm and OM running
                 Key: HDDS-413
                 URL: https://issues.apache.org/jira/browse/HDDS-413
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
            Reporter: Namit Maheshwari

Running ozone freon --help requires SCM and OM to be running:

{code:java}
./ozone freon --help
2018-09-07 12:23:28,983 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-09-07 12:23:30,203 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9862. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2018-09-07 12:23:31,204 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9862. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
^C⏎
HW11767 ~/t/o/bin> jps
52445
86095 Jps
{code}

If SCM and OM are running, freon help works fine:

{code:java}
HW11767 ~/t/o/bin> ./ozone freon --help
2018-09-07 12:30:18,535 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Options supported are:
-numOfThreads    number of threads to be launched for the run.
-validateWrites  do random validation of data written into ozone, only subset of data is validated.
-jsonDir         directory where json is created.
-mode [online | offline]  specifies the mode in which Freon should run.
-source          specifies the URL of s3 commoncrawl warc file to be used when the mode is online.
-numOfVolumes    specifies number of Volumes to be created in offline mode
-numOfBuckets    specifies number of Buckets to be created per Volume in offline mode
-numOfKeys       specifies number of Keys to be created per Bucket in offline mode
-keySize         specifies the size of Key in bytes to be created in offline mode
-help            prints usage.
{code}
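The typical remedy is to short-circuit help handling before any RPC client is constructed. An illustrative sketch (names are placeholders, not the actual Freon code):

{code:java}
// Illustrative sketch: handle --help before any RPC client is constructed,
// so help output never needs a live SCM/OM.
public class FreonHelpSketch {
  public static void main(String[] args) {
    for (String arg : args) {
      if ("-help".equals(arg) || "--help".equals(arg)) {
        printUsage();
        return; // never touches the cluster
      }
    }
    // Only non-help runs pay the cost of connecting to OM/SCM, e.g.:
    // OzoneClient client = OzoneClientFactory.getRpcClient(conf);
  }

  private static void printUsage() {
    System.out.println("Options supported are:");
    System.out.println("  -numOfThreads  number of threads to be launched for the run.");
    System.out.println("  -help          prints usage.");
  }
}
{code}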
[jira] [Created] (HDDS-412) OzoneCLI: createVolume example needs to be updated
Xiaoyu Yao created HDDS-412:
-------------------------------

             Summary: OzoneCLI: createVolume example needs to be updated
                 Key: HDDS-412
                 URL: https://issues.apache.org/jira/browse/HDDS-412
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
          Components: Ozone CLI
            Reporter: Xiaoyu Yao
             Fix For: 0.2.1

hdfs o3 should be ozone oz:

{code}
-createVolume    creates a volume for the specified user.
                 For example : hdfs o3 -createVolume -root -user
{code}
[jira] [Created] (HDFS-13904) ContentSummary does not always respect processing limit, resulting in long lock acquisitions
Erik Krogen created HDFS-13904:
----------------------------------

             Summary: ContentSummary does not always respect processing limit, resulting in long lock acquisitions
                 Key: HDFS-13904
                 URL: https://issues.apache.org/jira/browse/HDFS-13904
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: hdfs, namenode
            Reporter: Erik Krogen
            Assignee: Erik Krogen

HDFS-4995 added a config {{dfs.content-summary.limit}} which allows an administrator to set a limit on the number of entries processed during a single acquisition of the {{FSNamesystemLock}} during the creation of a content summary. This is useful to prevent very long (multi-second) pauses on the NameNode when {{getContentSummary}} is called on large directories.

However, even on versions with HDFS-4995, we have seen warnings like:
{code}
INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem read lock held for 9398 ms via
java.lang.Thread.getStackTrace(Thread.java:1552)
org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:950)
org.apache.hadoop.hdfs.server.namenode.FSNamesystemLock.readUnlock(FSNamesystemLock.java:188)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.readUnlock(FSNamesystem.java:1486)
org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext.yield(ContentSummaryComputationContext.java:109)
org.apache.hadoop.hdfs.server.namenode.INodeDirectory.computeDirectoryContentSummary(INodeDirectory.java:679)
org.apache.hadoop.hdfs.server.namenode.INodeDirectory.computeContentSummary(INodeDirectory.java:642)
org.apache.hadoop.hdfs.server.namenode.INodeDirectory.computeDirectoryContentSummary(INodeDirectory.java:656)
{code}
happen quite consistently when {{getContentSummary}} is called on a large directory on a heavily loaded NameNode. Such long pauses completely destroy the performance of the NameNode. We have the limit set to its default of 5000; if it were respected, clearly there would not be a 10-second pause.

The current {{yield()}} code within {{ContentSummaryComputationContext}} looks like:
{code}
  public boolean yield() {
    // Are we set up to do this?
    if (limitPerRun <= 0 || dir == null || fsn == null) {
      return false;
    }

    // Have we reached the limit?
    long currentCount = counts.getFileCount() +
        counts.getSymlinkCount() +
        counts.getDirectoryCount() +
        counts.getSnapshotableDirectoryCount();
    if (currentCount <= nextCountLimit) {
      return false;
    }

    // Update the next limit
    nextCountLimit = currentCount + limitPerRun;

    boolean hadDirReadLock = dir.hasReadLock();
    boolean hadDirWriteLock = dir.hasWriteLock();
    boolean hadFsnReadLock = fsn.hasReadLock();
    boolean hadFsnWriteLock = fsn.hasWriteLock();

    // sanity check.
    if (!hadDirReadLock || !hadFsnReadLock || hadDirWriteLock ||
        hadFsnWriteLock || dir.getReadHoldCount() != 1 ||
        fsn.getReadHoldCount() != 1) {
      // cannot relinquish
      return false;
    }

    // unlock
    dir.readUnlock();
    fsn.readUnlock("contentSummary");

    try {
      Thread.sleep(sleepMilliSec, sleepNanoSec);
    } catch (InterruptedException ie) {
    } finally {
      // reacquire
      fsn.readLock();
      dir.readLock();
    }

    yieldCount++;
    return true;
  }
{code}

We believe that this check in particular is the culprit:
{code}
    if (!hadDirReadLock || !hadFsnReadLock || hadDirWriteLock ||
        hadFsnWriteLock || dir.getReadHoldCount() != 1 ||
        fsn.getReadHoldCount() != 1) {
      // cannot relinquish
      return false;
    }
{code}
The content summary computation will only relinquish the lock if it is currently the _only_ holder of the lock. Given the high volume of read requests on a heavily loaded NameNode, especially when unfair locking is enabled, it is likely that another holder of the read lock is performing some short-lived operation at any given moment. By refusing to give up the lock in this case, the content summary computation ends up never relinquishing it.

We propose to simply remove the readHoldCount checks from this {{yield()}}. This should alleviate the case described above by giving up the read lock and allowing other short-lived operations to complete (while the content summary thread sleeps) so that the lock can finally be released completely. The drawback is that the content summary may sometimes give up the lock unnecessarily, if the read lock has not actually been fully released by the time the thread continues again. The only negative impact is that some large content summary operations become slightly slower, with the tradeoff of reducing NameNode-wide performance impact.
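For concreteness, the relaxed sanity check under this proposal would look roughly like the following (a sketch of the idea, not a committed patch): the write-lock and read-lock-held conditions stay, only the hold-count conditions go.

{code}
    // sanity check (proposed): we must hold the read locks and no write
    // locks, but other concurrent readers no longer block the yield.
    if (!hadDirReadLock || !hadFsnReadLock || hadDirWriteLock ||
        hadFsnWriteLock) {
      // cannot relinquish
      return false;
    }
{code}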
Re: HADOOP-14163 proposal for new hadoop.apache.org
Thanks for all the positive feedback. I just uploaded the new site to the new repository: https://gitbox.apache.org/repos/asf/hadoop-site.git (asf-site branch)

It contains:

1. Same content, new layout (source files of the site).
2. The rendered content under /content, together with all the javadocs (289003 files).
3. The old site (as suggested by Vinod; I added a link back to the old site): https://hadoop.apache.org/old

Infra has already changed the pubsub script. The new site is live. Please let me know if you see any problem. I will update the wiki pages / release instructions very soon.

Thanks,
Marton

ps: Please give me write permission to the OLD wiki (https://wiki.apache.org/hadoop/), if you can. My username is MartonElek. Thanks a lot.

On 08/31/2018 10:07 AM, Elek, Marton wrote:

Bumping this thread one last time. I have the following proposal:

1. I will request a new git repository hadoop-site.git and import the new site there (which has exactly the same content as the existing site).
2. I will ask infra to use the new repository as the source of hadoop.apache.org
3. I will manually sync all of the changes of the next two months back to the svn site from git (release announcements, new committers).

IN CASE OF ANY PROBLEM we can switch back to svn without any problem. If no-one objects within three days, I'll assume lazy consensus and start with this plan. Please comment if you have objections. Again: it allows immediate fallback at any time, as the svn repo will be kept as is (and I will keep it up to date for the next 2 months).

Thanks,
Marton

On 06/21/2018 09:00 PM, Elek, Marton wrote:

Thank you very much for bumping up this thread.

About [2]: (just for clarification) the content of the proposed website is exactly the same as the old one.

About [1]: I believe that "mvn site" is perfect for the documentation, but for website creation there are simpler and more powerful tools. Hugo is simpler compared to Jekyll: just one binary, without dependencies, and it works everywhere (mac, linux, windows). Hugo is much more powerful compared to "mvn site": it is easier to create/use a more modern layout/theme, and easier to handle the content (for example, new release announcements could be generated as part of the release process).

I think it's very low risk to try out a new approach for the site (and easy to roll back in case of problems).

Marton

ps: I just updated the patch/preview site with the recent releases: http://hadoop.anzix.net

On 06/21/2018 01:27 AM, Vinod Kumar Vavilapalli wrote:

Got pinged about this offline. Thanks for keeping at it, Marton!

I think there are two road-blocks here:
(1) Is the mechanism using which the website is built good enough - mvn-site / hugo etc.?
(2) Is the new website good enough?

For (1), I just think we need more committer attention, to get feedback rapidly, and to get it in.

For (2), how about we do it in a different way in the interest of progress?
- We create a hadoop.apache.org/new-site/ where this new site goes.
- We then modify the existing web-site to say that there is a new site/experience that folks can click on a link and navigate to.
- As this new website matures and gets feedback & fixes, we finally pull the plug at a later point of time when we think we are good to go.

Thoughts?

+Vinod

On Feb 16, 2018, at 3:10 AM, Elek, Marton wrote:

Hi,

I would like to bump this thread up.

TLDR: There is a proposed version of a new hadoop site, available at https://elek.github.io/hadoop-site-proposal/ and https://issues.apache.org/jira/browse/HADOOP-14163

Please let me know what you think about it.

Longer version:

This thread started a long time ago, about moving to a more modern hadoop site. The goals were:

1. To make it easier to manage (the release entries could be created by a script as part of the release process)
2. To use a better look-and-feel
3. To move it out from svn to git

I proposed to:

1. Move the existing site to git and generate it with hugo (which is a single, standalone binary)
2. Move both the rendered and source branches to git.
3. (Create a jenkins job to generate the site automatically)

NOTE: this is just about the forrest-based hadoop.apache.org, NOT about the documentation, which is generated by mvn-site (as before).

I got multiple pieces of valuable feedback and improved the proposed site according to the comments. Allen had some concerns about the chosen technologies (hugo vs. mvn-site) and I answered all the questions about why I think mvn-site is best for documentation and hugo is best for generating the site.

I would like to finish this effort/jira: I would like to start a discussion about using this proposed version and approach as the new site of Apache Hadoop. Please let me know what you think.

Thanks a lot,
Marton
Re: [DISCUSS] Alpha Release of Ozone
Thank you all for the feedback about the first ozone release.

I just cut the ozone-0.2 branch from trunk. This will be the base of the first ozone release. I changed the ozone/hdds version on trunk to 0.3.0-SNAPSHOT.

Marton

On 08/06/2018 07:34 PM, Elek, Marton wrote:

Hi All,

I would like to discuss creating an Alpha release for Ozone.

The core functionality of Ozone is complete, but there are two missing features, Security and HA; work on these features is progressing in branches HDDS-4 and HDDS-151.

Right now, Ozone can handle millions of keys and has a Hadoop compatible file system, which allows applications like Hive, Spark, and YARN to use Ozone. Having an Alpha release of Ozone will help in getting some early feedback (this release will be marked as an Alpha and not production ready). Going through a complete release cycle will help us flesh out the Ozone release process, update user documentation, and nail down deployment models.

Please share your thoughts on the Alpha release (over mail or in HDDS-214). As voted on by the community earlier, the Ozone release will be independent of Hadoop releases.

Thanks a lot,
Marton Elek
[jira] [Resolved] (HDDS-321) ozoneFS put/copyFromLocal command does not work for a directory when the directory contains file(s) as well as subdirectories
[ https://issues.apache.org/jira/browse/HDDS-321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nilotpal Nandi resolved HDDS-321.
---------------------------------
    Resolution: Fixed

> ozoneFS put/copyFromLocal command does not work for a directory when the
> directory contains file(s) as well as subdirectories
> --------------------------------------------------------------------------
>
>                 Key: HDDS-321
>                 URL: https://issues.apache.org/jira/browse/HDDS-321
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>            Reporter: Nilotpal Nandi
>            Assignee: Nilotpal Nandi
>            Priority: Blocker
>             Fix For: 0.2.1
>
> Steps taken:
> ------------
> # Created a local directory 'TEST_DIR1' which contains directory "SUB_DIR1" and a file "test_file1".
> # Ran "./ozone fs -put TEST_DIR1/ /". The command kept on running, throwing errors on the console.
>
> Stack trace of the error thrown on the console:
> {noformat}
> 2018-08-02 12:55:46 INFO ConfUtils:41 - raft.grpc.flow.control.window = 1MB (=1048576) (default)
> 2018-08-02 12:55:46 INFO ConfUtils:41 - raft.grpc.message.size.max = 33554432 (custom)
> 2018-08-02 12:55:46 INFO ConfUtils:41 - raft.client.rpc.request.timeout = 3000 ms (default)
> Aug 02, 2018 12:55:46 PM org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl detectProxy
> WARNING: Failed to construct URI for proxy lookup, proceeding without proxy
> java.net.URISyntaxException: Illegal character in hostname at index 13: https://ozone_datanode_3.ozone_default:9858
> at java.net.URI$Parser.fail(URI.java:2848)
> at java.net.URI$Parser.parseHostname(URI.java:3387)
> at java.net.URI$Parser.parseServer(URI.java:3236)
> at java.net.URI$Parser.parseAuthority(URI.java:3155)
> at java.net.URI$Parser.parseHierarchical(URI.java:3097)
> at java.net.URI$Parser.parse(URI.java:3053)
> at java.net.URI.<init>(URI.java:673)
> at org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.detectProxy(ProxyDetectorImpl.java:128)
> at org.apache.ratis.shaded.io.grpc.internal.ProxyDetectorImpl.proxyFor(ProxyDetectorImpl.java:118)
> at org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.startNewTransport(InternalSubchannel.java:207)
> at org.apache.ratis.shaded.io.grpc.internal.InternalSubchannel.obtainActiveTransport(InternalSubchannel.java:188)
> at org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$SubchannelImpl.requestConnection(ManagedChannelImpl.java:1130)
> at org.apache.ratis.shaded.io.grpc.PickFirstBalancerFactory$PickFirstBalancer.handleResolvedAddressGroups(PickFirstBalancerFactory.java:79)
> at org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$NameResolverListenerImpl$1NamesResolved.run(ManagedChannelImpl.java:1032)
> at org.apache.ratis.shaded.io.grpc.internal.ChannelExecutor.drain(ChannelExecutor.java:73)
> at org.apache.ratis.shaded.io.grpc.internal.ManagedChannelImpl$4.get(ManagedChannelImpl.java:403)
> at org.apache.ratis.shaded.io.grpc.internal.ClientCallImpl.start(ClientCallImpl.java:238)
> at org.apache.ratis.shaded.io.grpc.internal.CensusTracingModule$TracingClientInterceptor$1.start(CensusTracingModule.java:386)
> at org.apache.ratis.shaded.io.grpc.internal.CensusStatsModule$StatsClientInterceptor$1.start(CensusStatsModule.java:679)
> at org.apache.ratis.shaded.io.grpc.stub.ClientCalls.startCall(ClientCalls.java:293)
> at org.apache.ratis.shaded.io.grpc.stub.ClientCalls.asyncStreamingRequestCall(ClientCalls.java:283)
> at org.apache.ratis.shaded.io.grpc.stub.ClientCalls.asyncBidiStreamingCall(ClientCalls.java:92)
> at org.apache.ratis.shaded.proto.grpc.RaftClientProtocolServiceGrpc$RaftClientProtocolServiceStub.append(RaftClientProtocolServiceGrpc.java:208)
> at org.apache.ratis.grpc.client.RaftClientProtocolClient.appendWithTimeout(RaftClientProtocolClient.java:139)
> at org.apache.ratis.grpc.client.GrpcClientRpc.sendRequest(GrpcClientRpc.java:109)
> at org.apache.ratis.grpc.client.GrpcClientRpc.sendRequest(GrpcClientRpc.java:88)
> at org.apache.ratis.client.impl.RaftClientImpl.sendRequest(RaftClientImpl.java:302)
> at org.apache.ratis.client.impl.RaftClientImpl.sendRequestWithRetry(RaftClientImpl.java:256)
> at org.apache.ratis.client.impl.RaftClientImpl.send(RaftClientImpl.java:192)
> at org.apache.ratis.client.impl.RaftClientImpl.send(RaftClientImpl.java:173)
> at org.apache.ratis.client.RaftClient.send(RaftClient.java:80)
> at org.apache.hadoop.hdds.scm.XceiverClientRatis.sendRequest(XceiverClientRatis.java:218)
> at org.apache.hadoop.hdds.scm.XceiverClientRatis.sendCommand(XceiverClientRatis.java:235)
> at org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.writeChunk(ContainerProtocolCalls.java:219)
> {noformat}
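The warning in the trace above is reproducible in isolation: the multi-argument java.net.URI constructors require a server-based authority, and URI's hostname grammar rejects underscores, which appear in docker-compose default names like "ozone_datanode_3.ozone_default". A minimal demonstration:

{code:java}
import java.net.URI;
import java.net.URISyntaxException;

// Minimal reproduction of the URISyntaxException seen in the stack trace.
public class UnderscoreHostDemo {
  public static void main(String[] args) {
    try {
      new URI("https", null, "ozone_datanode_3.ozone_default", 9858,
          null, null, null);
      System.out.println("parsed OK");
    } catch (URISyntaxException e) {
      // Prints: Illegal character in hostname at index 13: https://ozone_...
      System.out.println(e.getMessage());
    }
  }
}
{code}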
[jira] [Created] (HDDS-410) ozone scmcli is not working properly
Nilotpal Nandi created HDDS-410:
-----------------------------------

             Summary: ozone scmcli is not working properly
                 Key: HDDS-410
                 URL: https://issues.apache.org/jira/browse/HDDS-410
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
          Components: SCM
            Reporter: Nilotpal Nandi
             Fix For: 0.2.1

On running ozone scmcli for a container ID, it gives the following output:

{noformat}
[root@ctr-e138-1518143905142-459606-01-02 bin]# ./ozone scmcli list --start=17
Infinite recursion (StackOverflowError) (through reference chain:
{noformat}
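"Infinite recursion (StackOverflowError) (through reference chain: ...)" is the generic error Jackson produces when asked to serialize a cyclic object graph. A minimal reproduction of the failure mode (this illustrates the error class only, not an analysis of the actual SCM container types):

{code:java}
import com.fasterxml.jackson.databind.ObjectMapper;

// Two objects referencing each other form a cycle, which Jackson's default
// serializer follows until the stack overflows.
public class JacksonCycleDemo {
  static class Node {
    public String name;
    public Node peer; // back-reference creates the cycle
  }

  public static void main(String[] args) throws Exception {
    Node a = new Node();
    Node b = new Node();
    a.name = "a"; a.peer = b;
    b.name = "b"; b.peer = a; // a -> b -> a -> ...
    // Throws JsonMappingException: Infinite recursion (StackOverflowError)
    new ObjectMapper().writeValueAsString(a);
    // Typical fixes: @JsonIgnore, or @JsonManagedReference together with
    // @JsonBackReference, on one side of the relationship.
  }
}
{code}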