[jira] [Resolved] (HDDS-1938) Change omPort parameter type from String to int in BasicOzoneFileSystem#createAdapter
[ https://issues.apache.org/jira/browse/HDDS-1938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham resolved HDDS-1938.
--------------------------------------
    Resolution: Fixed
    Fix Version/s: 0.5.0

> Change omPort parameter type from String to int in
> BasicOzoneFileSystem#createAdapter
> --------------------------------------------------
>
>                 Key: HDDS-1938
>                 URL: https://issues.apache.org/jira/browse/HDDS-1938
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>          Components: Ozone Filesystem
>            Reporter: Siyao Meng
>            Assignee: Siyao Meng
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 0.5.0
>
>         Attachments: HDDS-1938.001.patch
>
>          Time Spent: 2h
>  Remaining Estimate: 0h
>
> The diff will be based on HDDS-1891.
> Goal:
> 1. Change omPort type to int because it is eventually used as int anyway
> 2. Refactor the parser code in BasicOzoneFileSystem#initialize
> Will post a PR after HDDS-1891 is merged.

--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
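The refactor described in HDDS-1938 amounts to parsing the OM host:port authority once and carrying the port as an int from then on. A minimal sketch of that idea, assuming a hypothetical helper class (this is not the actual BasicOzoneFileSystem code, and the default port 9862 is assumed here to be the OM RPC default):

```java
// Hypothetical sketch of the HDDS-1938 refactor: parse the "host" or
// "host:port" authority once and return the port as an int, instead of
// passing the port around as a String. Illustrative names only.
final class OmAddressParser {

  // Assumed OM RPC default port for this sketch.
  static final int DEFAULT_OM_PORT = 9862;

  /** Return the numeric port from "host:port", or the default if absent.
   *  (IPv6 literals are deliberately not handled in this sketch.) */
  static int parsePort(String authority) {
    int idx = authority.lastIndexOf(':');
    if (idx < 0) {
      return DEFAULT_OM_PORT;   // no explicit port in the authority
    }
    return Integer.parseInt(authority.substring(idx + 1));
  }
}
```

The point of the change is that callers such as createAdapter receive an int directly, so the String-to-int conversion happens exactly once, in initialize.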
[jira] [Resolved] (HDDS-1979) Fix checkstyle errors
[ https://issues.apache.org/jira/browse/HDDS-1979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bharat Viswanadham resolved HDDS-1979.
--------------------------------------
    Resolution: Fixed
    Fix Version/s: 0.5.0

> Fix checkstyle errors
> ---------------------
>
>                 Key: HDDS-1979
>                 URL: https://issues.apache.org/jira/browse/HDDS-1979
>             Project: Hadoop Distributed Data Store
>          Issue Type: Task
>          Components: SCM
>            Reporter: Vivek Ratnavel Subramanian
>            Assignee: Vivek Ratnavel Subramanian
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 0.5.0
>
>          Time Spent: 50m
>  Remaining Estimate: 0h
>
> There are checkstyle errors in ListPipelinesSubcommand.java that need to be
> fixed.
[jira] [Created] (HDDS-1979) Fix checkstyle errors
Vivek Ratnavel Subramanian created HDDS-1979:
------------------------------------------------

             Summary: Fix checkstyle errors
                 Key: HDDS-1979
                 URL: https://issues.apache.org/jira/browse/HDDS-1979
             Project: Hadoop Distributed Data Store
          Issue Type: Task
          Components: SCM
            Reporter: Vivek Ratnavel Subramanian
            Assignee: Vivek Ratnavel Subramanian

There are checkstyle errors in ListPipelinesSubcommand.java that need to be fixed.
[jira] [Created] (HDDS-1978) Create helper script to run blockade tests
Nanda kumar created HDDS-1978:
---------------------------------

             Summary: Create helper script to run blockade tests
                 Key: HDDS-1978
                 URL: https://issues.apache.org/jira/browse/HDDS-1978
             Project: Hadoop Distributed Data Store
          Issue Type: Improvement
          Components: test
            Reporter: Nanda kumar
            Assignee: Nanda kumar

To run blockade tests as part of a Jenkins job, we need a helper script.
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1231/

[Aug 16, 2019 6:33:04 AM] (sammichen) HDDS-1894. Add filter to scmcli listPipelines. (#1286)
[Aug 16, 2019 6:52:09 AM] (snemeth) YARN-9749. TestAppLogAggregatorImpl#testDFSQuotaExceeded fails on trunk.
[Aug 16, 2019 7:13:20 AM] (snemeth) YARN-9100. Add tests for GpuResourceAllocator and do minor code cleanup.
[Aug 16, 2019 9:36:14 AM] (snemeth) YARN-8586. Extract log aggregation related fields and methods from
[Aug 16, 2019 10:31:58 AM] (snemeth) YARN-9461.
[Aug 16, 2019 3:00:51 PM] (weichiu) HDFS-14678. Allow triggerBlockReport to a specific namenode. (#1252).
[Aug 16, 2019 4:01:44 PM] (xkrogen) HADOOP-16391 Add a prefix to the metric names for
[Aug 16, 2019 8:22:03 PM] (github) HDDS-1969. Implement OM GetDelegationToken request to use Cache and
[Aug 16, 2019 9:53:06 PM] (weichiu) HDFS-14456. HAState#prepareToEnterState neednt a lock (#770) Contributed
[Aug 16, 2019 10:11:11 PM] (github) HDDS-1911. Support Prefix ACL operations for OM HA. (#1275)
[Aug 16, 2019 11:39:49 PM] (github) HDDS-1913. Fix OzoneBucket and RpcClient APIS for acl. (#1257)
[Aug 17, 2019 12:19:58 AM] (weichiu) HDFS-14523. Remove excess read lock for NetworkToplogy. Contributed by
[Aug 17, 2019 12:26:09 AM] (weichiu) HADOOP-16351. Change ":" to ApplicationConstants.CLASS_PATH_SEPARATOR.
-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML :

       Parsing Error(s):
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml

    FindBugs :

       module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
       Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:[lines 39-346]
       Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:[line 114]
       org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:[lines 114-115]

    Failed CTEST tests :

       test_test_libhdfs_ops_hdfs_static
       test_test_libhdfs_threaded_hdfs_static
       test_test_libhdfs_zerocopy_hdfs_static
       test_test_native_mini_dfs
       test_libhdfs_threaded_hdfspp_test_shim_static
       test_hdfspp_mini_dfs_smoke_hdfspp_test_shim_static
       libhdfs_mini_stress_valgrind_hdfspp_test_static
       memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_static
       test_libhdfs_mini_stress_hdfspp_test_shim_static
       test_hdfs_ext_hdfspp_test_shim_static

    Failed junit tests :

       hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints
       hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup
       hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken
       hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
       hadoop.yarn.client.api.impl.TestAMRMClient

   cc:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1231/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1231/artifact/out/diff-compile-javac-root.txt [332K]

   checkstyle:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1231/artifact/out/diff-checkstyle-root.txt [17M]

   hadolint:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1231/artifact/out/diff-patch-hadolint.txt [4.0K]

   pathlen:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1231/artifact/out/pathlen.txt [12K]

   pylint:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1231/artifact/out/diff-patch-pylint.txt [220K]

   shellcheck:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1231/artifact/out/diff-patch-shellcheck.txt [20K]
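The two WorkerId FindBugs findings in the report above (equals() assumes the argument's type, and equals() does not check for a null argument) are classic equals-contract bugs with a mechanical fix. A hedged sketch of the usual pattern, using a stand-in class rather than the actual mawo WorkerId:

```java
// Stand-in class illustrating the equals() fix FindBugs asks for:
// check the runtime type (which also rejects null) before casting,
// instead of blindly casting the argument. Not the real mawo code.
final class WorkerId {
  private final String id;

  WorkerId(String id) {
    this.id = id;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;                    // fast path: same reference
    }
    if (!(o instanceof WorkerId)) {
      return false;                   // handles null and wrong types
    }
    WorkerId other = (WorkerId) o;    // safe cast after the check
    return id.equals(other.id);
  }

  @Override
  public int hashCode() {
    return id.hashCode();             // keep hashCode consistent with equals
  }
}
```

Note that `instanceof` evaluates to false for null, so the single type check covers both warnings at once.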
[jira] [Created] (HDDS-1977) Fix checkstyle issues introduced by HDDS-1894
Nanda kumar created HDDS-1977:
---------------------------------

             Summary: Fix checkstyle issues introduced by HDDS-1894
                 Key: HDDS-1977
                 URL: https://issues.apache.org/jira/browse/HDDS-1977
             Project: Hadoop Distributed Data Store
          Issue Type: Bug
          Components: SCM Client
            Reporter: Nanda kumar

Fix the checkstyle issues introduced by HDDS-1894:
{noformat}
[INFO] There are 6 errors reported by Checkstyle 8.8 with checkstyle/checkstyle.xml ruleset.
[ERROR] src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[41,23] (whitespace) ParenPad: '(' is followed by whitespace.
[ERROR] src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[42] (sizes) LineLength: Line is longer than 80 characters (found 88).
[ERROR] src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[46,23] (whitespace) ParenPad: '(' is followed by whitespace.
[ERROR] src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[47] (sizes) LineLength: Line is longer than 80 characters (found 90).
[ERROR] src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[59] (sizes) LineLength: Line is longer than 80 characters (found 116).
[ERROR] src/main/java/org/apache/hadoop/hdds/scm/cli/pipeline/ListPipelinesSubcommand.java:[60] (sizes) LineLength: Line is longer than 80 characters (found 120).
{noformat}
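For reference, both rule types reported above are mechanical to fix: ParenPad forbids whitespace just inside a parenthesis, and LineLength caps lines at 80 characters. An illustrative before/after, not the actual ListPipelinesSubcommand code:

```java
// Illustrative fixes for the two checkstyle rules reported above.
// Not the actual ListPipelinesSubcommand code.
class CheckstyleExamples {

  // ParenPad violation would be:   if ( factor.equals(wanted)) { ... }
  // Compliant: no whitespace after '('.
  static boolean sameFactor(String factor, String wanted) {
    if (factor.equals(wanted)) {
      return true;
    }
    return false;
  }

  // LineLength: instead of one concatenation spilling past column 80,
  // wrap the expression after each operator.
  static String describe(String id, String type, String factor) {
    return "Pipeline{id=" + id
        + ", type=" + type
        + ", factor=" + factor + "}";
  }
}
```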
Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/416/

[Aug 17, 2019 12:53:45 AM] (weichiu) HDFS-14725. Backport HDFS-12914 to branch-2 (Block report leases cause

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    XML :

       Parsing Error(s):
       hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
       hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

    FindBugs :

       module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
       Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 335]

    Failed junit tests :

       hadoop.fs.sftp.TestSFTPFileSystem
       hadoop.hdfs.server.datanode.TestDirectoryScanner
       hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
       hadoop.hdfs.server.namenode.TestDecommissioningStatus
       hadoop.hdfs.web.TestWebHdfsTimeouts
       hadoop.yarn.client.cli.TestRMAdminCLI
       hadoop.registry.secure.TestSecureLogins
       hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2

   cc:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/416/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt [4.0K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/416/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt [328K]

   cc:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/416/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt [4.0K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/416/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt [308K]

   checkstyle:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/416/artifact/out/diff-checkstyle-root.txt [16M]

   hadolint:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/416/artifact/out/diff-patch-hadolint.txt [4.0K]

   pathlen:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/416/artifact/out/pathlen.txt [12K]

   pylint:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/416/artifact/out/diff-patch-pylint.txt [24K]

   shellcheck:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/416/artifact/out/diff-patch-shellcheck.txt [72K]

   shelldocs:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/416/artifact/out/diff-patch-shelldocs.txt [8.0K]

   whitespace:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/416/artifact/out/whitespace-eol.txt [12M]
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/416/artifact/out/whitespace-tabs.txt [1.2M]

   xml:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/416/artifact/out/xml.txt [12K]

   findbugs:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/416/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html [8.0K]

   javadoc:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/416/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt [16K]
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/416/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt [1.1M]

   unit:
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/416/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [160K]
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/416/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [292K]
       https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/416/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [96K]
       https://builds.apache.
[jira] [Reopened] (HDFS-13101) Yet another fsimage corruption related to snapshot
[ https://issues.apache.org/jira/browse/HDFS-13101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang reopened HDFS-13101:
------------------------------------

> Yet another fsimage corruption related to snapshot
> --------------------------------------------------
>
>                 Key: HDFS-13101
>                 URL: https://issues.apache.org/jira/browse/HDFS-13101
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: snapshots
>            Reporter: Yongjun Zhang
>            Assignee: Shashikant Banerjee
>            Priority: Major
>             Fix For: 3.3.0, 3.2.1, 3.1.3
>
>         Attachments: HDFS-13101.001.patch, HDFS-13101.002.patch,
> HDFS-13101.003.patch, HDFS-13101.004.patch, HDFS-13101.branch-2.001.patch,
> HDFS-13101.corruption_repro.patch,
> HDFS-13101.corruption_repro_simplified.patch
>
>
> Lately we saw a case similar to HDFS-9406. Even though the HDFS-9406 fix is
> present, this is likely another case not covered by that fix. We are
> currently trying to collect a good fsimage + editlogs to replay, so we can
> reproduce and investigate.