[jira] [Created] (HADOOP-16912) Emit per priority rpc queue time and processing time from DecayRpcScheduler
Fengnan Li created HADOOP-16912:
-----------------------------------

             Summary: Emit per priority rpc queue time and processing time from DecayRpcScheduler
                 Key: HADOOP-16912
                 URL: https://issues.apache.org/jira/browse/HADOOP-16912
             Project: Hadoop Common
          Issue Type: New Feature
            Reporter: Fengnan Li
            Assignee: Fengnan Li

At the ipc Server level we have the overall rpc queue time and processing time for the whole CallQueueManager. In the case of using FairCallQueue, it would be great to also know the per queue/priority level rpc queue time, since we often want certain queues to meet a queue-time SLA for customers.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org
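The per-priority accounting the issue asks for can be sketched roughly as below. This is a hypothetical standalone class, not the actual DecayRpcScheduler change (which would hook into Hadoop's Metrics2 plumbing); all names here are illustrative.

```java
import java.util.concurrent.atomic.LongAdder;

// Minimal sketch of per-priority queue-time accounting: one pair of
// counters per priority level, from which an average queue time per
// priority can be emitted as a metric.
public class PerPriorityQueueTime {
    private final LongAdder[] totalNanos;  // summed queue time per priority
    private final LongAdder[] counts;      // completed calls per priority

    public PerPriorityQueueTime(int numLevels) {
        totalNanos = new LongAdder[numLevels];
        counts = new LongAdder[numLevels];
        for (int i = 0; i < numLevels; i++) {
            totalNanos[i] = new LongAdder();
            counts[i] = new LongAdder();
        }
    }

    // Called when a call is dequeued, with the time it spent queued.
    public void addQueueTime(int priority, long nanos) {
        totalNanos[priority].add(nanos);
        counts[priority].increment();
    }

    // Average queue time in milliseconds for one priority level.
    public double avgQueueTimeMillis(int priority) {
        long n = counts[priority].sum();
        return n == 0 ? 0.0 : totalNanos[priority].sum() / (double) n / 1_000_000.0;
    }
}
```

LongAdder keeps the hot path cheap under the handler-thread contention an RPC server sees; a real metrics integration would likely also want percentiles, not just averages.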
[jira] [Resolved] (HADOOP-16905) Update jackson-databind to 2.10.3 to relieve us from the endless CVE patches
[ https://issues.apache.org/jira/browse/HADOOP-16905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Masatake Iwasaki resolved HADOOP-16905.
---------------------------------------
    Fix Version/s: 3.3.0
     Hadoop Flags: Reviewed
       Resolution: Fixed

> Update jackson-databind to 2.10.3 to relieve us from the endless CVE patches
> ----------------------------------------------------------------------------
>
>                 Key: HADOOP-16905
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16905
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Wei-Chiu Chuang
>            Assignee: Wei-Chiu Chuang
>            Priority: Major
>              Labels: release-blocker
>             Fix For: 3.3.0
>
> Jackson-databind 2.10 should relieve us from the endless CVE patches,
> according to https://medium.com/@cowtowncoder/jackson-2-10-features-cd880674d8a2
> Not sure if this is an easy update, but I think we should do this in
> Hadoop 3.3.0, before removing jackson-databind entirely.
[VOTE] Apache Hadoop Ozone 0.5.0-beta RC1
Hi Folks,

We have put together RC1 for Apache Hadoop Ozone 0.5.0-beta.

The RC artifacts are at:
https://home.apache.org/~dineshc/ozone-0.5.0-rc1/

The public key used for signing the artifacts can be found at:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

The maven artifacts are staged at:
https://repository.apache.org/content/repositories/orgapachehadoop-1260

The RC tag in git is at:
https://github.com/apache/hadoop-ozone/tree/ozone-0.5.0-beta-RC1

This release contains 800+ fixes/improvements [1].
Thanks to everyone who put in the effort to make this happen.

*The vote will run for 7 days, ending on March 13th 2020 at 11:59 pm PST.*

Note: This release is beta quality; it is not recommended for production use,
but we believe it is stable enough to try out the feature set and collect feedback.

[1] https://s.apache.org/ozone-0.5.0-fixed-issues

Thanks,
Dinesh Chitlangia
[jira] [Reopened] (HADOOP-16885) Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped stream
[ https://issues.apache.org/jira/browse/HADOOP-16885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran reopened HADOOP-16885:
-------------------------------------

> Encryption zone file copy failure leaks temp file ._COPYING_ and wrapped
> stream
> ------------------------------------------------------------------------
>
>                 Key: HADOOP-16885
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16885
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 3.3.0
>            Reporter: Xiaoyu Yao
>            Assignee: Xiaoyu Yao
>            Priority: Major
>             Fix For: 3.3.0
>
> Copying a file into an encryption zone on trunk with HADOOP-16490 leaves
> a leaked temp file ._COPYING_ behind and potentially an unclosed wrapped
> stream. This ticket is opened to track the fix.
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1430/

[Mar 5, 2020 8:56:42 AM] (snemeth) YARN-10167. FS-CS Converter: Need to validate c-s.xml after converting.

-1 overall

The following subsystems voted -1:
    asflicense findbugs pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running
(runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    XML :

       Parsing Error(s):
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

    FindBugs :

       module:hadoop-cloud-storage-project/hadoop-cos
       Redundant nullcheck of dir, which is known to be non-null in org.apache.hadoop.fs.cosn.BufferPool.createDir(String) At BufferPool.java:[line 66]
       org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may expose internal representation by returning CosNInputStream$ReadBuffer.buffer At CosNInputStream.java:[line 87]
       Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199]
       Found reliance on default encoding in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, InputStream, byte[], long): new String(byte[]) At CosNativeFileSystemStore.java:[line 178]
       org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, String, String, int) may fail to clean up java.io.InputStream; obligation to clean up resource created at CosNativeFileSystemStore.java:[line 252] is not discharged

    Failed junit tests :

       hadoop.hdfs.TestEncryptionZonesWithKMS
       hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport
       hadoop.hdfs.TestEncryptionZones
       hadoop.hdfs.server.namenode.TestNamenodeCapacityReport
       hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized
       hadoop.yarn.applications.distributedshell.TestDistributedShell
       hadoop.yarn.sls.appmaster.TestAMSimulator
       hadoop.yarn.sls.TestSLSStreamAMSynth

   cc:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1430/artifact/out/diff-compile-cc-root.txt [8.0K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1430/artifact/out/diff-compile-javac-root.txt [424K]

   checkstyle:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1430/artifact/out/diff-checkstyle-root.txt [16M]

   pathlen:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1430/artifact/out/pathlen.txt [12K]

   pylint:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1430/artifact/out/diff-patch-pylint.txt [24K]

   shellcheck:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1430/artifact/out/diff-patch-shellcheck.txt [16K]

   shelldocs:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1430/artifact/out/diff-patch-shelldocs.txt [44K]

   whitespace:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1430/artifact/out/whitespace-eol.txt [9.9M]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1430/artifact/out/whitespace-tabs.txt [1.1M]

   xml:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1430/artifact/out/xml.txt [20K]

   findbugs:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1430/artifact/out/branch-findbugs-hadoop-cloud-s
Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/616/

No changes

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running
(runtime bigger than 1h 0m 0s):
    unit

Specific tests:

    XML :

       Parsing Error(s):
       hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
       hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
       hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

    FindBugs :

       module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
       Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 335]

    Failed junit tests :

       hadoop.ipc.TestRPC
       hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
       hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
       hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
       hadoop.registry.secure.TestSecureLogins
       hadoop.yarn.client.api.impl.TestAMRMProxy

   cc:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/616/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt [4.0K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/616/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt [328K]

   cc:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/616/artifact/out/diff-compile-cc-root-jdk1.8.0_242.txt [4.0K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/616/artifact/out/diff-compile-javac-root-jdk1.8.0_242.txt [308K]

   checkstyle:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/616/artifact/out/diff-checkstyle-root.txt [16M]

   hadolint:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/616/artifact/out/diff-patch-hadolint.txt [4.0K]

   pathlen:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/616/artifact/out/pathlen.txt [12K]

   pylint:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/616/artifact/out/diff-patch-pylint.txt [24K]

   shellcheck:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/616/artifact/out/diff-patch-shellcheck.txt [56K]

   shelldocs:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/616/artifact/out/diff-patch-shelldocs.txt [8.0K]

   whitespace:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/616/artifact/out/whitespace-eol.txt [12M]
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/616/artifact/out/whitespace-tabs.txt [1.3M]

   xml:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/616/artifact/out/xml.txt [12K]

   findbugs:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/616/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html [8.0K]

   javadoc:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/616/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt [16K]
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/616/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_242.txt [1.1M]

   unit:
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/616/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [160K]
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/616/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [232K]
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/616/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt [12K]
       https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/616/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt [12K]
       https://builds.apache.org/job/h
Re: Hadoop & TLS 1.3
sorry, just checked the release notes of a JIRA: it's openssl 1.1.1 which breaks wildfly-1.0.4.Final

"...to make abfs and adl connectors compatible with alpine linux and other platforms which have libssl1.1-1.1.1b-r1 as their native openssl implementation. see: HADOOP-16460. HADOOP-16438"

As well as editing the hadoop wildfly version, you need to move to a version of azure-datalake-storage.jar which doesn't have an unshaded copy of the wildfly 1.0.4 classes.

On Thu, 5 Mar 2020 at 18:09, Wei-Chiu Chuang wrote:

> > abfs and s3a can now go via wildfly to use any native openssl 1.1
> > libraries -if that supports TLS1.3 then maybe the stores will talk
> > through it. No idea if anyone has tried it.
> >
> > Warning: Do not attempt to use wildfly-1.0.4-Final with openssl 1.1; you
> > need to upgrade to 1.0.7 unless you like to see NPE stack traces
>
> https://wiki.openssl.org/index.php/TLS1.3
> We will need OpenSSL 1.1.1 to support TLS 1.3.
> According to the wiki, 1.1.1 is a drop-in replacement for 1.1.0. So maybe
> Hadoop already supports it.
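Separate from the wildfly/native-openssl path discussed above, one can probe what the JVM's own JSSE stack offers; on JDKs with TLS 1.3 support the list includes "TLSv1.3". The class below is an illustrative standalone check, not part of Hadoop.

```java
import javax.net.ssl.SSLContext;
import java.util.Arrays;

// Print the TLS/SSL protocol versions the default JSSE provider supports.
// This says nothing about the native openssl path wildfly uses; it only
// covers the pure-Java fallback.
public class TlsCheck {
    static String[] supportedProtocols() {
        try {
            return SSLContext.getDefault().getSupportedSSLParameters().getProtocols();
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(supportedProtocols()));
    }
}
```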
[jira] [Created] (HADOOP-16911) S3A mkdirs to indicate which parent path is a file
Steve Loughran created HADOOP-16911:
---------------------------------------

             Summary: S3A mkdirs to indicate which parent path is a file
                 Key: HADOOP-16911
                 URL: https://issues.apache.org/jira/browse/HADOOP-16911
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: fs/s3
    Affects Versions: 3.2.1
            Reporter: Steve Loughran

If there is a file somewhere up the path under which you're trying to create a directory in S3, S3A's mkdirs() will fail with an error, but it does not indicate which path element is at fault.
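The diagnostic being requested amounts to walking up the ancestor chain and naming the first element that is a file. A toy sketch follows; the flat set-of-keys model of the bucket and the helper name are hypothetical, whereas real S3A probes each parent with getFileStatus() calls.

```java
import java.util.Set;

// Illustrative only: given a set of keys known to be files, walk up from
// the requested directory path and report which ancestor is a file, so an
// error message can name the offending path element.
public class MkdirsCheck {
    static String blockingAncestor(Set<String> fileKeys, String dirPath) {
        String p = dirPath;
        while (true) {
            if (fileKeys.contains(p)) {
                return p;  // this ancestor is a file: mkdirs must fail here
            }
            int i = p.lastIndexOf('/');
            if (i <= 0) {
                return null;  // reached the root with no file in the way
            }
            p = p.substring(0, i);  // step up one path element
        }
    }
}
```

The walk costs one lookup per path element, which mirrors the per-ancestor HEAD requests S3A already pays for the existence probe, so naming the culprit in the exception is essentially free.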
Re: [ANNOUNCE] New Apache Hadoop Committer - Stephen O'Donnell
Stephen, Congratulations!

Zac Zhou

Xiaoqiao He wrote on Thu, Mar 5, 2020 at 5:49 PM:

> Congratulations Stephen!
>
> - Hexiaoqiao
>
> On Thu, Mar 5, 2020 at 11:08 AM Masatake Iwasaki <
> iwasak...@oss.nttdata.co.jp> wrote:
>
> > Congratulations!
> >
> > Masatake Iwasaki
> >
> > On 2020/03/04 5:11, Wei-Chiu Chuang wrote:
> > > In bcc: general@
> > >
> > > It's my pleasure to announce that Stephen O'Donnell has been elected as
> > > committer on the Apache Hadoop project, recognizing his continued
> > > contributions to the project.
> > >
> > > Please join me in congratulating him.
> > >
> > > Hearty Congratulations & Welcome aboard Stephen!
> > >
> > > Wei-Chiu Chuang
> > > (On behalf of the Hadoop PMC)
> >
> > -
> > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: common-dev-h...@hadoop.apache.org