[jira] [Created] (HADOOP-16569) improve TLS performance for NativeAzureFileSystem
Steven Rand created HADOOP-16569: Summary: improve TLS performance for NativeAzureFileSystem Key: HADOOP-16569 URL: https://issues.apache.org/jira/browse/HADOOP-16569 Project: Hadoop Common Issue Type: Improvement Reporter: Steven Rand Several tickets have already discussed the performance issues of GCM ciphers on jdk8 in relation to cloud connectors: * HADOOP-16050, HADOOP-16371 (s3a) * HADOOP-15669 (abfs) * HADOOP-15965 (adls) However, we haven't gotten around to fixing this for Azure Blob Store yet (the wasbs scheme). It would be helpful to reuse the {{SSLSocketFactoryEx}} class that was added in HADOOP-15669. The {{getInstrumentedContext}} method in {{AzureNativeFileSystemStore}} can already mutate the connection to the blob store and inject that socket factory, so it's not a hard change to make. This isn't necessarily blocked on HADOOP-16371, but it would be helpful if we merged that PR first, since it moves the custom {{SSLSocketFactory}} that uses wildfly-openssl out of the code that's specific to ABFS and into a place where the Azure Blob Store code can access it. Tagging [~stakiar] since you've been working on this for the other cloud connectors -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
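The injection point described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual {{SSLSocketFactoryEx}} from HADOOP-15669: the class and method names are invented, and it builds a stock JSSE context where the real code would use a wildfly-openssl-backed factory with a non-GCM cipher preference.

```java
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import java.security.GeneralSecurityException;

// Hypothetical sketch of the kind of hook described above: build one
// SSLContext, then inject its socket factory into each HTTPS connection
// to the blob store so a faster cipher ordering can be used on jdk8.
class TlsFactoryInjection {
    static SSLSocketFactory preferredFactory() {
        try {
            // The real code would return a wildfly-openssl-backed
            // SSLSocketFactoryEx here; we use the default JSSE context.
            SSLContext ctx = SSLContext.getInstance("TLSv1.2");
            ctx.init(null, null, null);
            return ctx.getSocketFactory();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException("cannot build TLS context", e);
        }
    }

    // Roughly what a getInstrumentedContext-style mutation would do
    // to each outgoing connection.
    static void inject(HttpsURLConnection conn) {
        conn.setSSLSocketFactory(preferredFactory());
    }
}
```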
[jira] [Resolved] (HADOOP-16566) S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch
[ https://issues.apache.org/jira/browse/HADOOP-16566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota resolved HADOOP-16566. - Resolution: Fixed > S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of > com.google.common.base.Stopwatch > -- > > Key: HADOOP-16566 > URL: https://issues.apache.org/jira/browse/HADOOP-16566 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 > Affects Versions: 3.3.0 > Reporter: Gabor Bota > Assignee: Gabor Bota > Priority: Major > > Some distributions won't have the updated guava, and > {{org.apache.hadoop.util.StopWatch}} is only available in the newer ones. > Fix this issue by using the hadoop util's instead.
[jira] [Created] (HADOOP-16568) S3A FullCredentialsTokenBinding fails if local credentials are unset
Steve Loughran created HADOOP-16568: --- Summary: S3A FullCredentialsTokenBinding fails if local credentials are unset Key: HADOOP-16568 URL: https://issues.apache.org/jira/browse/HADOOP-16568 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Affects Versions: 3.3.0 Reporter: Steve Loughran Assignee: Steve Loughran Not sure how this slipped by the automated tests, but it is happening on my CLI. # FullCredentialsTokenBinding fails on startup if there are no AWS keys in the auth chain # because it tries to load them in serviceStart, not deployUnbonded
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1257/

[Sep 11, 2019 2:10:11 AM] (github) HADOOP-15184. Add GitHub pull request template. (#1419)
[Sep 11, 2019 7:54:08 AM] (tasanuma) HDFS-14838. RBF: Display RPC (instead of HTTP) Port Number in RBF web
[Sep 11, 2019 2:19:10 PM] (ljain) HDDS-2103. TestContainerReplication fails due to unhealthy container
[Sep 11, 2019 3:46:25 PM] (stevel) HADOOP-16490. Avoid/handle cached 404s during S3A file creation.
[Sep 11, 2019 6:59:01 PM] (xyao) HDDS-2075. Tracing in OzoneManager call is propagated with wrong parent
[Sep 11, 2019 9:59:28 PM] (ebadger) YARN-9815 ReservationACLsTestBase fails with NPE. Contributed by Ahmed
[Sep 12, 2019 12:38:41 AM] (elek) HDDS-2106. Avoid usage of hadoop projects as parent of hdds/ozone

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running: (runtime bigger than 1h 0m 0s)
    unit

Specific tests:

XML :

    Parsing Error(s):
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml

FindBugs :

    module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
    Class org.apache.hadoop.applications.mawo.server.common.TaskStatus implements Cloneable but does not define or use clone method At TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 39-346]
    Equals method for org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument is of type WorkerId At WorkerId.java:the argument is of type WorkerId At WorkerId.java:[line 114]
    org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does not check for null argument At WorkerId.java:null argument At WorkerId.java:[lines 114-115]

Failed CTEST tests :

    test_test_libhdfs_ops_hdfs_static
    test_test_libhdfs_threaded_hdfs_static
    test_test_libhdfs_zerocopy_hdfs_static
    test_test_native_mini_dfs
    test_libhdfs_threaded_hdfspp_test_shim_static
    test_hdfspp_mini_dfs_smoke_hdfspp_test_shim_static
    libhdfs_mini_stress_valgrind_hdfspp_test_static
    memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_static
    test_libhdfs_mini_stress_hdfspp_test_shim_static
    test_hdfs_ext_hdfspp_test_shim_static

Failed junit tests :

    hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics
    hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency
    hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
    hadoop.hdfs.server.namenode.TestStartup
    hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup
    hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken
    hadoop.fs.adl.live.TestAdlSdkConfiguration

cc: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1257/artifact/out/diff-compile-cc-root.txt [4.0K]

javac: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1257/artifact/out/diff-compile-javac-root.txt [332K]

checkstyle: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1257/artifact/out/diff-checkstyle-root.txt [17M]

hadolint: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1257/artifact/out/diff-patch-hadolint.txt [4.0K]

pathlen: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1257/artifact/out/pathlen.txt [12K]

pylint: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1257/artifact/out/diff-patch-pylint.txt [220K]

shellcheck: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1257/artifact/out/diff-patch-shellcheck.txt [24K]

shelldocs: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1257/artifact/out/diff-patch-shelldocs.txt [44K]

whitespace: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1257/artifact/out/whitespace-eol.txt [9.6M]
    https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1257/artifact/out/whitespace-tabs.txt [1.1M]

xml:
[jira] [Resolved] (HADOOP-16423) S3Guard fsck: Check metadata consistency from S3 to metadatastore (log)
[ https://issues.apache.org/jira/browse/HADOOP-16423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota resolved HADOOP-16423. - Resolution: Fixed > S3Guard fsck: Check metadata consistency from S3 to metadatastore (log) > > > Key: HADOOP-16423 > URL: https://issues.apache.org/jira/browse/HADOOP-16423 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 > Affects Versions: 3.3.0 > Reporter: Gabor Bota > Assignee: Gabor Bota > Priority: Major > > This part is only for logging the inconsistencies. > This issue only covers the part where the walk is done in S3 and all > metadata is compared to the MS. > There is no part where the walk is done in the MS and compared back > to S3.
[jira] [Created] (HADOOP-16567) S3A Secret access to fall back to XML if credential provider raises IOE.
Steve Loughran created HADOOP-16567: --- Summary: S3A Secret access to fall back to XML if credential provider raises IOE. Key: HADOOP-16567 URL: https://issues.apache.org/jira/browse/HADOOP-16567 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Affects Versions: 3.1.2 Reporter: Steve Loughran This is Hive related. Hive can put secrets into a JCEKS file which only Hive may read. S3A file systems created on behalf of a user do not have access to this file, yet it is listed as the credential provider in the hadoop.credential.providers option in core-site, and that is marked as final. When the S3A initialize() method looks up passwords and encryption keys, the failure to open the file raises an IOE, and the FS cannot be instantiated. Proposed: {{S3AUtils.lookupPassword()}} to catch such exceptions and fall back to using {{Configuration.get()}}, and so retrieve any property in the XML. If there is one failure mode here, it is that if the user did want to read from a credential provider, the failure to read the credential will be lost, and the filesystem will simply get the default value. There is a side issue: permission exceptions can surface as file-not-found exceptions, which are then wrapped as generic IOEs in Configuration. It would be hard and brittle to attempt to respond only to permission restrictions. We could look at improving {{Configuration.getPassword()}}, but that class is so widely used that I am not in a rush to break things. I think this means we have to add another option; trying to be clever about when to fall back versus when to rethrow the exception is doomed. If this works for S3A, we will need to consider replicating it for ABFS.
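The proposed fallback can be sketched as below. This is a hypothetical stand-in, not the real Hadoop API: the {{CredentialSource}} interface models the credential-provider lookup and a plain Map stands in for {{Configuration}}; the actual {{S3AUtils.lookupPassword()}} signature differs.

```java
import java.io.IOException;
import java.util.Map;

// Hypothetical sketch of the proposed fallback: try the credential
// provider first; if that raises an IOException (e.g. an unreadable
// JCEKS file), fall back to the plain XML value or the default.
class FallbackLookup {
    // Models the credential-provider path of Configuration.getPassword().
    interface CredentialSource {
        String getPassword(String key) throws IOException;
    }

    static String lookupPassword(CredentialSource providers,
                                 Map<String, String> xmlConf,
                                 String key, String defVal) {
        try {
            String v = providers.getPassword(key);
            if (v != null) {
                return v;
            }
        } catch (IOException e) {
            // The caveat noted above: a genuine provider failure is
            // swallowed here and the XML/default value wins.
        }
        return xmlConf.getOrDefault(key, defVal);
    }
}
```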
[jira] [Created] (HADOOP-16566) S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch
Gabor Bota created HADOOP-16566: --- Summary: S3Guard fsck: Use org.apache.hadoop.util.StopWatch instead of com.google.common.base.Stopwatch Key: HADOOP-16566 URL: https://issues.apache.org/jira/browse/HADOOP-16566 Project: Hadoop Common Issue Type: Sub-task Reporter: Gabor Bota Assignee: Gabor Bota Some distributions won't have the updated guava, and the {{com.google.common.base.Stopwatch}} API we use is only available in the newer versions. Fix this issue by using {{org.apache.hadoop.util.StopWatch}} instead.
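For illustration, here is a minimal {{System.nanoTime()}}-based stand-in mirroring the start/stop/now surface that such a swap relies on. This is a hypothetical sketch, not the actual {{org.apache.hadoop.util.StopWatch}}, whose API may differ in detail; the point is that no Guava dependency is needed.

```java
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in for a Guava-free stopwatch built on
// System.nanoTime(); illustrates the start/stop/now pattern only.
class MiniStopWatch {
    private long startNanos;
    private long elapsedNanos;
    private boolean running;

    MiniStopWatch start() {
        startNanos = System.nanoTime();
        running = true;
        return this;
    }

    MiniStopWatch stop() {
        if (running) {
            elapsedNanos += System.nanoTime() - startNanos;
            running = false;
        }
        return this;
    }

    // Total elapsed time, including the current run if still running.
    long now(TimeUnit unit) {
        long total = elapsedNanos
            + (running ? System.nanoTime() - startNanos : 0);
        return unit.convert(total, TimeUnit.NANOSECONDS);
    }
}
```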
[jira] [Created] (HADOOP-16565) Fix "com.amazonaws.SdkClientException: Unable to find a region via the region provider chain."
Gabor Bota created HADOOP-16565: --- Summary: Fix "com.amazonaws.SdkClientException: Unable to find a region via the region provider chain." Key: HADOOP-16565 URL: https://issues.apache.org/jira/browse/HADOOP-16565 Project: Hadoop Common Issue Type: Sub-task Reporter: Gabor Bota Assignee: Gabor Bota The error was found during testing in the following tests:
{noformat}
[ERROR] ITestS3ATemporaryCredentials.testInvalidSTSBinding:257 ? SdkClient Unable to f...
[ERROR] ITestS3ATemporaryCredentials.testSTS:130 ? SdkClient Unable to find a region v...
[ERROR] ITestS3ATemporaryCredentials.testSessionRequestExceptionTranslation:441->lambda$testSessionRequestExceptionTranslation$5:442 ? SdkClient
[ERROR] ITestS3ATemporaryCredentials.testSessionTokenExpiry:222 ? SdkClient Unable to ...
[ERROR] ITestS3ATemporaryCredentials.testSessionTokenPropagation:193 ? SdkClient Unabl...
[ERROR] ITestDelegatedMRJob.testJobSubmissionCollectsTokens:286 ? SdkClient Unable to ...
[ERROR] ITestSessionDelegationInFileystem.testAddTokensFromFileSystem:235 ? SdkClient ...
[ERROR] ITestSessionDelegationInFileystem.testCanRetrieveTokenFromCurrentUserCreds:260->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88 ? SdkClient
[ERROR] ITestSessionDelegationInFileystem.testDTCredentialProviderFromCurrentUserCreds:278->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88 ? SdkClient
[ERROR] ITestSessionDelegationInFileystem.testDelegatedFileSystem:308->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88 ? SdkClient
[ERROR] ITestSessionDelegationInFileystem.testDelegationBindingMismatch1:432->createDelegationTokens:292->AbstractDelegationIT.mkTokens:88 ? SdkClient
[ERROR] ITestSessionDelegationInFileystem.testFileSystemBoundToCreator:681 ? SdkClient
[ERROR] ITestSessionDelegationInFileystem.testGetDTfromFileSystem:212 ? SdkClient Unab...
[ERROR] ITestSessionDelegationInFileystem.testHDFSFetchDTCommand:606->lambda$testHDFSFetchDTCommand$3:607 ? SdkClient
[ERROR] ITestSessionDelegationInFileystem.testYarnCredentialPickup:576 ? SdkClient Una...
[ERROR] ITestSessionDelegationTokens.testCreateAndUseDT:176 ? SdkClient Unable to find...
[ERROR] ITestSessionDelegationTokens.testSaveLoadTokens:121 ? SdkClient Unable to find...
{noformat}
Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/442/

No changes

-1 overall

The following subsystems voted -1:
    asflicense findbugs hadolint pathlen unit xml

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running: (runtime bigger than 1h 0m 0s)
    unit

Specific tests:

XML :

    Parsing Error(s):
        hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
        hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml
        hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml

FindBugs :

    module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
    Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 335]

Failed junit tests :

    hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation
    hadoop.hdfs.server.datanode.TestDirectoryScanner
    hadoop.hdfs.server.namenode.TestNameNodeHttpServerXFrame
    hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
    hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean
    hadoop.registry.secure.TestSecureLogins
    hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2
    hadoop.yarn.sls.TestSLSRunner

cc: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/442/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt [4.0K]

javac: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/442/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt [328K]

cc: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/442/artifact/out/diff-compile-cc-root-jdk1.8.0_222.txt [4.0K]

javac: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/442/artifact/out/diff-compile-javac-root-jdk1.8.0_222.txt [308K]

checkstyle: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/442/artifact/out/diff-checkstyle-root.txt [16M]

hadolint: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/442/artifact/out/diff-patch-hadolint.txt [4.0K]

pathlen: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/442/artifact/out/pathlen.txt [12K]

pylint: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/442/artifact/out/diff-patch-pylint.txt [24K]

shellcheck: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/442/artifact/out/diff-patch-shellcheck.txt [72K]

shelldocs: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/442/artifact/out/diff-patch-shelldocs.txt [8.0K]

whitespace: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/442/artifact/out/whitespace-eol.txt [12M]
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/442/artifact/out/whitespace-tabs.txt [1.3M]

xml: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/442/artifact/out/xml.txt [12K]

findbugs: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/442/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html [8.0K]

javadoc: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/442/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt [16K]
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/442/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_222.txt [1.1M]

unit: https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/442/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [316K]
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/442/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry.txt [12K]
    https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/442/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [20K]
[jira] [Created] (HADOOP-16564) S3Guard fsck: Add docs to the first iteration (S3->ddbMS, -verify)
Gabor Bota created HADOOP-16564: --- Summary: S3Guard fsck: Add docs to the first iteration (S3->ddbMS, -verify) Key: HADOOP-16564 URL: https://issues.apache.org/jira/browse/HADOOP-16564 Project: Hadoop Common Issue Type: Sub-task Reporter: Gabor Bota Followup for HADOOP-16423. Add md documentation and describe how to extend it with new violations.
[jira] [Created] (HADOOP-16563) S3Guard fsck: Detect if a directory is authoritative and highlight errors if detected in it
Gabor Bota created HADOOP-16563: --- Summary: S3Guard fsck: Detect if a directory is authoritative and highlight errors if detected in it Key: HADOOP-16563 URL: https://issues.apache.org/jira/browse/HADOOP-16563 Project: Hadoop Common Issue Type: Sub-task Reporter: Gabor Bota Followup from HADOOP-16423. One of the changes in the HADOOP-16430 PR is that we now have an S3A FS method boolean allowAuthoritative(final Path path) that takes a path and returns true iff it's authoritative, either because the MS is authoritative or because the given path is marked as one of the authoritative dirs. I think the validation of whether an authoritative directory is consistent between the metastore and S3 should use this method when it wants to highlight that an authoritative path is inconsistent. This can be a follow-on patch because, as usual, it will need more tests in the code and someone to try out the command line.
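The check described above can be sketched like this. It is a hypothetical stand-in: plain Strings stand in for Hadoop Path objects, the class name is invented, and the real allowAuthoritative logic in S3AFileSystem may differ in detail.

```java
import java.util.List;

// Hypothetical sketch of the allowAuthoritative(path) semantics
// described above: a path is authoritative iff the whole metadata
// store is authoritative, or the path falls under one of the
// configured authoritative directories.
class AuthoritativeCheck {
    private final boolean metadataStoreAuth;
    private final List<String> authDirs;  // e.g. ["/tables/static"]

    AuthoritativeCheck(boolean msAuth, List<String> dirs) {
        this.metadataStoreAuth = msAuth;
        this.authDirs = dirs;
    }

    boolean allowAuthoritative(String path) {
        if (metadataStoreAuth) {
            return true;  // MS is auth: every path qualifies
        }
        for (String dir : authDirs) {
            // Match the dir itself or any path strictly under it.
            if (path.equals(dir) || path.startsWith(dir + "/")) {
                return true;
            }
        }
        return false;
    }
}
```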
[jira] [Created] (HADOOP-16562) Update docker image to have 3.7.1 protoc executable
Vinayakumar B created HADOOP-16562: -- Summary: Update docker image to have 3.7.1 protoc executable Key: HADOOP-16562 URL: https://issues.apache.org/jira/browse/HADOOP-16562 Project: Hadoop Common Issue Type: Sub-task Reporter: Vinayakumar B Current docker image is installed with 2.5.0 protobuf executable. During the process of upgrading protobuf to 3.7.1, docker needs to have both versions for yetus to verify. -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-dev-h...@hadoop.apache.org
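A side-by-side install of the kind described above might look like the following Dockerfile fragment. This is a hypothetical sketch, not the actual Hadoop build-image patch: the paths, the source tarball, and the PROTOBUF_HOME variable are illustrative, and it assumes the image already has a C++ build toolchain (curl, gcc, make, etc.).

```dockerfile
# Hypothetical fragment: keep the existing protoc 2.5.0 on the default
# PATH and build 3.7.1 under its own prefix, so either binary can be
# selected during verification.
RUN mkdir -p /opt/protobuf-3.7.1-src \
    && curl -L -s -S \
         https://github.com/protocolbuffers/protobuf/releases/download/v3.7.1/protobuf-cpp-3.7.1.tar.gz \
         -o /opt/protobuf.tar.gz \
    && tar xzf /opt/protobuf.tar.gz --strip-components 1 -C /opt/protobuf-3.7.1-src \
    && cd /opt/protobuf-3.7.1-src \
    && ./configure --prefix=/opt/protobuf-3.7.1 \
    && make install \
    && rm -rf /opt/protobuf-3.7.1-src /opt/protobuf.tar.gz

# Builds that need 3.7.1 can point at this prefix explicitly.
ENV PROTOBUF_HOME /opt/protobuf-3.7.1
```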
Re: [VOTE] Release Apache Hadoop 3.2.1 - RC0
+1 > On Sep 11, 2019, at 3:26 PM, Rohith Sharma K S wrote: > > Hi folks, > > I have put together a release candidate (RC0) for Apache Hadoop 3.2.1. > > The RC is available at: > http://home.apache.org/~rohithsharmaks/hadoop-3.2.1-RC0/ > > The RC tag in git is release-3.2.1-RC0: > https://github.com/apache/hadoop/tree/release-3.2.1-RC0 > > > The maven artifacts are staged at > https://repository.apache.org/content/repositories/orgapachehadoop-1226/ > > You can find my public key at: > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS > > This vote will run for 7 days (5 weekdays), ending on 18th Sept at 11:59 pm > PST. > > I have done testing with a pseudo cluster and distributed shell job. My +1 > to start. > > Thanks & Regards > Rohith Sharma K S
[VOTE] Release Hadoop-3.1.3-RC0
Hi folks, Thanks to everyone's help on this release. Special thanks to Rohith, Wei-Chiu, Akira, Sunil, Wangda! I have created a release candidate (RC0) for Apache Hadoop 3.1.3. The RC release artifacts are available at: http://home.apache.org/~ztang/hadoop-3.1.3-RC0/ The maven artifacts are staged at: https://repository.apache.org/content/repositories/orgapachehadoop-1228/ The RC tag in git is here: https://github.com/apache/hadoop/tree/release-3.1.3-RC0 And my public key is at: https://dist.apache.org/repos/dist/release/hadoop/common/KEYS *This vote will run for 7 days, ending on Sept 19th at 11:59 pm PST.* For the testing, I have run several Spark and distributed shell jobs in my pseudo cluster. My +1 (non-binding) to start. BR, Zhankun On Wed, 4 Sep 2019 at 15:56, zhankun tang wrote: > Hi all, > > Thanks to everyone for helping resolve all the blockers targeting Hadoop > 3.1.3 [1]. We've cleaned all the blockers and moved the non-blocker issues > out to 3.1.4. > > I'll cut the branch today and call a release vote soon. Thanks! > > > [1]. https://s.apache.org/5hj5i > > BR, > Zhankun > > > On Wed, 21 Aug 2019 at 12:38, Zhankun Tang wrote: > >> Hi folks, >> >> We have Apache Hadoop 3.1.2 released on Feb 2019. >> >> More than 6 months have passed, and there are >> >> 246 fixes [1], plus 2 blocker and 4 critical issues [2] >> >> (As Wei-Chiu Chuang mentioned, HDFS-13596 will be another blocker) >> >> >> I propose my plan to do a maintenance release of 3.1.3 in the next few >> (one or two) weeks. >> >> Hadoop 3.1.3 release plan: >> >> Code Freezing Date: *25th August 2019 PDT* >> >> Release Date: *31st August 2019 PDT* >> >> >> Please feel free to share your insights on this. Thanks! >> >> >> [1] https://s.apache.org/zw8l5 >> >> [2] https://s.apache.org/fjol5 >> >> >> BR, >> >> Zhankun >> >
[jira] [Created] (HADOOP-16561) [MAPREDUCE] use protobuf-maven-plugin to generate protobuf classes
Vinayakumar B created HADOOP-16561: -- Summary: [MAPREDUCE] use protobuf-maven-plugin to generate protobuf classes Key: HADOOP-16561 URL: https://issues.apache.org/jira/browse/HADOOP-16561 Project: Hadoop Common Issue Type: Sub-task Reporter: Vinayakumar B Use "protoc-maven-plugin" to dynamically download protobuf executable to generate protobuf classes from proto file
[jira] [Created] (HADOOP-16560) [YARN] use protobuf-maven-plugin to generate protobuf classes
Vinayakumar B created HADOOP-16560: -- Summary: [YARN] use protobuf-maven-plugin to generate protobuf classes Key: HADOOP-16560 URL: https://issues.apache.org/jira/browse/HADOOP-16560 Project: Hadoop Common Issue Type: Sub-task Reporter: Vinayakumar B Use "protoc-maven-plugin" to dynamically download protobuf executable to generate protobuf classes from proto file
[jira] [Created] (HADOOP-16559) [HDFS] use protobuf-maven-plugin to generate protobuf classes
Vinayakumar B created HADOOP-16559: -- Summary: [HDFS] use protobuf-maven-plugin to generate protobuf classes Key: HADOOP-16559 URL: https://issues.apache.org/jira/browse/HADOOP-16559 Project: Hadoop Common Issue Type: Sub-task Reporter: Vinayakumar B Use "protoc-maven-plugin" to dynamically download protobuf executable to generate protobuf classes from proto file
[jira] [Created] (HADOOP-16558) [COMMON] use protobuf-maven-plugin to generate protobuf classes
Vinayakumar B created HADOOP-16558: -- Summary: [COMMON] use protobuf-maven-plugin to generate protobuf classes Key: HADOOP-16558 URL: https://issues.apache.org/jira/browse/HADOOP-16558 Project: Hadoop Common Issue Type: Sub-task Components: common Reporter: Vinayakumar B Use "protoc-maven-plugin" to dynamically download protobuf executable to generate protobuf classes from proto files.
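For illustration, dynamically downloading protoc through a Maven plugin typically looks like the fragment below. This is a hypothetical sketch using the org.xolstice protobuf-maven-plugin, which is one common choice and not necessarily what the Hadoop patch used; it also assumes the os-maven-plugin build extension is configured so that ${os.detected.classifier} resolves.

```xml
<!-- Hypothetical pom.xml fragment; plugin coordinates and version are
     illustrative, not taken from the actual Hadoop change. -->
<plugin>
  <groupId>org.xolstice.maven.plugins</groupId>
  <artifactId>protobuf-maven-plugin</artifactId>
  <version>0.5.1</version>
  <configuration>
    <!-- Downloads the matching protoc binary from Maven Central
         instead of requiring a locally installed executable. -->
    <protocArtifact>com.google.protobuf:protoc:${protobuf.version}:exe:${os.detected.classifier}</protocArtifact>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>compile</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```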
[jira] [Created] (HADOOP-16557) Upgrade protobuf.version to 3.7.1
Vinayakumar B created HADOOP-16557: -- Summary: Upgrade protobuf.version to 3.7.1 Key: HADOOP-16557 URL: https://issues.apache.org/jira/browse/HADOOP-16557 Project: Hadoop Common Issue Type: Sub-task Reporter: Vinayakumar B Bump up the "protobuf.version" to 3.7.1 and ensure everything compiles successfully.
[jira] [Created] (HADOOP-16556) Fix some LGTM alerts
Malcolm Taylor created HADOOP-16556: --- Summary: Fix some LGTM alerts Key: HADOOP-16556 URL: https://issues.apache.org/jira/browse/HADOOP-16556 Project: Hadoop Common Issue Type: Improvement Reporter: Malcolm Taylor LGTM analysis of Hadoop has raised some alerts ([https://lgtm.com/projects/g/apache/hadoop/?mode=tree]). This issue is to fix some of the more straightforward ones.