[jira] [Commented] (HADOOP-19100) Fix Spotbugs warnings in the build
[ https://issues.apache.org/jira/browse/HADOOP-19100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17823054#comment-17823054 ] Shilun Fan commented on HADOOP-19100: - [~ayushtkn] Thanks for initiating this JIRA! I will follow up by fixing some of the Spotbugs warnings in the YARN module.

> Fix Spotbugs warnings in the build
> --
>
> Key: HADOOP-19100
> URL: https://issues.apache.org/jira/browse/HADOOP-19100
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Ayush Saxena
> Priority: Major
>
> We are getting Spotbugs warnings in every PR.
> [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html]
> [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-hadoop-common-project-warnings.html]
> [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html]
> [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html]
> [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn-warnings.html]
> [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html]
> [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-hadoop-hdfs-project-warnings.html]
> [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications-warnings.html]
> [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services-warnings.html]
> [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html]
> [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-hadoop-yarn-project-warnings.html]
> [https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-root-warnings.html]
>
> Source:
> https://ci-hadoop.apache.org/view/Hadoop/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/console

-- This message was sent by Atlassian Jira (v8.20.10#820010)
- To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] Hadoop 18325: ABFS: Add correlated metric support for ABFS operations [hadoop]
anmolanmol1234 commented on code in PR #6314: URL: https://github.com/apache/hadoop/pull/6314#discussion_r1510702229

## hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
## @@ -1411,6 +1448,97 @@ protected AccessTokenProvider getTokenProvider() {
     return tokenProvider;
   }

+  public AzureBlobFileSystem getMetricFilesystem() throws IOException {
+    if (metricFs == null) {
+      try {
+        Configuration metricConfig = abfsConfiguration.getRawConfiguration();
+        String metricAccountKey = metricConfig.get(FS_AZURE_METRIC_ACCOUNT_KEY);
+        final String abfsMetricUrl = metricConfig.get(FS_AZURE_METRIC_URI);
+        if (abfsMetricUrl == null) {
+          return null;
+        }
+        metricConfig.set(FS_AZURE_ACCOUNT_KEY_PROPERTY_NAME, metricAccountKey);
+        metricConfig.set(AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION, "false");
+        URI metricUri;
+        metricUri = new URI(FileSystemUriSchemes.ABFS_SCHEME, abfsMetricUrl, null, null, null);
+        metricFs = (AzureBlobFileSystem) FileSystem.newInstance(metricUri, metricConfig);
+      } catch (AzureBlobFileSystemException | URISyntaxException ex) {
+        // do nothing
+      }
+    }
+    return metricFs;
+  }
+
+  private TracingContext getMetricTracingContext() {
+    String hostName;
+    try {
+      hostName = InetAddress.getLocalHost().getHostName();
+    } catch (UnknownHostException e) {
+      hostName = "UnknownHost";
+    }
+    return new TracingContext(TracingContext.validateClientCorrelationID(
+        abfsConfiguration.getClientCorrelationId()),
+        hostName, FSOperationType.GET_ATTR, true,
+        abfsConfiguration.getTracingHeaderFormat(),
+        null, abfsCounters.toString());
+  }
+
+  /**
+   * Synchronized method to suspend or resume timer.
+   * @param timerFunctionality resume or suspend.
+   * @param timerTask The timertask object.
+   * @return true or false.
+   */
+  synchronized boolean timerOrchestrator(TimerFunctionality timerFunctionality,

Review Comment: Got it, this design also looks good. Will include this change.
-- This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
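For reference, the metric filesystem URI in the hunk above is built purely from a scheme and an authority string via the five-argument java.net.URI constructor. A self-contained sketch of that construction (the account authority below is a made-up placeholder, and the literal "abfs" stands in for FileSystemUriSchemes.ABFS_SCHEME):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class MetricUriSketch {
    // Same five-argument java.net.URI constructor used in getMetricFilesystem:
    // (scheme, authority, path, query, fragment).
    static URI buildMetricUri(String authority) throws URISyntaxException {
        return new URI("abfs", authority, null, null, null);
    }

    public static void main(String[] args) throws URISyntaxException {
        // Hypothetical authority, standing in for the FS_AZURE_METRIC_URI value.
        URI metricUri = buildMetricUri("metrics@myaccount.dfs.core.windows.net");
        System.out.println(metricUri); // abfs://metrics@myaccount.dfs.core.windows.net
    }
}
```

Passing null for path, query, and fragment yields a bare scheme://authority URI, which is what FileSystem.newInstance needs to resolve the target account.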
[jira] [Commented] (HADOOP-19100) Fix Spotbugs warnings in the build
[ https://issues.apache.org/jira/browse/HADOOP-19100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17823050#comment-17823050 ] Ayush Saxena commented on HADOOP-19100: --- Not sure which commit introduced these, but we should fix or suppress them in the code, so as to get green builds.

> Fix Spotbugs warnings in the build
> --
>
> Key: HADOOP-19100
> URL: https://issues.apache.org/jira/browse/HADOOP-19100
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Ayush Saxena
> Priority: Major
>
> We are getting Spotbugs warnings in every PR.
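Of the two remediation paths mentioned above, suppressing a warning goes into a module's SpotBugs exclude filter file. A generic sketch of such a filter entry (the class name and bug pattern below are placeholders, not taken from the actual warning reports):

```xml
<FindBugsFilter>
  <!-- Placeholder: suppress one known-benign pattern for one class. -->
  <Match>
    <Class name="org.apache.hadoop.example.SomeClass"/>
    <Bug pattern="EI_EXPOSE_REP"/>
  </Match>
</FindBugsFilter>
```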
[jira] [Created] (HADOOP-19100) Fix Spotbugs warnings in the build
Ayush Saxena created HADOOP-19100:
- Summary: Fix Spotbugs warnings in the build
Key: HADOOP-19100
URL: https://issues.apache.org/jira/browse/HADOOP-19100
Project: Hadoop Common
Issue Type: Bug
Reporter: Ayush Saxena

We are getting spotbugs warnings in every PR.
[https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html]
[https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-hadoop-common-project-warnings.html]
[https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html]
[https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html]
[https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn-warnings.html]
[https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html]
[https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-hadoop-hdfs-project-warnings.html]
[https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications-warnings.html]
[https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services-warnings.html]
[https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html]
[https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-hadoop-yarn-project-warnings.html]
[https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/artifact/out/branch-spotbugs-root-warnings.html]

Source: https://ci-hadoop.apache.org/view/Hadoop/job/hadoop-qbt-trunk-java8-linux-x86_64/1517/console
Re: [PR] Hadoop 18325: ABFS: Add correlated metric support for ABFS operations [hadoop]
saxenapranav commented on code in PR #6314: URL: https://github.com/apache/hadoop/pull/6314#discussion_r1510679721

## hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
## @@ -1411,6 +1448,97 @@ protected AccessTokenProvider getTokenProvider() {
     return tokenProvider;
   }

+  public AzureBlobFileSystem getMetricFilesystem() throws IOException {
+    if (metricFs == null) {
+      try {
+        Configuration metricConfig = abfsConfiguration.getRawConfiguration();
+        String metricAccountKey = metricConfig.get(FS_AZURE_METRIC_ACCOUNT_KEY);
+        final String abfsMetricUrl = metricConfig.get(FS_AZURE_METRIC_URI);
+        if (abfsMetricUrl == null) {
+          return null;
+        }
+        metricConfig.set(FS_AZURE_ACCOUNT_KEY_PROPERTY_NAME, metricAccountKey);
+        metricConfig.set(AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION, "false");
+        URI metricUri;
+        metricUri = new URI(FileSystemUriSchemes.ABFS_SCHEME, abfsMetricUrl, null, null, null);
+        metricFs = (AzureBlobFileSystem) FileSystem.newInstance(metricUri, metricConfig);
+      } catch (AzureBlobFileSystemException | URISyntaxException ex) {
+        // do nothing
+      }
+    }
+    return metricFs;
+  }
+
+  private TracingContext getMetricTracingContext() {
+    String hostName;
+    try {
+      hostName = InetAddress.getLocalHost().getHostName();
+    } catch (UnknownHostException e) {
+      hostName = "UnknownHost";
+    }
+    return new TracingContext(TracingContext.validateClientCorrelationID(
+        abfsConfiguration.getClientCorrelationId()),
+        hostName, FSOperationType.GET_ATTR, true,
+        abfsConfiguration.getTracingHeaderFormat(),
+        null, abfsCounters.toString());
+  }
+
+  /**
+   * Synchronized method to suspend or resume timer.
+   * @param timerFunctionality resume or suspend.
+   * @param timerTask The timertask object.
+   * @return true or false.
+   */
+  synchronized boolean timerOrchestrator(TimerFunctionality timerFunctionality,

Review Comment: Design is good and doesn't need change.
What I am suggesting is: we do not make this method synchronized; instead, only the actions taken when the conditions are true are synchronized. This helps because the conditions are true only some of the time, yet with a synchronized method we always pay the synchronization cost even when there is no action to take. What I am proposing is:
```
boolean timerOrchestrator(TimerFunctionality timerFunctionality, TimerTask timerTask) {
  switch (timerFunctionality) {
    case RESUME:
      if (metricCollectionStopped.get()) {
        synchronized (this) {
          if (metricCollectionStopped.get()) {
            resumeTimer();
          }
        }
      }
      break;
    case SUSPEND:
      long now = System.currentTimeMillis();
      long lastExecutionTime = abfsCounters.getLastExecutionTime().get();
      if (metricCollectionEnabled && (now - lastExecutionTime >= metricAnalysisPeriod)) {
        synchronized (this) {
          timerTask.cancel();
          timer.purge();
          metricCollectionStopped.set(true);
          return true;
        }
      }
      break;
    default:
      break;
  }
  return false;
}
```
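The gain in the suggestion above is that the common case (nothing to do) never takes the lock: an unsynchronized check runs first, and only when it passes do we lock and re-check. A minimal, runnable sketch of that check-then-lock-then-recheck pattern, using a plain AtomicBoolean in place of metricCollectionStopped (the class and method names here are illustrative, not from the PR):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class DoubleCheckedResume {
    // Mirrors the role of metricCollectionStopped in the suggestion above.
    private final AtomicBoolean stopped = new AtomicBoolean(true);

    // Cheap unsynchronized check first; lock and re-check only when it passes,
    // so the common "nothing to do" path never contends on the monitor.
    boolean resumeIfStopped() {
        if (stopped.get()) {
            synchronized (this) {
                if (stopped.get()) {
                    stopped.set(false); // stand-in for resumeTimer()
                    return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        DoubleCheckedResume d = new DoubleCheckedResume();
        System.out.println(d.resumeIfStopped()); // true: first caller resumes
        System.out.println(d.resumeIfStopped()); // false: already resumed
    }
}
```

The re-check inside the synchronized block is what makes this safe: two threads may both pass the outer check, but only the first to acquire the lock performs the action.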
Re: [PR] Hadoop 18325: ABFS: Add correlated metric support for ABFS operations [hadoop]
saxenapranav commented on code in PR #6314: URL: https://github.com/apache/hadoop/pull/6314#discussion_r1510666424

## hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
## @@ -1411,6 +1448,97 @@ protected AccessTokenProvider getTokenProvider() {
     return tokenProvider;
   }

+  public AzureBlobFileSystem getMetricFilesystem() throws IOException {
+    if (metricFs == null) {
+      try {
+        Configuration metricConfig = abfsConfiguration.getRawConfiguration();
+        String metricAccountKey = metricConfig.get(FS_AZURE_METRIC_ACCOUNT_KEY);
+        final String abfsMetricUrl = metricConfig.get(FS_AZURE_METRIC_URI);
+        if (abfsMetricUrl == null) {
+          return null;
+        }
+        metricConfig.set(FS_AZURE_ACCOUNT_KEY_PROPERTY_NAME, metricAccountKey);
+        metricConfig.set(AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION, "false");
+        URI metricUri;
+        metricUri = new URI(FileSystemUriSchemes.ABFS_SCHEME, abfsMetricUrl, null, null, null);
+        metricFs = (AzureBlobFileSystem) FileSystem.newInstance(metricUri, metricConfig);

Review Comment: Was curious about this, so I tried https://github.com/saxenapranav/hadoop/commit/0cced3f0d92e9248072b0c18ec1e321d7250b81f. Chaining would happen, the reason being that the application has no knowledge of whether a metricFs has already been created for a given fs. So what happens is: when a metricFs is created, it creates a client object, which would again create a new metricFs (instance 2 at this point), and this chain grows infinitely. There is a further problem: we cannot control how many fileSystem objects an application wants to create for the same URL. So keeping a 1:1 relationship of metricFs to URI would not be the best option, the reason being that each fs object created by the app has different abfsCounter values. I would again recommend that we not have a fileSystem created out of the client. Instead, we just create a new method in the client that understands the metricUri; this method simply creates a restOperation with the metricUri and executes it.
[jira] [Resolved] (HADOOP-19099) Add Protobuf Compatibility Notes
[ https://issues.apache.org/jira/browse/HADOOP-19099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shilun Fan resolved HADOOP-19099.
- Fix Version/s: 3.4.0, 3.5.0
Hadoop Flags: Reviewed
Resolution: Fixed

> Add Protobuf Compatibility Notes
>
> Key: HADOOP-19099
> URL: https://issues.apache.org/jira/browse/HADOOP-19099
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: documentation
> Affects Versions: 3.4.0
> Reporter: Shilun Fan
> Assignee: Shilun Fan
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0, 3.5.0
>
> In HADOOP-18197, we upgraded the Protobuf in hadoop-thirdparty to version
> 3.21.12. This version may have compatibility issues with certain versions of
> JDK8. We will document this situation in the index.md file of hadoop-3.4.0
> and inform users that we will discontinue support for JDK8 in the future.
[jira] [Commented] (HADOOP-19099) Add Protobuf Compatibility Notes
[ https://issues.apache.org/jira/browse/HADOOP-19099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17823035#comment-17823035 ] ASF GitHub Bot commented on HADOOP-19099: - slfan1989 commented on PR #6607: URL: https://github.com/apache/hadoop/pull/6607#issuecomment-1975752920 @Hexiaoqiao Thanks for the review!

> Add Protobuf Compatibility Notes
>
> Key: HADOOP-19099
> URL: https://issues.apache.org/jira/browse/HADOOP-19099
Re: [PR] HADOOP-19099. Add Protobuf Compatibility Notes [hadoop]
slfan1989 commented on PR #6607: URL: https://github.com/apache/hadoop/pull/6607#issuecomment-1975752920 @Hexiaoqiao Thanks for the review!
Re: [PR] HADOOP-19099. Add Protobuf Compatibility Notes [hadoop]
slfan1989 merged PR #6607: URL: https://github.com/apache/hadoop/pull/6607
[jira] [Comment Edited] (HADOOP-19091) Add support for Tez to MagicS3GuardCommitter
[ https://issues.apache.org/jira/browse/HADOOP-19091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17823030#comment-17823030 ] Syed Shameerur Rahman edited comment on HADOOP-19091 at 3/4/24 4:31 AM: [~vnarayanan7] - Could you please share the complete error stacktrace ? As i could see from the code implementation, During commitJob operation, [listPendingUploadToCommit|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicS3GuardCommitter.java#L124] method is invoked which list all the files under the jobAttemptPath with a suffix `.pendingset`. So as per the logic, My understanding is that the individual file name under the jobAttemptPath should not be a concern here. was (Author: srahman): [~vnarayanan7] - Could you please share the complete error stacktrace ? As i could see from the code implementation, During commitJob operation, [listPendingUploadToCommit|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicS3GuardCommitter.java#L124] method is invoked which list all the files under the jobAttemptPath with a suffix `.pendingset`. If so what is the value returned by (getJobAttemptPath) What i understand from your comment is that, The `getJobAttemptPath` is not returning correct value (for Hive,Pig with Tez) and hence the commitJob is not able to read the commit metadata. Is my understanding correct ? 
> Add support for Tez to MagicS3GuardCommitter > > > Key: HADOOP-19091 > URL: https://issues.apache.org/jira/browse/HADOOP-19091 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Affects Versions: 3.3.6 > Environment: Pig 17/Hive 3.1.3 with Hadoop 3.3.3 on AWS EMR 6-12.0 >Reporter: Venkatasubrahmanian Narayanan >Assignee: Venkatasubrahmanian Narayanan >Priority: Major > Attachments: 0001-AWS-Hive-Changes.patch, > 0002-HIVE-27698-Backport-of-HIVE-22398-Remove-legacy-code.patch, > HADOOP-19091-HIVE-WIP.patch > > > The MagicS3GuardCommitter assumes that the JobID of the task is the same as > that of the job's application master when writing/reading the .pendingset > file. This assumption is not valid when running with Tez, which creates > slightly different JobIDs for tasks and the application master. > > While the MagicS3GuardCommitter is intended only for MRv2, it mostly works > fine with an MRv1 wrapper with Hive/Pig (with some minor changes to Hive) run > in MR mode. This issue only crops up when running queries with the Tez > execution engine. I can upload a patch to Hive 3.1 to reproduce this error on > EMR if needed. > > Fixing this will probably require work from both Tez and Hadoop, wanted to > start a discussion here so we can figure out how exactly we go about this. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-19091) Add support for Tez to MagicS3GuardCommitter
[ https://issues.apache.org/jira/browse/HADOOP-19091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17823030#comment-17823030 ] Syed Shameerur Rahman edited comment on HADOOP-19091 at 3/4/24 4:30 AM: [~vnarayanan7] - Could you please share the complete error stacktrace ? As i could see from the code implementation, During commitJob operation, [listPendingUploadToCommit|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicS3GuardCommitter.java#L124] method is invoked which list all the files under the jobAttemptPath with a suffix `.pendingset`. If so what is the value returned by (getJobAttemptPath) What i understand from your comment is that, The `getJobAttemptPath` is not returning correct value (for Hive,Pig with Tez) and hence the commitJob is not able to read the commit metadata. Is my understanding correct ? was (Author: srahman): [~vnarayanan7] - Could you please share the complete error stacktrace ? As i could see from the code implementation, During commitJob operation, [listPendingUploadToCommit|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicS3GuardCommitter.java#L124] method is invoked which list all the files under the jobAttemptPath with a suffix `.pendingset`. What i understand from your comment is that, The `getJobAttemptPath` is not returning correct value (for Hive,Pig with Tez) and hence the commitJob is not able to read the commit metadata. Is my understanding correct ? 
> Add support for Tez to MagicS3GuardCommitter > > > Key: HADOOP-19091 > URL: https://issues.apache.org/jira/browse/HADOOP-19091 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Affects Versions: 3.3.6 > Environment: Pig 17/Hive 3.1.3 with Hadoop 3.3.3 on AWS EMR 6-12.0 >Reporter: Venkatasubrahmanian Narayanan >Assignee: Venkatasubrahmanian Narayanan >Priority: Major > Attachments: 0001-AWS-Hive-Changes.patch, > 0002-HIVE-27698-Backport-of-HIVE-22398-Remove-legacy-code.patch, > HADOOP-19091-HIVE-WIP.patch > > > The MagicS3GuardCommitter assumes that the JobID of the task is the same as > that of the job's application master when writing/reading the .pendingset > file. This assumption is not valid when running with Tez, which creates > slightly different JobIDs for tasks and the application master. > > While the MagicS3GuardCommitter is intended only for MRv2, it mostly works > fine with an MRv1 wrapper with Hive/Pig (with some minor changes to Hive) run > in MR mode. This issue only crops up when running queries with the Tez > execution engine. I can upload a patch to Hive 3.1 to reproduce this error on > EMR if needed. > > Fixing this will probably require work from both Tez and Hadoop, wanted to > start a discussion here so we can figure out how exactly we go about this. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19091) Add support for Tez to MagicS3GuardCommitter
[ https://issues.apache.org/jira/browse/HADOOP-19091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17823030#comment-17823030 ] Syed Shameerur Rahman commented on HADOOP-19091:

[~vnarayanan7] - Could you please share the complete error stacktrace? As I can see from the code implementation, during the commitJob operation the [listPendingUploadToCommit|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicS3GuardCommitter.java#L124] method is invoked, which lists all the files under the jobAttemptPath with the suffix `.pendingset`. What I understand from your comment is that `getJobAttemptPath` is not returning the correct value (for Hive/Pig with Tez) and hence commitJob is not able to read the commit metadata. Is my understanding correct?

> Add support for Tez to MagicS3GuardCommitter
>
> Key: HADOOP-19091
> URL: https://issues.apache.org/jira/browse/HADOOP-19091
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/s3
> Affects Versions: 3.3.6
> Environment: Pig 17/Hive 3.1.3 with Hadoop 3.3.3 on AWS EMR 6-12.0
> Reporter: Venkatasubrahmanian Narayanan
> Assignee: Venkatasubrahmanian Narayanan
> Priority: Major
> Attachments: 0001-AWS-Hive-Changes.patch,
> 0002-HIVE-27698-Backport-of-HIVE-22398-Remove-legacy-code.patch,
> HADOOP-19091-HIVE-WIP.patch
>
> The MagicS3GuardCommitter assumes that the JobID of the task is the same as
> that of the job's application master when writing/reading the .pendingset
> file. This assumption is not valid when running with Tez, which creates
> slightly different JobIDs for tasks and the application master.
>
> While the MagicS3GuardCommitter is intended only for MRv2, it mostly works
> fine with an MRv1 wrapper with Hive/Pig (with some minor changes to Hive) run
> in MR mode. This issue only crops up when running queries with the Tez
> execution engine. I can upload a patch to Hive 3.1 to reproduce this error on
> EMR if needed.
>
> Fixing this will probably require work from both Tez and Hadoop, wanted to
> start a discussion here so we can figure out how exactly we go about this.
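The listPendingUploadToCommit behaviour discussed above (collect every file under the job attempt path ending in `.pendingset`, regardless of the base name and hence regardless of which JobID it embeds) can be sketched with plain java.nio in place of the Hadoop FileSystem API; the file names below are made up:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class PendingsetListing {
    // Collect every file under the job attempt directory whose name ends in
    // ".pendingset"; the base name is deliberately ignored, matching the
    // suffix-only filter described in the comment above.
    static List<Path> listPendingSets(Path jobAttemptDir) throws IOException {
        try (Stream<Path> entries = Files.list(jobAttemptDir)) {
            return entries
                .filter(p -> p.getFileName().toString().endsWith(".pendingset"))
                .sorted()
                .collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("job-attempt");
        Files.createFile(dir.resolve("task_0001.pendingset"));
        Files.createFile(dir.resolve("task_0002.pendingset"));
        Files.createFile(dir.resolve("_SUCCESS"));
        System.out.println(listPendingSets(dir).size()); // 2
    }
}
```

If the filter really is suffix-only, mismatched task JobIDs inside the file names would not affect which files are picked up, which is the point of the question in the comment.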
[jira] [Updated] (HADOOP-19060) Support hadoop client authentication through keytab configuration.
[ https://issues.apache.org/jira/browse/HADOOP-19060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhaobo Huang updated HADOOP-19060:
-- Description:

*Shield references to {{UserGroupInformation}} Class.*

The current HDFS client keytab authentication code is as follows:
{code:java}
Configuration conf = new Configuration();
conf.addResource(new Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
conf.addResource(new Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab("foo", "/var/krb5kdc/foo.keytab");
FileSystem fileSystem = FileSystem.get(conf);
FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
for (FileStatus status : fileStatus) {
  System.out.println(status.getPath());
}
{code}
This feature supports configuring keytab information in core-site.xml or hdfs-site.xml. The authentication code is as follows:
{code:java}
Configuration conf = new Configuration();
conf.addResource(new Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
conf.addResource(new Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
FileSystem fileSystem = FileSystem.get(conf);
FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
for (FileStatus status : fileStatus) {
  System.out.println(status.getPath());
}
{code}
The config of core-site.xml related to authentication is as follows:
{code:xml}
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.client.keytab.principal</name>
  <value>foo</value>
</property>
<property>
  <name>hadoop.client.keytab.file.path</name>
  <value>/var/krb5kdc/foo.keytab</value>
</property>
{code}

was: *Shield references to {{UserGroupInformation}} Class.* (the previous revision repeated the same two code samples and configuration as above verbatim)

> Support hadoop client authentication through keytab configuration.
> -- > > Key: HADOOP-19060 > URL: https://issues.apache.org/jira/browse/HADOOP-19060 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Zhaobo Huang >Assignee: Zhaobo Huang >Priority: Minor > Labels: pull-request-available > > *Shield references to {{UserGroupInformation}} Class.* > The current HDFS client keytab authentication code is as follows: > {code:java} > Configuration conf = new Configuration(); > conf.addResource(new > Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml")); > conf.addResource(new > Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml")); > UserGroupInformation.setConfiguration(conf); > UserGroupInformation.loginUserFromKeytab("foo", "/var/krb5kdc/foo.keytab"); > FileSystem fileSystem = FileSystem.get(conf); > FileStatus[] fileStatus = fileSystem.listStatus(new Path("/")); > for (FileStatus status : fileStatus) { > System.out.println(status.getPath()); > } {code} > This feature supports configuring keytab information in core-site.xml or hdfs > site.xml. The authentication code is as follows: > {code:java} > Configuration conf = new Configuration(); > conf.addResource(new > Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml")); > conf.addResource(new > Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml")); > FileSystem fileSystem = FileSystem.get(conf); > FileStatus[] fileStatus = fileSystem.listStatus(new Path("/")); > for (FileStatus status : fileStatus) { > System.out.println(status.getPath()); > } {code} > The
[jira] [Updated] (HADOOP-19060) Support hadoop client authentication through keytab configuration.
[ https://issues.apache.org/jira/browse/HADOOP-19060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhaobo Huang updated HADOOP-19060: -- Description: *Shield references to {{UserGroupInformation}} Class.* The current HDFS client keytab authentication code is as follows:
{code:java}
Configuration conf = new Configuration();
conf.addResource(new Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
conf.addResource(new Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab("foo", "/var/krb5kdc/foo.keytab");
FileSystem fileSystem = FileSystem.get(conf);
FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
for (FileStatus status : fileStatus) {
  System.out.println(status.getPath());
}
{code}
This feature supports configuring keytab information in core-site.xml or hdfs-site.xml. The authentication code is as follows:
{code:java}
Configuration conf = new Configuration();
conf.addResource(new Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
conf.addResource(new Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
FileSystem fileSystem = FileSystem.get(conf);
FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
for (FileStatus status : fileStatus) {
  System.out.println(status.getPath());
}
{code}
The config of core-site.xml related to authentication is as follows:
{code:xml}
<configuration>
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hadoop.client.keytab.principal</name>
    <value>foo</value>
  </property>
  <property>
    <name>hadoop.client.keytab.file.path</name>
    <value>/var/krb5kdc/foo.keytab</value>
  </property>
</configuration>
{code}
was: # Shield references to {{UserGroupInformation}} Class. # In the future, we can consider supporting KDC password authentication through a config file (password authentication may require encryption-related processing). Password authentication would avoid having to copy keytab files between hosts.
The current HDFS client keytab authentication code is as follows:
{code:java}
Configuration conf = new Configuration();
conf.addResource(new Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
conf.addResource(new Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab("foo", "/var/krb5kdc/foo.keytab");
FileSystem fileSystem = FileSystem.get(conf);
FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
for (FileStatus status : fileStatus) {
  System.out.println(status.getPath());
}
{code}
This feature supports configuring keytab information in core-site.xml or hdfs-site.xml. The authentication code is as follows:
{code:java}
Configuration conf = new Configuration();
conf.addResource(new Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
conf.addResource(new Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
FileSystem fileSystem = FileSystem.get(conf);
FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
for (FileStatus status : fileStatus) {
  System.out.println(status.getPath());
}
{code}
The config of core-site.xml related to authentication is as follows:
{code:xml}
<configuration>
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hadoop.client.keytab.principal</name>
    <value>foo</value>
  </property>
  <property>
    <name>hadoop.client.keytab.file.path</name>
    <value>/var/krb5kdc/foo.keytab</value>
  </property>
</configuration>
{code}
> Support hadoop client authentication through keytab configuration.
> -- > > Key: HADOOP-19060 > URL: https://issues.apache.org/jira/browse/HADOOP-19060 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Zhaobo Huang >Assignee: Zhaobo Huang >Priority: Minor > Labels: pull-request-available > > *Shield references to {{UserGroupInformation}} Class.* > > The current HDFS client keytab authentication code is as follows: > {code:java} > Configuration conf = new Configuration(); > conf.addResource(new > Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml")); > conf.addResource(new > Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml")); > UserGroupInformation.setConfiguration(conf); > UserGroupInformation.loginUserFromKeytab("foo", "/var/krb5kdc/foo.keytab"); > FileSystem fileSystem = FileSystem.get(conf); > FileStatus[] fileStatus = fileSystem.listStatus(new Path("/")); > for (FileStatus status : fileStatus) { > System.out.println(status.getPath()); > } {code} > This feature supports configuring keytab information in core-site.xml or hdfs > site.xml. The authentication code is as follows: > {code:java} > Configuration conf = new Configuration(); > conf.addResource(new > Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml")); > conf.addResource(new >
[jira] [Assigned] (HADOOP-19060) Support hadoop client authentication through keytab configuration.
[ https://issues.apache.org/jira/browse/HADOOP-19060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhaobo Huang reassigned HADOOP-19060: - Assignee: Zhaobo Huang > Support hadoop client authentication through keytab configuration. > -- > > Key: HADOOP-19060 > URL: https://issues.apache.org/jira/browse/HADOOP-19060 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Zhaobo Huang >Assignee: Zhaobo Huang >Priority: Minor > Labels: pull-request-available > > # Shield references to {{UserGroupInformation}} Class. > # In the future, we can consider supporting KDC password authentication > through config file (password authentication may require encryption related > processing). After password authentication, it can avoid the mutual > transmission of keytab file. > > The current HDFS client keytab authentication code is as follows: > {code:java} > Configuration conf = new Configuration(); > conf.addResource(new > Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml")); > conf.addResource(new > Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml")); > UserGroupInformation.setConfiguration(conf); > UserGroupInformation.loginUserFromKeytab("foo", "/var/krb5kdc/foo.keytab"); > FileSystem fileSystem = FileSystem.get(conf); > FileStatus[] fileStatus = fileSystem.listStatus(new Path("/")); > for (FileStatus status : fileStatus) { > System.out.println(status.getPath()); > } {code} > This feature supports configuring keytab information in core-site.xml or hdfs > site.xml. 
> The authentication code is as follows:
> {code:java}
> Configuration conf = new Configuration();
> conf.addResource(new Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
> conf.addResource(new Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
> FileSystem fileSystem = FileSystem.get(conf);
> FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
> for (FileStatus status : fileStatus) {
>   System.out.println(status.getPath());
> }
> {code}
> The config of core-site.xml related to authentication is as follows:
> {code:xml}
> <configuration>
>   <property>
>     <name>hadoop.security.authentication</name>
>     <value>kerberos</value>
>   </property>
>   <property>
>     <name>hadoop.client.keytab.principal</name>
>     <value>foo</value>
>   </property>
>   <property>
>     <name>hadoop.client.keytab.file.path</name>
>     <value>/var/krb5kdc/foo.keytab</value>
>   </property>
> </configuration>
> {code}
-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
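The feature described in HADOOP-19060 boils down to a lookup-then-login decision: read the principal and keytab path from the client configuration and log in from the keytab only when both are present. Below is a minimal, Hadoop-free sketch of that decision; the two property names come from the issue description, while `KeytabConfigLogin` and `maybeLoginFromConfig` are illustrative stand-ins (a real client would call `UserGroupInformation.loginUserFromKeytab` where noted), not the actual patch.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the proposed lookup-then-login decision. The property names
// come from the HADOOP-19060 description; the class and method names
// here are illustrative, not Hadoop API.
public class KeytabConfigLogin {
  static final String PRINCIPAL_KEY = "hadoop.client.keytab.principal";
  static final String KEYTAB_KEY = "hadoop.client.keytab.file.path";

  /**
   * Returns the principal that would be logged in from configuration,
   * or null when the keytab properties are absent (in which case the
   * caller must invoke UserGroupInformation.loginUserFromKeytab itself).
   */
  static String maybeLoginFromConfig(Map<String, String> conf) {
    String principal = conf.get(PRINCIPAL_KEY);
    String keytab = conf.get(KEYTAB_KEY);
    if (principal == null || keytab == null) {
      return null;
    }
    // A real client would call:
    //   UserGroupInformation.loginUserFromKeytab(principal, keytab);
    return principal;
  }

  public static void main(String[] args) {
    Map<String, String> conf = new HashMap<>();
    conf.put(PRINCIPAL_KEY, "foo");
    conf.put(KEYTAB_KEY, "/var/krb5kdc/foo.keytab");
    System.out.println(maybeLoginFromConfig(conf)); // prints foo
  }
}
```

With both properties set, the sketch resolves the principal "foo"; with either one missing it returns null, mirroring the current behaviour where the caller must perform the explicit UGI login shown in the first code block of the issue.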
[jira] [Commented] (HADOOP-19060) Support hadoop client authentication through keytab configuration.
[ https://issues.apache.org/jira/browse/HADOOP-19060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17823026#comment-17823026 ] ASF GitHub Bot commented on HADOOP-19060: - huangzhaobo99 commented on PR #6516: URL: https://github.com/apache/hadoop/pull/6516#issuecomment-1975640822 Hi @tasanuma Can you help me review the code? Thank you. > Support hadoop client authentication through keytab configuration. > -- > > Key: HADOOP-19060 > URL: https://issues.apache.org/jira/browse/HADOOP-19060 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Zhaobo Huang >Priority: Minor > Labels: pull-request-available > > # Shield references to {{UserGroupInformation}} Class. > # In the future, we can consider supporting KDC password authentication > through config file (password authentication may require encryption related > processing). After password authentication, it can avoid the mutual > transmission of keytab file. > > The current HDFS client keytab authentication code is as follows: > {code:java} > Configuration conf = new Configuration(); > conf.addResource(new > Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml")); > conf.addResource(new > Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml")); > UserGroupInformation.setConfiguration(conf); > UserGroupInformation.loginUserFromKeytab("foo", "/var/krb5kdc/foo.keytab"); > FileSystem fileSystem = FileSystem.get(conf); > FileStatus[] fileStatus = fileSystem.listStatus(new Path("/")); > for (FileStatus status : fileStatus) { > System.out.println(status.getPath()); > } {code} > This feature supports configuring keytab information in core-site.xml or hdfs > site.xml. 
> The authentication code is as follows:
> {code:java}
> Configuration conf = new Configuration();
> conf.addResource(new Path("/usr/local/service/hadoop/etc/hadoop/hdfs-site.xml"));
> conf.addResource(new Path("/usr/local/service/hadoop/etc/hadoop/core-site.xml"));
> FileSystem fileSystem = FileSystem.get(conf);
> FileStatus[] fileStatus = fileSystem.listStatus(new Path("/"));
> for (FileStatus status : fileStatus) {
>   System.out.println(status.getPath());
> }
> {code}
> The config of core-site.xml related to authentication is as follows:
> {code:xml}
> <configuration>
>   <property>
>     <name>hadoop.security.authentication</name>
>     <value>kerberos</value>
>   </property>
>   <property>
>     <name>hadoop.client.keytab.principal</name>
>     <value>foo</value>
>   </property>
>   <property>
>     <name>hadoop.client.keytab.file.path</name>
>     <value>/var/krb5kdc/foo.keytab</value>
>   </property>
> </configuration>
> {code}
Re: [PR] HADOOP-19060. Support hadoop client authentication through keytab configuration. [hadoop]
huangzhaobo99 commented on PR #6516: URL: https://github.com/apache/hadoop/pull/6516#issuecomment-1975640822 Hi @tasanuma Can you help me review the code? Thank you. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HDFS-17387. [FGL] Abstract the configuration locking mode [hadoop]
ferhui commented on PR #6572: URL: https://github.com/apache/hadoop/pull/6572#issuecomment-1975637977 seems the checkstyle issue still exists.
Re: [PR] HDFS-17299. Adding rack failure tolerance when creating a new file [hadoop]
hadoop-yetus commented on PR #6566: URL: https://github.com/apache/hadoop/pull/6566#issuecomment-1975518985 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 17m 19s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 56s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 36m 26s | | trunk passed | | +1 :green_heart: | compile | 6m 52s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | compile | 6m 51s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 48s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 33s | | trunk passed | | +1 :green_heart: | javadoc | 2m 15s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javadoc | 2m 20s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | -1 :x: | spotbugs | 2m 41s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6566/14/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html) | hadoop-hdfs-project/hadoop-hdfs-client in trunk has 1 extant spotbugs warnings. | | +1 :green_heart: | shadedclient | 40m 36s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 40m 59s | | Used diff version of patch file. Binary files and potentially other changes not applied. 
Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 31s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 59s | | the patch passed | | +1 :green_heart: | compile | 5m 59s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javac | 5m 59s | | the patch passed | | +1 :green_heart: | compile | 5m 38s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 5m 38s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 18s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6566/14/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 5 new + 244 unchanged - 2 fixed = 249 total (was 246) | | +1 :green_heart: | mvnsite | 2m 5s | | the patch passed | | +1 :green_heart: | javadoc | 1m 33s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javadoc | 2m 5s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 5m 59s | | the patch passed | | +1 :green_heart: | shadedclient | 40m 29s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 22s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 249m 35s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6566/14/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. 
| | | | 459m 20s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestRollingUpgrade | | | hadoop.hdfs.protocol.TestBlockListAsLongs | | | hadoop.hdfs.tools.TestDFSAdmin | | | hadoop.hdfs.server.datanode.TestLargeBlockReport | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6566/14/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6566 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell
Re: [PR] HDFS-17299. Adding rack failure tolerance when creating a new file [hadoop]
ritegarg commented on code in PR #6566: URL: https://github.com/apache/hadoop/pull/6566#discussion_r1510344496

## hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java ##

@@ -414,6 +414,10 @@ synchronized void markFirstNodeIfNotMarked() {
 }

 synchronized void adjustState4RestartingNode() {
+  if (restartingNodeIndex == -1) {
+    return;
+  }
+

Review Comment: Thanks for your commit, resolving this
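The `restartingNodeIndex == -1` check added in the hunk above is a sentinel guard: -1 conventionally means "no node is currently restarting", so the adjustment logic should return early instead of indexing into the pipeline with an invalid position. A self-contained illustration of the pattern follows; `PipelineSketch` and its members are illustrative stand-ins, not the actual `DataStreamer` internals.

```java
// Early-return sentinel guard, as in the review hunk above.
// "PipelineSketch" and its members are illustrative, not Hadoop API.
public class PipelineSketch {
  private final String[] nodes = {"dn1", "dn2", "dn3"};
  private int restartingNodeIndex = -1; // -1 means "no node is restarting"

  void markRestarting(int i) {
    restartingNodeIndex = i;
  }

  /** Returns the restarting node, or null when none is marked. */
  String adjustStateForRestartingNode() {
    if (restartingNodeIndex == -1) {
      return null; // without this guard, nodes[-1] below would throw
    }
    return nodes[restartingNodeIndex];
  }

  public static void main(String[] args) {
    PipelineSketch p = new PipelineSketch();
    System.out.println(p.adjustStateForRestartingNode()); // null: nothing marked yet
    p.markRestarting(1);
    System.out.println(p.adjustStateForRestartingNode()); // dn2
  }
}
```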
Re: [PR] HDFS-17299. Adding rack failure tolerance when creating a new file [hadoop]
ritegarg commented on code in PR #6566: URL: https://github.com/apache/hadoop/pull/6566#discussion_r1510344410

## hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripedDataStreamer.java ##

@@ -111,6 +111,7 @@ protected LocatedBlock nextBlockOutputStream() throws IOException {
 final DatanodeInfo badNode = nodes[getErrorState().getBadNodeIndex()];
 LOG.warn("Excluding datanode " + badNode);
 excludedNodes.put(badNode, badNode);
+setPipeline(null, null, null);

Review Comment: Updated
Re: [PR] YARN-11626. Optimize ResourceManager's operations on Zookeeper metadata [hadoop]
hadoop-yetus commented on PR #6577: URL: https://github.com/apache/hadoop/pull/6577#issuecomment-1975171815 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 3m 50s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ branch-3.3 Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 58s | | branch-3.3 passed | | +1 :green_heart: | compile | 0m 35s | | branch-3.3 passed | | +1 :green_heart: | checkstyle | 0m 28s | | branch-3.3 passed | | +1 :green_heart: | mvnsite | 0m 41s | | branch-3.3 passed | | +1 :green_heart: | javadoc | 0m 32s | | branch-3.3 passed | | +1 :green_heart: | spotbugs | 1m 20s | | branch-3.3 passed | | +1 :green_heart: | shadedclient | 21m 51s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 36s | | the patch passed | | +1 :green_heart: | compile | 0m 31s | | the patch passed | | +1 :green_heart: | javac | 0m 31s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 17s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6577/9/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 2 new + 5 unchanged - 0 fixed = 7 total (was 5) | | +1 :green_heart: | mvnsite | 0m 33s | | the patch passed | | +1 :green_heart: | javadoc | 0m 22s | | the patch passed | | +1 :green_heart: | spotbugs | 1m 16s | | the patch passed | | +1 :green_heart: | shadedclient | 21m 21s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 81m 3s | | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | asflicense | 0m 25s | | The patch does not generate ASF License warnings. | | | | 170m 43s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6577/9/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6577 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle | | uname | Linux ebf28a942b8c 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-3.3 / 97f7996875d5ca1f8ba369dcb54ed86e2429ad4c | | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~18.04-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6577/9/testReport/ | | Max. process+thread count | 926 (vs. 
ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6577/9/console | | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HADOOP-19099) Add Protobuf Compatibility Notes
[ https://issues.apache.org/jira/browse/HADOOP-19099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17822938#comment-17822938 ] ASF GitHub Bot commented on HADOOP-19099: - hadoop-yetus commented on PR #6607: URL: https://github.com/apache/hadoop/pull/6607#issuecomment-1975168899 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 18m 16s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | _ branch-3.4 Compile Tests _ | | +1 :green_heart: | mvninstall | 48m 18s | | branch-3.4 passed | | +1 :green_heart: | mvnsite | 0m 26s | | branch-3.4 passed | | +1 :green_heart: | shadedclient | 86m 42s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 15s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 0m 16s | | the patch passed | | +1 :green_heart: | shadedclient | 37m 47s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 37s | | The patch does not generate ASF License warnings. 
| | | | 148m 12s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6607/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6607 | | Optional Tests | dupname asflicense mvnsite codespell detsecrets | | uname | Linux 91649e547e73 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-3.4 / dd830699c1ad4b71a79fbce6a1e87533eb3c139c | | Max. process+thread count | 540 (vs. ulimit of 5500) | | modules | C: hadoop-project U: hadoop-project | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6607/1/console | | versions | git=2.25.1 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. > Add Protobuf Compatibility Notes > > > Key: HADOOP-19099 > URL: https://issues.apache.org/jira/browse/HADOOP-19099 > Project: Hadoop Common > Issue Type: Sub-task > Components: documentation >Affects Versions: 3.4.0 >Reporter: Shilun Fan >Assignee: Shilun Fan >Priority: Major > Labels: pull-request-available > > In HADOOP-18197, we upgraded the Protobuf in hadoop-thirdparty to version > 3.21.12. This version may have compatibility issues with certain versions of > JDK8. We will document this situation in the index.md file of hadoop-3.4.0 > and inform users that we will discontinue support for JDK8 in the future. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-19099. Add Protobuf Compatibility Notes [hadoop]
hadoop-yetus commented on PR #6607: URL: https://github.com/apache/hadoop/pull/6607#issuecomment-1975168899 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 18m 16s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | _ branch-3.4 Compile Tests _ | | +1 :green_heart: | mvninstall | 48m 18s | | branch-3.4 passed | | +1 :green_heart: | mvnsite | 0m 26s | | branch-3.4 passed | | +1 :green_heart: | shadedclient | 86m 42s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 15s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 0m 16s | | the patch passed | | +1 :green_heart: | shadedclient | 37m 47s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 37s | | The patch does not generate ASF License warnings. | | | | 148m 12s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6607/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6607 | | Optional Tests | dupname asflicense mvnsite codespell detsecrets | | uname | Linux 91649e547e73 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-3.4 / dd830699c1ad4b71a79fbce6a1e87533eb3c139c | | Max. process+thread count | 540 (vs. 
ulimit of 5500) | | modules | C: hadoop-project U: hadoop-project | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6607/1/console | | versions | git=2.25.1 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HADOOP-19099) Add Protobuf Compatibility Notes
[ https://issues.apache.org/jira/browse/HADOOP-19099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17822915#comment-17822915 ] ASF GitHub Bot commented on HADOOP-19099: - slfan1989 opened a new pull request, #6607: URL: https://github.com/apache/hadoop/pull/6607

### Description of PR
JIRA: HADOOP-19099. Add Protobuf Compatibility Notes.

### How was this patch tested?

### For code changes:
- [ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?

> Add Protobuf Compatibility Notes > > > Key: HADOOP-19099 > URL: https://issues.apache.org/jira/browse/HADOOP-19099 > Project: Hadoop Common > Issue Type: Sub-task > Components: documentation >Affects Versions: 3.4.0 >Reporter: Shilun Fan >Assignee: Shilun Fan >Priority: Major > > In HADOOP-18197, we upgraded the Protobuf in hadoop-thirdparty to version > 3.21.12. This version may have compatibility issues with certain versions of > JDK8. We will document this situation in the index.md file of hadoop-3.4.0 > and inform users that we will discontinue support for JDK8 in the future.
[jira] [Updated] (HADOOP-19099) Add Protobuf Compatibility Notes
[ https://issues.apache.org/jira/browse/HADOOP-19099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-19099: Labels: pull-request-available (was: ) > Add Protobuf Compatibility Notes > > > Key: HADOOP-19099 > URL: https://issues.apache.org/jira/browse/HADOOP-19099 > Project: Hadoop Common > Issue Type: Sub-task > Components: documentation >Affects Versions: 3.4.0 >Reporter: Shilun Fan >Assignee: Shilun Fan >Priority: Major > Labels: pull-request-available > > In HADOOP-18197, we upgraded the Protobuf in hadoop-thirdparty to version > 3.21.12. This version may have compatibility issues with certain versions of > JDK8. We will document this situation in the index.md file of hadoop-3.4.0 > and inform users that we will discontinue support for JDK8 in the future.
[PR] HADOOP-19099. Add Protobuf Compatibility Notes [hadoop]
slfan1989 opened a new pull request, #6607:
URL: https://github.com/apache/hadoop/pull/6607

### Description of PR

JIRA: HADOOP-19099. Add Protobuf Compatibility Notes.

### How was this patch tested?

### For code changes:

- [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] Testing without restart to check unit tests [hadoop]
hadoop-yetus commented on PR #6605:
URL: https://github.com/apache/hadoop/pull/6605#issuecomment-1975124763

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 21s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 14m 27s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 19m 44s | | trunk passed |
| +1 :green_heart: | compile | 2m 56s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | compile | 2m 51s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 46s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 20s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 9s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 1m 38s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| -1 :x: | spotbugs | 1m 25s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6605/4/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html) | hadoop-hdfs-project/hadoop-hdfs-client in trunk has 1 extant spotbugs warning. |
| +1 :green_heart: | shadedclient | 20m 31s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 20m 44s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 20s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 4s | | the patch passed |
| +1 :green_heart: | compile | 2m 50s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javac | 2m 50s | | the patch passed |
| +1 :green_heart: | compile | 2m 44s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 2m 44s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 37s | [/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6605/4/artifact/out/results-checkstyle-hadoop-hdfs-project.txt) | hadoop-hdfs-project: The patch generated 4 new + 244 unchanged - 2 fixed = 248 total (was 246) |
| +1 :green_heart: | mvnsite | 1m 10s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 51s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 1m 24s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 14s | | the patch passed |
| +1 :green_heart: | shadedclient | 20m 31s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 49s | | hadoop-hdfs-client in the patch passed. |
| -1 :x: | unit | 199m 46s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6605/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. |
| +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. |
| | | | 306m 58s | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.protocol.TestBlockListAsLongs |
| | hadoop.hdfs.server.datanode.TestLargeBlockReport |
| | hadoop.hdfs.tools.TestDFSAdmin |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6605/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6605 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux f52a5fc5f851
[jira] [Commented] (HADOOP-15984) Update jersey from 1.19 to 2.x
[ https://issues.apache.org/jira/browse/HADOOP-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17822914#comment-17822914 ]

ASF GitHub Bot commented on HADOOP-15984:
-----------------------------------------

hadoop-yetus commented on PR #6606:
URL: https://github.com/apache/hadoop/pull/6606#issuecomment-1975123133

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 0s | | Docker mode activated. |
| -1 :x: | patch | 0m 22s | | https://github.com/apache/hadoop/pull/6606 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help. |

| Subsystem | Report/Notes |
|----------:|:-------------|
| GITHUB PR | https://github.com/apache/hadoop/pull/6606 |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6606/2/console |
| versions | git=2.34.1 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

> Update jersey from 1.19 to 2.x
> --
>
> Key: HADOOP-15984
> URL: https://issues.apache.org/jira/browse/HADOOP-15984
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Akira Ajisaka
> Priority: Major
> Labels: pull-request-available
> Time Spent: 2h 10m
> Remaining Estimate: 0h
>
> jersey-json 1.19 depends on Jackson 1.9.2. Let's upgrade.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-15984. Update jersey from 1.19 to 2.x [hadoop]
hadoop-yetus commented on PR #6606:
URL: https://github.com/apache/hadoop/pull/6606#issuecomment-1975123133

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 0s | | Docker mode activated. |
| -1 :x: | patch | 0m 22s | | https://github.com/apache/hadoop/pull/6606 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help. |

| Subsystem | Report/Notes |
|----------:|:-------------|
| GITHUB PR | https://github.com/apache/hadoop/pull/6606 |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6606/2/console |
| versions | git=2.34.1 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-19099) Add Protobuf Compatibility Notes
Shilun Fan created HADOOP-19099: --- Summary: Add Protobuf Compatibility Notes Key: HADOOP-19099 URL: https://issues.apache.org/jira/browse/HADOOP-19099 Project: Hadoop Common Issue Type: Sub-task Components: documentation Affects Versions: 3.4.0 Reporter: Shilun Fan Assignee: Shilun Fan In HADOOP-18197, we upgraded the Protobuf in hadoop-thirdparty to version 3.21.12. This version may have compatibility issues with certain versions of JDK8. We will document this situation in the index.md file of hadoop-3.4.0 and inform users that we will discontinue support for JDK8 in the future. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17302) Upgrade to jQuery 3.5.1 in hadoop-sls
[ https://issues.apache.org/jira/browse/HADOOP-17302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shilun Fan updated HADOOP-17302: Component/s: build common Target Version/s: 3.3.1, 3.4.0 Affects Version/s: 3.3.1 3.4.0 > Upgrade to jQuery 3.5.1 in hadoop-sls > - > > Key: HADOOP-17302 > URL: https://issues.apache.org/jira/browse/HADOOP-17302 > Project: Hadoop Common > Issue Type: Improvement > Components: build, common >Affects Versions: 3.3.1, 3.4.0 >Reporter: Aryan Gupta >Assignee: Aryan Gupta >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > jQuery is upgraded from 3.3.1 to 3.5.1 at > hadoop/hadoop-tools/hadoop-sls/src/main/html/js/thirdparty -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-19084) prune dependency exports of hadoop-* modules
[ https://issues.apache.org/jira/browse/HADOOP-19084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shilun Fan resolved HADOOP-19084.
---------------------------------
Hadoop Flags: Reviewed
Resolution: Fixed

> prune dependency exports of hadoop-* modules
>
>
> Key: HADOOP-19084
> URL: https://issues.apache.org/jira/browse/HADOOP-19084
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: build
> Affects Versions: 3.4.0, 3.5.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Blocker
> Labels: pull-request-available
> Fix For: 3.4.0, 3.5.0, 3.4.1
>
> this is probably caused by HADOOP-18613:
> ZK is pulling in some extra transitive stuff which surfaces in applications which import hadoop-common into their poms. It doesn't seem to show up in our distro, but downstream you get warnings about duplicate logging stuff
> {code}
> | +- org.apache.zookeeper:zookeeper:jar:3.8.3:compile
> | |  +- org.apache.zookeeper:zookeeper-jute:jar:3.8.3:compile
> | |  |  \- (org.apache.yetus:audience-annotations:jar:0.12.0:compile - omitted for duplicate)
> | |  +- org.apache.yetus:audience-annotations:jar:0.12.0:compile
> | |  +- (io.netty:netty-handler:jar:4.1.94.Final:compile - omitted for conflict with 4.1.100.Final)
> | |  +- (io.netty:netty-transport-native-epoll:jar:4.1.94.Final:compile - omitted for conflict with 4.1.100.Final)
> | |  +- (org.slf4j:slf4j-api:jar:1.7.30:compile - omitted for duplicate)
> | |  +- ch.qos.logback:logback-core:jar:1.2.10:compile
> | |  +- ch.qos.logback:logback-classic:jar:1.2.10:compile
> | |  |  +- (ch.qos.logback:logback-core:jar:1.2.10:compile - omitted for duplicate)
> | |  |  \- (org.slf4j:slf4j-api:jar:1.7.32:compile - omitted for conflict with 1.7.30)
> | |  \- (commons-io:commons-io:jar:2.11.0:compile - omitted for conflict with 2.14.0)
> {code}
> proposed: exclude the zk dependencies we either override ourselves or don't need.
-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
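The fix proposed in HADOOP-19084 above — excluding the ZooKeeper transitives that Hadoop either overrides itself or does not need — would look roughly like the following POM fragment. This is a sketch only, not the committed patch: the group/artifact ids are taken from the dependency tree quoted in the issue, but the exact exclusion set and the version property used by the real change may differ.

```xml
<!-- Sketch: stop ZooKeeper's logback/commons-io transitives from leaking
     into hadoop-common's exported dependencies. Exclusion list is
     illustrative; see the HADOOP-19084 patch for the authoritative set. -->
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <exclusions>
    <exclusion>
      <groupId>ch.qos.logback</groupId>
      <artifactId>logback-core</artifactId>
    </exclusion>
    <exclusion>
      <groupId>ch.qos.logback</groupId>
      <artifactId>logback-classic</artifactId>
    </exclusion>
    <exclusion>
      <groupId>commons-io</groupId>
      <artifactId>commons-io</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Downstream projects importing hadoop-common can verify the effect with `mvn dependency:tree -Dverbose`, the same command that produced the tree quoted in the issue.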
[jira] [Updated] (HADOOP-19084) prune dependency exports of hadoop-* modules
[ https://issues.apache.org/jira/browse/HADOOP-19084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shilun Fan updated HADOOP-19084:
--------------------------------
Fix Version/s: 3.4.0

> prune dependency exports of hadoop-* modules
>
>
> Key: HADOOP-19084
> URL: https://issues.apache.org/jira/browse/HADOOP-19084
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: build
> Affects Versions: 3.4.0, 3.5.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Blocker
> Labels: pull-request-available
> Fix For: 3.4.0, 3.5.0, 3.4.1
>
> this is probably caused by HADOOP-18613:
> ZK is pulling in some extra transitive stuff which surfaces in applications which import hadoop-common into their poms. It doesn't seem to show up in our distro, but downstream you get warnings about duplicate logging stuff
> {code}
> | +- org.apache.zookeeper:zookeeper:jar:3.8.3:compile
> | |  +- org.apache.zookeeper:zookeeper-jute:jar:3.8.3:compile
> | |  |  \- (org.apache.yetus:audience-annotations:jar:0.12.0:compile - omitted for duplicate)
> | |  +- org.apache.yetus:audience-annotations:jar:0.12.0:compile
> | |  +- (io.netty:netty-handler:jar:4.1.94.Final:compile - omitted for conflict with 4.1.100.Final)
> | |  +- (io.netty:netty-transport-native-epoll:jar:4.1.94.Final:compile - omitted for conflict with 4.1.100.Final)
> | |  +- (org.slf4j:slf4j-api:jar:1.7.30:compile - omitted for duplicate)
> | |  +- ch.qos.logback:logback-core:jar:1.2.10:compile
> | |  +- ch.qos.logback:logback-classic:jar:1.2.10:compile
> | |  |  +- (ch.qos.logback:logback-core:jar:1.2.10:compile - omitted for duplicate)
> | |  |  \- (org.slf4j:slf4j-api:jar:1.7.32:compile - omitted for conflict with 1.7.30)
> | |  \- (commons-io:commons-io:jar:2.11.0:compile - omitted for conflict with 2.14.0)
> {code}
> proposed: exclude the zk dependencies we either override ourselves or don't need.
-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18088) Replace log4j 1.x with reload4j
[ https://issues.apache.org/jira/browse/HADOOP-18088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shilun Fan updated HADOOP-18088: Target Version/s: 3.3.3, 3.3.5, 3.2.3, 2.10.2, 3.4.0 (was: 2.10.2, 3.2.3, 3.3.5, 3.3.3) > Replace log4j 1.x with reload4j > --- > > Key: HADOOP-18088 > URL: https://issues.apache.org/jira/browse/HADOOP-18088 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 2.10.2, 3.2.4, 3.3.3, 3.5.0, 3.4.1 > > Time Spent: 8h > Remaining Estimate: 0h > > As proposed in the dev mailing list > (https://lists.apache.org/thread/fdzkv80mzkf3w74z9120l0k0rc3v7kqk) let's > replace log4j 1 with reload4j in the maintenance releases (i.e. 3.3.x, 3.2.x > and 2.10.x) -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
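The log4j-1-to-reload4j swap tracked by HADOOP-18088 is, in Maven terms, a coordinate replacement: drop `log4j:log4j` and pick up its drop-in fork plus the matching SLF4J binding. The fragment below is a minimal sketch of that idea; the artifact coordinates (`ch.qos.reload4j:reload4j`, `org.slf4j:slf4j-reload4j`) are the real ones, but the version numbers are illustrative and are not necessarily those pinned by the Hadoop patch.

```xml
<!-- Sketch of the reload4j swap: reload4j is API-compatible with log4j 1.2,
     so it replaces log4j:log4j one-for-one. Versions are illustrative. -->
<dependency>
  <groupId>ch.qos.reload4j</groupId>
  <artifactId>reload4j</artifactId>
  <version>1.2.22</version>
</dependency>
<!-- SLF4J binding that routes through reload4j instead of log4j 1.x -->
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-reload4j</artifactId>
  <version>1.7.36</version>
</dependency>
```

Any remaining `log4j:log4j` transitives would additionally need `<exclusion>` entries on the dependencies that drag them in.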
[jira] [Updated] (HADOOP-18088) Replace log4j 1.x with reload4j
[ https://issues.apache.org/jira/browse/HADOOP-18088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shilun Fan updated HADOOP-18088: Fix Version/s: 3.4.0 > Replace log4j 1.x with reload4j > --- > > Key: HADOOP-18088 > URL: https://issues.apache.org/jira/browse/HADOOP-18088 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 2.10.2, 3.2.4, 3.3.3, 3.5.0, 3.4.1 > > Time Spent: 8h > Remaining Estimate: 0h > > As proposed in the dev mailing list > (https://lists.apache.org/thread/fdzkv80mzkf3w74z9120l0k0rc3v7kqk) let's > replace log4j 1 with reload4j in the maintenance releases (i.e. 3.3.x, 3.2.x > and 2.10.x) -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15984) Update jersey from 1.19 to 2.x
[ https://issues.apache.org/jira/browse/HADOOP-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17822909#comment-17822909 ]

ASF GitHub Bot commented on HADOOP-15984:
-----------------------------------------

hadoop-yetus commented on PR #6606:
URL: https://github.com/apache/hadoop/pull/6606#issuecomment-1975084415

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 0s | | Docker mode activated. |
| -1 :x: | patch | 0m 21s | | https://github.com/apache/hadoop/pull/6606 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help. |

| Subsystem | Report/Notes |
|----------:|:-------------|
| GITHUB PR | https://github.com/apache/hadoop/pull/6606 |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6606/1/console |
| versions | git=2.34.1 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

> Update jersey from 1.19 to 2.x
> --
>
> Key: HADOOP-15984
> URL: https://issues.apache.org/jira/browse/HADOOP-15984
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Akira Ajisaka
> Priority: Major
> Labels: pull-request-available
> Time Spent: 2h 10m
> Remaining Estimate: 0h
>
> jersey-json 1.19 depends on Jackson 1.9.2. Let's upgrade.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-15984. Update jersey from 1.19 to 2.x [hadoop]
hadoop-yetus commented on PR #6606:
URL: https://github.com/apache/hadoop/pull/6606#issuecomment-1975084415

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 0s | | Docker mode activated. |
| -1 :x: | patch | 0m 21s | | https://github.com/apache/hadoop/pull/6606 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help. |

| Subsystem | Report/Notes |
|----------:|:-------------|
| GITHUB PR | https://github.com/apache/hadoop/pull/6606 |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6606/1/console |
| versions | git=2.34.1 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15984) Update jersey from 1.19 to 2.x
[ https://issues.apache.org/jira/browse/HADOOP-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17822908#comment-17822908 ]

ASF GitHub Bot commented on HADOOP-15984:
-----------------------------------------

slfan1989 opened a new pull request, #6606:
URL: https://github.com/apache/hadoop/pull/6606

### Description of PR

JIRA: HADOOP-15984. Update jersey from 1.19 to 2.x

### How was this patch tested?

### For code changes:

- [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?

> Update jersey from 1.19 to 2.x
> --
>
> Key: HADOOP-15984
> URL: https://issues.apache.org/jira/browse/HADOOP-15984
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Akira Ajisaka
> Priority: Major
> Labels: pull-request-available
> Time Spent: 2h 10m
> Remaining Estimate: 0h
>
> jersey-json 1.19 depends on Jackson 1.9.2. Let's upgrade.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[PR] HADOOP-15984. Update jersey from 1.19 to 2.x [hadoop]
slfan1989 opened a new pull request, #6606:
URL: https://github.com/apache/hadoop/pull/6606

### Description of PR

JIRA: HADOOP-15984. Update jersey from 1.19 to 2.x

### How was this patch tested?

### For code changes:

- [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
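The Jersey 1-to-2 move discussed in HADOOP-15984 is primarily a Maven coordinate change: `com.sun.jersey` artifacts become `org.glassfish.jersey` artifacts, and `jersey-json` (which drags in Jackson 1.9.2) is replaced by a Jersey 2.x media module backed by modern Jackson. The fragment below is a hedged sketch of that mapping, not the contents of PR #6606; the 2.x version number is illustrative.

```xml
<!-- Before: Jersey 1.x JSON support, transitively pulling Jackson 1.9.2 -->
<dependency>
  <groupId>com.sun.jersey</groupId>
  <artifactId>jersey-json</artifactId>
  <version>1.19</version>
</dependency>

<!-- After (sketch): Jersey 2.x Jackson media module; version illustrative -->
<dependency>
  <groupId>org.glassfish.jersey.media</groupId>
  <artifactId>jersey-media-json-jackson</artifactId>
  <version>2.41</version>
</dependency>
```

Note that the 2.x line also changes package names (`com.sun.jersey.*` to `org.glassfish.jersey.*`), so the upgrade involves source changes as well as POM changes.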