[jira] [Commented] (HADOOP-12665) Document hadoop.security.token.service.use_ip
[ https://issues.apache.org/jira/browse/HADOOP-12665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17377134#comment-17377134 ]

Akira Ajisaka commented on HADOOP-12665:
----------------------------------------

We had to set this parameter to false when deploying a multi-homed Hadoop KMS. I would like to contribute the documentation.

> Document hadoop.security.token.service.use_ip
> ---------------------------------------------
>
>                 Key: HADOOP-12665
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12665
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: documentation
>    Affects Versions: 2.8.0
>            Reporter: Arpit Agarwal
>            Assignee: Matthew Foley
>            Priority: Major
>
> {{hadoop.security.token.service.use_ip}} is not documented in 2.x/trunk.
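For anyone hitting the same multi-homing problem, a minimal core-site.xml sketch of the setting in question, assuming a deployment like the KMS case above; the property name comes from the issue itself, while the explanatory comments are the editor's, not text from the thread:

{code:xml}
<!-- Build the service field of delegation tokens from the hostname the
     client used, instead of the resolved IP. The default (true) breaks
     multi-homed services, where different clients resolve different
     addresses for the same host. -->
<property>
  <name>hadoop.security.token.service.use_ip</name>
  <value>false</value>
</property>
{code}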
[jira] [Resolved] (HADOOP-17792) "hadoop.security.token.service.use_ip" should be documented
[ https://issues.apache.org/jira/browse/HADOOP-17792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka resolved HADOOP-17792.
------------------------------------
    Resolution: Duplicate

> "hadoop.security.token.service.use_ip" should be documented
> ------------------------------------------------------------
>
>                 Key: HADOOP-17792
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17792
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: documentation
>            Reporter: Akira Ajisaka
>            Priority: Major
>
> hadoop.security.token.service.use_ip is not documented in core-default.xml.
> It should be documented.
[jira] [Commented] (HADOOP-17792) "hadoop.security.token.service.use_ip" should be documented
[ https://issues.apache.org/jira/browse/HADOOP-17792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17377130#comment-17377130 ]

Akira Ajisaka commented on HADOOP-17792:
----------------------------------------

Closing as duplicate.

> "hadoop.security.token.service.use_ip" should be documented
> ------------------------------------------------------------
>
>                 Key: HADOOP-17792
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17792
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: documentation
>            Reporter: Akira Ajisaka
>            Priority: Major
>
> hadoop.security.token.service.use_ip is not documented in core-default.xml.
> It should be documented.
[GitHub] [hadoop] tomscut commented on pull request #3140: HDFS-16088. Standby NameNode process getLiveDatanodeStorageReport req…
tomscut commented on pull request #3140:
URL: https://github.com/apache/hadoop/pull/3140#issuecomment-876159910

Thanks @Hexiaoqiao for the review. Thanks @ferhui for the merge.
[GitHub] [hadoop] ferhui merged pull request #3140: HDFS-16088. Standby NameNode process getLiveDatanodeStorageReport req…
ferhui merged pull request #3140:
URL: https://github.com/apache/hadoop/pull/3140
[GitHub] [hadoop] ferhui commented on pull request #3140: HDFS-16088. Standby NameNode process getLiveDatanodeStorageReport req…
ferhui commented on pull request #3140:
URL: https://github.com/apache/hadoop/pull/3140#issuecomment-876158083

@tomscut Thanks for the contribution, @Hexiaoqiao thanks for the review! Merged to trunk.
[GitHub] [hadoop] hadoop-yetus commented on pull request #3141: HDFS-16087. Fix stuck issue in rbfbalance tool.
hadoop-yetus commented on pull request #3141:
URL: https://github.com/apache/hadoop/pull/3141#issuecomment-876143175

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 31s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 13m 5s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 20m 5s | | trunk passed |
| +1 :green_heart: | compile | 20m 48s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 18m 12s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 3m 47s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 45s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 39s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 53s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 2m 29s | | trunk passed |
| +1 :green_heart: | shadedclient | 14m 51s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 0m 59s | | the patch passed |
| +1 :green_heart: | compile | 20m 10s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 20m 10s | | the patch passed |
| +1 :green_heart: | compile | 18m 14s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 18m 14s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 3m 42s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3141/6/artifact/out/results-checkstyle-root.txt) | root: The patch generated 14 new + 1 unchanged - 0 fixed = 15 total (was 1) |
| +1 :green_heart: | mvnsite | 1m 42s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 35s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 53s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 2m 50s | | the patch passed |
| +1 :green_heart: | shadedclient | 15m 3s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 6m 51s | | hadoop-federation-balance in the patch passed. |
| +1 :green_heart: | unit | 22m 2s | | hadoop-hdfs-rbf in the patch passed. |
| +1 :green_heart: | asflicense | 1m 0s | | The patch does not generate ASF License warnings. |
| | | | 200m 0s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3141/6/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3141 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux fae163c0d76b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 2415748274dd38a0e321c627d1c99d269cbef44c |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3141/6/testReport/ |
| Max. process+thread count | 2734 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-federation-balance hadoop-hdfs-project/hadoop-hdfs-rbf U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3141/6/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT |
[GitHub] [hadoop] jianghuazhu commented on pull request #3186: HDFS-16118. Improve the number of handlers that initialize NameNodeRpcServer#clientRpcServer.
jianghuazhu commented on pull request #3186:
URL: https://github.com/apache/hadoop/pull/3186#issuecomment-876061665

Some unit tests failed, but the failures are unrelated to this change.
[GitHub] [hadoop] lipppppp commented on a change in pull request #3141: HDFS-16087. Fix stuck issue in rbfbalance tool.
lipp commented on a change in pull request #3141:
URL: https://github.com/apache/hadoop/pull/3141#discussion_r665811829

## File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/rbfbalance/TestRouterDistCpProcedure.java

@@ -0,0 +1,120 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.rbfbalance;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.ha.HAServiceProtocol;
+import org.apache.hadoop.hdfs.DFSClient;
+import org.apache.hadoop.hdfs.server.federation.MiniRouterDFSCluster;
+import org.apache.hadoop.hdfs.server.federation.RouterConfigBuilder;
+import org.apache.hadoop.hdfs.server.federation.StateStoreDFSCluster;
+import org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
+import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
+import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;
+import org.apache.hadoop.hdfs.server.federation.router.Router;
+import org.apache.hadoop.hdfs.server.federation.store.StateStoreService;
+import org.apache.hadoop.hdfs.server.federation.store.impl.MountTableStoreImpl;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryResponse;
+import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
+import org.apache.hadoop.ipc.RemoteException;
+import org.apache.hadoop.tools.fedbalance.DistCpProcedure.Stage;
+import org.apache.hadoop.tools.fedbalance.FedBalanceContext;
+import org.apache.hadoop.tools.fedbalance.TestDistCpProcedure;
+import org.apache.hadoop.util.Time;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+
+import java.net.InetSocketAddress;
+import java.net.URI;
+import java.util.Collections;
+
+import static org.apache.hadoop.hdfs.server.federation.FederationTestUtils.createNamenodeReport;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+import static org.junit.Assert.assertTrue;
+
+
+public class TestRouterDistCpProcedure extends TestDistCpProcedure {
+  private static StateStoreDFSCluster cluster;
+  private static MiniRouterDFSCluster.RouterContext routerContext;
+  private static Configuration routerConf;
+  private static StateStoreService stateStore;
+
+  @BeforeClass
+  public static void globalSetUp() throws Exception {
+    cluster = new StateStoreDFSCluster(false, 1);
+    // Build and start a router with State Store + admin + RPC
+    Configuration conf = new RouterConfigBuilder()
+        .stateStore()
+        .admin()
+        .rpc()
+        .build();
+    cluster.addRouterOverrides(conf);
+    cluster.startRouters();
+    routerContext = cluster.getRandomRouter();
+    Router router = routerContext.getRouter();
+    stateStore = router.getStateStore();
+
+    // Add one name service for testing
+    ActiveNamenodeResolver membership = router.getNamenodeResolver();
+    membership.registerNamenode(createNamenodeReport("ns0", "nn1",
+        HAServiceProtocol.HAServiceState.ACTIVE));
+    stateStore.refreshCaches(true);
+
+    routerConf = new Configuration();
+    InetSocketAddress routerSocket = router.getAdminServerAddress();
+    routerConf.setSocketAddr(RBFConfigKeys.DFS_ROUTER_ADMIN_ADDRESS_KEY,
+        routerSocket);
+  }
+
+  @Override
+  public void testDisableWrite() throws Exception {
+    // Firstly add mount entry: /test-write->{ns0,/test-write}.
+    String mount = "/test-write";
+    MountTable newEntry = MountTable
+        .newInstance(mount, Collections.singletonMap("ns0", mount),
+            Time.now(), Time.now());
+    MountTableManager mountTable =
+        routerContext.getAdminClient().getMountTableManager();
+    AddMountTableEntryRequest addRequest =
+        AddMountTableEntryRequest.newInstance(newEntry);
+    AddMountTableEntryResponse addResponse =
+        mountTable.addMountTableEntry(a
[GitHub] [hadoop] lipppppp commented on pull request #3141: HDFS-16087. Fix stuck issue in rbfbalance tool.
lipp commented on pull request #3141:
URL: https://github.com/apache/hadoop/pull/3141#issuecomment-876049463

Thanks @wojiaodoubao for your review and suggestions. I will fix them soon.
[GitHub] [hadoop] tomscut commented on pull request #3140: HDFS-16088. Standby NameNode process getLiveDatanodeStorageReport req…
tomscut commented on pull request #3140:
URL: https://github.com/apache/hadoop/pull/3140#issuecomment-876047593

Hi @tasanuma @jojochuang @aajisaka @ayushtkn, could you please review the code? Thanks a lot.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2971: MAPREDUCE-7341. Intermediate Manifest Committer
hadoop-yetus commented on pull request #2971:
URL: https://github.com/apache/hadoop/pull/2971#issuecomment-875959943

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 37s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 20 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 12m 48s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 22m 4s | | trunk passed |
| +1 :green_heart: | compile | 22m 11s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 18m 33s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 3m 46s | | trunk passed |
| +1 :green_heart: | mvnsite | 4m 6s | | trunk passed |
| +1 :green_heart: | javadoc | 3m 13s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 3m 41s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +0 :ok: | spotbugs | 0m 42s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) |
| +1 :green_heart: | shadedclient | 14m 19s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 14m 44s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 26s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 18s | | the patch passed |
| +1 :green_heart: | compile | 20m 15s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| -1 :x: | javac | 20m 15s | [/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/19/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 2 new + 1985 unchanged - 1 fixed = 1987 total (was 1986) |
| +1 :green_heart: | compile | 18m 12s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| -1 :x: | javac | 18m 12s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/19/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 2 new + 1858 unchanged - 1 fixed = 1860 total (was 1859) |
| -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/19/artifact/out/blanks-eol.txt) | The patch has 7 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| -0 :warning: | checkstyle | 3m 43s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/19/artifact/out/results-checkstyle-root.txt) | root: The patch generated 69 new + 0 unchanged - 0 fixed = 69 total (was 0) |
| +1 :green_heart: | mvnsite | 4m 5s | | the patch passed |
| +1 :green_heart: | xml | 0m 9s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 3m 8s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 3m 42s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +0 :ok: | spotbugs | 0m 38s | | hadoop-project has no data from spotbugs |
| -1 :x: | spotbugs | 1m 45s | [/new-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/19/artifact/out/new-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.html) | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) |
| +1 :green_heart: | shadedcli
[jira] [Work logged] (HADOOP-17402) Add GCS FS impl reference to core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-17402?focusedWorklogId=620237&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-620237 ]

ASF GitHub Bot logged work on HADOOP-17402:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 07/Jul/21 21:46
            Start Date: 07/Jul/21 21:46
    Worklog Time Spent: 10m
      Work Description: steveloughran commented on pull request #3180:
URL: https://github.com/apache/hadoop/pull/3180#issuecomment-875956180

Thanks, all merged in; JIRA closed, and you get the credit there too.

Now, as you are looking at GCS, fancy looking at #2971, which is going to be a high-performance yet correct committer for ABFS and GCS? If you are writing data to GCS right now through FileOutputCommitter, neither algorithm is safe.

Issue Time Tracking
-------------------

    Worklog Id: (was: 620237)
    Time Spent: 5h (was: 4h 50m)

> Add GCS FS impl reference to core-default.xml
> ---------------------------------------------
>
>                 Key: HADOOP-17402
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17402
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs
>            Reporter: Rafal Wojdyla
>            Assignee: Rafal Wojdyla
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.3.2
>
>          Time Spent: 5h
>  Remaining Estimate: 0h
>
> Akin to the current S3 default configuration, add GCS configuration, specifically to declare the GCS implementation. [GCS connector|https://cloud.google.com/dataproc/docs/concepts/connectors/cloud-storage].
> Has this not been done since the GCS connector is not part of the hadoop/ASF codebase, or is there any other blocker?
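To make the issue concrete, a core-site.xml sketch of the kind of declaration being added, using the GCS connector's published class names; treat it as illustrative rather than the exact text that was merged:

{code:xml}
<!-- Declare the GCS connector so gs:// paths resolve, mirroring the existing
     s3a entries in core-default.xml. The connector JAR itself is distributed
     outside the hadoop/ASF codebase, as the issue notes. -->
<property>
  <name>fs.gs.impl</name>
  <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem</value>
</property>
<property>
  <name>fs.AbstractFileSystem.gs.impl</name>
  <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS</value>
</property>
{code}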
[GitHub] [hadoop] steveloughran commented on pull request #3180: HADOOP-17402. Add GCS config to the core-site (#2638)
steveloughran commented on pull request #3180:
URL: https://github.com/apache/hadoop/pull/3180#issuecomment-875956180

Thanks, all merged in; JIRA closed, and you get the credit there too.

Now, as you are looking at GCS, fancy looking at #2971, which is going to be a high-performance yet correct committer for ABFS and GCS? If you are writing data to GCS right now through FileOutputCommitter, neither algorithm is safe.
[jira] [Assigned] (HADOOP-17402) Add GCS FS impl reference to core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-17402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran reassigned HADOOP-17402:
---------------------------------------

    Assignee: Rafal Wojdyla

> Add GCS FS impl reference to core-default.xml
> ---------------------------------------------
>
>                 Key: HADOOP-17402
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17402
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs
>            Reporter: Rafal Wojdyla
>            Assignee: Rafal Wojdyla
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.3.2
>
>          Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Akin to the current S3 default configuration, add GCS configuration, specifically to declare the GCS implementation. [GCS connector|https://cloud.google.com/dataproc/docs/concepts/connectors/cloud-storage].
> Has this not been done since the GCS connector is not part of the hadoop/ASF codebase, or is there any other blocker?
[jira] [Work logged] (HADOOP-17402) Add GCS FS impl reference to core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-17402?focusedWorklogId=620236&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-620236 ]

ASF GitHub Bot logged work on HADOOP-17402:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 07/Jul/21 21:43
            Start Date: 07/Jul/21 21:43
    Worklog Time Spent: 10m
      Work Description: steveloughran merged pull request #3180:
URL: https://github.com/apache/hadoop/pull/3180

Issue Time Tracking
-------------------

    Worklog Id: (was: 620236)
    Time Spent: 4h 50m (was: 4h 40m)

> Add GCS FS impl reference to core-default.xml
> ---------------------------------------------
>
>                 Key: HADOOP-17402
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17402
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs
>            Reporter: Rafal Wojdyla
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.3.2
>
>          Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Akin to the current S3 default configuration, add GCS configuration, specifically to declare the GCS implementation. [GCS connector|https://cloud.google.com/dataproc/docs/concepts/connectors/cloud-storage].
> Has this not been done since the GCS connector is not part of the hadoop/ASF codebase, or is there any other blocker?
[jira] [Resolved] (HADOOP-17402) Add GCS FS impl reference to core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-17402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-17402.
-------------------------------------
    Fix Version/s: 3.3.2
       Resolution: Fixed

> Add GCS FS impl reference to core-default.xml
> ---------------------------------------------
>
>                 Key: HADOOP-17402
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17402
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs
>            Reporter: Rafal Wojdyla
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.3.2
>
>          Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> Akin to the current S3 default configuration, add GCS configuration, specifically to declare the GCS implementation. [GCS connector|https://cloud.google.com/dataproc/docs/concepts/connectors/cloud-storage].
> Has this not been done since the GCS connector is not part of the hadoop/ASF codebase, or is there any other blocker?
[GitHub] [hadoop] steveloughran merged pull request #3180: HADOOP-17402. Add GCS config to the core-site (#2638)
steveloughran merged pull request #3180:
URL: https://github.com/apache/hadoop/pull/3180
[jira] [Work logged] (HADOOP-17788) Replace IOUtils#closeQuietly usages
[ https://issues.apache.org/jira/browse/HADOOP-17788?focusedWorklogId=620235&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-620235 ]

ASF GitHub Bot logged work on HADOOP-17788:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 07/Jul/21 21:42
            Start Date: 07/Jul/21 21:42
    Worklog Time Spent: 10m
      Work Description: steveloughran commented on pull request #3171:
URL: https://github.com/apache/hadoop/pull/3171#issuecomment-875954548

> thanks for the example of how JDK can produce IllegalArgumentException, learnt something new today.

Yeah, stupid thing for them to do. wasb was being clever about caching and rethrowing the same exception in close() as in read(), and some code using it above Hadoop was failing. What a pain.

Issue Time Tracking
-------------------

    Worklog Id: (was: 620235)
    Time Spent: 2h 40m (was: 2.5h)

> Replace IOUtils#closeQuietly usages
> -----------------------------------
>
>                 Key: HADOOP-17788
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17788
>             Project: Hadoop Common
>          Issue Type: Task
>            Reporter: Viraj Jasani
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> IOUtils#closeQuietly has been deprecated since the 2.6 release of commons-io without any replacement. Since we already have a good replacement available in Hadoop's own IOUtils, we should use it.
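For readers following the task, a minimal sketch of the substitution being made; org.apache.hadoop.io.IOUtils.cleanupWithLogger is the Hadoop-side utility the description points to, while the class and logger below are invented for illustration, not taken from the patch:

{code:java}
import java.io.Closeable;
import org.apache.hadoop.io.IOUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class CloseQuietlyMigration {
  private static final Logger LOG =
      LoggerFactory.getLogger(CloseQuietlyMigration.class);

  static void release(Closeable... resources) {
    // Before: org.apache.commons.io.IOUtils.closeQuietly(resource) per
    // resource -- deprecated in commons-io 2.6 with no replacement.
    // After: Hadoop's own utility, which closes each resource in turn and
    // logs (rather than silently swallows) any exception from close().
    IOUtils.cleanupWithLogger(LOG, resources);
  }
}
{code}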
[GitHub] [hadoop] steveloughran commented on pull request #3171: HADOOP-17788. Replace IOUtils#closeQuietly usages by Hadoop's own utility
steveloughran commented on pull request #3171:
URL: https://github.com/apache/hadoop/pull/3171#issuecomment-875954548

> thanks for the example of how JDK can produce IllegalArgumentException, learnt something new today.

Yeah, stupid thing for them to do. wasb was being clever about caching and rethrowing the same exception in close() as in read(), and some code using it above Hadoop was failing. What a pain.
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2971: MAPREDUCE-7341. Intermediate Manifest Committer
hadoop-yetus removed a comment on pull request #2971:
URL: https://github.com/apache/hadoop/pull/2971#issuecomment-875132867

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 32s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 21 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 12m 41s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 22m 47s | | trunk passed |
| +1 :green_heart: | compile | 21m 20s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 18m 16s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 3m 46s | | trunk passed |
| +1 :green_heart: | mvnsite | 4m 3s | | trunk passed |
| +1 :green_heart: | javadoc | 3m 14s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 3m 41s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +0 :ok: | spotbugs | 0m 42s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) |
| +1 :green_heart: | shadedclient | 14m 46s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 16s | | the patch passed |
| +1 :green_heart: | compile | 20m 15s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| -1 :x: | javac | 20m 15s | [/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/18/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 2 new + 1985 unchanged - 1 fixed = 1987 total (was 1986) |
| +1 :green_heart: | compile | 18m 25s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| -1 :x: | javac | 18m 25s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/18/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 2 new + 1858 unchanged - 1 fixed = 1860 total (was 1859) |
| -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/18/artifact/out/blanks-eol.txt) | The patch has 8 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| -0 :warning: | checkstyle | 3m 45s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/18/artifact/out/results-checkstyle-root.txt) | root: The patch generated 68 new + 0 unchanged - 0 fixed = 68 total (was 0) |
| +1 :green_heart: | mvnsite | 4m 5s | | the patch passed |
| +1 :green_heart: | xml | 0m 9s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 3m 12s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 3m 38s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +0 :ok: | spotbugs | 0m 38s | | hadoop-project has no data from spotbugs |
| -1 :x: | spotbugs | 1m 46s | [/new-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/18/artifact/out/new-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.html) | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| +1 :green_heart: | shadedclient | 14m 55s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 37s | | hadoop-pro
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2971: MAPREDUCE-7341. Intermediate Manifest Committer
hadoop-yetus removed a comment on pull request #2971:
URL: https://github.com/apache/hadoop/pull/2971#issuecomment-874362787

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 35s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 2s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 21 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 12m 48s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 20m 18s | | trunk passed |
| +1 :green_heart: | compile | 20m 46s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 18m 4s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 3m 50s | | trunk passed |
| +1 :green_heart: | mvnsite | 4m 6s | | trunk passed |
| +1 :green_heart: | javadoc | 3m 13s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 3m 41s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +0 :ok: | spotbugs | 0m 41s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) |
| +1 :green_heart: | shadedclient | 14m 36s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 15s | | the patch passed |
| +1 :green_heart: | compile | 20m 16s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| -1 :x: | javac | 20m 16s | [/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/17/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 2 new + 1985 unchanged - 1 fixed = 1987 total (was 1986) |
| +1 :green_heart: | compile | 18m 10s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| -1 :x: | javac | 18m 10s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/17/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 2 new + 1858 unchanged - 1 fixed = 1860 total (was 1859) |
| -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/17/artifact/out/blanks-eol.txt) | The patch has 7 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| -0 :warning: | checkstyle | 3m 49s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/17/artifact/out/results-checkstyle-root.txt) | root: The patch generated 59 new + 0 unchanged - 0 fixed = 59 total (was 0) |
| +1 :green_heart: | mvnsite | 4m 4s | | the patch passed |
| +1 :green_heart: | xml | 0m 8s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 3m 11s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 3m 40s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +0 :ok: | spotbugs | 0m 38s | | hadoop-project has no data from spotbugs |
| -1 :x: | spotbugs | 1m 43s | [/new-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/17/artifact/out/new-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.html) | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| +1 :green_heart: | shadedclient | 14m 41s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 37s | | hadoop-pro
[jira] [Commented] (HADOOP-17789) S3 read performance with Spark with Hadoop 3.3.1 is slower than older Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-17789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17376846#comment-17376846 ]

Steve Loughran commented on HADOOP-17789:
-----------------------------------------

[~arghya18] & [~gchurch]: I need that cloudstore dump so I can see what settings you have.

# I suspect both your readahead ranges are way too big. That setting is the minimum range of any GET, so as we seek backwards, all that data is lost. And on random IO it's read and discarded to avoid HTTPS renegotiation. This is because we assume it's a small-ish value of <= 1 MB.
# And make sure your number of HTTP connections and thread pool size is big: make the thread pool maybe 2x the worker thread count, and then the HTTP pool that plus the worker thread count.

And always, always, always use an S3A committer if you write to S3; that's for correctness as well as performance. On Hadoop 3.3.1 the magic committer works well.

> S3 read performance with Spark with Hadoop 3.3.1 is slower than older Hadoop
> -----------------------------------------------------------------------------
>
>                 Key: HADOOP-17789
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17789
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 3.3.1
>            Reporter: Arghya Saha
>            Assignee: Steve Loughran
>            Priority: Major
>
> This issue is a continuation of https://issues.apache.org/jira/browse/HADOOP-17755.
> The input data reported by Spark (Hadoop 3.3.1) was almost double, and read runtime also increased (around 20%), compared to Spark (Hadoop 3.2.0) with the exact same amount of resources and the same configuration. And this is happening with other jobs as well that were not impacted by the read-fully error stated above.
> *I was having the same exact issue when I was using the workaround fs.s3a.readahead.range = 1G with Hadoop 3.2.0.*
> Further details below:
>
> ||Hadoop Version||Actual size of the files (SQL tab)||Reported size of the files (Stages)||Time to complete the stage||fs.s3a.readahead.range||
> |Hadoop 3.2.0|29.3 GiB|29.3 GiB|23 min|64K|
> |Hadoop 3.3.1|29.3 GiB|*58.7 GiB*|*27 min*|64K|
> |Hadoop 3.2.0|29.3 GiB|*58.7 GiB*|*~27 min*|1G|
>
> * *Shuffle Write* is the same (95.9 GiB) for all three cases above.
>
> I was expecting some improvement (or parity with 3.2.0) in read operations with Hadoop 3.3.1; please suggest how to approach and resolve this.
> I have used the default s3a config along with the settings below, on an EKS cluster:
> {code:java}
> spark.hadoop.fs.s3a.committer.magic.enabled: 'true'
> spark.hadoop.fs.s3a.committer.name: magic
> spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a: org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory
> spark.hadoop.fs.s3a.downgrade.syncable.exceptions: "true"{code}
> * I did not use
> {code:java}
> spark.hadoop.fs.s3a.experimental.input.fadvise=random{code}
> And as already mentioned, I have used the same Spark, the same amount of resources, and the same config. The only change is Hadoop 3.2.0 to Hadoop 3.3.1 (built with Spark using ./dev/make-distribution.sh --name spark-patched --pip -Pkubernetes -Phive -Phive-thriftserver -Dhadoop.version="3.3.1").
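To make the tuning advice above concrete, a sketch in spark-defaults.conf form, assuming executors with roughly 32 worker threads; the property names are standard S3A keys, but the values are the editor's illustration of the ratios Steve describes, not numbers from the thread:

{code:java}
# Readahead is the minimum range of any GET: keep it small-ish (<= 1 MB),
# never the 1G workaround from the report. 64K is the default.
spark.hadoop.fs.s3a.readahead.range            512K
# Thread pool ~2x the worker thread count (~32 workers assumed here) ...
spark.hadoop.fs.s3a.threads.max                64
# ... and HTTP connection pool = thread pool + worker thread count.
spark.hadoop.fs.s3a.connection.maximum         96
# Always write through an S3A committer; on 3.3.1 the magic committer works well.
spark.hadoop.fs.s3a.committer.magic.enabled    true
spark.hadoop.fs.s3a.committer.name             magic
{code}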
[jira] [Commented] (HADOOP-17755) EOF reached error reading ORC file on S3A
[ https://issues.apache.org/jira/browse/HADOOP-17755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17376844#comment-17376844 ]

Steve Loughran commented on HADOOP-17755:
-----------------------------------------

[~gchurch] please comment on the new JIRA. I'm not replying on this one, so all discussion stays in one place.

> EOF reached error reading ORC file on S3A
> ------------------------------------------
>
>                 Key: HADOOP-17755
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17755
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 3.2.0
>        Environment: Hadoop 3.2.0
>            Reporter: Arghya Saha
>            Priority: Major
>
> Hi, I am trying to do some transformations using Spark 3.1.1 (Hadoop 3.2) on K8s, using s3a.
> I have around 700 GB of data to read and around 200 executors (5 vCores and 30G each).
> It is able to read most of the files in the problematic stage (Scan orc => Filter => Project) but fails on a few files at the end with the error below. The size of the file mentioned in the error is around 140 MB, and all other files are of similar size.
> I am able to read and rewrite the specific file mentioned, which suggests the file is not corrupted.
> Let me know if further information is required.
>
> {code:java}
> java.io.IOException: Error reading file: s3a:///part-1-5e22a873-82a5-4781-9eb9-473b483396bd.c000.zlib.orc
>     at org.apache.orc.impl.RecordReaderImpl.nextBatch(RecordReaderImpl.java:1331)
>     at org.apache.orc.mapreduce.OrcMapreduceRecordReader.ensureBatch(OrcMapreduceRecordReader.java:78)
>     at org.apache.orc.mapreduce.OrcMapreduceRecordReader.nextKeyValue(OrcMapreduceRecordReader.java:96)
>     at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:37)
>     at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
>     at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
>     at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
>     at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:511)
>     at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
>     at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
>     at org.apache.spark.shuffle.sort.UnsafeShuffleWriter.write(UnsafeShuffleWriter.java:177)
>     at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
>     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
>     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
>     at org.apache.spark.scheduler.Task.run(Task.scala:131)
>     at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
>     at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
>     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>     at java.base/java.lang.Thread.run(Unknown Source)
> Caused by: java.io.EOFException: End of file reached before reading fully.
>     at org.apache.hadoop.fs.s3a.S3AInputStream.readFully(S3AInputStream.java:702)
>     at org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:111)
>     at org.apache.orc.impl.RecordReaderUtils.readDiskRanges(RecordReaderUtils.java:566)
>     at org.apache.orc.impl.RecordReaderUtils$DefaultDataReader.readFileData(RecordReaderUtils.java:285)
>     at org.apache.orc.impl.RecordReaderImpl.readPartialDataStreams(RecordReaderImpl.java:1237)
>     at org.apache.orc.impl.RecordReaderImpl.readStripe(RecordReaderImpl.java:1105)
>     at org.apache.orc.impl.RecordReaderImpl.advanceStripe(RecordReaderImpl.java:1256)
>     at org.apache.orc.impl.RecordReaderImpl.advanceToNextRow(RecordReaderImpl.java:1291)
>     at org.apache.orc.impl.RecordReaderImpl.nextBatch(RecordReaderImpl.java:1327)
>     ... 20 more
> {code}
[jira] [Assigned] (HADOOP-17789) S3 read performance with Spark with Hadoop 3.3.1 is slower than older Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-17789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran reassigned HADOOP-17789:
---------------------------------------

    Assignee: Steve Loughran

> S3 read performance with Spark with Hadoop 3.3.1 is slower than older Hadoop
> -----------------------------------------------------------------------------
>
>                 Key: HADOOP-17789
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17789
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 3.3.1
>            Reporter: Arghya Saha
>            Assignee: Steve Loughran
>            Priority: Major
>
> This issue is a continuation of https://issues.apache.org/jira/browse/HADOOP-17755.
> The input data reported by Spark (Hadoop 3.3.1) was almost double, and read runtime also increased (around 20%), compared to Spark (Hadoop 3.2.0) with the exact same amount of resources and the same configuration. And this is happening with other jobs as well that were not impacted by the read-fully error stated above.
> *I was having the same exact issue when I was using the workaround fs.s3a.readahead.range = 1G with Hadoop 3.2.0.*
> Further details below:
>
> ||Hadoop Version||Actual size of the files (SQL tab)||Reported size of the files (Stages)||Time to complete the stage||fs.s3a.readahead.range||
> |Hadoop 3.2.0|29.3 GiB|29.3 GiB|23 min|64K|
> |Hadoop 3.3.1|29.3 GiB|*58.7 GiB*|*27 min*|64K|
> |Hadoop 3.2.0|29.3 GiB|*58.7 GiB*|*~27 min*|1G|
>
> * *Shuffle Write* is the same (95.9 GiB) for all three cases above.
>
> I was expecting some improvement (or parity with 3.2.0) in read operations with Hadoop 3.3.1; please suggest how to approach and resolve this.
> I have used the default s3a config along with the settings below, on an EKS cluster:
> {code:java}
> spark.hadoop.fs.s3a.committer.magic.enabled: 'true'
> spark.hadoop.fs.s3a.committer.name: magic
> spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a: org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory
> spark.hadoop.fs.s3a.downgrade.syncable.exceptions: "true"{code}
> * I did not use
> {code:java}
> spark.hadoop.fs.s3a.experimental.input.fadvise=random{code}
> And as already mentioned, I have used the same Spark, the same amount of resources, and the same config. The only change is Hadoop 3.2.0 to Hadoop 3.3.1 (built with Spark using ./dev/make-distribution.sh --name spark-patched --pip -Pkubernetes -Phive -Phive-thriftserver -Dhadoop.version="3.3.1").
[GitHub] [hadoop] hadoop-yetus commented on pull request #3186: HDFS-16118. Improve the number of handlers that initialize NameNodeRpcServer#clientRpcServer.
hadoop-yetus commented on pull request #3186:
URL: https://github.com/apache/hadoop/pull/3186#issuecomment-875926006

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 16m 49s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 12m 42s | | Maven dependency ordering for branch |
| -1 :x: | mvninstall | 6m 0s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3186/1/artifact/out/branch-mvninstall-root.txt) | root in trunk failed. |
| +1 :green_heart: | compile | 21m 2s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 18m 11s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 3m 45s | | trunk passed |
| +1 :green_heart: | mvnsite | 3m 16s | | trunk passed |
| +1 :green_heart: | javadoc | 2m 21s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 3m 21s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 5m 43s | | trunk passed |
| +1 :green_heart: | shadedclient | 16m 47s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 9s | | the patch passed |
| +1 :green_heart: | compile | 20m 20s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 20m 20s | | the patch passed |
| +1 :green_heart: | compile | 18m 11s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 18m 11s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 3m 40s | | the patch passed |
| +1 :green_heart: | mvnsite | 3m 10s | | the patch passed |
| +1 :green_heart: | javadoc | 2m 18s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 3m 19s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 6m 4s | | the patch passed |
| +1 :green_heart: | shadedclient | 16m 56s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 17m 3s | | hadoop-common in the patch passed. |
| -1 :x: | unit | 425m 25s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3186/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 1m 12s | | The patch does not generate ASF License warnings. |
| | | | 629m 48s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
| | hadoop.hdfs.TestViewDistributedFileSystemContract |
| | hadoop.hdfs.TestSnapshotCommands |
| | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
| | hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
| | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
| | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
| | hadoop.hdfs.web.TestWebHdfsFileSystemContract |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3186/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3186 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 174d669479d6 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 3a12886474549aa72370fc6cb9
[GitHub] [hadoop] hadoop-yetus commented on pull request #3164: Fix NPE in Find.java
hadoop-yetus commented on pull request #3164: URL: https://github.com/apache/hadoop/pull/3164#issuecomment-875872871

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 12m 20s | | Docker mode activated. |
| _ Prechecks _ | | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| _ trunk Compile Tests _ | | | | |
| +0 :ok: | mvndep | 13m 42s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 20m 31s | | trunk passed |
| +1 :green_heart: | compile | 21m 23s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 18m 26s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 3m 40s | | trunk passed |
| +1 :green_heart: | mvnsite | 3m 11s | | trunk passed |
| +1 :green_heart: | javadoc | 2m 20s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 2m 57s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 4m 42s | | trunk passed |
| +1 :green_heart: | shadedclient | 14m 27s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ | | | | |
| +0 :ok: | mvndep | 0m 26s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 54s | | the patch passed |
| +1 :green_heart: | compile | 20m 41s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 20m 41s | | the patch passed |
| +1 :green_heart: | compile | 18m 33s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 18m 33s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 3m 39s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3164/3/artifact/out/results-checkstyle-root.txt) | root: The patch generated 1 new + 21 unchanged - 0 fixed = 22 total (was 21) |
| +1 :green_heart: | mvnsite | 3m 10s | | the patch passed |
| +1 :green_heart: | javadoc | 2m 16s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 2m 59s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| -1 :x: | spotbugs | 2m 37s | [/new-spotbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3164/3/artifact/out/new-spotbugs-hadoop-common-project_hadoop-common.html) | hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 :green_heart: | shadedclient | 14m 36s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ | | | | |
| +1 :green_heart: | unit | 16m 56s | | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 0m 57s | | hadoop-mapreduce-examples in the patch passed. |
| +1 :green_heart: | unit | 2m 18s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 54s | | The patch does not generate ASF License warnings. |
| | | 215m 32s | | |

| Reason | Tests |
|-------:|:------|
| SpotBugs | module:hadoop-common-project/hadoop-common |
| | Nullcheck of expr at line 114 of value previously dereferenced in org.apache.hadoop.fs.shell.find.Find.buildDescription(ExpressionFactory) At Find.java:[line 114] |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3164/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3164 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 99f4d797adb0 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64
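For context, the SpotBugs item above ("nullcheck of value previously dereferenced") fires when code dereferences a value first and null-checks the same value afterwards: either the check is dead code, or the earlier dereference can throw. A minimal, hypothetical Java sketch of the antipattern and a fix (illustrative names only; this is not the actual Find.java code):

```java
// Hypothetical illustration of the SpotBugs "nullcheck of value
// previously dereferenced" pattern; not the actual Find.java code.
class Descriptions {

  String buildDescription(ExpressionFactory factory) {
    Expression expr = factory.create();
    // The dereference happens first...
    String name = expr.getName();   // NPE here if expr is null
    // ...so this later null check is either dead code or too late.
    if (expr == null) {
      return "unknown";
    }
    return name;
  }

  String buildDescriptionFixed(ExpressionFactory factory) {
    Expression expr = factory.create();
    if (expr == null) {             // check before any dereference
      return "unknown";
    }
    return expr.getName();
  }
}

interface ExpressionFactory { Expression create(); }
interface Expression { String getName(); }
```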
[GitHub] [hadoop] hadoop-yetus commented on pull request #3135: YARN-10829. Support getApplications API in FederationClientInterceptor
hadoop-yetus commented on pull request #3135: URL: https://github.com/apache/hadoop/pull/3135#issuecomment-875865972

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 32s | | Docker mode activated. |
| _ Prechecks _ | | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. |
| _ trunk Compile Tests _ | | | | |
| +0 :ok: | mvndep | 12m 47s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 20m 8s | | trunk passed |
| +1 :green_heart: | compile | 9m 14s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 8m 3s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 1m 45s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 49s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 38s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 31s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 2m 56s | | trunk passed |
| +1 :green_heart: | shadedclient | 14m 22s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ | | | | |
| +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 4s | | the patch passed |
| +1 :green_heart: | compile | 9m 41s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 9m 41s | | the patch passed |
| +1 :green_heart: | compile | 7m 52s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 7m 52s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 37s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 25s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 12s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 5s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 22s | | the patch passed |
| +1 :green_heart: | shadedclient | 14m 33s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ | | | | |
| +1 :green_heart: | unit | 1m 5s | | hadoop-yarn-api in the patch passed. |
| +1 :green_heart: | unit | 3m 2s | | hadoop-yarn-server-router in the patch passed. |
| +1 :green_heart: | asflicense | 0m 52s | | The patch does not generate ASF License warnings. |
| | | 125m 3s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3135/7/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3135 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux ff073060d0a7 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / b79c9e2b38b30586f26da8a9cec803709cb163c0 |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3135/7/testReport/ |
| Max. process+thread count | 769 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: hadoop-yarn-project/hadoop-yarn |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3135/7/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-13887) Encrypt S3A data client-side with AWS SDK (S3-CSE)
[ https://issues.apache.org/jira/browse/HADOOP-13887?focusedWorklogId=620101&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-620101 ]

ASF GitHub Bot logged work on HADOOP-13887:
-------------------------------------------
                Author: ASF GitHub Bot
            Created on: 07/Jul/21 17:32
            Start Date: 07/Jul/21 17:32
    Worklog Time Spent: 10m
      Work Description: mukund-thakur commented on a change in pull request #2706: URL: https://github.com/apache/hadoop/pull/2706#discussion_r665189654

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java

@@ -307,29 +311,31 @@ public synchronized void write(byte[] source, int offset, int len)
       // of capacity
       // Trigger an upload then process the remainder.
       LOG.debug("writing more data than block has capacity -triggering upload");
-      uploadCurrentBlock();
+      uploadCurrentBlock(false);
       // tail recursion is mildly expensive, but given buffer sizes must be MB.
       // it's unlikely to recurse very deeply.
       this.write(source, offset + written, len - written);
     } else {
-      if (remainingCapacity == 0) {
+      if (remainingCapacity == 0 && !isCSEEnabled) {
         // the whole buffer is done, trigger an upload
-        uploadCurrentBlock();
+        uploadCurrentBlock(false);
       }
     }
   }

   /**
    * Start an asynchronous upload of the current block.
+   * @param isLast true, if part being uploaded is last and client side

Review comment: should this "and" be an "or"?

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java

@@ -94,9 +105,81 @@ public AmazonS3 createS3Client(
       awsConf.setUserAgentSuffix(parameters.getUserAgentSuffix());
     }
-    return buildAmazonS3Client(
-        awsConf,
-        parameters);
+    if (conf.get(CLIENT_SIDE_ENCRYPTION_METHOD) == null) {
+      return buildAmazonS3Client(
+          awsConf,
+          parameters);
+    } else {
+      return newAmazonS3EncryptionClient(
+          awsConf,
+          parameters);
+    }
+  }
+
+  /**
+   * Create an {@link AmazonS3} client of type
+   * {@link AmazonS3EncryptionV2} if CSE is enabled.
+   *
+   * @param awsConf    AWS configuration.
+   * @param parameters parameters
+   *
+   * @return new AmazonS3 client.
+   */
+  protected AmazonS3 newAmazonS3EncryptionClient(
+      final ClientConfiguration awsConf,
+      final S3ClientCreationParameters parameters) {
+
+    AmazonS3 client;
+    AmazonS3EncryptionClientV2Builder builder =
+        new AmazonS3EncryptionClientV2Builder();
+    Configuration conf = getConf();
+
+    // CSE-KMS Method
+    String kmsKeyId = conf.get(CLIENT_SIDE_ENCRYPTION_KMS_KEY_ID);
+    // Check if kmsKeyID is not null
+    Preconditions.checkArgument(kmsKeyId != null, "CSE-KMS method "
+        + "requires KMS key ID. Use " + CLIENT_SIDE_ENCRYPTION_KMS_KEY_ID
+        + " property to set it. ");
+
+    EncryptionMaterialsProvider materialsProvider =
+        new KMSEncryptionMaterialsProvider(kmsKeyId);
+
+    builder.withEncryptionMaterialsProvider(materialsProvider);
+    builder.withCredentials(parameters.getCredentialSet())
+        .withClientConfiguration(awsConf)
+        .withPathStyleAccessEnabled(parameters.isPathStyleAccess());
+
+    // if metrics are not null, then add in the builder.
+    if (parameters.getMetrics() != null) {
+      LOG.debug("Creating Amazon client with AWS metrics");
+      builder.withMetricsCollector(
+          new AwsStatisticsCollector(parameters.getMetrics()));
+    }
+
+    // Create cryptoConfig
+    CryptoConfigurationV2 cryptoConfigurationV2 =
+        new CryptoConfigurationV2(CryptoMode.AuthenticatedEncryption)
+            .withRangeGetMode(CryptoRangeGetMode.ALL);
+
+    // Setting the endpoint and KMS region in cryptoConfig
+    AmazonS3EncryptionClientV2Builder.EndpointConfiguration epr
+        = createEndpointConfiguration(parameters.getEndpoint(),
+            awsConf, getConf().getTrimmed(AWS_REGION));
+    if (epr != null) {
+      LOG.debug(
+          "Building the AmazonS3 Encryption client with endpoint configs");
+      builder.withEndpointConfiguration(epr);
+      cryptoConfigurationV2
+          .withAwsKmsRegion(RegionUtils.getRegion(epr.getSigningRegion()));
+      LOG.debug("KMS region used: {}",
+          cryptoConfigurationV2.getAwsKmsRegion());
+    } else {
+      // forcefully look for the region; extra HEAD call required.
+      builder.setForceGlobalBucketAccessEnabled(true);
+    }
+    builder.withCryptoConfiguration(cryptoConfigurationV2);

Review comment: Yes, agree with refactoring the common methods as Steve suggested.

## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AClientSideEncryption.java

@@ -0,0 +1,195 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * o
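All of the builder and crypto-configuration calls in the hunk above come from the AWS SDK v1 surface the patch uses. Pulled out of the Hadoop factory, a minimal standalone sketch of constructing a CSE-KMS encryption client along the same lines might look like this (key ID and region are placeholders; this is an illustration under those assumptions, not the patch itself):

```java
import com.amazonaws.regions.RegionUtils;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3EncryptionClientV2Builder;
import com.amazonaws.services.s3.model.CryptoConfigurationV2;
import com.amazonaws.services.s3.model.CryptoMode;
import com.amazonaws.services.s3.model.CryptoRangeGetMode;
import com.amazonaws.services.s3.model.KMSEncryptionMaterialsProvider;

public class CseKmsClientSketch {

  /** Build an S3 client that encrypts client-side with a KMS-managed key. */
  public static AmazonS3 build(String kmsKeyId, String region) {
    // Encryption materials come from KMS, keyed by the given CMK id
    // (placeholder; supplied by the caller).
    KMSEncryptionMaterialsProvider materials =
        new KMSEncryptionMaterialsProvider(kmsKeyId);

    // Authenticated encryption with ranged GETs permitted, mirroring the
    // CryptoConfigurationV2 set up in the hunk above.
    CryptoConfigurationV2 cryptoConfig =
        new CryptoConfigurationV2(CryptoMode.AuthenticatedEncryption)
            .withRangeGetMode(CryptoRangeGetMode.ALL)
            .withAwsKmsRegion(RegionUtils.getRegion(region));

    return AmazonS3EncryptionClientV2Builder.standard()
        .withRegion(region)
        .withEncryptionMaterialsProvider(materials)
        .withCryptoConfiguration(cryptoConfig)
        .build();
  }
}
```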
[jira] [Work logged] (HADOOP-13887) Encrypt S3A data client-side with AWS SDK (S3-CSE)
[ https://issues.apache.org/jira/browse/HADOOP-13887?focusedWorklogId=620094&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-620094 ]

ASF GitHub Bot logged work on HADOOP-13887:
-------------------------------------------
                Author: ASF GitHub Bot
            Created on: 07/Jul/21 17:08
            Start Date: 07/Jul/21 17:08
    Worklog Time Spent: 10m
      Work Description: mehakmeet commented on a change in pull request #2706: URL: https://github.com/apache/hadoop/pull/2706#discussion_r665558709

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AUtils.java

@@ -524,10 +525,15 @@ public static S3AFileStatus createFileStatus(Path keyPath,
       long blockSize,
       String owner,
       String eTag,
-      String versionId) {
+      String versionId,
+      boolean isCSEEnabled) {

Review comment: This would require a few changes; I don't know whether it's worth doing in this patch, or whether we should do a separate refactoring patch.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
-------------------
    Worklog Id: (was: 620094)
    Time Spent: 4h 50m (was: 4h 40m)

> Encrypt S3A data client-side with AWS SDK (S3-CSE)
> --------------------------------------------------
>
>                 Key: HADOOP-13887
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13887
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.8.0
>            Reporter: Jeeyoung Kim
>            Assignee: Igor Mazur
>            Priority: Minor
>              Labels: pull-request-available
>         Attachments: HADOOP-13887-002.patch, HADOOP-13887-007.patch, HADOOP-13887-branch-2-003.patch, HADOOP-13897-branch-2-004.patch, HADOOP-13897-branch-2-005.patch, HADOOP-13897-branch-2-006.patch, HADOOP-13897-branch-2-008.patch, HADOOP-13897-branch-2-009.patch, HADOOP-13897-branch-2-010.patch, HADOOP-13897-branch-2-012.patch, HADOOP-13897-branch-2-014.patch, HADOOP-13897-trunk-011.patch, HADOOP-13897-trunk-013.patch, HADOOP-14171-001.patch, S3-CSE Proposal.pdf
>
>          Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Expose the client-side encryption option documented in Amazon S3 documentation - http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
> Currently this is not exposed in Hadoop, but it is exposed as an option in the AWS Java SDK, which Hadoop currently includes. It should be trivial to propagate this as a parameter passed to the S3 client used in S3AFileSystem.java

-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] mehakmeet commented on a change in pull request #2706: HADOOP-13887. Support S3 client side encryption (S3-CSE) using AWS-SDK
mehakmeet commented on a change in pull request #2706: URL: https://github.com/apache/hadoop/pull/2706#discussion_r665556483

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java

@@ -3655,7 +3676,14 @@ S3AFileStatus s3GetFileStatus(final Path path,
       // look for the simple file
       ObjectMetadata meta = getObjectMetadata(key);
       LOG.debug("Found exact file: normal file {}", key);
-      return new S3AFileStatus(meta.getContentLength(),
+      long contentLength = meta.getContentLength();
+      // check if CSE is enabled, then strip padded length.
+      if (isCSEEnabled
+          && meta.getUserMetaDataOf(Headers.CRYPTO_CEK_ALGORITHM) != null
+          && contentLength >= CSE_PADDING_LENGTH) {

Review comment: The header isn't present for multipart uploads, so to be consistent, I thought we should just subtract the value instead.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
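The hunk above is cut off before the arithmetic itself, but the discussion makes the intent clear: when client-side encryption is enabled, the object length reported by S3 includes a fixed amount of encryption padding that has to be stripped before building the file status. A hedged sketch of that adjustment (the 16-byte value is an assumption based on the AES-GCM auth-tag size; the excerpt does not show the actual constant):

```java
// Sketch of the length adjustment discussed in the review thread.
final class CsePadding {

  // Assumption: the padding is the 16-byte AES-GCM authentication tag
  // appended by the AWS encryption client; the excerpt above does not
  // show the real CSE_PADDING_LENGTH constant.
  static final int CSE_PADDING_LENGTH = 16;

  /** Strip the client-side-encryption padding from a reported length. */
  static long unpaddedLength(long contentLength, boolean isCSEEnabled) {
    return (isCSEEnabled && contentLength >= CSE_PADDING_LENGTH)
        ? contentLength - CSE_PADDING_LENGTH
        : contentLength;
  }
}
```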
[GitHub] [hadoop] containerAnalyzer edited a comment on pull request #3164: Fix NPE in Find.java
containerAnalyzer edited a comment on pull request #3164: URL: https://github.com/apache/hadoop/pull/3164#issuecomment-875728419

This is another NPE occurring in DumpS3GuardDynamoTable.java. The patch has been submitted.

1. Return **null** to the caller: https://github.com/apache/hadoop/blob/986d0a4f1d5543fa0b4f5916729728f78b4acec9/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java#L863
2. Return the return value of function **getDirListingMetadataFromDirMetaAndList** to the caller: https://github.com/apache/hadoop/blob/986d0a4f1d5543fa0b4f5916729728f78b4acec9/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java#L844-L845
3. Function **listChildren** executes and returns the **null** value, which is added to the list **childMD**: https://github.com/apache/hadoop/blob/986d0a4f1d5543fa0b4f5916729728f78b4acec9/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DumpS3GuardDynamoTable.java#L419-L420
4. The value **childMD** is passed as the 2nd parameter of **pushAll**, and it contains a **null** value: https://github.com/apache/hadoop/blob/986d0a4f1d5543fa0b4f5916729728f78b4acec9/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DumpS3GuardDynamoTable.java#L422
5. The list **reversed** contains the **null** value after being assigned the return value of function **reverse**: https://github.com/apache/hadoop/blob/986d0a4f1d5543fa0b4f5916729728f78b4acec9/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DumpS3GuardDynamoTable.java#L268
6. The return value of function **iterator** is invoked on an element that can be **null**, so the call to **hasNext** will lead to a null pointer dereference: https://github.com/apache/hadoop/blob/986d0a4f1d5543fa0b4f5916729728f78b4acec9/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DumpS3GuardDynamoTable.java#L269

Commit: 986d0a4f1d5543fa0b4f5916729728f78b4acec9

ContainerAnalyzer

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
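The six-step chain is easier to see in miniature. A self-contained sketch of the same null-propagation shape, with hypothetical names rather than the actual S3Guard classes:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical miniature of the null-propagation chain reported above;
// the names do not correspond to the real S3Guard code.
public class NullChainSketch {

  /** Steps 1-3: a lookup that can return null to its caller. */
  static List<String> listChildren(String path) {
    return null; // e.g. nothing found for this path
  }

  public static void main(String[] args) {
    // Step 4: the null return value is stored inside a collection,
    // so no NPE fires yet.
    List<List<String>> childMD = new ArrayList<>();
    childMD.add(listChildren("/missing"));

    // Steps 5-6: much later, an element is dereferenced; the NPE fires
    // far from the call that actually produced the null.
    for (List<String> children : childMD) {
      Iterator<String> it = children.iterator(); // NPE here
      while (it.hasNext()) {
        System.out.println(it.next());
      }
    }
  }
}
```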
[GitHub] [hadoop] hadoop-yetus commented on pull request #3185: HDFS-16119. start balancer with parameters -hotBlockTimeInterval xxx is invalid.
hadoop-yetus commented on pull request #3185: URL: https://github.com/apache/hadoop/pull/3185#issuecomment-875705835

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 12m 2s | | Docker mode activated. |
| _ Prechecks _ | | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| _ trunk Compile Tests _ | | | | |
| +1 :green_heart: | mvninstall | 30m 37s | | trunk passed |
| +1 :green_heart: | compile | 1m 22s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 1m 18s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 1m 6s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 25s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 58s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 25s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 12s | | trunk passed |
| +1 :green_heart: | shadedclient | 16m 9s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ | | | | |
| +1 :green_heart: | mvninstall | 1m 13s | | the patch passed |
| +1 :green_heart: | compile | 1m 15s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 1m 15s | | the patch passed |
| +1 :green_heart: | compile | 1m 5s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 1m 5s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 57s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3185/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 489 unchanged - 0 fixed = 490 total (was 489) |
| +1 :green_heart: | mvnsite | 1m 14s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 47s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 24s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 8s | | the patch passed |
| +1 :green_heart: | shadedclient | 15m 54s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ | | | | |
| +1 :green_heart: | unit | 230m 48s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 46s | | The patch does not generate ASF License warnings. |
| | | 326m 0s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3185/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3185 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux d8cfccf16cd6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 310a266770a55f8d86bbb9f310077360f59b682a |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3185/1/testReport/ |
| Max. process+thread count | 3665 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3185/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-17274) MR FileInput/Output formats to aggregate IOStatistics
[ https://issues.apache.org/jira/browse/HADOOP-17274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-17274. - Resolution: Won't Fix MAPREDUCE-7341 does this in its new committer > MR FileInput/Output formats to aggregate IOStatistics > - > > Key: HADOOP-17274 > URL: https://issues.apache.org/jira/browse/HADOOP-17274 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 3.3.1 >Reporter: Steve Loughran >Priority: Major > > The MR input formats are where IO takes place, so collect stats. Justifiable > if Hive/Spark use these (I think they do). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] wojiaodoubao commented on pull request #3141: HDFS-16087. Fix stuck issue in rbfbalance tool.
wojiaodoubao commented on pull request #3141: URL: https://github.com/apache/hadoop/pull/3141#issuecomment-87847 There are also some checkstyle complaints from Yetus; please fix them. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] wojiaodoubao commented on a change in pull request #3141: HDFS-16087. Fix stuck issue in rbfbalance tool.
wojiaodoubao commented on a change in pull request #3141: URL: https://github.com/apache/hadoop/pull/3141#discussion_r661299048

## File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/rbfbalance/RouterDistCpProcedure.java

@@ -44,6 +44,7 @@ protected void disableWrite(FedBalanceContext context) throws IOException {
     Configuration conf = context.getConf();
     String mount = context.getMount();
     MountTableProcedure.disableWrite(mount, conf);
+    updateStage(Stage.FINAL_DISTCP);

Review comment: Thanks @lipp for the nice report! The change is correct. DISABLE_WRITE is a stage of DistCpProcedure. DistCpProcedure disables writes by cancelling permissions, while RouterDistCpProcedure extends DistCpProcedure and disables writes by setting the mount point read-only. But RouterDistCpProcedure forgot to update the stage.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
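The underlying pattern is a template-method pitfall: the subclass replaces the base class's write-disabling mechanism, so it must also perform the stage transition the base implementation would otherwise have done. A simplified, hypothetical sketch of that pattern (not the actual hadoop-federation-balance classes):

```java
// Simplified sketch of the stage-update pattern discussed above;
// names and structure are illustrative, not the real code.
abstract class Procedure {
  enum Stage { DISABLE_WRITE, FINAL_DISTCP, FINISH }

  protected Stage stage = Stage.DISABLE_WRITE;

  protected void updateStage(Stage next) {
    this.stage = next;
  }

  /** Base implementation: disable writes, then advance the stage. */
  protected void disableWrite() {
    cancelWritePermission();
    updateStage(Stage.FINAL_DISTCP);
  }

  abstract void cancelWritePermission();
}

class RouterProcedure extends Procedure {
  @Override
  protected void disableWrite() {
    setMountPointReadOnly();
    // The bug: an override that omits this call leaves the procedure
    // stuck in DISABLE_WRITE; the fix restores the stage transition.
    updateStage(Stage.FINAL_DISTCP);
  }

  @Override
  void cancelWritePermission() { /* unused by this subclass */ }

  void setMountPointReadOnly() { /* mark the mount entry read-only */ }
}
```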
[GitHub] [hadoop] wojiaodoubao commented on a change in pull request #3141: HDFS-16087. Fix stuck issue in rbfbalance tool.
wojiaodoubao commented on a change in pull request #3141: URL: https://github.com/apache/hadoop/pull/3141#discussion_r665311980

## File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/rbfbalance/TestRouterDistCpProcedure.java

@@ -0,0 +1,120 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.rbfbalance;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.ha.HAServiceProtocol;
+import org.apache.hadoop.hdfs.DFSClient;
+import org.apache.hadoop.hdfs.server.federation.MiniRouterDFSCluster;
+import org.apache.hadoop.hdfs.server.federation.RouterConfigBuilder;
+import org.apache.hadoop.hdfs.server.federation.StateStoreDFSCluster;
+import org.apache.hadoop.hdfs.server.federation.resolver.ActiveNamenodeResolver;
+import org.apache.hadoop.hdfs.server.federation.resolver.MountTableManager;
+import org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys;
+import org.apache.hadoop.hdfs.server.federation.router.Router;
+import org.apache.hadoop.hdfs.server.federation.store.StateStoreService;
+import org.apache.hadoop.hdfs.server.federation.store.impl.MountTableStoreImpl;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryRequest;
+import org.apache.hadoop.hdfs.server.federation.store.protocol.AddMountTableEntryResponse;
+import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;
+import org.apache.hadoop.ipc.RemoteException;
+import org.apache.hadoop.tools.fedbalance.DistCpProcedure.Stage;
+import org.apache.hadoop.tools.fedbalance.FedBalanceContext;
+import org.apache.hadoop.tools.fedbalance.TestDistCpProcedure;
+import org.apache.hadoop.util.Time;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+
+import java.net.InetSocketAddress;
+import java.net.URI;
+import java.util.Collections;
+
+import static org.apache.hadoop.hdfs.server.federation.FederationTestUtils.createNamenodeReport;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+import static org.junit.Assert.assertTrue;
+
+
+public class TestRouterDistCpProcedure extends TestDistCpProcedure {
+  private static StateStoreDFSCluster cluster;
+  private static MiniRouterDFSCluster.RouterContext routerContext;
+  private static Configuration routerConf;
+  private static StateStoreService stateStore;
+
+  @BeforeClass
+  public static void globalSetUp() throws Exception {
+    cluster = new StateStoreDFSCluster(false, 1);
+    // Build and start a router with State Store + admin + RPC
+    Configuration conf = new RouterConfigBuilder()
+        .stateStore()
+        .admin()
+        .rpc()
+        .build();
+    cluster.addRouterOverrides(conf);
+    cluster.startRouters();
+    routerContext = cluster.getRandomRouter();
+    Router router = routerContext.getRouter();
+    stateStore = router.getStateStore();
+
+    // Add one name services for testing
+    ActiveNamenodeResolver membership = router.getNamenodeResolver();
+    membership.registerNamenode(createNamenodeReport("ns0", "nn1",
+        HAServiceProtocol.HAServiceState.ACTIVE));
+    stateStore.refreshCaches(true);
+
+    routerConf = new Configuration();
+    InetSocketAddress routerSocket = router.getAdminServerAddress();
+    routerConf.setSocketAddr(RBFConfigKeys.DFS_ROUTER_ADMIN_ADDRESS_KEY,
+        routerSocket);
+  }
+
+  @Override
+  public void testDisableWrite() throws Exception {
+    // Firstly add mount entry: /test-write->{ns0,/test-write}.
+    String mount = "/test-write";
+    MountTable newEntry = MountTable
+        .newInstance(mount, Collections.singletonMap("ns0", mount),
+            Time.now(), Time.now());
+    MountTableManager mountTable =
+        routerContext.getAdminClient().getMountTableManager();
+    AddMountTableEntryRequest addRequest =
+        AddMountTableEntryRequest.newInstance(newEntry);
+    AddMountTableEntryResponse addResponse =
+        mountTable.addMountTableEnt
[jira] [Created] (HADOOP-17792) "hadoop.security.token.service.use_ip" should be documented
Akira Ajisaka created HADOOP-17792: -- Summary: "hadoop.security.token.service.use_ip" should be documented Key: HADOOP-17792 URL: https://issues.apache.org/jira/browse/HADOOP-17792 Project: Hadoop Common Issue Type: Improvement Components: documentation Reporter: Akira Ajisaka hadoop.security.token.service.use_ip is not documented in core-default.xml. It should be documented. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] jianghuazhu opened a new pull request #3186: HDFS-16118.Improve the number of handlers that initialize NameNodeRpcServer#clientRpcServer.
jianghuazhu opened a new pull request #3186: URL: https://github.com/apache/hadoop/pull/3186

…Server#clientRpcServer.

## NOTICE

Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] JiaguodongF opened a new pull request #3185: HDFS-16119. start balancer with parameters -hotBlockTimeInterval xxx is invalid.
JiaguodongF opened a new pull request #3185: URL: https://github.com/apache/hadoop/pull/3185

When the balancer is started with the command-line parameter -hotBlockTimeInterval xxx, the setting does not take effect; setting it in hdfs-site.xml, however, does:

dfs.balancer.getBlocks.hot-time-interval = 1000

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
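For reference, the working hdfs-site.xml configuration described in the report would look like the snippet below. The property name and value are taken from the report itself; treating the value as milliseconds is an assumption, not confirmed by the report:

```xml
<!-- Workaround from the report: setting the interval in hdfs-site.xml
     takes effect even though the -hotBlockTimeInterval CLI flag did not.
     Value units assumed to be milliseconds. -->
<property>
  <name>dfs.balancer.getBlocks.hot-time-interval</name>
  <value>1000</value>
</property>
```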
[jira] [Updated] (HADOOP-17788) Replace IOUtils#closeQuietly usages
[ https://issues.apache.org/jira/browse/HADOOP-17788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viraj Jasani updated HADOOP-17788: -- Target Version/s: 3.4.0 > Replace IOUtils#closeQuietly usages > --- > > Key: HADOOP-17788 > URL: https://issues.apache.org/jira/browse/HADOOP-17788 > Project: Hadoop Common > Issue Type: Task >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 2.5h > Remaining Estimate: 0h > > IOUtils#closeQuietly is deprecated since 2.6 release of commons-io without > any replacement. Since we already have good replacement available in Hadoop's > own IOUtils, we should use it. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
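As an illustration of the swap this issue proposes, a hedged before/after sketch using Hadoop's own helper. cleanupWithLogger is the usual in-tree replacement; whether every call site in the patch uses exactly this form is not specified by the issue text:

```java
import java.io.FileInputStream;
import java.io.IOException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CloseQuietlySketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(CloseQuietlySketch.class);

  public static void main(String[] args) throws IOException {
    FileInputStream in = new FileInputStream("/etc/hosts");
    try {
      // ... use the stream ...
    } finally {
      // Before (deprecated since commons-io 2.6, no replacement there):
      // org.apache.commons.io.IOUtils.closeQuietly(in);

      // After: Hadoop's own helper, which logs close() failures
      // instead of swallowing them silently.
      org.apache.hadoop.io.IOUtils.cleanupWithLogger(LOG, in);
    }
  }
}
```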
[jira] [Updated] (HADOOP-17735) Upgrade aws-java-sdk to 1.11.1026 or later
[ https://issues.apache.org/jira/browse/HADOOP-17735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-17735: Summary: Upgrade aws-java-sdk to 1.11.1026 or later (was: Upgrade aws-java-sdk to 1.11.993 or later) > Upgrade aws-java-sdk to 1.11.1026 or later > -- > > Key: HADOOP-17735 > URL: https://issues.apache.org/jira/browse/HADOOP-17735 > Project: Hadoop Common > Issue Type: Sub-task > Components: build, fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Fix For: 3.3.2 > > Time Spent: 1h > Remaining Estimate: 0h > > Upgrade the AWS SDK. Apparently the shaded netty jar has some CVEs, and even > though the AWS codepaths don't seem vulnerable, it's still causing scan tools > to warn -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17735) Upgrade aws-java-sdk to 1.11.1026
[ https://issues.apache.org/jira/browse/HADOOP-17735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-17735: Summary: Upgrade aws-java-sdk to 1.11.1026 (was: Upgrade aws-java-sdk to 1.11.1026 or later) > Upgrade aws-java-sdk to 1.11.1026 > - > > Key: HADOOP-17735 > URL: https://issues.apache.org/jira/browse/HADOOP-17735 > Project: Hadoop Common > Issue Type: Sub-task > Components: build, fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Fix For: 3.3.2 > > Time Spent: 1h > Remaining Estimate: 0h > > Upgrade the AWS SDK. Apparently the shaded netty jar has some CVEs, and even > though the AWS codepaths don't seem vulnerable, it's still causing scan tools > to warn -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17775) Remove JavaScript package from Docker environment
[ https://issues.apache.org/jira/browse/HADOOP-17775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-17775: --- Fix Version/s: 3.3.2 3.2.3 2.10.2 Hadoop Flags: Reviewed Resolution: Fixed Status: Resolved (was: Patch Available) Merged. Thank you [~iwasakims] for your contribution! > Remove JavaScript package from Docker environment > - > > Key: HADOOP-17775 > URL: https://issues.apache.org/jira/browse/HADOOP-17775 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 2.10.2, 3.2.3, 3.3.2 > > Time Spent: 2.5h > Remaining Estimate: 0h > > As described in the [README of > yarn-ui|https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/README.md], > required JavaScript modules are automatically pulled by > frontend-maven-plugin. We can leverage them for local testing too. > While hadoop-yarn-ui and hadoop-yarn-applications-catalog-webapp both use > node.js, their node.js versions do not match, so the JavaScript-related > packages in the Docker environment are not guaranteed to work. > * > https://github.com/apache/hadoop/blob/fdef2b4ccacb8753aac0f5625505181c9b4dc154/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml#L170-L212 > * > https://github.com/apache/hadoop/blob/fdef2b4ccacb8753aac0f5625505181c9b4dc154/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/pom.xml#L264-L290 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] aajisaka merged pull request #3184: HADOOP-17775. Remove JavaScript package from Docker environment.
aajisaka merged pull request #3184: URL: https://github.com/apache/hadoop/pull/3184 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] aajisaka merged pull request #3183: HADOOP-17775. Remove JavaScript package from Docker environment.
aajisaka merged pull request #3183: URL: https://github.com/apache/hadoop/pull/3183 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] aajisaka merged pull request #3182: HADOOP-17775. Remove JavaScript package from Docker environment.
aajisaka merged pull request #3182: URL: https://github.com/apache/hadoop/pull/3182 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hemanthboyina commented on pull request #3179: HDFS-16114. the balancer parameters print error
hemanthboyina commented on pull request #3179: URL: https://github.com/apache/hadoop/pull/3179#issuecomment-875413672 Thanks @JiaguodongF for the contribution, and thanks @tomscut for the review. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hemanthboyina merged pull request #3179: HDFS-16114. the balancer parameters print error
hemanthboyina merged pull request #3179: URL: https://github.com/apache/hadoop/pull/3179 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #3181: HDFS-16116. Fix Hadoop FedBalance shell and federationBanance markdow…
hadoop-yetus commented on pull request #3181:
URL: https://github.com/apache/hadoop/pull/3181#issuecomment-875377099

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 1m 4s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. |
| +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 12m 50s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 25m 19s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 5s | | trunk passed |
| -1 :x: | shadedclient | 22m 23s | | branch has errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 37s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | mvnsite | 1m 55s | | the patch passed |
| +1 :green_heart: | shellcheck | 0m 10s | | No new issues. |
| +1 :green_heart: | shadedclient | 17m 23s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 45s | | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 0m 26s | | hadoop-federation-balance in the patch passed. |
| +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. |
| | | 89m 54s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3181/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3181 |
| Optional Tests | dupname asflicense mvnsite unit codespell shellcheck shelldocs markdownlint |
| uname | Linux 0bd3bf84183c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 6c1ecb48151753f0442212bfb7a4f71a66f113be |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3181/4/testReport/ |
| Max. process+thread count | 719 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-federation-balance U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3181/4/console |
| versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org

- To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org