[GitHub] [hadoop] hadoop-yetus commented on pull request #5991: New rbf nonnavia
hadoop-yetus commented on PR #5991: URL: https://github.com/apache/hadoop/pull/5991#issuecomment-1694177727

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 48s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 47m 58s | | trunk passed |
| +1 :green_heart: | compile | 0m 41s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | checkstyle | 0m 28s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 41s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 41s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 29s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 1m 24s | | trunk passed |
| +1 :green_heart: | shadedclient | 39m 52s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 31s | | the patch passed |
| +1 :green_heart: | compile | 0m 33s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javac | 0m 33s | | the patch passed |
| +1 :green_heart: | compile | 0m 28s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | javac | 0m 28s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 17s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 32s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 28s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 22s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 1m 22s | | the patch passed |
| +1 :green_heart: | shadedclient | 38m 36s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 21m 31s | | hadoop-hdfs-rbf in the patch passed. |
| +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. |
| | | 163m 2s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5991/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5991 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 0e4f280c9a13 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 29c98dcc951470f37ab9d9cd11f7036aa7b10484 |
| Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5991/1/testReport/ |
| Max. process+thread count | 2589 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5991/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@i
[GitHub] [hadoop] hadoop-yetus commented on pull request #5990: HDFS-17166. RBF: Throwing NoNamenodesAvailableException for a long time, when failover
hadoop-yetus commented on PR #5990: URL: https://github.com/apache/hadoop/pull/5990#issuecomment-1694159940

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 28s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 32m 43s | | trunk passed |
| +1 :green_heart: | compile | 0m 32s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | compile | 0m 30s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | checkstyle | 0m 26s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 33s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 36s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 26s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 0m 59s | | trunk passed |
| +1 :green_heart: | shadedclient | 20m 57s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 24s | | the patch passed |
| +1 :green_heart: | compile | 0m 24s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javac | 0m 24s | | the patch passed |
| +1 :green_heart: | compile | 0m 22s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | javac | 0m 22s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 15s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 25s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 0m 55s | | the patch passed |
| +1 :green_heart: | shadedclient | 21m 17s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 19m 13s | | hadoop-hdfs-rbf in the patch passed. |
| +1 :green_heart: | asflicense | 0m 31s | | The patch does not generate ASF License warnings. |
| | | 105m 56s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5990/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5990 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 8cee938ef2d5 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 29c98dcc951470f37ab9d9cd11f7036aa7b10484 |
| Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5990/2/testReport/ |
| Max. process+thread count | 2639 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5990/2/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] KeeProMise commented on a diff in pull request #5990: HDFS-17166. RBF: Throwing NoNamenodesAvailableException for a long time, when failover
KeeProMise commented on code in PR #5990: URL: https://github.com/apache/hadoop/pull/5990#discussion_r1306302466

## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java:

```java
@@ -83,14 +83,12 @@ public class MembershipNamenodeResolver
   /** Cached lookup of NN for block pool. Invalidated on cache refresh. */
   private Map> cacheBP;
-
```

Review Comment: done

## hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterClientRejectOverload.java:

```java
@@ -357,6 +363,72 @@ public void testNoNamenodesAvailable() throws Exception{
     assertEquals(originalRouter0Failures, rpcMetrics0.getProxyOpNoNamenodes());
   }

+  /**
+   * When failover occurs, the router may record that the ns has no active namenode.
+   * Only when the router updates the cache next time can the memory status be updated,
+   * causing the router to report NoNamenodesAvailableException for a long time
+   */
+  @Test
+  public void testNoNamenodesAvailableLongTimeWhenNsFailover() throws Exception {
+    setupCluster(false, true);
+    transitionClusterNSToStandby(cluster);
+    for (RouterContext routerContext : cluster.getRouters()) {
+      // Manually trigger the heartbeat
+      Collection heartbeatServices = routerContext
+          .getRouter().getNamenodeHeartbeatServices();
+      for (NamenodeHeartbeatService service : heartbeatServices) {
+        service.periodicInvoke();
+      }
+      // Update service cache
+      routerContext.getRouter().getStateStore().refreshCaches(true);
+    }
+    // Record the time after the router first updated the cache
+    long firstLoadTime = Time.now();
+    List namenodes = cluster.getNamenodes();
+
+    // Make sure all namenodes are in standby state
+    for (MiniRouterDFSCluster.NamenodeContext namenodeContext : namenodes) {
+      assertTrue(namenodeContext.getNamenode().getNameNodeState() == STANDBY.ordinal());
+    }
+
+    Configuration conf = cluster.getRouterClientConf();
+    // Set dfs.client.failover.random.order false, to pick 1st router at first
+    conf.setBoolean("dfs.client.failover.random.order", false);
+
+    DFSClient routerClient = new DFSClient(new URI("hdfs://fed"), conf);
+
+    for (RouterContext routerContext : cluster.getRouters()) {
+      // Get the second namenode in the router cache and make it active
+      List ns0 = routerContext.getRouter().getNamenodeResolver().getNamenodesForNameserviceId("ns0", false);
```

Review Comment: done

## hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterClientRejectOverload.java:

```java
@@ -357,6 +363,72 @@ public void testNoNamenodesAvailable() throws Exception{
     assertEquals(originalRouter0Failures, rpcMetrics0.getProxyOpNoNamenodes());
   }

+  /**
+   * When failover occurs, the router may record that the ns has no active namenode.
+   * Only when the router updates the cache next time can the memory status be updated,
+   * causing the router to report NoNamenodesAvailableException for a long time
+   */
+  @Test
+  public void testNoNamenodesAvailableLongTimeWhenNsFailover() throws Exception {
+    setupCluster(false, true);
+    transitionClusterNSToStandby(cluster);
+    for (RouterContext routerContext : cluster.getRouters()) {
+      // Manually trigger the heartbeat
+      Collection heartbeatServices = routerContext
+          .getRouter().getNamenodeHeartbeatServices();
+      for (NamenodeHeartbeatService service : heartbeatServices) {
+        service.periodicInvoke();
+      }
+      // Update service cache
+      routerContext.getRouter().getStateStore().refreshCaches(true);
+    }
+    // Record the time after the router first updated the cache
+    long firstLoadTime = Time.now();
+    List namenodes = cluster.getNamenodes();
+
+    // Make sure all namenodes are in standby state
+    for (MiniRouterDFSCluster.NamenodeContext namenodeContext : namenodes) {
+      assertTrue(namenodeContext.getNamenode().getNameNodeState() == STANDBY.ordinal());
+    }
+
+    Configuration conf = cluster.getRouterClientConf();
+    // Set dfs.client.failover.random.order false, to pick 1st router at first
+    conf.setBoolean("dfs.client.failover.random.order", false);
+
+    DFSClient routerClient = new DFSClient(new URI("hdfs://fed"), conf);
+
+    for (RouterContext routerContext : cluster.getRouters()) {
+      // Get the second namenode in the router cache and make it active
+      List ns0 = routerContext.getRouter().getNamenodeResolver().getNamenodesForNameserviceId("ns0", false);
+      String nsId = ns0.get(1).getNamenodeId();
+      cluster.switchToActive("ns0", nsId);
+      // Manually trigger the heartbeat, but the router does not manually load the cache
+      Collection heartbeatServices = routerContext
+          .getRouter().getNamenodeHeart
```
[GitHub] [hadoop] KeeProMise closed pull request #5991: New rbf nonnavia
KeeProMise closed pull request #5991: New rbf nonnavia URL: https://github.com/apache/hadoop/pull/5991
[GitHub] [hadoop] KeeProMise opened a new pull request, #5991: New rbf nonnavia
KeeProMise opened a new pull request, #5991: URL: https://github.com/apache/hadoop/pull/5991

### Description of PR

### How was this patch tested?

### For code changes:

- [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
[GitHub] [hadoop] KeeProMise commented on a diff in pull request #5990: HDFS-17166. RBF: Throwing NoNamenodesAvailableException for a long time, when failover
KeeProMise commented on code in PR #5990: URL: https://github.com/apache/hadoop/pull/5990#discussion_r1306286205

## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java:

```java
@@ -478,4 +476,27 @@ private List getRecentRegistrationForQuery(
   public void setRouterId(String router) {
     this.routerId = router;
   }
+
+  /**
+   * Shuffle cache, to ensure that the current nn will not be accessed first next time.
+   *
+   * @param nsId name service id
+   * @param namenode namenode contexts
+   */
+  @Override
+  public synchronized void shuffleCache(String nsId, FederationNamenodeContext namenode) {
+    cacheNS.compute(Pair.of(nsId, false), (ns, namenodeContexts) -> {
+      if (namenodeContexts != null
+          && namenodeContexts.size() > 0
```

Review Comment: You are right: the check here should be `> 1` rather than `> 0`.
[GitHub] [hadoop] KeeProMise commented on a diff in pull request #5990: HDFS-17166. RBF: Throwing NoNamenodesAvailableException for a long time, when failover
KeeProMise commented on code in PR #5990: URL: https://github.com/apache/hadoop/pull/5990#discussion_r1306285491

## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java:

```java
@@ -478,4 +476,27 @@ private List getRecentRegistrationForQuery(
   public void setRouterId(String router) {
     this.routerId = router;
   }
+
+  /**
+   * Shuffle cache, to ensure that the current nn will not be accessed first next time.
+   *
+   * @param nsId name service id
+   * @param namenode namenode contexts
+   */
+  @Override
+  public synchronized void shuffleCache(String nsId, FederationNamenodeContext namenode) {
+    cacheNS.compute(Pair.of(nsId, false), (ns, namenodeContexts) -> {
+      if (namenodeContexts != null
+          && namenodeContexts.size() > 0
```

Review Comment: The reason for not doing this check outside is that reading the cache and modifying the cache must stay atomic. Consider the following interleaving:

1. **Thread1** sees that the cache is not empty and reads the entry for an ns that has no active namenode.
2. **Thread2** (the loadCache thread) clears the cache.
3. **Thread3** processes a client request, finds the cache empty, and repopulates it; at this point the ns entry in the cache has an active namenode.
4. **Thread1** rotates the stale namenode list it read earlier (with no active nn) and writes it back into the cache, so the ns entry in the cache again has no active namenode.
[GitHub] [hadoop] KeeProMise commented on a diff in pull request #5990: HDFS-17166. RBF: Throwing NoNamenodesAvailableException for a long time, when failover
KeeProMise commented on code in PR #5990: URL: https://github.com/apache/hadoop/pull/5990#discussion_r1306272889

## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java:

```java
@@ -478,4 +476,27 @@ private List getRecentRegistrationForQuery(
   public void setRouterId(String router) {
     this.routerId = router;
   }
+
+  /**
+   * Shuffle cache, to ensure that the current nn will not be accessed first next time.
+   *
+   * @param nsId name service id
+   * @param namenode namenode contexts
+   */
+  @Override
+  public synchronized void shuffleCache(String nsId, FederationNamenodeContext namenode) {
+    cacheNS.compute(Pair.of(nsId, false), (ns, namenodeContexts) -> {
+      if (namenodeContexts != null
+          && namenodeContexts.size() > 0
+          && !namenodeContexts.get(0).getState().equals(ACTIVE)
+          && namenodeContexts.get(0).getRpcAddress().equals(namenode.getRpcAddress())) {
+        List rotatedNnContexts = new ArrayList<>(namenodeContexts);
+        Collections.rotate(rotatedNnContexts, -1);
```

Review Comment: The purpose of this method is to lower the priority of the namenode that failed in this request, so that the next request does not try it first. In the default MembershipNamenodeResolver implementation we can simply rotate the list to the left, because the per-ns cache in MembershipNamenodeResolver is a list. But since other NamenodeResolver implementations may exist in the future, this may be renamed to a more intuitive method name.
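The behavior discussed in this thread, a deterministic left rotation of the failed head namenode to the tail, performed atomically with respect to concurrent cache refreshes, can be sketched as follows. This is a simplified illustration, not the PR's code: it uses plain String addresses instead of FederationNamenodeContext, a String key instead of `Pair.of(nsId, false)`, and omits the ACTIVE-state check.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RotateCacheSketch {
    // Stand-in for cacheNS: nameservice id -> ordered namenode addresses.
    private final Map<String, List<String>> cacheNS = new ConcurrentHashMap<>();

    public void seed(String nsId, List<String> namenodes) {
        cacheNS.put(nsId, new ArrayList<>(namenodes));
    }

    public List<String> order(String nsId) {
        return cacheNS.get(nsId);
    }

    /**
     * Rotate the list left so the failed head entry is tried last next time.
     * The whole check-then-rotate runs inside compute(), so a concurrent
     * cache refresh can never interleave between the read and the write-back.
     */
    public void deprioritize(String nsId, String failedNn) {
        cacheNS.compute(nsId, (ns, namenodes) -> {
            if (namenodes == null || namenodes.size() <= 1
                || !namenodes.get(0).equals(failedNn)) {
                return namenodes; // entry changed under us, or nothing to rotate
            }
            List<String> rotated = new ArrayList<>(namenodes);
            Collections.rotate(rotated, -1); // [a, b, c] -> [b, c, a]
            return rotated;
        });
    }
}
```

Note that `Collections.rotate(list, -1)` is a deterministic rotation rather than a shuffle, and the `size() <= 1` guard reflects the `> 1` check agreed on in the review: a list of one element has nothing to rotate.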
[GitHub] [hadoop] goiri commented on a diff in pull request #5967: YARN-11435. [Router] FederationStateStoreFacade is not reinitialized with Router conf.
goiri commented on code in PR #5967: URL: https://github.com/apache/hadoop/pull/5967#discussion_r1306269910

## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java:

```java
@@ -195,11 +194,23 @@ public static RetryPolicy createRetryPolicy(Configuration conf) {
   /**
    * Returns the singleton instance of the FederationStateStoreFacade object.
+   * @param conf Configuration.
    *
    * @return the singleton {@link FederationStateStoreFacade} instance
    */
-  public static FederationStateStoreFacade getInstance() {
-    return FACADE;
+  public static FederationStateStoreFacade getInstance(Configuration conf) {
+    if(facade == null) {
```

Review Comment: Can we exit early?

```java
if (facade != null) {
  return facade;
}
```

## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/utils/FederationPoliciesTestUtil.java:

```java
@@ -244,7 +245,7 @@ public static FederationStateStoreFacade initFacade(
       List subClusterInfos, SubClusterPolicyConfiguration policyConfiguration)
       throws YarnException {
     FederationStateStoreFacade goodFacade = FederationStateStoreFacade
-        .getInstance();
+        .getInstance(new Configuration());
```

Review Comment: Can we have an empty getInstance() that does the new Configuration?

## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/rmadmin/TestFederationRMAdminInterceptor.java:

```java
@@ -120,7 +120,7 @@ public void setUp() {
     // Initialize facade & stateSore
     stateStore = new MemoryFederationStateStore();
     stateStore.init(this.getConf());
-    facade = FederationStateStoreFacade.getInstance();
+    facade = FederationStateStoreFacade.getInstance(this.getConf());
```

Review Comment: Let's be consistent with using this or not.

## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java:

```java
@@ -195,11 +194,23 @@ public static RetryPolicy createRetryPolicy(Configuration conf) {
   /**
    * Returns the singleton instance of the FederationStateStoreFacade object.
+   * @param conf Configuration.
    *
    * @return the singleton {@link FederationStateStoreFacade} instance
    */
-  public static FederationStateStoreFacade getInstance() {
-    return FACADE;
+  public static FederationStateStoreFacade getInstance(Configuration conf) {
+    if(facade == null) {
```

Review Comment: Fix spaces too.
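Taken together, the review suggestions above (early exit, a parameterless getInstance() overload, and spacing fixes) point at roughly the following shape. This is a hedged sketch, not the real FederationStateStoreFacade, which takes a Hadoop Configuration and initializes a state store client; here java.util.Properties stands in for Configuration, and the volatile field plus synchronized block are an extra assumption to keep the lazy initialization thread-safe.

```java
import java.util.Properties;

// Simplified stand-in for a lazily initialized configuration-carrying singleton.
public final class FacadeSketch {
    private static volatile FacadeSketch facade;

    private final Properties conf;

    private FacadeSketch(Properties conf) {
        this.conf = conf;
    }

    public static FacadeSketch getInstance(Properties conf) {
        // Early exit on the common path, per the review suggestion.
        if (facade != null) {
            return facade;
        }
        synchronized (FacadeSketch.class) {
            if (facade == null) {
                facade = new FacadeSketch(conf);
            }
        }
        return facade;
    }

    // Parameterless overload that supplies a default configuration, so test
    // call sites need not build one themselves (the second suggestion).
    public static FacadeSketch getInstance() {
        return getInstance(new Properties());
    }

    public Properties getConf() {
        return conf;
    }
}
```

The double-checked locking shape only works because the field is volatile; with a plain static field, a second caller could observe a partially constructed instance.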
[GitHub] [hadoop] goiri merged pull request #5966: HDFS-17148. RBF: SQLDelegationTokenSecretManager must cleanup expired tokens in SQL
goiri merged PR #5966: URL: https://github.com/apache/hadoop/pull/5966
[GitHub] [hadoop] goiri commented on a diff in pull request #5990: HDFS-17166. RBF: Throwing NoNamenodesAvailableException for a long time, when failover
goiri commented on code in PR #5990: URL: https://github.com/apache/hadoop/pull/5990#discussion_r1306244904

## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java:

```java
@@ -83,14 +83,12 @@ public class MembershipNamenodeResolver
   /** Cached lookup of NN for block pool. Invalidated on cache refresh. */
   private Map> cacheBP;
-
```

Review Comment: Let's avoid these unrelated changes.

## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java:

```java
@@ -478,4 +476,27 @@ private List getRecentRegistrationForQuery(
   public void setRouterId(String router) {
     this.routerId = router;
   }
+
+  /**
+   * Shuffle cache, to ensure that the current nn will not be accessed first next time.
+   *
+   * @param nsId name service id
+   * @param namenode namenode contexts
+   */
+  @Override
+  public synchronized void shuffleCache(String nsId, FederationNamenodeContext namenode) {
+    cacheNS.compute(Pair.of(nsId, false), (ns, namenodeContexts) -> {
+      if (namenodeContexts != null
+          && namenodeContexts.size() > 0
+          && !namenodeContexts.get(0).getState().equals(ACTIVE)
+          && namenodeContexts.get(0).getRpcAddress().equals(namenode.getRpcAddress())) {
+        List rotatedNnContexts = new ArrayList<>(namenodeContexts);
+        Collections.rotate(rotatedNnContexts, -1);
```

Review Comment: We are not really shuffling right?

## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MembershipNamenodeResolver.java:

```java
@@ -478,4 +476,27 @@ private List getRecentRegistrationForQuery(
   public void setRouterId(String router) {
     this.routerId = router;
   }
+
+  /**
+   * Shuffle cache, to ensure that the current nn will not be accessed first next time.
+   *
+   * @param nsId name service id
+   * @param namenode namenode contexts
+   */
+  @Override
+  public synchronized void shuffleCache(String nsId, FederationNamenodeContext namenode) {
+    cacheNS.compute(Pair.of(nsId, false), (ns, namenodeContexts) -> {
+      if (namenodeContexts != null
+          && namenodeContexts.size() > 0
```

Review Comment: isEmpty() and probably we want to check for > 1 as there's nothing to rotate. We should actually do that outside; if there's 1 or less, don't do anything.

## hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterClientRejectOverload.java:

```java
@@ -357,6 +363,72 @@ public void testNoNamenodesAvailable() throws Exception{
     assertEquals(originalRouter0Failures, rpcMetrics0.getProxyOpNoNamenodes());
   }

+  /**
+   * When failover occurs, the router may record that the ns has no active namenode.
+   * Only when the router updates the cache next time can the memory status be updated,
+   * causing the router to report NoNamenodesAvailableException for a long time
+   */
+  @Test
+  public void testNoNamenodesAvailableLongTimeWhenNsFailover() throws Exception {
+    setupCluster(false, true);
+    transitionClusterNSToStandby(cluster);
+    for (RouterContext routerContext : cluster.getRouters()) {
+      // Manually trigger the heartbeat
+      Collection heartbeatServices = routerContext
+          .getRouter().getNamenodeHeartbeatServices();
+      for (NamenodeHeartbeatService service : heartbeatServices) {
+        service.periodicInvoke();
+      }
+      // Update service cache
+      routerContext.getRouter().getStateStore().refreshCaches(true);
+    }
+    // Record the time after the router first updated the cache
+    long firstLoadTime = Time.now();
+    List namenodes = cluster.getNamenodes();
+
+    // Make sure all namenodes are in standby state
+    for (MiniRouterDFSCluster.NamenodeContext namenodeContext : namenodes) {
+      assertTrue(namenodeContext.getNamenode().getNameNodeState() == STANDBY.ordinal());
+    }
+
+    Configuration conf = cluster.getRouterClientConf();
+    // Set dfs.client.failover.random.order false, to pick 1st router at first
+    conf.setBoolean("dfs.client.failover.random.order", false);
+
+    DFSClient routerClient = new DFSClient(new URI("hdfs://fed"), conf);
+
+    for (RouterContext routerContext : cluster.getRouters()) {
+      // Get the second namenode in the router cache and make it active
+      List ns0 = routerContext.getRouter().getNamenodeResolver().getNamenodesForNameserviceId("ns0", false);
```

Review Comment: Pretty sure this line is being flagged by checkstyle.

## hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterClientRejectOverload.java:

```java
@@ -357,6 +363,72 @@ public void testNoNamenodesAvailable() throws Exception{
     assertEquals(originalRouter0Failures, rpcMetrics0.ge
```
[GitHub] [hadoop] hadoop-yetus commented on pull request #5989: YARN-11514. Extend SchedulerResponse with capacityVector
hadoop-yetus commented on PR #5989: URL: https://github.com/apache/hadoop/pull/5989#issuecomment-1693881690

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 17m 39s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | jsonlint | 0m 1s | | jsonlint was not available. |
| +0 :ok: | xmllint | 0m 1s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 41 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 47m 54s | | trunk passed |
| +1 :green_heart: | compile | 1m 1s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | compile | 0m 53s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | checkstyle | 0m 52s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 57s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 55s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 46s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 2m 0s | | trunk passed |
| +1 :green_heart: | shadedclient | 39m 30s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 47s | | the patch passed |
| +1 :green_heart: | compile | 0m 54s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javac | 0m 54s | | the patch passed |
| +1 :green_heart: | compile | 0m 46s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | javac | 0m 46s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 44s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5989/1/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 27 new + 91 unchanged - 1 fixed = 118 total (was 92) |
| +1 :green_heart: | mvnsite | 0m 49s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 43s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 40s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| -1 :x: | spotbugs | 2m 0s | [/new-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5989/1/artifact/out/new-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.html) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) |
| +1 :green_heart: | shadedclient | 39m 37s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 103m 45s | [/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5989/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt) | hadoop-yarn-server-resourcemanager in the patch passed. |
| -1 :x: | asflicense | 0m 34s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5989/1/artifact/out/results-asflicense.txt) | The patch generated 2 ASF License warnings. |
| | | 265m 51s | | |

| Reason | Tests |
|-------:|:------|
| SpotBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager |
| | Unread field:QueueCapacityVectorEntryInfo.java:[line 17] |
| | Unread field:QueueCapacityVectorEntryInfo.java:[line 18] |
| | Unread field:QueueCapacityVectorInfo.java:[line 23] |
| Failed junit tests | hadoop.y
[jira] [Commented] (HADOOP-18487) protobuf-2.5.0 dependencies => provided
[ https://issues.apache.org/jira/browse/HADOOP-18487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759141#comment-17759141 ] ASF GitHub Bot commented on HADOOP-18487: - ayushtkn commented on PR #4996: URL: https://github.com/apache/hadoop/pull/4996#issuecomment-1693846161 Ohh, I forgot some. In addition to the previous ones, I think you should update the BUILDING.txt file (https://github.com/apache/hadoop/blob/trunk/BUILDING.txt) to mention this. Second question: Do you want to keep the provided scope optional, or should we wrap this up under a profile? Third: We still expect protobuf-2.5.0 to be packaged even if the scope is provided, right? > protobuf-2.5.0 dependencies => provided > --- > > Key: HADOOP-18487 > URL: https://issues.apache.org/jira/browse/HADOOP-18487 > Project: Hadoop Common > Issue Type: Improvement > Components: build, ipc >Affects Versions: 3.3.4 >Reporter: Steve Loughran >Priority: Major > Labels: pull-request-available > > uses of protobuf 2.5 and RpcEngine have been deprecated since 3.3.0 in > HADOOP-17046 > while still keeping those files around (for a long time...), how about we > make the protobuf 2.5.0 export of hadoop-common and hadoop-hdfs *provided*, > rather than *compile* > that way, if apps want it for their own apis, they have to explicitly ask for > it, but at least our own scans don't break. > i have no idea what will happen to the rest of the stack at this point, it > will be "interesting" to see -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] ayushtkn commented on pull request #4996: HADOOP-18487. protobuf 2.5.0 marked as provided.
ayushtkn commented on PR #4996: URL: https://github.com/apache/hadoop/pull/4996#issuecomment-1693846161 Ohh, I forgot some. In addition to the previous ones, I think you should update the BUILDING.txt file (https://github.com/apache/hadoop/blob/trunk/BUILDING.txt) to mention this. Second question: Do you want to keep the provided scope optional, or should we wrap this up under a profile? Third: We still expect protobuf-2.5.0 to be packaged even if the scope is provided, right? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
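On the second question (optional provided scope vs. a profile), a profile-gated scope override could look roughly like the fragment below. This is a hedged sketch of standard Maven mechanics, not the actual PR change; the profile id `protobuf-provided` is hypothetical. Note that in Maven, `provided` dependencies are available at compile time but are neither exported transitively nor bundled into assembled distributions, which is why downstream apps would then have to declare protobuf 2.5.0 explicitly, and why the third question (whether it still ends up packaged) hinges on the assembly descriptors rather than the scope alone.

```xml
<!-- Hypothetical sketch: protobuf-2.5.0 stays at the default scope unless
     the profile is activated (-Pprotobuf-provided), which switches it to
     provided so it no longer leaks into consumers' dependency trees. -->
<profiles>
  <profile>
    <id>protobuf-provided</id>
    <dependencies>
      <dependency>
        <groupId>com.google.protobuf</groupId>
        <artifactId>protobuf-java</artifactId>
        <version>2.5.0</version>
        <scope>provided</scope>
      </dependency>
    </dependencies>
  </profile>
</profiles>
```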
[GitHub] [hadoop] tonyPan123 commented on pull request #5956: HDFS-17161: Adding test for StripedBlockReader#createBlockReader leak…
tonyPan123 commented on PR #5956: URL: https://github.com/apache/hadoop/pull/5956#issuecomment-1693815051 Thanks a lot for replying. Yeah, IOException could be thrown in BlockReaderRemote.newBlockReader in StripedBlockReader as illustrated in case HDFS-13039.
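The leak being tested above is the classic pattern of a factory that opens several closeable resources and then throws partway through: unless the ones already opened are closed on the failure path, they leak. The sketch below illustrates that pattern in plain Java; the names (`Reader`, `createReaders`, the close counter) are illustrative, not the actual HDFS `StripedBlockReader` code.

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch, not HDFS code: if opening reader i fails,
// readers 0..i-1 must be closed or their sockets/buffers leak.
public class ReaderLeakSketch {
    interface Reader extends Closeable { }

    // Counts close() calls so the cleanup path is observable.
    static int closed = 0;

    static Reader open(int i, int failAt) throws IOException {
        if (i == failAt) {
            throw new IOException("connect failed for reader " + i);
        }
        return () -> closed++;
    }

    /** Opens n readers; on failure, closes the ones already opened. */
    static List<Reader> createReaders(int n, int failAt) throws IOException {
        List<Reader> readers = new ArrayList<>();
        try {
            for (int i = 0; i < n; i++) {
                readers.add(open(i, failAt));
            }
            return readers;
        } catch (IOException e) {
            // The cleanup the leak test is checking for.
            for (Reader r : readers) {
                try { r.close(); } catch (IOException ignored) { }
            }
            throw e;
        }
    }

    public static void main(String[] args) {
        try {
            // Success path: all three readers created.
            if (createReaders(3, -1).size() != 3) throw new AssertionError();
            // Failure path: reader 1 fails, reader 0 must get closed.
            try {
                createReaders(3, 1);
                throw new AssertionError("expected IOException");
            } catch (IOException expected) { }
            System.out.println("ok");
        } catch (IOException e) {
            throw new AssertionError(e);
        }
    }
}
```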
[GitHub] [hadoop] hadoop-yetus commented on pull request #5990: HDFS-17166. RBF: Throwing NoNamenodesAvailableException for a long time, when failover
hadoop-yetus commented on PR #5990: URL: https://github.com/apache/hadoop/pull/5990#issuecomment-1693804418 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 35s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 31s | | trunk passed | | +1 :green_heart: | compile | 0m 33s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 | | +1 :green_heart: | compile | 0m 30s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | checkstyle | 0m 26s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 35s | | trunk passed | | +1 :green_heart: | javadoc | 0m 34s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 26s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | spotbugs | 1m 2s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 13s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 24s | | the patch passed | | +1 :green_heart: | compile | 0m 25s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 | | +1 :green_heart: | javac | 0m 25s | | the patch passed | | +1 :green_heart: | compile | 0m 22s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | javac | 0m 22s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 15s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5990/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 5 new + 1 unchanged - 0 fixed = 6 total (was 1) | | +1 :green_heart: | mvnsite | 0m 24s | | the patch passed | | +1 :green_heart: | javadoc | 0m 22s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | spotbugs | 0m 55s | | the patch passed | | +1 :green_heart: | shadedclient | 23m 26s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 29m 32s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5990/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 38s | | The patch does not generate ASF License warnings. 
| | | | 120m 36s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterRpc | | | hadoop.hdfs.server.federation.router.TestRouterQuota | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5990/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5990 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 4bac7adc94c4 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / da69e7f403c95c8daf7305eea60432f4fc57d9bb | | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5990/1/testReport/ | | Max. process+thread count | 2646 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-pr
[jira] [Resolved] (HADOOP-18845) Add ability to configure ConnectionTTL of http connections while creating S3 Client.
[ https://issues.apache.org/jira/browse/HADOOP-18845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukund Thakur resolved HADOOP-18845. Resolution: Fixed > Add ability to configure ConnectionTTL of http connections while creating S3 > Client. > > > Key: HADOOP-18845 > URL: https://issues.apache.org/jira/browse/HADOOP-18845 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.6 >Reporter: Mukund Thakur >Assignee: Mukund Thakur >Priority: Major > Labels: pull-request-available > Fix For: 3.3.9 > >
[jira] [Commented] (HADOOP-18845) Add ability to configure ConnectionTTL of http connections while creating S3 Client.
[ https://issues.apache.org/jira/browse/HADOOP-18845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759122#comment-17759122 ] ASF GitHub Bot commented on HADOOP-18845: - mukund-thakur commented on PR #5948: URL: https://github.com/apache/hadoop/pull/5948#issuecomment-1693734062 merged to trunk and branch-3.3. Thanks for reviews.
[jira] [Commented] (HADOOP-18842) Support Overwrite Directory On Commit For S3A Committers
[ https://issues.apache.org/jira/browse/HADOOP-18842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759123#comment-17759123 ] Steve Loughran commented on HADOOP-18842: - ok, so you are proposing we split the output files by dest directory, for parallelised reading and better scale there? good: * you can switch from memory storage to disk storage once some threshold is reached. * many readers can read files independently * if a job commit fails, more partitions are likely to be preserved or updated. bad: * lots of files to create and open * complexity when reading in the manifest of a task to determine which file to update. I suppose a tactic would be to generate a map of (dir -> accumulator), and the accumulator is updated with the list of files from that TA. if the accumulator gets above a certain size, then the switch to saving to files kicks in. You could probably avoid the need for the cross-thread queue/async record write by just having whichever thread is trying to update the accumulator acquire a lock to it, then do the create (if needed), plus the record writes. Another thing to consider is: how efficient is the current SinglePendingCommit structure; we do use the file format as the record format, don't we? a more efficient design for any accumulator would be possible, wouldn't it? something of (path, uploadID, array[part-info]). in the manifest committer I hadn't worried about the preservation of dirs until commit; having a single file listing all commits was just a way to avoid running out of memory and rely on file buffering/caching to keep the cost of building the file low. we did hit memory problems without it though. the big issue is on a spark driver with many active jobs: the memory requirement of multiple job commits going on at the same time was causing OOM failures not seen with the older committer, even though the entry size for each file to commit was much smaller (src, dest path, etag). 
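The (dir -> accumulator) tactic above, including the in-memory buffer that spills to a file past a threshold and per-accumulator locking instead of an async queue, can be sketched roughly as below. This is a hedged illustration of the idea under discussion, not S3A committer code; class and method names (`Accumulator`, `forDir`, the `.pending` suffix) are hypothetical.

```java
import java.io.IOException;
import java.io.Writer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of (dir -> accumulator): commit entries buffer in
// memory per destination directory; once a threshold is crossed, the
// accumulator switches to a spill file. Writers synchronize on the
// accumulator itself, so no cross-thread queue is needed.
public class DirAccumulatorSketch {
    static final int SPILL_THRESHOLD = 4; // tiny, for demo purposes only

    static final class Accumulator {
        final List<String> inMemory = new ArrayList<>();
        final Path spillFile;
        Writer spill; // created lazily once the threshold is crossed

        Accumulator(Path spillFile) { this.spillFile = spillFile; }

        synchronized void add(String commitEntry) throws IOException {
            if (spill == null && inMemory.size() < SPILL_THRESHOLD) {
                inMemory.add(commitEntry);
                return;
            }
            if (spill == null) { // switch to disk, draining the buffer
                spill = Files.newBufferedWriter(spillFile);
                for (String e : inMemory) {
                    spill.write(e + "\n");
                }
                inMemory.clear();
            }
            spill.write(commitEntry + "\n");
        }

        synchronized boolean spilled() { return spill != null; }

        synchronized void close() throws IOException {
            if (spill != null) spill.close();
        }
    }

    final Map<String, Accumulator> byDir = new ConcurrentHashMap<>();

    Accumulator forDir(String dir, Path tmpDir) {
        return byDir.computeIfAbsent(dir,
            d -> new Accumulator(tmpDir.resolve(d + ".pending")));
    }

    // Demo: six adds to one directory cross the threshold and spill.
    static boolean demo() {
        try {
            Path tmp = Files.createTempDirectory("acc-demo");
            DirAccumulatorSketch s = new DirAccumulatorSketch();
            Accumulator a = s.forDir("part-0", tmp);
            for (int i = 0; i < 6; i++) {
                a.add("s3a://bucket/out/part-0/file-" + i);
            }
            boolean ok = a.spilled() && a.inMemory.isEmpty();
            a.close();
            return ok;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo() ? "spilled after threshold" : "demo failed");
    }
}
```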
> Support Overwrite Directory On Commit For S3A Committers > > > Key: HADOOP-18842 > URL: https://issues.apache.org/jira/browse/HADOOP-18842 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Syed Shameerur Rahman >Assignee: Syed Shameerur Rahman >Priority: Major > Labels: pull-request-available > > The goal is to add a new kind of commit mechanism in which the destination > directory is cleared off before committing the file. > *Use Case* > In case of dynamicPartition insert overwrite queries, the destination > directories which need to be overwritten are not known before execution, > and hence it becomes a challenge to clear them off. > > One approach to handle this is for the underlying engines/clients to clear off > all the destination directories before calling the commitJob operation, but > the issue with this approach is that, in case of failures while committing > the files, we might end up with the whole of the previous data being deleted, > making the recovery process difficult or time consuming. > > *Solution* > Based on the mode of the commit operation, either *INSERT* or *OVERWRITE*, during > commitJob operations the committer will map each destination directory to > the commits which need to be added in that directory, and if the mode is > *OVERWRITE*, the committer will delete the directory recursively and then > commit each of the files in the directory. So in case of failures (worst > case) the number of destination directories which will be deleted will be equal > to the number of threads if we do it in a multi-threaded way, as compared to the > whole data if it was done on the engine side.
[GitHub] [hadoop] mukund-thakur commented on pull request #5948: HADOOP-18845. Add ability to configure s3 connection ttl
mukund-thakur commented on PR #5948: URL: https://github.com/apache/hadoop/pull/5948#issuecomment-1693734062 merged to trunk and branch-3.3. Thanks for reviews.
[jira] [Updated] (HADOOP-18797) S3A committer fix lost data on concurrent jobs
[ https://issues.apache.org/jira/browse/HADOOP-18797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-18797: Affects Version/s: 3.3.6 > S3A committer fix lost data on concurrent jobs > -- > > Key: HADOOP-18797 > URL: https://issues.apache.org/jira/browse/HADOOP-18797 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.3.6 >Reporter: Emanuel Velzi >Priority: Major > > There is a failure in the commit process when multiple jobs are writing to an > S3 directory *concurrently* using {*}magic committers{*}. > This issue is closely related to HADOOP-17318. > When multiple Spark jobs write to the same S3A directory, they upload files > simultaneously using "__magic" as the base directory for staging. Inside this > directory, there are multiple "/job-some-uuid" directories, each representing > a concurrently running job. > To fix some problems related to concurrency, a property was introduced in > the previous fix: "spark.hadoop.fs.s3a.committer.abort.pending.uploads". When > set to false, it ensures that during the cleanup stage, finalizing jobs do > not abort pending uploads from other jobs. 
So we see in logs this line: > {code:java} > DEBUG [main] o.a.h.fs.s3a.commit.AbstractS3ACommitter (819): Not cleanup up > pending uploads to s3a ...{code} > (from > [AbstractS3ACommitter.java#L952|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitter.java#L952]) > However, in the next step, the {*}"__magic" directory is recursively > deleted{*}: > {code:java} > INFO [main] o.a.h.fs.s3a.commit.magic.MagicS3GuardCommitter (98): Deleting > magic directory s3a://my-bucket/my-table/__magic: duration 0:00.560s {code} > (from [AbstractS3ACommitter.java#L1112 > |https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitter.java#L1112]and > > [MagicS3GuardCommitter.java#L137)|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicS3GuardCommitter.java#L137)] > This deletion operation *affects the second job* that is still running > because it loses pending uploads (i.e., ".pendingset" and ".pending" files). > The consequences can range from an exception in the best case to a silent > loss of data in the worst case. The latter occurs when Job_1 deletes files > just before Job_2 executes "listPendingUploadsToCommit" to list ".pendingset" > files in the job attempt directory previous to complete the uploads with POST > requests. > To resolve this issue, it's important {*}to ensure that only the prefix > associated with the job currently finalizing is cleaned{*}. > Here's a possible solution: > {code:java} > /** > * Delete the magic directory. 
> */ > public void cleanupStagingDirs() { > final Path out = getOutputPath(); > //Path path = magicSubdir(getOutputPath()); > Path path = new Path(magicSubdir(out), formatJobDir(getUUID())); > try(DurationInfo ignored = new DurationInfo(LOG, true, > "Deleting magic directory %s", path)) { > Invoker.ignoreIOExceptions(LOG, "cleanup magic directory", > path.toString(), > () -> deleteWithWarning(getDestFS(), path, true)); > } > } {code} > > The side effect of this issue is that the "__magic" directory is never > cleaned up. However, I believe this is a minor concern, even considering that > other folders such as "_SUCCESS" also persist after jobs end.
[jira] [Commented] (HADOOP-18797) S3A committer fix lost data on concurrent jobs
[ https://issues.apache.org/jira/browse/HADOOP-18797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759118#comment-17759118 ] Steve Loughran commented on HADOOP-18797: - bq. Running multiple jobs writing into the same dir is always pretty risky if they are generating new files with uuids in their names, and you want all jobs to add to the existing dataset, should be safe.
[jira] [Commented] (HADOOP-18866) Refactor @Test(expected) with assertThrows
[ https://issues.apache.org/jira/browse/HADOOP-18866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759117#comment-17759117 ] Steve Loughran commented on HADOOP-18866: - because they are existing tests, they find regressions, and rewriting test code just because the existing style is out of fashion is hard to justify. why bother? it doesn't improve test coverage or diagnostics; get it wrong and either you have a false positive (test failure) or a false negative (misses regressions). it is stable code. # new tests, no; there we'd want intercept() and assertj. using assertj over junit5 asserts helps us to backport things to older branches without reworking the tests. # as part of ongoing changes to existing tests - yes. # a bulk replace of @Test(expected = ...) with intercept()? well, we are always scared of big changes. look at the commit history of moving to junit5 > Refactor @Test(expected) with assertThrows > -- > > Key: HADOOP-18866 > URL: https://issues.apache.org/jira/browse/HADOOP-18866 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Taher Ghaleb >Priority: Minor > Labels: pull-request-available > > I am working on research that investigates test smell refactoring in which we > identify alternative implementations of test cases, study how commonly used > these refactorings are, and assess how acceptable they are in practice. > The smell occurs when exception handling can alternatively be implemented > using assertion rather than annotation: using {{assertThrows(Exception.class, > () -> \{...});}} instead of {{{}@Test(expected = Exception.class){}}}. > While there are many cases like this, we aim in this pull request to get your > feedback on this particular test smell and its refactoring. Thanks in advance > for your input. 
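The diagnostic advantage of intercept()-style assertions over @Test(expected) is that the caught exception is handed back to the test, which can then assert on its type and message. The sketch below shows the shape of that pattern in dependency-free Java; it is not the JUnit `assertThrows` or Hadoop `LambdaTestUtils.intercept` implementation, just an illustration of why the style is preferred for new tests.

```java
import java.util.concurrent.Callable;

// Plain-Java illustration of the intercept() pattern: run a callable,
// require a specific exception type, and return the caught exception so
// the caller can keep asserting on it. An @Test(expected = ...) annotation
// can only check the type, and silently passes wherever the exception is
// thrown inside the test body.
public class InterceptSketch {
    static <E extends Throwable> E intercept(Class<E> clazz, Callable<?> body)
            throws Exception {
        Object result;
        try {
            result = body.call();
        } catch (Throwable t) {
            if (clazz.isInstance(t)) {
                return clazz.cast(t); // hand the exception back for inspection
            }
            throw new AssertionError("Wrong exception type: " + t, t);
        }
        throw new AssertionError("Expected " + clazz.getName()
            + " but call returned: " + result);
    }

    public static void main(String[] args) throws Exception {
        // Equivalent of @Test(expected = ArithmeticException.class),
        // plus an extra message assertion the annotation form cannot express.
        ArithmeticException e =
            intercept(ArithmeticException.class, () -> 1 / 0);
        if (!e.getMessage().contains("zero")) {
            throw new AssertionError("unexpected message: " + e.getMessage());
        }
        System.out.println("ok");
    }
}
```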
[jira] [Commented] (HADOOP-18845) Add ability to configure ConnectionTTL of http connections while creating S3 Client.
[ https://issues.apache.org/jira/browse/HADOOP-18845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759116#comment-17759116 ] ASF GitHub Bot commented on HADOOP-18845: - mukund-thakur merged PR #5948: URL: https://github.com/apache/hadoop/pull/5948
[GitHub] [hadoop] mukund-thakur merged pull request #5948: HADOOP-18845. Add ability to configure s3 connection ttl
mukund-thakur merged PR #5948: URL: https://github.com/apache/hadoop/pull/5948
[jira] [Assigned] (HADOOP-18842) Support Overwrite Directory On Commit For S3A Committers
[ https://issues.apache.org/jira/browse/HADOOP-18842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-18842: --- Assignee: Syed Shameerur Rahman
[jira] [Updated] (HADOOP-18842) Support Overwrite Directory On Commit For S3A Committers
[ https://issues.apache.org/jira/browse/HADOOP-18842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-18842: Parent: HADOOP-18477 Issue Type: Sub-task (was: New Feature)
[GitHub] [hadoop] goiri commented on a diff in pull request #5944: YARN-11537. [Federation] Router CLI Supports List SubClusterPolicyConfiguration Of Queues.
goiri commented on code in PR #5944: URL: https://github.com/apache/hadoop/pull/5944#discussion_r1305924569 ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/QueryFederationQueuePoliciesResponse.java: ## @@ -58,18 +70,38 @@ public static QueryFederationQueuePoliciesResponse newInstance( */ public abstract void setTotalSize(int totalSize); + /** + * Returns the page. + * + * @return page. + */ @Public @Unstable - public abstract int getPageSize(); + public abstract int getPage(); Review Comment: Doesn't size make more sense? ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/MockResourceManagerFacade.java: ## @@ -141,44 +141,11 @@ import org.apache.hadoop.yarn.exceptions.YarnException; import org.apache.hadoop.yarn.security.AMRMTokenIdentifier; import org.apache.hadoop.yarn.server.api.ResourceManagerAdministrationProtocol; -import org.apache.hadoop.yarn.server.api.protocolrecords.AddToClusterNodeLabelsRequest; -import org.apache.hadoop.yarn.server.api.protocolrecords.AddToClusterNodeLabelsResponse; -import org.apache.hadoop.yarn.server.api.protocolrecords.CheckForDecommissioningNodesRequest; -import org.apache.hadoop.yarn.server.api.protocolrecords.CheckForDecommissioningNodesResponse; -import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshAdminAclsRequest; -import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshAdminAclsResponse; -import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshClusterMaxPriorityRequest; -import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshClusterMaxPriorityResponse; -import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshNodesRequest; -import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshNodesResourcesRequest; -import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshNodesResourcesResponse; -import 
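As background to the `getPage()`/`getPageSize()` naming question above: a paged response generally needs both a page index and a page size, since together they determine the offset into the full result list. A minimal, hypothetical illustration (not the YARN `QueryFederationQueuePoliciesResponse` API):

```java
import java.util.List;

// Hypothetical paging helper; names are invented for this example.
class Paging {
  // Return the 1-based page `page` of `items`, `pageSize` entries per page.
  // Out-of-range pages yield an empty list rather than throwing.
  static <T> List<T> page(List<T> items, int page, int pageSize) {
    int from = Math.min((page - 1) * pageSize, items.size());
    int to = Math.min(from + pageSize, items.size());
    return items.subList(from, to);
  }
}
```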
org.apache.hadoop.yarn.server.api.protocolrecords.RefreshNodesResponse; -import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshQueuesRequest; -import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshQueuesResponse; -import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshServiceAclsRequest; -import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshServiceAclsResponse; -import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshSuperUserGroupsConfigurationRequest; -import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshSuperUserGroupsConfigurationResponse; -import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshUserToGroupsMappingsRequest; -import org.apache.hadoop.yarn.server.api.protocolrecords.RefreshUserToGroupsMappingsResponse; -import org.apache.hadoop.yarn.server.api.protocolrecords.RemoveFromClusterNodeLabelsRequest; -import org.apache.hadoop.yarn.server.api.protocolrecords.RemoveFromClusterNodeLabelsResponse; -import org.apache.hadoop.yarn.server.api.protocolrecords.ReplaceLabelsOnNodeRequest; -import org.apache.hadoop.yarn.server.api.protocolrecords.ReplaceLabelsOnNodeResponse; -import org.apache.hadoop.yarn.server.api.protocolrecords.UpdateNodeResourceRequest; -import org.apache.hadoop.yarn.server.api.protocolrecords.UpdateNodeResourceResponse; +import org.apache.hadoop.yarn.server.api.protocolrecords.*; Review Comment: Avoid ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/rmadmin/FederationRMAdminInterceptor.java: ## @@ -995,6 +999,195 @@ public BatchSaveFederationQueuePoliciesResponse batchSaveFederationQueuePolicies throw new YarnException("Unable to batchSaveFederationQueuePolicies."); } + /** + * List the Queue Policies for the Federation. + * + * @param request QueryFederationQueuePoliciesRequest Request. + * @return Review Comment: Complete all these. 
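The terse "Avoid" review above refers to the wildcard `import ...protocolrecords.*;` replacing the explicit import list. Checkstyle's standard `AvoidStarImport` module flags exactly this pattern; a generic configuration fragment is shown below as a sketch (this is not Hadoop's actual checkstyle.xml):

```xml
<module name="Checker">
  <module name="TreeWalker">
    <!-- Flags any "import foo.*;" so explicit imports must be used. -->
    <module name="AvoidStarImport"/>
  </module>
</module>
```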
## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/server/api/ResourceManagerAdministrationProtocol.java: ## @@ -28,40 +28,7 @@ import org.apache.hadoop.yarn.api.records.NodeId; import org.apache.hadoop.yarn.api.records.ResourceOption; import org.apache.hadoop.yarn.exceptions.YarnException; -import org.apache.hadoop.yarn.server.api.protocolrecords.AddToClusterNodeLabelsRequest; -import org.apache.hadoop.yarn.server.api.protocolrecords.AddToClusterNodeLabelsResponse; -import org.apache.hadoop.yarn.server.api.protocolrecords.NodesToAttributesMappingRequest; -import org.apache.hadoop.yarn.server.api.protocolrecords.NodesToAttributesMappingResponse; -import org.apache.hadoop.yarn.server.api.protocolrecords.CheckForDecommissioningNodesRequest; -import org.apache.hadoop.yarn.server.api.protocolrecords.CheckForDecommissioningNodesResponse; -import org.apache.hadoop.yarn.server.api.protocol
[GitHub] [hadoop] goiri commented on a diff in pull request #5934: YARN-7599. [BackPort][GPG] ApplicationCleaner in Global Policy Generator.
goiri commented on code in PR #5934: URL: https://github.com/apache/hadoop/pull/5934#discussion_r1305919907 ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationStateStoreFacade.java: ## @@ -51,38 +51,7 @@ import org.apache.hadoop.yarn.server.federation.resolver.SubClusterResolver; import org.apache.hadoop.yarn.server.federation.store.FederationStateStore; import org.apache.hadoop.yarn.server.federation.store.exception.FederationStateStoreRetriableException; -import org.apache.hadoop.yarn.server.federation.store.records.AddApplicationHomeSubClusterRequest; -import org.apache.hadoop.yarn.server.federation.store.records.AddApplicationHomeSubClusterResponse; -import org.apache.hadoop.yarn.server.federation.store.records.AddReservationHomeSubClusterRequest; -import org.apache.hadoop.yarn.server.federation.store.records.AddReservationHomeSubClusterResponse; -import org.apache.hadoop.yarn.server.federation.store.records.ApplicationHomeSubCluster; -import org.apache.hadoop.yarn.server.federation.store.records.GetApplicationHomeSubClusterRequest; -import org.apache.hadoop.yarn.server.federation.store.records.GetApplicationHomeSubClusterResponse; -import org.apache.hadoop.yarn.server.federation.store.records.GetReservationHomeSubClusterRequest; -import org.apache.hadoop.yarn.server.federation.store.records.GetReservationHomeSubClusterResponse; -import org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterInfoRequest; -import org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterInfoResponse; -import org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterPoliciesConfigurationsRequest; -import org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterPolicyConfigurationRequest; -import org.apache.hadoop.yarn.server.federation.store.records.GetSubClusterPolicyConfigurationResponse; -import 
org.apache.hadoop.yarn.server.federation.store.records.SetSubClusterPolicyConfigurationRequest; -import org.apache.hadoop.yarn.server.federation.store.records.GetSubClustersInfoRequest; -import org.apache.hadoop.yarn.server.federation.store.records.ReservationHomeSubCluster; -import org.apache.hadoop.yarn.server.federation.store.records.SubClusterId; -import org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo; -import org.apache.hadoop.yarn.server.federation.store.records.SubClusterPolicyConfiguration; -import org.apache.hadoop.yarn.server.federation.store.records.UpdateApplicationHomeSubClusterRequest; -import org.apache.hadoop.yarn.server.federation.store.records.UpdateReservationHomeSubClusterRequest; -import org.apache.hadoop.yarn.server.federation.store.records.DeleteReservationHomeSubClusterRequest; -import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKeyRequest; -import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKeyResponse; -import org.apache.hadoop.yarn.server.federation.store.records.RouterMasterKey; -import org.apache.hadoop.yarn.server.federation.store.records.RouterStoreToken; -import org.apache.hadoop.yarn.server.federation.store.records.RouterRMTokenRequest; -import org.apache.hadoop.yarn.server.federation.store.records.RouterRMTokenResponse; -import org.apache.hadoop.yarn.server.federation.store.records.SubClusterState; -import org.apache.hadoop.yarn.server.federation.store.records.SubClusterDeregisterRequest; -import org.apache.hadoop.yarn.server.federation.store.records.SubClusterDeregisterResponse; +import org.apache.hadoop.yarn.server.federation.store.records.*; Review Comment: Avoid ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GlobalPolicyGenerator.java: ## @@ -155,6 +163,16 @@ protected void serviceStart() throws Exception { 
DurationFormatUtils.formatDurationISO(scCleanerIntervalMs)); } +// Schedule ApplicationCleaner service +long appCleanerIntervalMs = config.getLong(YarnConfiguration.GPG_APPCLEANER_INTERVAL_MS, Review Comment: I know is a backport but... Can we use getTimeDuration? ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/java/org/apache/hadoop/yarn/server/globalpolicygenerator/applicationcleaner/TestDefaultApplicationCleaner.java: ## @@ -0,0 +1,130 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.a
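The `getTimeDuration` suggestion above refers to Hadoop's `Configuration.getTimeDuration(name, defaultValue, TimeUnit)`, which, unlike `getLong`, accepts human-readable unit suffixes such as `10s` or `5m` in configuration values. A minimal, self-contained re-implementation of that suffix parsing, for illustration only (Hadoop's own method supports more units and cases):

```java
import java.util.concurrent.TimeUnit;

// Simplified sketch in the spirit of Configuration.getTimeDuration;
// not the actual Hadoop implementation.
class TimeDurations {
  static long parseMillis(String value) {
    String v = value.trim();
    TimeUnit unit = TimeUnit.MILLISECONDS;  // bare numbers default to ms here
    if (v.endsWith("ms")) {                 // check "ms" before bare "s"
      v = v.substring(0, v.length() - 2);
    } else if (v.endsWith("s")) {
      unit = TimeUnit.SECONDS;
      v = v.substring(0, v.length() - 1);
    } else if (v.endsWith("m")) {
      unit = TimeUnit.MINUTES;
      v = v.substring(0, v.length() - 1);
    } else if (v.endsWith("h")) {
      unit = TimeUnit.HOURS;
      v = v.substring(0, v.length() - 1);
    }
    return unit.toMillis(Long.parseLong(v.trim()));
  }
}
```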
[jira] [Commented] (HADOOP-18867) Upgrade ZooKeeper to 3.6.4
[ https://issues.apache.org/jira/browse/HADOOP-18867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759110#comment-17759110 ] ASF GitHub Bot commented on HADOOP-18867: - hadoop-yetus commented on PR #5988: URL: https://github.com/apache/hadoop/pull/5988#issuecomment-1693672579 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 28s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 12s | | trunk passed | | +1 :green_heart: | compile | 0m 20s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 | | +1 :green_heart: | compile | 0m 19s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | mvnsite | 0m 23s | | trunk passed | | +1 :green_heart: | javadoc | 0m 23s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 21s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | shadedclient | 54m 35s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 13s | | the patch passed | | +1 :green_heart: | compile | 0m 12s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 | | +1 :green_heart: | javac | 0m 12s | | the patch passed | | +1 :green_heart: | compile | 0m 12s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | javac | 0m 12s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 0m 14s | | the patch passed | | +1 :green_heart: | javadoc | 0m 12s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 13s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | shadedclient | 21m 49s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 16s | | hadoop-project in the patch passed. | | +1 :green_heart: | asflicense | 0m 28s | | The patch does not generate ASF License warnings. 
| | | | 81m 31s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5988/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5988 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint | | uname | Linux c362d03ad77a 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 8d9d112deff321ac701e35a22133dd5b0799b66e | | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5988/1/testReport/ | | Max. process+thread count | 555 (vs. ulimit of 5500) | | modules | C: hadoop-project U: hadoop-project | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5988/1/console | | versions | git=2.25.1 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. > Upgrade ZooKeeper to 3.6.4 > -- > > Key: HADOOP-18867 > URL: https://issues.apache.org/jira/browse/HADOOP-18867 > Project: Hadoop Comm
[GitHub] [hadoop] hadoop-yetus commented on pull request #5988: HADOOP-18867. Upgrade ZooKeeper to 3.6.4.
hadoop-yetus commented on PR #5988: URL: https://github.com/apache/hadoop/pull/5988#issuecomment-1693672579 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 28s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 12s | | trunk passed | | +1 :green_heart: | compile | 0m 20s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 | | +1 :green_heart: | compile | 0m 19s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | mvnsite | 0m 23s | | trunk passed | | +1 :green_heart: | javadoc | 0m 23s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 21s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | shadedclient | 54m 35s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 13s | | the patch passed | | +1 :green_heart: | compile | 0m 12s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 | | +1 :green_heart: | javac | 0m 12s | | the patch passed | | +1 :green_heart: | compile | 0m 12s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | javac | 0m 12s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 0m 14s | | the patch passed | | +1 :green_heart: | javadoc | 0m 12s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 13s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | shadedclient | 21m 49s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 16s | | hadoop-project in the patch passed. | | +1 :green_heart: | asflicense | 0m 28s | | The patch does not generate ASF License warnings. 
| | | | 81m 31s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5988/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5988 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint | | uname | Linux c362d03ad77a 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 8d9d112deff321ac701e35a22133dd5b0799b66e | | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5988/1/testReport/ | | Max. process+thread count | 555 (vs. ulimit of 5500) | | modules | C: hadoop-project U: hadoop-project | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5988/1/console | | versions | git=2.25.1 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org Fo
[GitHub] [hadoop] KeeProMise opened a new pull request, #5990: HDFS-17166. RBF: Throwing NoNamenodesAvailableException for a long time, when failover
KeeProMise opened a new pull request, #5990: URL: https://github.com/apache/hadoop/pull/5990 ### Description of PR https://issues.apache.org/jira/browse/HDFS-17166 ### How was this patch tested? ### For code changes: - [ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
[GitHub] [hadoop] goiri commented on pull request #5959: HDFS-17162. RBF: Add missing comments in StateStoreService
goiri commented on PR #5959: URL: https://github.com/apache/hadoop/pull/5959#issuecomment-1693658633 > hey, remember to include the JIRA ID in the commit title; when i first saw this in the commit message i thought it was a CVE fix going in. > > how about we revert and then recommit with the same title as this PR, _and_ credit the author? if it's @flaming-archer's first commit, they deserve to have their name in the commit log... My bad, I should've tuned the PR description before merging.
[jira] [Commented] (HADOOP-18073) Upgrade AWS SDK to v2
[ https://issues.apache.org/jira/browse/HADOOP-18073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759103#comment-17759103 ] ASF GitHub Bot commented on HADOOP-18073: - steveloughran commented on PR #5981: URL: https://github.com/apache/hadoop/pull/5981#issuecomment-1693632603 "interesting" > Upgrade AWS SDK to v2 > - > > Key: HADOOP-18073 > URL: https://issues.apache.org/jira/browse/HADOOP-18073 > Project: Hadoop Common > Issue Type: Task > Components: auth, fs/s3 >Affects Versions: 3.3.1 >Reporter: xiaowei sun >Assignee: Ahmar Suhail >Priority: Major > Labels: pull-request-available > Attachments: Upgrading S3A to SDKV2.pdf > > > This task tracks upgrading Hadoop's AWS connector S3A from AWS SDK for Java > V1 to AWS SDK for Java V2. > Original use case: > {quote}We would like to access s3 with AWS SSO, which is supported in > software.amazon.awssdk:sdk-core:2.*. > In particular, from > [https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html], > when to set 'fs.s3a.aws.credentials.provider', it must be > "com.amazonaws.auth.AWSCredentialsProvider". We would like to support > "software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider" which > supports AWS SSO, so users only need to authenticate once. > {quote}
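For the SSO use case quoted above, the `fs.s3a.aws.credentials.provider` setting today must name a v1 `com.amazonaws.auth.AWSCredentialsProvider` implementation. Once the v2 upgrade lands, the configuration might look like the core-site.xml fragment below; the exact provider class names accepted depend on how HADOOP-18073 is ultimately implemented, so this is a sketch, not a documented setting:

```xml
<!-- Sketch only: v2 provider support is pending HADOOP-18073. -->
<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider</value>
</property>
```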
[GitHub] [hadoop] steveloughran commented on pull request #5981: HADOOP-18073. Upgrade AWS SDK to v2 in S3A
steveloughran commented on PR #5981: URL: https://github.com/apache/hadoop/pull/5981#issuecomment-1693632603 "interesting"
[jira] [Commented] (HADOOP-18487) protobuf-2.5.0 dependencies => provided
[ https://issues.apache.org/jira/browse/HADOOP-18487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759096#comment-17759096 ] ASF GitHub Bot commented on HADOOP-18487: - ayushtkn commented on code in PR #4996: URL: https://github.com/apache/hadoop/pull/4996#discussion_r1305758799 ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufWrapperLegacy.java: ## @@ -0,0 +1,125 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.ipc; + +import java.io.IOException; +import java.nio.ByteBuffer; +import java.util.concurrent.atomic.AtomicBoolean; + +import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.util.Preconditions; + +/** + * A RpcWritable wrapper for unshaded protobuf messages. + * This class isolates unshaded protobuf classes from + * the rest of the RPC codebase, so it can operate without + * needing that on the classpath at runtime. + * The classes are needed at compile time; and if + * unshaded protobuf messages are to be marshalled, they + * will need to be on the classpath then. 
+ * That is implicit: it is impossible to pass in a class + * which is a protobuf message unless that condition is met. + */ +@InterfaceAudience.Private Review Comment: misses ```InterfaceStability``` ## hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml: ## @@ -451,8 +451,7 @@ - - + Review Comment: you want to exclude the entire class? not just one method ``getFixedByteString`` ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufHelper.java: ## @@ -30,31 +30,36 @@ import org.apache.hadoop.thirdparty.protobuf.ServiceException; /** - * Helper methods for protobuf related RPC implementation + * Helper methods for protobuf related RPC implementation. + * This is deprecated because it references protobuf 2.5 classes + * as well as the shaded ones -and so needs an unshaded protobuf-2.5 + * JAR on the classpath during execution. + * It MUST NOT be used internally; it is retained in case existing, + * external applications already use it. */ @InterfaceAudience.Private -public class ProtobufHelper { +@Deprecated Review Comment: It is deprecated, can we mention somewhere above like use ``ShadedProtobufHelper`` instead? ## hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml: ## @@ -36,6 +36,10 @@ org.apache.hadoop hadoop-hdfs-client + + org.apache.hadoop + hadoop-common + Review Comment: why? ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/internal/ShadedProtobufHelper.java: ## @@ -0,0 +1,150 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.ipc.internal; + +import java.io.IOException; +import java.util.concurrent.ConcurrentHashMap; + +import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceStability; +import org.apache.hadoop.io.Text; +import org.apache.hadoop.security.proto.SecurityProtos.TokenProto; +import org.apache.hadoop.security.token.Token; +import org.apache.hadoop.security.token.TokenIdentifier; +import org.apache.hadoop.thirdparty.protobuf.ByteString; +import org.ap
[GitHub] [hadoop] ayushtkn commented on a diff in pull request #4996: HADOOP-18487. protobuf 2.5.0 marked as provided.
ayushtkn commented on code in PR #4996: URL: https://github.com/apache/hadoop/pull/4996#discussion_r1305758799 ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufWrapperLegacy.java: ## @@ -0,0 +1,125 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.ipc; + +import java.io.IOException; +import java.nio.ByteBuffer; +import java.util.concurrent.atomic.AtomicBoolean; + +import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.util.Preconditions; + +/** + * A RpcWritable wrapper for unshaded protobuf messages. + * This class isolates unshaded protobuf classes from + * the rest of the RPC codebase, so it can operate without + * needing that on the classpath at runtime. + * The classes are needed at compile time; and if + * unshaded protobuf messages are to be marshalled, they + * will need to be on the classpath then. + * That is implicit: it is impossible to pass in a class + * which is a protobuf message unless that condition is met. 
+ */ +@InterfaceAudience.Private Review Comment: misses ```InterfaceStability``` ## hadoop-common-project/hadoop-common/dev-support/findbugsExcludeFile.xml: ## @@ -451,8 +451,7 @@ - - + Review Comment: you want to exclude the entire class? not just one method ``getFixedByteString`` ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufHelper.java: ## @@ -30,31 +30,36 @@ import org.apache.hadoop.thirdparty.protobuf.ServiceException; /** - * Helper methods for protobuf related RPC implementation + * Helper methods for protobuf related RPC implementation. + * This is deprecated because it references protobuf 2.5 classes + * as well as the shaded ones -and so needs an unshaded protobuf-2.5 + * JAR on the classpath during execution. + * It MUST NOT be used internally; it is retained in case existing, + * external applications already use it. */ @InterfaceAudience.Private -public class ProtobufHelper { +@Deprecated Review Comment: It is deprecated, can we mention somewhere above like use ``ShadedProtobufHelper`` instead? ## hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/pom.xml: ## @@ -36,6 +36,10 @@ org.apache.hadoop hadoop-hdfs-client + + org.apache.hadoop + hadoop-common + Review Comment: why? ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/internal/ShadedProtobufHelper.java: ## @@ -0,0 +1,150 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.hadoop.ipc.internal; + +import java.io.IOException; +import java.util.concurrent.ConcurrentHashMap; + +import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceStability; +import org.apache.hadoop.io.Text; +import org.apache.hadoop.security.proto.SecurityProtos.TokenProto; +import org.apache.hadoop.security.token.Token; +import org.apache.hadoop.security.token.TokenIdentifier; +import org.apache.hadoop.thirdparty.protobuf.ByteString; +import org.apache.hadoop.thirdparty.protobuf.ServiceException; + +/** + * Helper methods for protobuf related RPC implementation using the + * hadoop {@code org.apache.hadoop.thirdparty.protobuf} shaded version. + * This is absolutely private to hadoop-* modules. + */ +@Int
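The review above asks for two things on the deprecated `ProtobufHelper`: an `InterfaceStability` annotation alongside `InterfaceAudience.Private`, and an explicit pointer to `ShadedProtobufHelper` as the replacement. A minimal sketch of that deprecation pattern in plain Java follows; the class and method names are illustrative stand-ins, not the real Hadoop code, and the Hadoop annotation classes are omitted since `hadoop-annotations` is not assumed on the classpath:

```java
/**
 * Sketch of the requested pattern: the javadoc {@code @deprecated} tag names
 * the replacement class, and the {@code @Deprecated} annotation makes javac
 * warn callers at compile time.
 *
 * @deprecated use {@code ShadedProtobufHelper} instead; this class would need
 * an unshaded protobuf-2.5 JAR on the classpath at runtime.
 */
@Deprecated
class ProtobufHelperSketch {

  private ProtobufHelperSketch() {
  }

  /** @deprecated the shaded equivalent lives in {@code ShadedProtobufHelper}. */
  @Deprecated
  static String legacyHelper() {
    // Deprecated members keep working; the annotation is warning-only.
    return "unshaded";
  }
}

public class Main {
  public static void main(String[] args) {
    System.out.println(ProtobufHelperSketch.legacyHelper());
  }
}
```

In the real patch the `@deprecated` javadoc would sit on the class comment quoted in the diff, next to the `@InterfaceAudience.Private` and `@InterfaceStability` annotations the reviewer asks for.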
[GitHub] [hadoop] brumi1024 opened a new pull request, #5989: YARN-11514. Extend SchedulerResponse with capacityVector
brumi1024 opened a new pull request, #5989: URL: https://github.com/apache/hadoop/pull/5989 ### Description of PR ### How was this patch tested? ### For code changes: - [ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18867) Upgrade ZooKeeper to 3.6.4
[ https://issues.apache.org/jira/browse/HADOOP-18867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-18867: Labels: pull-request-available (was: ) > Upgrade ZooKeeper to 3.6.4 > -- > > Key: HADOOP-18867 > URL: https://issues.apache.org/jira/browse/HADOOP-18867 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki >Priority: Minor > Labels: pull-request-available > > While ZooKeeper 3.6 is already EOL, we can upgrade to the final release of > the ZooKeeper 3.6 as short-term fix until bumping to ZooKeeper 3.7 or later. > Dependency convergence error must be addressed on {{-Dhbase.profile=2.0}}. > {noformat} > $ mvn clean install -Dzookeeper.version=3.6.4 -Dhbase.profile=2.0 -DskipTests > clean install > Dependency convergence error for > org.apache.yetus:audience-annotations:jar:0.13.0:compile paths to dependency > are: > +-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-common:jar:3.4.0-SNAPSHOT > +-org.apache.hadoop:hadoop-common:test-jar:tests:3.4.0-SNAPSHOT:test > +-org.apache.zookeeper:zookeeper:jar:3.6.4:compile > +-org.apache.zookeeper:zookeeper-jute:jar:3.6.4:compile > +-org.apache.yetus:audience-annotations:jar:0.13.0:compile > and > +-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-common:jar:3.4.0-SNAPSHOT > +-org.apache.hadoop:hadoop-common:test-jar:tests:3.4.0-SNAPSHOT:test > +-org.apache.zookeeper:zookeeper:jar:3.6.4:compile > +-org.apache.yetus:audience-annotations:jar:0.13.0:compile > and > +-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-common:jar:3.4.0-SNAPSHOT > +-org.apache.hbase:hbase-common:jar:2.2.4:compile > +-org.apache.yetus:audience-annotations:jar:0.5.0:compile > {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18867) Upgrade ZooKeeper to 3.6.4
[ https://issues.apache.org/jira/browse/HADOOP-18867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759084#comment-17759084 ] ASF GitHub Bot commented on HADOOP-18867: - iwasakims opened a new pull request, #5988: URL: https://github.com/apache/hadoop/pull/5988 https://issues.apache.org/jira/browse/HADOOP-18867 While ZooKeeper 3.6 is already EOL, we can upgrade to the final release of the ZooKeeper 3.6 as short-term fix until bumping to ZooKeeper 3.7 or later. Dependency convergence error must be addressed on `-Dhbase.profile=2.0`. ``` $ mvn clean install -Dzookeeper.version=3.6.4 -Dhbase.profile=2.0 -DskipTests clean install Dependency convergence error for org.apache.yetus:audience-annotations:jar:0.13.0:compile paths to dependency are: +-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-common:jar:3.4.0-SNAPSHOT +-org.apache.hadoop:hadoop-common:test-jar:tests:3.4.0-SNAPSHOT:test +-org.apache.zookeeper:zookeeper:jar:3.6.4:compile +-org.apache.zookeeper:zookeeper-jute:jar:3.6.4:compile +-org.apache.yetus:audience-annotations:jar:0.13.0:compile and +-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-common:jar:3.4.0-SNAPSHOT +-org.apache.hadoop:hadoop-common:test-jar:tests:3.4.0-SNAPSHOT:test +-org.apache.zookeeper:zookeeper:jar:3.6.4:compile +-org.apache.yetus:audience-annotations:jar:0.13.0:compile and +-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-common:jar:3.4.0-SNAPSHOT +-org.apache.hbase:hbase-common:jar:2.2.4:compile +-org.apache.yetus:audience-annotations:jar:0.5.0:compile ``` > Upgrade ZooKeeper to 3.6.4 > -- > > Key: HADOOP-18867 > URL: https://issues.apache.org/jira/browse/HADOOP-18867 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Masatake Iwasaki >Assignee: Masatake Iwasaki >Priority: Minor > > While ZooKeeper 3.6 is already EOL, we can upgrade to the final release of > the ZooKeeper 3.6 as short-term fix until bumping to ZooKeeper 3.7 or later. 
> Dependency convergence error must be addressed on {{-Dhbase.profile=2.0}}. > {noformat} > $ mvn clean install -Dzookeeper.version=3.6.4 -Dhbase.profile=2.0 -DskipTests > clean install > Dependency convergence error for > org.apache.yetus:audience-annotations:jar:0.13.0:compile paths to dependency > are: > +-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-common:jar:3.4.0-SNAPSHOT > +-org.apache.hadoop:hadoop-common:test-jar:tests:3.4.0-SNAPSHOT:test > +-org.apache.zookeeper:zookeeper:jar:3.6.4:compile > +-org.apache.zookeeper:zookeeper-jute:jar:3.6.4:compile > +-org.apache.yetus:audience-annotations:jar:0.13.0:compile > and > +-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-common:jar:3.4.0-SNAPSHOT > +-org.apache.hadoop:hadoop-common:test-jar:tests:3.4.0-SNAPSHOT:test > +-org.apache.zookeeper:zookeeper:jar:3.6.4:compile > +-org.apache.yetus:audience-annotations:jar:0.13.0:compile > and > +-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-common:jar:3.4.0-SNAPSHOT > +-org.apache.hbase:hbase-common:jar:2.2.4:compile > +-org.apache.yetus:audience-annotations:jar:0.5.0:compile > {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] iwasakims opened a new pull request, #5988: HADOOP-18867. Upgrade ZooKeeper to 3.6.4.
iwasakims opened a new pull request, #5988: URL: https://github.com/apache/hadoop/pull/5988 https://issues.apache.org/jira/browse/HADOOP-18867 While ZooKeeper 3.6 is already EOL, we can upgrade to the final release of the ZooKeeper 3.6 as short-term fix until bumping to ZooKeeper 3.7 or later. Dependency convergence error must be addressed on `-Dhbase.profile=2.0`. ``` $ mvn clean install -Dzookeeper.version=3.6.4 -Dhbase.profile=2.0 -DskipTests clean install Dependency convergence error for org.apache.yetus:audience-annotations:jar:0.13.0:compile paths to dependency are: +-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-common:jar:3.4.0-SNAPSHOT +-org.apache.hadoop:hadoop-common:test-jar:tests:3.4.0-SNAPSHOT:test +-org.apache.zookeeper:zookeeper:jar:3.6.4:compile +-org.apache.zookeeper:zookeeper-jute:jar:3.6.4:compile +-org.apache.yetus:audience-annotations:jar:0.13.0:compile and +-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-common:jar:3.4.0-SNAPSHOT +-org.apache.hadoop:hadoop-common:test-jar:tests:3.4.0-SNAPSHOT:test +-org.apache.zookeeper:zookeeper:jar:3.6.4:compile +-org.apache.yetus:audience-annotations:jar:0.13.0:compile and +-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-common:jar:3.4.0-SNAPSHOT +-org.apache.hbase:hbase-common:jar:2.2.4:compile +-org.apache.yetus:audience-annotations:jar:0.5.0:compile ```
[GitHub] [hadoop] hadoop-yetus commented on pull request #5947: HDFS-17158. Show the rate of metrics in EC recovery task.
hadoop-yetus commented on PR #5947: URL: https://github.com/apache/hadoop/pull/5947#issuecomment-1693464142 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 4s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 47m 39s | | trunk passed | | +1 :green_heart: | compile | 1m 28s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 | | +1 :green_heart: | compile | 1m 15s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | checkstyle | 1m 12s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 30s | | trunk passed | | +1 :green_heart: | javadoc | 1m 14s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 39s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | spotbugs | 3m 35s | | trunk passed | | +1 :green_heart: | shadedclient | 40m 30s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 18s | | the patch passed | | +1 :green_heart: | compile | 1m 28s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 | | +1 :green_heart: | javac | 1m 28s | | the patch passed | | +1 :green_heart: | compile | 1m 19s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | javac | 1m 19s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 6s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 25s | | the patch passed | | +1 :green_heart: | javadoc | 1m 2s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 33s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | spotbugs | 3m 34s | | the patch passed | | +1 :green_heart: | shadedclient | 41m 9s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 247m 40s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5947/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 47s | | The patch does not generate ASF License warnings. 
| | | | 403m 39s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5947/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5947 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 6be338225488 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 326fe3e28d38052529c366b509a9cb1ce9f077dc | | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5947/4/testReport/ | | Max. process+thread count | 2694 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5947/4/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated m
[jira] [Commented] (HADOOP-18865) ABFS: Adding 100 continue in userAgent String and dynamically removing it if retry is without the header enabled.
[ https://issues.apache.org/jira/browse/HADOOP-18865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759068#comment-17759068 ] ASF GitHub Bot commented on HADOOP-18865: - steveloughran commented on code in PR #5987: URL: https://github.com/apache/hadoop/pull/5987#discussion_r1305728584 ## hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java: ## @@ -751,6 +752,15 @@ public AbfsRestOperation append(final String path, final byte[] buffer, } } +// Check if the retry is with "Expect: 100-continue" header being present in the previous request. Review Comment: not sure about this strategy of patching/replacing the ua header ## hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsClient.java: ## @@ -86,6 +86,7 @@ public final class ITestAbfsClient extends AbstractAbfsIntegrationTest { private static final String ACCOUNT_NAME = "bogusAccountName.dfs.core.windows.net"; private static final String FS_AZURE_USER_AGENT_PREFIX = "Partner Service"; + private static final String HUNDRED_CONTINUE_USER_AGENT = SINGLE_WHITE_SPACE + HUNDRED_CONTINUE + SEMICOLON; Review Comment: normally i'd say "use the AbfsClient const", but here we have regression testing that the client constant doesn't diverge from what the server expects, so I`m happy > ABFS: Adding 100 continue in userAgent String and dynamically removing it if > retry is without the header enabled. > - > > Key: HADOOP-18865 > URL: https://issues.apache.org/jira/browse/HADOOP-18865 > Project: Hadoop Common > Issue Type: Sub-task > Components: build >Affects Versions: 3.3.6 >Reporter: Anmol Asrani >Assignee: Anmol Asrani >Priority: Minor > Labels: pull-request-available > Fix For: 3.3.6 > > > Adding 100 continue in userAgent String if enabled in AbfsConfiguration and > dynamically removing it if retry is without the header enabled. 
[GitHub] [hadoop] steveloughran commented on a diff in pull request #5987: HADOOP-18865 ABFS: Adding "100-continue" in userAgent String if enabled
steveloughran commented on code in PR #5987: URL: https://github.com/apache/hadoop/pull/5987#discussion_r1305728584 ## hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java: ## @@ -751,6 +752,15 @@ public AbfsRestOperation append(final String path, final byte[] buffer, } } +// Check if the retry is with "Expect: 100-continue" header being present in the previous request. Review Comment: not sure about this strategy of patching/replacing the ua header ## hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsClient.java: ## @@ -86,6 +86,7 @@ public final class ITestAbfsClient extends AbstractAbfsIntegrationTest { private static final String ACCOUNT_NAME = "bogusAccountName.dfs.core.windows.net"; private static final String FS_AZURE_USER_AGENT_PREFIX = "Partner Service"; + private static final String HUNDRED_CONTINUE_USER_AGENT = SINGLE_WHITE_SPACE + HUNDRED_CONTINUE + SEMICOLON; Review Comment: normally i'd say "use the AbfsClient const", but here we have regression testing that the client constant doesn't diverge from what the server expects, so I'm happy
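The diff under review builds the user-agent suffix as `SINGLE_WHITE_SPACE + HUNDRED_CONTINUE + SEMICOLON` and then strips it when a request is retried without the `Expect: 100-continue` header. A self-contained sketch of that strategy follows; the constant names mirror the test diff, but their values and the `userAgentForRetry` helper are assumptions for illustration, not the actual `AbfsClient` code:

```java
public class Main {
  // Illustrative values; the real constants live in the ABFS client classes.
  static final String SINGLE_WHITE_SPACE = " ";
  static final String HUNDRED_CONTINUE = "100-continue";
  static final String SEMICOLON = ";";
  static final String HUNDRED_CONTINUE_USER_AGENT =
      SINGLE_WHITE_SPACE + HUNDRED_CONTINUE + SEMICOLON;

  // Sketch of the strategy the reviewer questions: drop the 100-continue
  // token from the user-agent when the Expect header is disabled on retry.
  static String userAgentForRetry(String userAgent, boolean expectHeaderEnabled) {
    if (!expectHeaderEnabled) {
      return userAgent.replace(HUNDRED_CONTINUE_USER_AGENT, "");
    }
    return userAgent;
  }

  public static void main(String[] args) {
    String ua = "APN/1.0 Partner Service" + HUNDRED_CONTINUE_USER_AGENT + " azsdk";
    System.out.println(userAgentForRetry(ua, false));
    System.out.println(userAgentForRetry(ua, true));
  }
}
```

This also makes the reviewer's point concrete: the regression test hard-codes the same concatenation rather than reusing the client constant, so a drift between client and server expectations would fail the test.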
[jira] [Created] (HADOOP-18867) Upgrade ZooKeeper to 3.6.4
Masatake Iwasaki created HADOOP-18867: - Summary: Upgrade ZooKeeper to 3.6.4 Key: HADOOP-18867 URL: https://issues.apache.org/jira/browse/HADOOP-18867 Project: Hadoop Common Issue Type: Improvement Reporter: Masatake Iwasaki Assignee: Masatake Iwasaki While ZooKeeper 3.6 is already EOL, we can upgrade to the final release of the ZooKeeper 3.6 as short-term fix until bumping to ZooKeeper 3.7 or later. Dependency convergence error must be addressed on {{-Dhbase.profile=2.0}}. {noformat} $ mvn clean install -Dzookeeper.version=3.6.4 -Dhbase.profile=2.0 -DskipTests clean install Dependency convergence error for org.apache.yetus:audience-annotations:jar:0.13.0:compile paths to dependency are: +-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-common:jar:3.4.0-SNAPSHOT +-org.apache.hadoop:hadoop-common:test-jar:tests:3.4.0-SNAPSHOT:test +-org.apache.zookeeper:zookeeper:jar:3.6.4:compile +-org.apache.zookeeper:zookeeper-jute:jar:3.6.4:compile +-org.apache.yetus:audience-annotations:jar:0.13.0:compile and +-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-common:jar:3.4.0-SNAPSHOT +-org.apache.hadoop:hadoop-common:test-jar:tests:3.4.0-SNAPSHOT:test +-org.apache.zookeeper:zookeeper:jar:3.6.4:compile +-org.apache.yetus:audience-annotations:jar:0.13.0:compile and +-org.apache.hadoop:hadoop-yarn-server-timelineservice-hbase-common:jar:3.4.0-SNAPSHOT +-org.apache.hbase:hbase-common:jar:2.2.4:compile +-org.apache.yetus:audience-annotations:jar:0.5.0:compile {noformat}
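The convergence failure above comes from three paths resolving `audience-annotations` to different versions (0.13.0 via zookeeper, 0.5.0 via hbase-common). One conventional way to clear a maven-enforcer `dependencyConvergence` error is to pin the artifact once in `dependencyManagement` so every path resolves to a single version; the fragment below is a hypothetical sketch of that approach, not the fix actually adopted in the PR, and its placement would depend on the hbase profile wiring:

```xml
<!-- Hypothetical: pin the converging artifact in the parent pom's
     dependencyManagement so all transitive paths agree on one version. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.yetus</groupId>
      <artifactId>audience-annotations</artifactId>
      <version>0.13.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

An exclusion on the `hbase-common` dependency would be the other common route; either way `mvn dependency:tree` on the affected module confirms the resolution.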
[jira] [Updated] (HADOOP-18866) Refactor @Test(expected) with assertThrows
[ https://issues.apache.org/jira/browse/HADOOP-18866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-18866: Labels: pull-request-available (was: ) > Refactor @Test(expected) with assertThrows > -- > > Key: HADOOP-18866 > URL: https://issues.apache.org/jira/browse/HADOOP-18866 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Taher Ghaleb >Priority: Minor > Labels: pull-request-available > > I am working on research that investigates test smell refactoring in which we > identify alternative implementations of test cases, study how commonly used > these refactorings are, and assess how acceptable they are in practice. > The smell occurs when exception handling can alternatively be implemented > using assertion rather than annotation: using {{assertThrows(Exception.class, > () -> \{...});}} instead of {{{}@Test(expected = Exception.class){}}}. > While there are many cases like this, we aim in this pull request to get your > feedback on this particular test smell and its refactoring. Thanks in advance > for your input.
[jira] [Commented] (HADOOP-18866) Refactor @Test(expected) with assertThrows
[ https://issues.apache.org/jira/browse/HADOOP-18866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759054#comment-17759054 ] ASF GitHub Bot commented on HADOOP-18866: - Taher-Ghaleb commented on PR #5982: URL: https://github.com/apache/hadoop/pull/5982#issuecomment-1693391933 Thanks @steveloughran for your response. I get your point, but I would like to get your input on the refactorings performed in this PR, and how such a practice is acceptable in general. In your opinion, why are those test cases still using `@Test(expected)` instead of the better alternative using `assertThrows`? I have created a Jira report and prefixed its id to the PR title. Thanks. > Refactor @Test(expected) with assertThrows > -- > > Key: HADOOP-18866 > URL: https://issues.apache.org/jira/browse/HADOOP-18866 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Taher Ghaleb >Priority: Minor > > I am working on research that investigates test smell refactoring in which we > identify alternative implementations of test cases, study how commonly used > these refactorings are, and assess how acceptable they are in practice. > The smell occurs when exception handling can alternatively be implemented > using assertion rather than annotation: using {{assertThrows(Exception.class, > () -> \{...});}} instead of {{{}@Test(expected = Exception.class){}}}. > While there are many cases like this, we aim in this pull request to get your > feedback on this particular test smell and its refactoring. Thanks in advance > for your input.
[GitHub] [hadoop] Taher-Ghaleb commented on pull request #5982: HADOOP-18866. Refactor @Test(expected) with assertThrows
Taher-Ghaleb commented on PR #5982: URL: https://github.com/apache/hadoop/pull/5982#issuecomment-1693391933 Thanks @steveloughran for your response. I get your point, but I would like to get your input on the refactorings performed in this PR, and how such a practice is acceptable in general. In your opinion, why are those test cases still using `@Test(expected)` instead of the better alternative using `assertThrows`? I have created a Jira report and prefixed its id to the PR title. Thanks.
[jira] [Created] (HADOOP-18866) Refactor @Test(expected) with assertThrows
Taher Ghaleb created HADOOP-18866: - Summary: Refactor @Test(expected) with assertThrows Key: HADOOP-18866 URL: https://issues.apache.org/jira/browse/HADOOP-18866 Project: Hadoop Common Issue Type: Improvement Reporter: Taher Ghaleb I am working on research that investigates test smell refactoring in which we identify alternative implementations of test cases, study how commonly used these refactorings are, and assess how acceptable they are in practice. The smell occurs when exception handling can alternatively be implemented using assertion rather than annotation: using {{assertThrows(Exception.class, () -> \{...});}} instead of {{{}@Test(expected = Exception.class){}}}. While there are many cases like this, we aim in this pull request to get your feedback on this particular test smell and its refactoring. Thanks in advance for your input.
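The refactoring described above replaces the annotation form with an assertion, which also gives the test a handle on the thrown exception. JUnit is not assumed on the classpath here, so the sketch below uses a minimal stand-in helper with the same shape as JUnit 4.13's `Assert.assertThrows`; it is an illustration of the pattern, not the project's test code:

```java
public class Main {

  interface ThrowingRunnable {
    void run() throws Throwable;
  }

  // Minimal stand-in for org.junit.Assert.assertThrows: fails unless the body
  // throws the expected type, and returns the exception for inspection.
  static <T extends Throwable> T assertThrows(Class<T> expected, ThrowingRunnable body) {
    try {
      body.run();
    } catch (Throwable t) {
      if (expected.isInstance(t)) {
        return expected.cast(t);
      }
      throw new AssertionError("unexpected exception type: " + t, t);
    }
    throw new AssertionError("expected " + expected.getName() + " but nothing was thrown");
  }

  public static void main(String[] args) {
    // Annotation style scopes the expectation to the whole method and gives
    // no access to the exception:
    //   @Test(expected = NumberFormatException.class)
    //   public void testBadParse() { Integer.parseInt("nope"); }
    // Assertion style scopes it to one statement and exposes the exception:
    NumberFormatException e =
        assertThrows(NumberFormatException.class, () -> Integer.parseInt("nope"));
    System.out.println(e.getMessage());
  }
}
```

The narrower scope is the main argument for the refactoring: with `@Test(expected = ...)`, an exception thrown by setup code earlier in the method also passes the test.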
[GitHub] [hadoop] hadoop-yetus commented on pull request #5975: YARN-8980. Mapreduce application container start fail after AM restart.
hadoop-yetus commented on PR #5975: URL: https://github.com/apache/hadoop/pull/5975#issuecomment-1693371460 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 53s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 11s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 36m 6s | | trunk passed | | +1 :green_heart: | compile | 2m 46s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 | | +1 :green_heart: | compile | 2m 30s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | checkstyle | 1m 26s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 44s | | trunk passed | | +1 :green_heart: | javadoc | 1m 42s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 32s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | spotbugs | 3m 43s | | trunk passed | | +1 :green_heart: | shadedclient | 39m 33s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 33s | | the patch passed | | +1 :green_heart: | compile | 2m 39s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 | | +1 :green_heart: | javac | 2m 39s | | the patch passed | | +1 :green_heart: | compile | 2m 26s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | javac | 2m 26s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 19s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 41s | | the patch passed | | +1 :green_heart: | javadoc | 1m 34s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 20s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | spotbugs | 4m 7s | | the patch passed | | +1 :green_heart: | shadedclient | 41m 35s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 109m 32s | | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | unit | 25m 13s | | hadoop-yarn-server-nodemanager in the patch passed. | | +1 :green_heart: | asflicense | 0m 39s | | The patch does not generate ASF License warnings. 
| | | | 303m 47s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5975/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5975 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 8e8419904a3c 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / dcefe058c926eda05c294168f4b62d5d3e28d373 | | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5975/3/testReport/ | | Max. process+thread count | 908 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5975/3/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | |
[GitHub] [hadoop] steveloughran commented on pull request #5959: HDFS-17162. RBF: Add missing comments in StateStoreService
steveloughran commented on PR #5959: URL: https://github.com/apache/hadoop/pull/5959#issuecomment-1693337002 final bit of credit; is `TIsNotT` your jira account id? as we need to credit you on jira too
[GitHub] [hadoop] steveloughran commented on pull request #5959: HDFS-17162. RBF: Add missing comments in StateStoreService
steveloughran commented on PR #5959: URL: https://github.com/apache/hadoop/pull/5959#issuecomment-1693329070 ...no need to do that; I have command line access too. updated. ``` HDFS-17162. RBF: Add missing comments in StateStoreService #5959 Contributed by tian bao ```
[GitHub] [hadoop] hadoop-yetus commented on pull request #5975: YARN-8980. Mapreduce application container start fail after AM restart.
hadoop-yetus commented on PR #5975: URL: https://github.com/apache/hadoop/pull/5975#issuecomment-1693327592

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 39s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 14m 8s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 30m 16s | | trunk passed |
| +1 :green_heart: | compile | 2m 25s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | compile | 2m 13s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | checkstyle | 1m 20s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 44s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 41s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 32s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 3m 18s | | trunk passed |
| +1 :green_heart: | shadedclient | 32m 49s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 30s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 20s | | the patch passed |
| +1 :green_heart: | compile | 2m 15s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javac | 2m 15s | | the patch passed |
| +1 :green_heart: | compile | 2m 6s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | javac | 2m 6s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 11s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 27s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 19s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 17s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 3m 19s | | the patch passed |
| +1 :green_heart: | shadedclient | 33m 29s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 100m 43s | | hadoop-yarn-server-resourcemanager in the patch passed. |
| +1 :green_heart: | unit | 24m 23s | | hadoop-yarn-server-nodemanager in the patch passed. |
| +1 :green_heart: | asflicense | 0m 39s | | The patch does not generate ASF License warnings. |
| | | | 269m 53s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5975/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5975 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux d9d17e4b84ba 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / dcefe058c926eda05c294168f4b62d5d3e28d373 |
| Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5975/4/testReport/ |
| Max. process+thread count | 937 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5975/4/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
[jira] [Commented] (HADOOP-18860) Upgrade mockito to 4.11.0
[ https://issues.apache.org/jira/browse/HADOOP-18860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17759034#comment-17759034 ]

ASF GitHub Bot commented on HADOOP-18860:

steveloughran commented on PR #5977: URL: https://github.com/apache/hadoop/pull/5977#issuecomment-1693310782

Looking at the source of TestTimelineAuthFilterForV2 I can see mockito references, so I was worried it was a regression; but my own PR #5981 has the same failure, so it's not a regression. TestDockerContainerRuntime does look new, though; it only surfaced in #5273 after you updated mockito everywhere. Afraid you *do* get to work out what's changed. Sorry.

This is why I'm not a fan of Mockito: tests are too brittle to internal code changes, with the risk of not wanting to make a change in order to avoid failures, or of ignoring a failure when it is an actual regression.

> Upgrade mockito to 4.11.0
> -------------------------
>
> Key: HADOOP-18860
> URL: https://issues.apache.org/jira/browse/HADOOP-18860
> Project: Hadoop Common
> Issue Type: Improvement
> Components: build
> Affects Versions: 3.3.6
> Reporter: Anmol Asrani
> Assignee: Anmol Asrani
> Priority: Major
> Labels: pull-request-available
>
> Upgrading mockito in hadoop-project

-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on pull request #5977: HADOOP-18860: Upgrade mockito version to 4.11.0
steveloughran commented on PR #5977: URL: https://github.com/apache/hadoop/pull/5977#issuecomment-1693310782

Looking at the source of TestTimelineAuthFilterForV2 I can see mockito references, so I was worried it was a regression; but my own PR #5981 has the same failure, so it's not a regression. TestDockerContainerRuntime does look new, though; it only surfaced in #5273 after you updated mockito everywhere. Afraid you *do* get to work out what's changed. Sorry.

This is why I'm not a fan of Mockito: tests are too brittle to internal code changes, with the risk of not wanting to make a change in order to avoid failures, or of ignoring a failure when it is an actual regression.
[GitHub] [hadoop] steveloughran commented on pull request #5982: Refactor @Test(expected) with assertThrows
steveloughran commented on PR #5982: URL: https://github.com/apache/hadoop/pull/5982#issuecomment-1693298196

1. Afraid you need to create a jira at issues.apache.org for this; if you need to create an account, say you have a pending github PR.
2. We use our LambdaTestUtils.intercept() here as it is better. Really. In particular, if nothing was thrown, the assertion raised includes the toString() value of whatever was returned from the lambda expression, which really helps when interpreting test failures. You might want to look at LambdaTestUtils, compare it to ScalaTest, and see how it is being extended to handle futures and the like.

Outside that, we are moving towards AssertJ as our assertion language because it is better *and* extensible (see IOStatisticAssertions).
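The value of the intercept() pattern described above can be illustrated with a heavily simplified, self-contained sketch. This is not the real `org.apache.hadoop.test.LambdaTestUtils` code (which has many more overloads, plus timeout and future support); it only shows the core idea: if the lambda does not throw, the failure message includes the toString() of whatever it returned.

```java
import java.util.concurrent.Callable;

// Simplified sketch of the intercept() idea from
// org.apache.hadoop.test.LambdaTestUtils: run a lambda, expect an
// exception of a given type, and, crucially, include the toString()
// of any unexpected return value in the failure message.
public final class Intercept {
    private Intercept() {}

    public static <T, E extends Throwable> E intercept(
            Class<E> clazz, Callable<T> eval) throws Exception {
        T result;
        try {
            result = eval.call();
        } catch (Throwable t) {
            if (clazz.isInstance(t)) {
                // expected failure: return it so the caller can assert on it
                return clazz.cast(t);
            }
            throw new AssertionError("Wrong exception type: " + t, t);
        }
        // Nothing was thrown: report what the lambda actually returned.
        throw new AssertionError(
            "Expected " + clazz.getName() + " but got result: " + result);
    }
}
```

The returned exception can then be inspected further, e.g. asserting on its message, which is what makes this style more informative than a bare `@Test(expected=...)`.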
[GitHub] [hadoop] steveloughran commented on pull request #5985: refactor: Clean up Ember.js resolver configuration
steveloughran commented on PR #5985: URL: https://github.com/apache/hadoop/pull/5985#issuecomment-1693286500

And this one you need to create a jira for. If you don't have an account at issues.apache.org, fill in the form and say you have an open github PR that you need a jira for.
[GitHub] [hadoop] Hexiaoqiao commented on pull request #5956: HDFS-17161: Adding test for StripedBlockReader#createBlockReader leak…
Hexiaoqiao commented on PR #5956: URL: https://github.com/apache/hadoop/pull/5956#issuecomment-1693273115

Thanks @tonyPan123 for your contribution. But I am confused about whether this could happen on a production cluster. Here you depend on injected fault logic; do you mean that building the connection will throw exceptions?
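For context on the kind of leak the test above targets: the general hazard (sketched here with hypothetical names, not the actual StripedBlockReader code) is that when a loop opens several readers and one open throws, e.g. because the connection cannot be built, the readers already opened must be closed or their underlying connections leak.

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Generic illustration of the leak pattern under discussion: if opening
// reader i fails, readers 0..i-1 must be closed, or their connections
// leak. All names here are hypothetical, not HDFS code.
public final class ReaderPool {

    /** Stand-in for a block reader that holds a connection. */
    public interface Reader extends Closeable {}

    public interface ReaderFactory {
        Reader open(int index) throws IOException;
    }

    public static List<Reader> openAll(ReaderFactory factory, int count)
            throws IOException {
        List<Reader> readers = new ArrayList<>();
        try {
            for (int i = 0; i < count; i++) {
                readers.add(factory.open(i));  // may throw, e.g. connection refused
            }
            return readers;
        } catch (IOException e) {
            // Close everything opened so far before propagating the failure.
            for (Reader r : readers) {
                try {
                    r.close();
                } catch (IOException ignored) {
                    // best-effort cleanup
                }
            }
            throw e;
        }
    }
}
```

Fault injection in a test then amounts to supplying a factory whose open() throws at a chosen index and asserting that the earlier readers were closed.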
[GitHub] [hadoop] hadoop-yetus commented on pull request #5855: HDFS-17093. In the case of all datanodes sending FBR when the namenode restarts (large clusters), there is an issue with incomplete bloc
hadoop-yetus commented on PR #5855: URL: https://github.com/apache/hadoop/pull/5855#issuecomment-1693152182

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 29s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 32m 39s | | trunk passed |
| +1 :green_heart: | compile | 0m 56s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | compile | 0m 51s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | checkstyle | 0m 44s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 56s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 51s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 14s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 2m 0s | | trunk passed |
| +1 :green_heart: | shadedclient | 22m 19s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 48s | | the patch passed |
| +1 :green_heart: | compile | 0m 47s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javac | 0m 47s | | the patch passed |
| +1 :green_heart: | compile | 0m 42s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | javac | 0m 42s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 34s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 47s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 40s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 4s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 1m 53s | | the patch passed |
| +1 :green_heart: | shadedclient | 22m 19s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 199m 2s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 39s | | The patch does not generate ASF License warnings. |
| | | | 293m 15s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5855/15/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5855 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 427a9bd7c294 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / bc401c2443fea407dd1d93ee50b076d1b0787fd5 |
| Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5855/15/testReport/ |
| Max. process+thread count | 3372 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5855/15/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Updated] (HADOOP-18865) ABFS: Adding 100 continue in userAgent String and dynamically removing it if retry is without the header enabled.
[ https://issues.apache.org/jira/browse/HADOOP-18865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HADOOP-18865: Labels: pull-request-available (was: )

> ABFS: Adding 100 continue in userAgent String and dynamically removing it if retry is without the header enabled.
> -----------------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-18865
> URL: https://issues.apache.org/jira/browse/HADOOP-18865
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: build
> Affects Versions: 3.3.6
> Reporter: Anmol Asrani
> Assignee: Anmol Asrani
> Priority: Minor
> Labels: pull-request-available
> Fix For: 3.3.6
>
> Adding 100 continue in userAgent String if enabled in AbfsConfiguration and dynamically removing it if retry is without the header enabled.
[jira] [Commented] (HADOOP-18865) ABFS: Adding 100 continue in userAgent String and dynamically removing it if retry is without the header enabled.
[ https://issues.apache.org/jira/browse/HADOOP-18865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17758976#comment-17758976 ]

ASF GitHub Bot commented on HADOOP-18865:

hadoop-yetus commented on PR #5987: URL: https://github.com/apache/hadoop/pull/5987#issuecomment-1693031750

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 46s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 46m 36s | | trunk passed |
| +1 :green_heart: | compile | 0m 41s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | compile | 0m 39s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | checkstyle | 0m 35s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 43s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 42s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 37s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 1m 13s | | trunk passed |
| +1 :green_heart: | shadedclient | 35m 44s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 30s | | the patch passed |
| +1 :green_heart: | compile | 0m 31s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javac | 0m 31s | | the patch passed |
| +1 :green_heart: | compile | 0m 29s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | javac | 0m 29s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 19s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 30s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 27s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 27s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 1m 4s | | the patch passed |
| +1 :green_heart: | shadedclient | 33m 36s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 19s | | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 40s | | The patch does not generate ASF License warnings. |
| | | | 133m 24s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5987/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5987 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 3e8319e7438a 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 0f558515dce7598841b8cb258410761189dfaa6b |
| Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5987/1/testReport/ |
| Max. process+thread count | 677 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5987/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

> ABFS: Adding 100 continue in userAgent String and dynamically removing it if retry is without the header enabled.
[GitHub] [hadoop] hadoop-yetus commented on pull request #5987: HADOOP-18865 ABFS: Adding "100-continue" in userAgent String if enabled
hadoop-yetus commented on PR #5987: URL: https://github.com/apache/hadoop/pull/5987#issuecomment-1693031750

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 46s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 46m 36s | | trunk passed |
| +1 :green_heart: | compile | 0m 41s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | compile | 0m 39s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | checkstyle | 0m 35s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 43s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 42s | | trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 37s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 1m 13s | | trunk passed |
| +1 :green_heart: | shadedclient | 35m 44s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 30s | | the patch passed |
| +1 :green_heart: | compile | 0m 31s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javac | 0m 31s | | the patch passed |
| +1 :green_heart: | compile | 0m 29s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | javac | 0m 29s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 19s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 30s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 27s | | the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 27s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 1m 4s | | the patch passed |
| +1 :green_heart: | shadedclient | 33m 36s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 19s | | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 40s | | The patch does not generate ASF License warnings. |
| | | | 133m 24s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5987/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5987 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 3e8319e7438a 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 0f558515dce7598841b8cb258410761189dfaa6b |
| Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5987/1/testReport/ |
| Max. process+thread count | 677 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5987/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Assigned] (HADOOP-18257) Analyzing S3A Audit Logs
[ https://issues.apache.org/jira/browse/HADOOP-18257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mehakmeet Singh reassigned HADOOP-18257: Assignee: Mehakmeet Singh (was: Sravani Gadey)

> Analyzing S3A Audit Logs
> ------------------------
>
> Key: HADOOP-18257
> URL: https://issues.apache.org/jira/browse/HADOOP-18257
> Project: Hadoop Common
> Issue Type: Task
> Components: fs/s3
> Reporter: Sravani Gadey
> Assignee: Mehakmeet Singh
> Priority: Major
>
> The main aim is to analyze S3A audit logs to give better insights into Hive and Spark jobs. Steps involved are:
> * Merging audit log files containing a huge number of audit logs collected from a job issuing various S3 requests.
> * Parsing audit logs using regular expressions, i.e. dividing them into key-value pairs.
> * Converting the key-value pairs into CSV and Avro file formats.
> * Querying the data, which gives better insights for different jobs.
> * Visualizing the audit logs on Zeppelin or Jupyter notebooks with graphs.
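The parsing step listed above can be sketched as follows. The key=value line format and the field names used here are invented for illustration; they are not the actual S3A audit-log format.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the "parse audit logs with regular expressions into
// key-value pairs" step. The line format is hypothetical.
public final class AuditLogParser {

    // Matches key=value tokens such as op=op_open or path=/data/file;
    // values run until the next '&' or whitespace.
    private static final Pattern KV = Pattern.compile("(\\w+)=([^&\\s]+)");

    public static Map<String, String> parse(String line) {
        Map<String, String> fields = new LinkedHashMap<>();
        Matcher m = KV.matcher(line);
        while (m.find()) {
            fields.put(m.group(1), m.group(2));
        }
        return fields;
    }
}
```

Each parsed map can then be written out as a CSV row or an Avro record, which is the conversion step the issue describes.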