[jira] [Commented] (HDFS-15765) Add support for Kerberos and Basic Auth in webhdfs
[ https://issues.apache.org/jira/browse/HDFS-15765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17699295#comment-17699295 ] ASF GitHub Bot commented on HDFS-15765: --- hadoop-yetus commented on PR #5447: URL: https://github.com/apache/hadoop/pull/5447#issuecomment-1465061535

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 40s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 1s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 15m 33s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 25m 36s | | trunk passed |
| +1 :green_heart: | compile | 6m 8s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 5m 42s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | checkstyle | 1m 19s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 29s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 51s | | trunk passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 2m 15s | | trunk passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | spotbugs | 5m 56s | | trunk passed |
| +1 :green_heart: | shadedclient | 22m 14s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 10s | | the patch passed |
| +1 :green_heart: | compile | 5m 54s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javac | 5m 54s | | the patch passed |
| +1 :green_heart: | compile | 5m 40s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | javac | 5m 40s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 4s | | the patch passed |
| +1 :green_heart: | mvnsite | 2m 16s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 26s | | the patch passed with JDK Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 2m 2s | | the patch passed with JDK Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| +1 :green_heart: | spotbugs | 5m 48s | | the patch passed |
| +1 :green_heart: | shadedclient | 22m 16s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 30s | | hadoop-hdfs-client in the patch passed. |
| +1 :green_heart: | unit | 206m 2s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 48s | | The patch does not generate ASF License warnings. |
| | | 346m 49s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5447/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5447 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
| uname | Linux 91f6bdec7132 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 87e39011908cd63627beeab0b68ef89a2ac72454 |
| Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5447/4/testReport/ |
| Max. process+thread count | 3674 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdf |
[jira] [Commented] (HDFS-15765) Add support for Kerberos and Basic Auth in webhdfs
[ https://issues.apache.org/jira/browse/HDFS-15765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17699255#comment-17699255 ] ASF GitHub Bot commented on HDFS-15765: --- trakos commented on PR #5447: URL: https://github.com/apache/hadoop/pull/5447#issuecomment-1464989614

Thanks for the review @saxenapranav, I've applied the suggested changes.

> Add support for Kerberos and Basic Auth in webhdfs
> --------------------------------------------------
>
> Key: HDFS-15765
> URL: https://issues.apache.org/jira/browse/HDFS-15765
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hadoop-client
> Reporter: Pushpendra Singh
> Priority: Minor
> Labels: pull-request-available
>
> webhdfs HTTP operations, such as get (GetOpParam.java) and others, have
> 'requireAuth' set to false and are expected to work with a delegation token
> only. However, when working with webhdfs over Apache Knox, delegation token
> authentication is not supported; we should support Kerberos authentication
> (SPNEGO) or Basic authentication for WebHdfsFileSystem if the user turns on
> a configuration.
> Further, webhdfs (WebHdfsFileSystem.java) calls 'public URLConnection
> openConnection(URL url)' and provides no way to use Kerberos authentication,
> if configured. Even after setting the UserGroupInformation with a user name
> and keytab, openConnection does not use the keytab for authentication.
> Also, WebHdfsFileSystem provides no support for HTTP Basic authentication
> (username/password). Provide support to read the password via an environment
> variable.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15765) Add support for Kerberos and Basic Auth in webhdfs
[ https://issues.apache.org/jira/browse/HDFS-15765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17699251#comment-17699251 ] ASF GitHub Bot commented on HDFS-15765: --- trakos commented on code in PR #5447: URL: https://github.com/apache/hadoop/pull/5447#discussion_r1133136119

## hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/web/TestBasicAuthConfigurator.java:

@@ -0,0 +1,67 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.web;
+
+import org.apache.hadoop.security.authentication.client.ConnectionConfigurator;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+import java.io.IOException;
+import java.net.HttpURLConnection;
+
+public class TestBasicAuthConfigurator {
+  @Test
+  public void testNullCredentials() throws IOException {
+    ConnectionConfigurator conf = new BasicAuthConfigurator(null, null);
+    HttpURLConnection conn = Mockito.mock(HttpURLConnection.class);
+    conf.configure(conn);
+    Mockito.verify(conn, Mockito.never()).setRequestProperty(Mockito.any(), Mockito.any());
+  }
+
+  @Test
+  public void testEmptyCredentials() throws IOException {
+    ConnectionConfigurator conf = new BasicAuthConfigurator(null, "");
+    HttpURLConnection conn = Mockito.mock(HttpURLConnection.class);
+    conf.configure(conn);
+    Mockito.verify(conn, Mockito.never()).setRequestProperty(Mockito.any(), Mockito.any());
+  }
+
+  @Test
+  public void testCredentialsSet() throws IOException {
+    ConnectionConfigurator conf = new BasicAuthConfigurator(null, "user:pass");
+    HttpURLConnection conn = Mockito.mock(HttpURLConnection.class);
+    conf.configure(conn);
+    Mockito.verify(conn, Mockito.times(1)).setRequestProperty(
+        "AUTHORIZATION",
+        "Basic dXNlcjpwYXNz"
+    );
+  }
+
+  @Test
+  public void testParentConfigurator() throws IOException {
+    ConnectionConfigurator parent = Mockito.mock(ConnectionConfigurator.class);
+    ConnectionConfigurator conf = new BasicAuthConfigurator(parent, "user:pass");
+    HttpURLConnection conn = Mockito.mock(HttpURLConnection.class);
+    conf.configure(conn);
+    Mockito.verify(conn, Mockito.times(1)).setRequestProperty(
+        "AUTHORIZATION",
+        "Basic dXNlcjpwYXNz"

Review Comment:
   I've taken that string from testing a basic auth implementation in cURL, but I can see how it's confusing here. I've applied the suggested change.
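[Editor's note: the `"Basic dXNlcjpwYXNz"` fixture in the tests above is simply the Base64 encoding of the `"user:pass"` credentials, as HTTP Basic authentication (RFC 7617) requires. A minimal, self-contained sketch (not part of the patch; the class and method names here are illustrative only):]

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

class BasicAuthHeader {
  // Builds the value of an HTTP Basic "Authorization" header from
  // "user:password" credentials (RFC 7617): "Basic " + base64 of the pair.
  static String basicAuthValue(String credentials) {
    return "Basic " + Base64.getEncoder()
        .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
  }

  public static void main(String[] args) {
    // "user:pass" encodes to the fixture used in testCredentialsSet above.
    System.out.println(basicAuthValue("user:pass")); // prints: Basic dXNlcjpwYXNz
  }
}
```

[This also explains why the tests can assert an exact header string: the encoding is deterministic.]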
[jira] [Commented] (HDFS-15765) Add support for Kerberos and Basic Auth in webhdfs
[ https://issues.apache.org/jira/browse/HDFS-15765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17699249#comment-17699249 ] ASF GitHub Bot commented on HDFS-15765: --- trakos commented on code in PR #5447: URL: https://github.com/apache/hadoop/pull/5447#discussion_r1133135729

## hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/URLConnectionFactory.java:

@@ -120,7 +136,7 @@ public HttpURLConnection configure(HttpURLConnection connection)
       }
     }
-    return conn;
+    return new BasicAuthConfigurator(conn, basicAuthCredentials);

Review Comment:
   Sounds reasonable! I've made the change.
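[Editor's note: the diff above wraps the existing configurator in a `BasicAuthConfigurator`, a decorator pattern. The actual `BasicAuthConfigurator` source is not quoted in this thread; the sketch below is an assumption-laden reconstruction consistent with the behaviour the `TestBasicAuthConfigurator` cases assert (delegate to the parent, skip null/empty credentials, otherwise set the Base64 header). `Configurator` and `RecordingConnection` are simplified local stand-ins, not Hadoop classes:]

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.HashMap;
import java.util.Map;

// Simplified local stand-in for Hadoop's ConnectionConfigurator interface.
interface Configurator {
  HttpURLConnection configure(HttpURLConnection conn) throws IOException;
}

// Sketch of the decorator the diff above wires up: apply the parent's
// settings first (timeouts, SSL, ...), then add the Basic header only
// when credentials are present.
class BasicAuthConfiguratorSketch implements Configurator {
  private final Configurator parent;
  private final String credentials; // "user:pass", or null/empty to disable

  BasicAuthConfiguratorSketch(Configurator parent, String credentials) {
    this.parent = parent;
    this.credentials = credentials;
  }

  @Override
  public HttpURLConnection configure(HttpURLConnection conn) throws IOException {
    if (parent != null) {
      conn = parent.configure(conn); // keep whatever the parent set up
    }
    if (credentials != null && !credentials.isEmpty()) {
      conn.setRequestProperty("AUTHORIZATION", "Basic "
          + Base64.getEncoder().encodeToString(
              credentials.getBytes(StandardCharsets.UTF_8)));
    }
    return conn;
  }

  public static void main(String[] args) throws IOException {
    RecordingConnection conn =
        new RecordingConnection(new URL("http://example.invalid/"));
    new BasicAuthConfiguratorSketch(null, "user:pass").configure(conn);
    System.out.println(conn.getRequestProperty("AUTHORIZATION")); // prints: Basic dXNlcjpwYXNz
  }
}

// Minimal recording connection for the demo: the base java.net.URLConnection
// does not itself store request properties, so we record them in a map.
class RecordingConnection extends HttpURLConnection {
  final Map<String, String> headers = new HashMap<>();
  RecordingConnection(URL u) { super(u); }
  @Override public void connect() {}
  @Override public void disconnect() {}
  @Override public boolean usingProxy() { return false; }
  @Override public void setRequestProperty(String key, String value) {
    headers.put(key, value);
  }
  @Override public String getRequestProperty(String key) {
    return headers.get(key);
  }
}
```

[Design note: wrapping rather than modifying the existing configurator keeps the SSL/timeout logic in `URLConnectionFactory` untouched, which is why the reviewer's suggestion to return a decorator was adopted.]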
[jira] [Resolved] (HDFS-16942) Send error to datanode if FBR is rejected due to bad lease
[ https://issues.apache.org/jira/browse/HDFS-16942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stephen O'Donnell resolved HDFS-16942.
--------------------------------------
Resolution: Fixed

> Send error to datanode if FBR is rejected due to bad lease
> ----------------------------------------------------------
>
> Key: HDFS-16942
> URL: https://issues.apache.org/jira/browse/HDFS-16942
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode, namenode
> Reporter: Stephen O'Donnell
> Assignee: Stephen O'Donnell
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0, 3.2.5, 3.3.6
>
> When a datanode sends an FBR (full block report) to the namenode, it
> requires a lease to send it. On a couple of busy clusters, we have seen an
> issue where the DN is somehow delayed in sending the FBR after requesting
> the lease. The NN then rejects the FBR and logs a message to that effect,
> but from the datanode's point of view the report was successful, so it does
> not try to send another report until the 6 hour default interval has passed.
> If this happens to a few DNs, there can be missing and under-replicated
> blocks, further adding to the cluster load. Even worse, I have seen DNs join
> the cluster with zero blocks, so it is not obvious that the
> under-replication is caused by a lost FBR, as all DNs appear to be up and
> running.
> I believe we should propagate an error back to the DN if the FBR is
> rejected; that way, the DN can request a new lease and try again.
[jira] [Updated] (HDFS-16942) Send error to datanode if FBR is rejected due to bad lease
[ https://issues.apache.org/jira/browse/HDFS-16942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stephen O'Donnell updated HDFS-16942:
-------------------------------------
Fix Version/s: 3.2.5
               3.3.6
[jira] [Updated] (HDFS-16942) Send error to datanode if FBR is rejected due to bad lease
[ https://issues.apache.org/jira/browse/HDFS-16942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stephen O'Donnell updated HDFS-16942:
-------------------------------------
Fix Version/s: 3.4.0
[jira] [Commented] (HDFS-16942) Send error to datanode if FBR is rejected due to bad lease
[ https://issues.apache.org/jira/browse/HDFS-16942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17699239#comment-17699239 ] ASF GitHub Bot commented on HDFS-16942: --- sodonnel merged PR #5460: URL: https://github.com/apache/hadoop/pull/5460
[jira] [Commented] (HDFS-16939) Fix the thread safety bug in LowRedundancyBlocks
[ https://issues.apache.org/jira/browse/HDFS-16939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17699191#comment-17699191 ] Xiaoqiao He commented on HDFS-16939:

[~ste...@apache.org] This PR has been committed to branch-3.3 and branch-3.3.5. FYI.

> Fix the thread safety bug in LowRedundancyBlocks
> ------------------------------------------------
>
> Key: HDFS-16939
> URL: https://issues.apache.org/jira/browse/HDFS-16939
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Reporter: Shuyan Zhang
> Assignee: Shuyan Zhang
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0, 3.3.5
>
> The remove method in LowRedundancyBlocks is not protected by synchronized.
> This method is private and is called by BlockManager. As a result,
> priorityQueues is at risk of being accessed concurrently by multiple
> threads.
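[Editor's note: a minimal illustration of the locking pattern the description refers to; this is a sketch with illustrative names, not Hadoop's actual LowRedundancyBlocks code. The point is that a private helper mutating shared state must hold the same monitor as the public mutators, which in Java means marking it `synchronized` on the same object:]

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: one list per priority level, shared across namenode threads.
class LowRedundancyQueuesSketch {
  private final List<List<Long>> priorityQueues = new ArrayList<>();

  LowRedundancyQueuesSketch(int levels) {
    for (int i = 0; i < levels; i++) {
      priorityQueues.add(new ArrayList<>());
    }
  }

  synchronized void add(long blockId, int priLevel) {
    priorityQueues.get(priLevel).add(blockId);
  }

  // The bug pattern: a remove helper like this one lacking `synchronized`.
  // Without it, a caller (e.g. BlockManager) could mutate priorityQueues
  // concurrently with add(), corrupting the lists. The fix is to hold the
  // same intrinsic lock here as in add().
  synchronized boolean remove(long blockId, int priLevel) {
    return priorityQueues.get(priLevel).remove(Long.valueOf(blockId));
  }

  public static void main(String[] args) {
    LowRedundancyQueuesSketch q = new LowRedundancyQueuesSketch(5);
    q.add(42L, 1);
    System.out.println(q.remove(42L, 1)); // true
    System.out.println(q.remove(42L, 1)); // false
  }
}
```

[Note that being `private` does not protect a method from races: synchronization depends on which lock is held at the call site, not on access modifiers.]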
[jira] [Updated] (HDFS-16939) Fix the thread safety bug in LowRedundancyBlocks
[ https://issues.apache.org/jira/browse/HDFS-16939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaoqiao He updated HDFS-16939:
-------------------------------
Fix Version/s: 3.3.5
[jira] [Commented] (HDFS-16939) Fix the thread safety bug in LowRedundancyBlocks
[ https://issues.apache.org/jira/browse/HDFS-16939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17699190#comment-17699190 ] ASF GitHub Bot commented on HDFS-16939: --- Hexiaoqiao commented on PR #5471: URL: https://github.com/apache/hadoop/pull/5471#issuecomment-1464859249

Committed to branch-3.3 and branch-3.3.5
[jira] [Commented] (HDFS-16939) Fix the thread safety bug in LowRedundancyBlocks
[ https://issues.apache.org/jira/browse/HDFS-16939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17699187#comment-17699187 ] ASF GitHub Bot commented on HDFS-16939: --- Hexiaoqiao merged PR #5471: URL: https://github.com/apache/hadoop/pull/5471