[jira] [Work logged] (HDFS-15759) EC: Verify EC reconstruction correctness on DataNode

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15759?focusedWorklogId=570244&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-570244
 ]

ASF GitHub Bot logged work on HDFS-15759:
-

Author: ASF GitHub Bot
Created on: 23/Mar/21 05:35
Start Date: 23/Mar/21 05:35
Worklog Time Spent: 10m 
  Work Description: runitao commented on a change in pull request #2585:
URL: https://github.com/apache/hadoop/pull/2585#discussion_r599281959



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestReconstructStripedFileWithValidator.java
##
@@ -0,0 +1,115 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs;
+
+import org.apache.hadoop.hdfs.server.datanode.DataNode;
+import org.apache.hadoop.hdfs.server.datanode.DataNodeFaultInjector;
+import org.apache.hadoop.hdfs.server.datanode.metrics.DataNodeMetrics;
+import org.junit.Assert;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.nio.ByteBuffer;
+import java.util.concurrent.atomic.AtomicBoolean;
+
+/**
+ * This test extends {@link TestReconstructStripedFile} to test
+ * EC reconstruction validation.
+ */
+public class TestReconstructStripedFileWithValidator
+extends TestReconstructStripedFile {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(TestReconstructStripedFileWithValidator.class);
+
+  public TestReconstructStripedFileWithValidator() {
+LOG.info("run {} with validator.",
+TestReconstructStripedFileWithValidator.class.getSuperclass()
+.getSimpleName());
+  }
+
+  /**
+   * This test injects data pollution into the decoded outputs once.
+   * When validation is enabled, the first reconstruction task should fail
+   * validation, but the data will be recovered correctly
+   * by the next task.
+   * On the other hand, when validation is disabled, the first reconstruction
+   * task will succeed and then lead to data corruption.
+   */
+  @Test(timeout = 12)
+  public void testValidatorWithBadDecoding()
+  throws Exception {
+MiniDFSCluster cluster = getCluster();
+
+cluster.getDataNodes().stream()
+.map(DataNode::getMetrics)
+.map(DataNodeMetrics::getECInvalidReconstructionTasks)
+.forEach(n -> Assert.assertEquals(0, (long) n));
+
+DataNodeFaultInjector oldInjector = DataNodeFaultInjector.get();
+DataNodeFaultInjector badDecodingInjector = new DataNodeFaultInjector() {
+  private final AtomicBoolean flag = new AtomicBoolean(false);
+
+  @Override
+  public void badDecoding(ByteBuffer[] outputs) {
+if (!flag.get()) {
+  for (ByteBuffer output : outputs) {
+output.mark();
+output.put((byte) (output.get(output.position()) + 1));
+output.reset();
+  }
+}
+flag.set(true);
+  }
+};
+DataNodeFaultInjector.set(badDecodingInjector);
+
+int fileLen =
+(getEcPolicy().getNumDataUnits() + getEcPolicy().getNumParityUnits())
+* getBlockSize() + getBlockSize() / 10;
+try {
+  assertFileBlocksReconstruction(
+  "/testValidatorWithBadDecoding",
+  fileLen,
+  ReconstructionType.DataOnly,
+  getEcPolicy().getNumParityUnits());
+
+  long count = cluster.getDataNodes().stream()
+  .map(DataNode::getMetrics)
+  .map(DataNodeMetrics::getECInvalidReconstructionTasks)
+  .filter(n -> n == 1).count();

Review comment:
   I think we should use `.sum()` instead of `.filter(n -> n == 1).count();`
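The reviewer's point can be illustrated with a small, Hadoop-free sketch (the per-node values below are made-up samples, not real metrics): `.filter(n -> n == 1).count()` only counts DataNodes whose metric is exactly 1, so a node that recorded 2 invalid tasks is silently missed, while `.sum()` totals the metric across all nodes.

```java
import java.util.List;

public class SumVsCount {
    public static void main(String[] args) {
        // Hypothetical per-DataNode getECInvalidReconstructionTasks values.
        List<Long> perNode = List.of(0L, 2L, 1L);

        // Counts only the nodes whose metric equals 1; the node reporting 2 is ignored.
        long nodesWithExactlyOne = perNode.stream().filter(n -> n == 1).count();

        // Totals the metric across all nodes, which is what the assertion is after.
        long totalInvalidTasks = perNode.stream().mapToLong(Long::longValue).sum();

        System.out.println(nodesWithExactlyOne); // 1
        System.out.println(totalInvalidTasks);   // 3
    }
}
```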




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 570244)
Time Spent: 7h  (was: 6h 50m)

> EC: Verify EC reconstruction correctness on DataNode
> 
>
> Key: HDFS-15759
> 

[jira] [Work logged] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15911?focusedWorklogId=570219&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-570219
 ]

ASF GitHub Bot logged work on HDFS-15911:
-

Author: ASF GitHub Bot
Created on: 23/Mar/21 04:26
Start Date: 23/Mar/21 04:26
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on pull request #2794:
URL: https://github.com/apache/hadoop/pull/2794#issuecomment-804607379


   > Thanx @virajjasani, Can you take care of the checkstyle complaints?
   
   Done, Thanks.
   @ayushtkn @liuml07 would you like to take a look? I have raised backport PRs 
too.




Issue Time Tracking
---

Worklog Id: (was: 570219)
Time Spent: 2h 50m  (was: 2h 40m)

> Provide blocks moved count in Balancer iteration result
> ---
>
> Key: HDFS-15911
> URL: https://issues.apache.org/jira/browse/HDFS-15911
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Balancer provides Result for iteration and it contains info like exitStatus, 
> bytesLeftToMove, bytesBeingMoved etc. We should also provide blocksMoved 
> count from NameNodeConnector and print it with rest of details in 
> Result#print().
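A minimal sketch of what the proposal describes follows; the class and field names are stand-ins inferred from the description above, not the actual Balancer.Result API. The iteration result gains a blocksMoved counter and prints it alongside the existing byte totals.

```java
// Stand-in for Balancer's per-iteration result; all names here are
// assumptions for illustration, not the real Hadoop class.
public class ResultSketch {
    final int exitStatus;
    final long bytesLeftToMove;
    final long bytesBeingMoved;
    final long blocksMoved; // the newly proposed counter from NameNodeConnector

    ResultSketch(int exitStatus, long bytesLeftToMove,
                 long bytesBeingMoved, long blocksMoved) {
        this.exitStatus = exitStatus;
        this.bytesLeftToMove = bytesLeftToMove;
        this.bytesBeingMoved = bytesBeingMoved;
        this.blocksMoved = blocksMoved;
    }

    // Mirrors the idea of Result#print(): include blocksMoved with the rest.
    String print() {
        return String.format(
                "exitStatus=%d bytesLeftToMove=%d bytesBeingMoved=%d blocksMoved=%d",
                exitStatus, bytesLeftToMove, bytesBeingMoved, blocksMoved);
    }

    public static void main(String[] args) {
        System.out.println(new ResultSketch(0, 1_048_576, 524_288, 42).print());
        // exitStatus=0 bytesLeftToMove=1048576 bytesBeingMoved=524288 blocksMoved=42
    }
}
```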



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15906) Close FSImage and FSNamesystem after formatting is complete

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15906?focusedWorklogId=570202&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-570202
 ]

ASF GitHub Bot logged work on HDFS-15906:
-

Author: ASF GitHub Bot
Created on: 23/Mar/21 03:05
Start Date: 23/Mar/21 03:05
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #2800:
URL: https://github.com/apache/hadoop/pull/2800#issuecomment-804547668


   Thanks @tasanuma for your test.




Issue Time Tracking
---

Worklog Id: (was: 570202)
Time Spent: 3h  (was: 2h 50m)

> Close FSImage and FSNamesystem after formatting is complete
> ---
>
> Key: HDFS-15906
> URL: https://issues.apache.org/jira/browse/HDFS-15906
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Close FSImage and FSNamesystem after formatting is complete. 
> org.apache.hadoop.hdfs.server.namenode#format.
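The cleanup pattern being requested can be sketched generically; the Closeable stand-ins below are illustrative only, not the real FSImage/FSNamesystem classes. The point is that both objects get closed once the format work finishes, including on an exception path.

```java
import java.io.Closeable;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.List;

public class FormatCleanupSketch {

    // Stand-in resources; in the real issue these are FSImage and FSNamesystem.
    static Closeable resource(String name, List<String> closed) {
        return () -> closed.add(name);
    }

    // Runs the (stubbed) formatting work with both resources open and
    // guarantees both are closed afterwards.
    static List<String> format() {
        List<String> closed = new ArrayList<>();
        try (Closeable fsImage = resource("FSImage", closed);
             Closeable fsNamesystem = resource("FSNamesystem", closed)) {
            // ... formatting work would happen here ...
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return closed;
    }

    public static void main(String[] args) {
        // try-with-resources closes in reverse declaration order.
        System.out.println(format()); // [FSNamesystem, FSImage]
    }
}
```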






[jira] [Work logged] (HDFS-15900) RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15900?focusedWorklogId=570195&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-570195
 ]

ASF GitHub Bot logged work on HDFS-15900:
-

Author: ASF GitHub Bot
Created on: 23/Mar/21 02:35
Start Date: 23/Mar/21 02:35
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on pull request #2787:
URL: https://github.com/apache/hadoop/pull/2787#issuecomment-804537584


   The failed tests succeeded in my local environment and seem not to be related.




Issue Time Tracking
---

Worklog Id: (was: 570195)
Time Spent: 3h  (was: 2h 50m)

> RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode
> ---
>
> Key: HDFS-15900
> URL: https://issues.apache.org/jira/browse/HDFS-15900
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Harunobu Daikoku
>Assignee: Harunobu Daikoku
>Priority: Major
>  Labels: pull-request-available
> Attachments: image.png
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> We observed that when a NameNode becomes UNAVAILABLE, the corresponding 
> blockpool id in MembershipStoreImpl#activeNamespaces on dfsrouter 
> unintentionally sets to empty, its initial value.
>  !image.png|height=250!
> As a result of this, concat operations through dfsrouter fail with the 
> following error as it cannot resolve the block id in the recognized active 
> namespaces.
> {noformat}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RemoteException): 
> Cannot locate a nameservice for block pool BP-...
> {noformat}
> A possible fix is to ignore UNAVAILABLE NameNode registrations, and set 
> proper namespace information obtained from available NameNode registrations 
> when constructing the cache of active namespaces.
>  
> [https://github.com/apache/hadoop/blob/rel/release-3.3.0/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MembershipStoreImpl.java#L207-L221]
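The proposed fix can be modeled in isolation; the enum, record, and method below are illustrative stand-ins, not the actual MembershipStoreImpl code. When building the active-namespace cache, registrations in the UNAVAILABLE state (whose blockpool id may still be the empty initial value) are skipped.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ActiveNamespaceSketch {
    enum State { ACTIVE, STANDBY, UNAVAILABLE }

    // Minimal stand-in for a NameNode membership record.
    record Registration(String nameservice, String blockPoolId, State state) {}

    // Build the nameservice -> blockpool-id map, ignoring UNAVAILABLE
    // registrations so their empty blockpool ids never overwrite good ones.
    static Map<String, String> activeNamespaces(List<Registration> regs) {
        Map<String, String> out = new LinkedHashMap<>();
        for (Registration r : regs) {
            if (r.state() != State.UNAVAILABLE) {
                out.put(r.nameservice(), r.blockPoolId());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Registration> regs = List.of(
                new Registration("ns0", "BP-1234", State.ACTIVE),
                new Registration("ns0", "", State.UNAVAILABLE)); // the bug scenario
        System.out.println(activeNamespaces(regs)); // {ns0=BP-1234}
    }
}
```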






[jira] [Work logged] (HDFS-15906) Close FSImage and FSNamesystem after formatting is complete

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15906?focusedWorklogId=570194&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-570194
 ]

ASF GitHub Bot logged work on HDFS-15906:
-

Author: ASF GitHub Bot
Created on: 23/Mar/21 02:24
Start Date: 23/Mar/21 02:24
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on pull request #2800:
URL: https://github.com/apache/hadoop/pull/2800#issuecomment-804533980


   The failed tests succeeded in my local environment.
   I think the spotbugs warnings are unrelated. I will file issues for them later.




Issue Time Tracking
---

Worklog Id: (was: 570194)
Time Spent: 2h 50m  (was: 2h 40m)

> Close FSImage and FSNamesystem after formatting is complete
> ---
>
> Key: HDFS-15906
> URL: https://issues.apache.org/jira/browse/HDFS-15906
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Close FSImage and FSNamesystem after formatting is complete. 
> org.apache.hadoop.hdfs.server.namenode#format.






[jira] [Work logged] (HDFS-15900) RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15900?focusedWorklogId=570191&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-570191
 ]

ASF GitHub Bot logged work on HDFS-15900:
-

Author: ASF GitHub Bot
Created on: 23/Mar/21 02:20
Start Date: 23/Mar/21 02:20
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on pull request #2787:
URL: https://github.com/apache/hadoop/pull/2787#issuecomment-804532574


   @hdaikoku Could you fix the checkstyle warnings in the CI report?




Issue Time Tracking
---

Worklog Id: (was: 570191)
Time Spent: 2h 50m  (was: 2h 40m)

> RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode
> ---
>
> Key: HDFS-15900
> URL: https://issues.apache.org/jira/browse/HDFS-15900
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Harunobu Daikoku
>Assignee: Harunobu Daikoku
>Priority: Major
>  Labels: pull-request-available
> Attachments: image.png
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> We observed that when a NameNode becomes UNAVAILABLE, the corresponding 
> blockpool id in MembershipStoreImpl#activeNamespaces on dfsrouter 
> unintentionally sets to empty, its initial value.
>  !image.png|height=250!
> As a result of this, concat operations through dfsrouter fail with the 
> following error as it cannot resolve the block id in the recognized active 
> namespaces.
> {noformat}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RemoteException): 
> Cannot locate a nameservice for block pool BP-...
> {noformat}
> A possible fix is to ignore UNAVAILABLE NameNode registrations, and set 
> proper namespace information obtained from available NameNode registrations 
> when constructing the cache of active namespaces.
>  
> [https://github.com/apache/hadoop/blob/rel/release-3.3.0/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MembershipStoreImpl.java#L207-L221]






[jira] [Work logged] (HDFS-15900) RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15900?focusedWorklogId=570190&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-570190
 ]

ASF GitHub Bot logged work on HDFS-15900:
-

Author: ASF GitHub Bot
Created on: 23/Mar/21 02:10
Start Date: 23/Mar/21 02:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2787:
URL: https://github.com/apache/hadoop/pull/2787#issuecomment-804529045


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 53s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 17s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2787/3/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 5 new + 0 
unchanged - 0 fixed = 5 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  17m 50s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2787/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  91m 32s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterRpc |
   |   | hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2787/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2787 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 03adef16d5ff 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d07a0b02c29fa37c40b5e6cd1de729acb895bad3 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions 

[jira] [Work logged] (HDFS-15900) RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15900?focusedWorklogId=570182&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-570182
 ]

ASF GitHub Bot logged work on HDFS-15900:
-

Author: ASF GitHub Bot
Created on: 23/Mar/21 01:55
Start Date: 23/Mar/21 01:55
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on a change in pull request #2787:
URL: https://github.com/apache/hadoop/pull/2787#discussion_r599190892



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/TestStateStoreMembershipState.java
##
@@ -473,6 +478,53 @@ public void testRegistrationExpiredAndDeletion()
 }, 100, 3000);
   }
 
+  @Test
+  public void testNamespaceInfoWithUnavailableNameNodeRegistration() throws 
IOException {
+// Populate the state store with one ACTIVE NameNode entry and one 
UNAVAILABLE NameNode entry
+// 1) ns0:nn0 - ACTIVE
+// 2) ns0:nn1 - UNAVAILABLE
+List<MembershipState> registrationList = new ArrayList<>();
+String router = ROUTERS[0];
+String ns = NAMESERVICES[0];
+String rpcAddress = "testrpcaddress";
+String serviceAddress = "testserviceaddress";
+String lifelineAddress = "testlifelineaddress";
+String blockPoolId = "testblockpool";
+String clusterId = "testcluster";
+String webScheme = "http";
+String webAddress = "testwebaddress";
+boolean safemode = false;
+
+MembershipState record = MembershipState.newInstance(
+router, ns, NAMENODES[0], clusterId, blockPoolId,
+rpcAddress, serviceAddress, lifelineAddress, webScheme,
+webAddress, FederationNamenodeServiceState.ACTIVE, safemode);
+registrationList.add(record);
+
+// Set empty clusterId and blockPoolId for UNAVAILABLE NameNode
+record = MembershipState.newInstance(
+router, ns, NAMENODES[1], "", "",
+rpcAddress, serviceAddress, lifelineAddress, webScheme,
+webAddress, FederationNamenodeServiceState.UNAVAILABLE, safemode);
+registrationList.add(record);
+
+registerAndLoadRegistrations(registrationList);
+
+GetNamespaceInfoRequest request = GetNamespaceInfoRequest.newInstance();
+GetNamespaceInfoResponse response = 
membershipStore.getNamespaceInfo(request);
+Set<FederationNamespaceInfo> namespaces = response.getNamespaceInfo();
+
+// Verify only one namespace is registered
+assertThat(namespaces).hasSize(1);

Review comment:
   > Is the error message friendlier?
   
   Yes. If the expected size is set to 2, the error message will be:
   ```
   Expected size:<2> but was:<1> in:
   <[ns0->testblockpool:nn0]>
   ```






Issue Time Tracking
---

Worklog Id: (was: 570182)
Time Spent: 2.5h  (was: 2h 20m)

> RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode
> ---
>
> Key: HDFS-15900
> URL: https://issues.apache.org/jira/browse/HDFS-15900
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Harunobu Daikoku
>Assignee: Harunobu Daikoku
>Priority: Major
>  Labels: pull-request-available
> Attachments: image.png
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> We observed that when a NameNode becomes UNAVAILABLE, the corresponding 
> blockpool id in MembershipStoreImpl#activeNamespaces on dfsrouter 
> unintentionally sets to empty, its initial value.
>  !image.png|height=250!
> As a result of this, concat operations through dfsrouter fail with the 
> following error as it cannot resolve the block id in the recognized active 
> namespaces.
> {noformat}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RemoteException): 
> Cannot locate a nameservice for block pool BP-...
> {noformat}
> A possible fix is to ignore UNAVAILABLE NameNode registrations, and set 
> proper namespace information obtained from available NameNode registrations 
> when constructing the cache of active namespaces.
>  
> [https://github.com/apache/hadoop/blob/rel/release-3.3.0/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MembershipStoreImpl.java#L207-L221]






[jira] [Work logged] (HDFS-15900) RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15900?focusedWorklogId=570178&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-570178
 ]

ASF GitHub Bot logged work on HDFS-15900:
-

Author: ASF GitHub Bot
Created on: 23/Mar/21 01:15
Start Date: 23/Mar/21 01:15
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on pull request #2787:
URL: https://github.com/apache/hadoop/pull/2787#issuecomment-804510471


   +1, pending jenkins.




Issue Time Tracking
---

Worklog Id: (was: 570178)
Time Spent: 2h 20m  (was: 2h 10m)

> RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode
> ---
>
> Key: HDFS-15900
> URL: https://issues.apache.org/jira/browse/HDFS-15900
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Harunobu Daikoku
>Assignee: Harunobu Daikoku
>Priority: Major
>  Labels: pull-request-available
> Attachments: image.png
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> We observed that when a NameNode becomes UNAVAILABLE, the corresponding 
> blockpool id in MembershipStoreImpl#activeNamespaces on dfsrouter 
> unintentionally sets to empty, its initial value.
>  !image.png|height=250!
> As a result of this, concat operations through dfsrouter fail with the 
> following error as it cannot resolve the block id in the recognized active 
> namespaces.
> {noformat}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RemoteException): 
> Cannot locate a nameservice for block pool BP-...
> {noformat}
> A possible fix is to ignore UNAVAILABLE NameNode registrations, and set 
> proper namespace information obtained from available NameNode registrations 
> when constructing the cache of active namespaces.
>  
> [https://github.com/apache/hadoop/blob/rel/release-3.3.0/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MembershipStoreImpl.java#L207-L221]






[jira] [Work logged] (HDFS-15900) RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15900?focusedWorklogId=570171&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-570171
 ]

ASF GitHub Bot logged work on HDFS-15900:
-

Author: ASF GitHub Bot
Created on: 23/Mar/21 00:54
Start Date: 23/Mar/21 00:54
Worklog Time Spent: 10m 
  Work Description: hdaikoku commented on a change in pull request #2787:
URL: https://github.com/apache/hadoop/pull/2787#discussion_r599170545



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/TestStateStoreMembershipState.java
##
@@ -473,6 +478,53 @@ public void testRegistrationExpiredAndDeletion()
 }, 100, 3000);
   }
 
+  @Test
+  public void testNamespaceInfoWithUnavailableNameNodeRegistration() throws 
IOException {
+// Populate the state store with one ACTIVE NameNode entry and one 
UNAVAILABLE NameNode entry
+// 1) ns0:nn0 - ACTIVE
+// 2) ns0:nn1 - UNAVAILABLE
+List<MembershipState> registrationList = new ArrayList<>();
+String router = ROUTERS[0];
+String ns = NAMESERVICES[0];
+String rpcAddress = "testrpcaddress";
+String serviceAddress = "testserviceaddress";
+String lifelineAddress = "testlifelineaddress";
+String blockPoolId = "testblockpool";
+String clusterId = "testcluster";
+String webScheme = "http";
+String webAddress = "testwebaddress";
+boolean safemode = false;
+
+MembershipState record = MembershipState.newInstance(
+router, ns, NAMENODES[0], clusterId, blockPoolId,
+rpcAddress, serviceAddress, lifelineAddress, webScheme,
+webAddress, FederationNamenodeServiceState.ACTIVE, safemode);
+registrationList.add(record);
+
+// Set empty clusterId and blockPoolId for UNAVAILABLE NameNode
+record = MembershipState.newInstance(
+router, ns, NAMENODES[1], "", "",
+rpcAddress, serviceAddress, lifelineAddress, webScheme,
+webAddress, FederationNamenodeServiceState.UNAVAILABLE, safemode);
+registrationList.add(record);
+
+registerAndLoadRegistrations(registrationList);
+
+GetNamespaceInfoRequest request = GetNamespaceInfoRequest.newInstance();
+GetNamespaceInfoResponse response = 
membershipStore.getNamespaceInfo(request);
+Set<FederationNamespaceInfo> namespaces = response.getNamespaceInfo();
+
+// Verify only one namespace is registered
+assertThat(namespaces).hasSize(1);

Review comment:
   Didn't have any particular reason or preference, so I switched it to assertEquals: d07a0b0






Issue Time Tracking
---

Worklog Id: (was: 570171)
Time Spent: 2h 10m  (was: 2h)

> RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode
> ---
>
> Key: HDFS-15900
> URL: https://issues.apache.org/jira/browse/HDFS-15900
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Harunobu Daikoku
>Assignee: Harunobu Daikoku
>Priority: Major
>  Labels: pull-request-available
> Attachments: image.png
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> We observed that when a NameNode becomes UNAVAILABLE, the corresponding 
> blockpool id in MembershipStoreImpl#activeNamespaces on dfsrouter 
> unintentionally sets to empty, its initial value.
>  !image.png|height=250!
> As a result of this, concat operations through dfsrouter fail with the 
> following error as it cannot resolve the block id in the recognized active 
> namespaces.
> {noformat}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RemoteException): 
> Cannot locate a nameservice for block pool BP-...
> {noformat}
> A possible fix is to ignore UNAVAILABLE NameNode registrations, and set 
> proper namespace information obtained from available NameNode registrations 
> when constructing the cache of active namespaces.
>  
> [https://github.com/apache/hadoop/blob/rel/release-3.3.0/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MembershipStoreImpl.java#L207-L221]






[jira] [Work logged] (HDFS-15900) RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15900?focusedWorklogId=570168&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-570168
 ]

ASF GitHub Bot logged work on HDFS-15900:
-

Author: ASF GitHub Bot
Created on: 23/Mar/21 00:52
Start Date: 23/Mar/21 00:52
Worklog Time Spent: 10m 
  Work Description: hdaikoku commented on a change in pull request #2787:
URL: https://github.com/apache/hadoop/pull/2787#discussion_r599170545



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/TestStateStoreMembershipState.java
##
@@ -473,6 +478,53 @@ public void testRegistrationExpiredAndDeletion()
 }, 100, 3000);
   }
 
+  @Test
+  public void testNamespaceInfoWithUnavailableNameNodeRegistration()
+      throws IOException {
+    // Populate the state store with one ACTIVE NameNode entry and one
+    // UNAVAILABLE NameNode entry
+    // 1) ns0:nn0 - ACTIVE
+    // 2) ns0:nn1 - UNAVAILABLE
+    List<MembershipState> registrationList = new ArrayList<>();
+    String router = ROUTERS[0];
+    String ns = NAMESERVICES[0];
+    String rpcAddress = "testrpcaddress";
+    String serviceAddress = "testserviceaddress";
+    String lifelineAddress = "testlifelineaddress";
+    String blockPoolId = "testblockpool";
+    String clusterId = "testcluster";
+    String webScheme = "http";
+    String webAddress = "testwebaddress";
+    boolean safemode = false;
+
+    MembershipState record = MembershipState.newInstance(
+        router, ns, NAMENODES[0], clusterId, blockPoolId,
+        rpcAddress, serviceAddress, lifelineAddress, webScheme,
+        webAddress, FederationNamenodeServiceState.ACTIVE, safemode);
+    registrationList.add(record);
+
+    // Set empty clusterId and blockPoolId for UNAVAILABLE NameNode
+    record = MembershipState.newInstance(
+        router, ns, NAMENODES[1], "", "",
+        rpcAddress, serviceAddress, lifelineAddress, webScheme,
+        webAddress, FederationNamenodeServiceState.UNAVAILABLE, safemode);
+    registrationList.add(record);
+
+    registerAndLoadRegistrations(registrationList);
+
+    GetNamespaceInfoRequest request = GetNamespaceInfoRequest.newInstance();
+    GetNamespaceInfoResponse response =
+        membershipStore.getNamespaceInfo(request);
+    Set<FederationNamespaceInfo> namespaces = response.getNamespaceInfo();
+
+    // Verify only one namespace is registered
+    assertThat(namespaces).hasSize(1);

Review comment:
   Didn't have any particular reason so I switched it to assertEquals: 
d07a0b0




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 570168)
Time Spent: 2h  (was: 1h 50m)

> RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode
> ---
>
> Key: HDFS-15900
> URL: https://issues.apache.org/jira/browse/HDFS-15900
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Harunobu Daikoku
>Assignee: Harunobu Daikoku
>Priority: Major
>  Labels: pull-request-available
> Attachments: image.png
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> We observed that when a NameNode becomes UNAVAILABLE, the corresponding
> blockpool id in MembershipStoreImpl#activeNamespaces on dfsrouter is
> unintentionally reset to its empty initial value.
>  !image.png|height=250!
> As a result, concat operations through dfsrouter fail with the
> following error, since the router cannot resolve the block pool id within the
> recognized active namespaces.
> {noformat}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RemoteException): 
> Cannot locate a nameservice for block pool BP-...
> {noformat}
> A possible fix is to ignore UNAVAILABLE NameNode registrations, and set 
> proper namespace information obtained from available NameNode registrations 
> when constructing the cache of active namespaces.
>  
> [https://github.com/apache/hadoop/blob/rel/release-3.3.0/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MembershipStoreImpl.java#L207-L221]






[jira] [Work logged] (HDFS-15900) RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15900?focusedWorklogId=570164&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-570164
 ]

ASF GitHub Bot logged work on HDFS-15900:
-

Author: ASF GitHub Bot
Created on: 23/Mar/21 00:42
Start Date: 23/Mar/21 00:42
Worklog Time Spent: 10m 
  Work Description: hdaikoku commented on a change in pull request #2787:
URL: https://github.com/apache/hadoop/pull/2787#discussion_r599167124



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/TestStateStoreMembershipState.java
##
@@ -473,6 +478,53 @@ public void testRegistrationExpiredAndDeletion()
 }, 100, 3000);
   }
 
+  @Test
+  public void testNamespaceInfoWithUnavailableNameNodeRegistration()
+      throws IOException {
+    // Populate the state store with one ACTIVE NameNode entry and one
+    // UNAVAILABLE NameNode entry
+    // 1) ns0:nn0 - ACTIVE
+    // 2) ns0:nn1 - UNAVAILABLE
+    List<MembershipState> registrationList = new ArrayList<>();
+    String router = ROUTERS[0];
+    String ns = NAMESERVICES[0];
+    String rpcAddress = "testrpcaddress";
+    String serviceAddress = "testserviceaddress";
+    String lifelineAddress = "testlifelineaddress";
+    String blockPoolId = "testblockpool";
+    String clusterId = "testcluster";
+    String webScheme = "http";
+    String webAddress = "testwebaddress";
+    boolean safemode = false;
+
+    MembershipState record = MembershipState.newInstance(
+        router, ns, NAMENODES[0], clusterId, blockPoolId,
+        rpcAddress, serviceAddress, lifelineAddress, webScheme,
+        webAddress, FederationNamenodeServiceState.ACTIVE, safemode);
+    registrationList.add(record);
+
+    // Set empty clusterId and blockPoolId for UNAVAILABLE NameNode
+    record = MembershipState.newInstance(
+        router, ns, NAMENODES[1], "", "",
+        rpcAddress, serviceAddress, lifelineAddress, webScheme,
+        webAddress, FederationNamenodeServiceState.UNAVAILABLE, safemode);
+    registrationList.add(record);
+
+    registerAndLoadRegistrations(registrationList);
+
+    GetNamespaceInfoRequest request = GetNamespaceInfoRequest.newInstance();
+    GetNamespaceInfoResponse response =
+        membershipStore.getNamespaceInfo(request);
+    Set<FederationNamespaceInfo> namespaces = response.getNamespaceInfo();
+
+    // Verify only one namespace is registered
+    assertThat(namespaces).hasSize(1);
+
+    // Verify the registered namespace has a valid pair of clusterId and
+    // blockPoolId derived from ACTIVE NameNode
+    for (FederationNamespaceInfo namespace : namespaces) {

Review comment:
   Agreed. Modified to verify only the first element from the iterator: 
04ab61c






Issue Time Tracking
---

Worklog Id: (was: 570164)
Time Spent: 1h 50m  (was: 1h 40m)

> RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode
> ---
>
> Key: HDFS-15900
> URL: https://issues.apache.org/jira/browse/HDFS-15900
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Harunobu Daikoku
>Assignee: Harunobu Daikoku
>Priority: Major
>  Labels: pull-request-available
> Attachments: image.png
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> We observed that when a NameNode becomes UNAVAILABLE, the corresponding
> blockpool id in MembershipStoreImpl#activeNamespaces on dfsrouter is
> unintentionally reset to its empty initial value.
>  !image.png|height=250!
> As a result, concat operations through dfsrouter fail with the
> following error, since the router cannot resolve the block pool id within the
> recognized active namespaces.
> {noformat}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RemoteException): 
> Cannot locate a nameservice for block pool BP-...
> {noformat}
> A possible fix is to ignore UNAVAILABLE NameNode registrations, and set 
> proper namespace information obtained from available NameNode registrations 
> when constructing the cache of active namespaces.
>  
> [https://github.com/apache/hadoop/blob/rel/release-3.3.0/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MembershipStoreImpl.java#L207-L221]




[jira] [Work logged] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15911?focusedWorklogId=570159&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-570159
 ]

ASF GitHub Bot logged work on HDFS-15911:
-

Author: ASF GitHub Bot
Created on: 23/Mar/21 00:24
Start Date: 23/Mar/21 00:24
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2794:
URL: https://github.com/apache/hadoop/pull/2794#issuecomment-804491621


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 58s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 10s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 40s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m  9s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 31s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 326m 28s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2794/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 434m 24s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   |   | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
   |   | hadoop.hdfs.server.datanode.TestIncrementalBrVariations |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.TestPersistBlocks |
   |   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.server.datanode.TestBlockScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2794/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2794 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 

[jira] [Work logged] (HDFS-15879) Exclude slow nodes when choose targets for blocks

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15879?focusedWorklogId=570156&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-570156
 ]

ASF GitHub Bot logged work on HDFS-15879:
-

Author: ASF GitHub Bot
Created on: 23/Mar/21 00:17
Start Date: 23/Mar/21 00:17
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2748:
URL: https://github.com/apache/hadoop/pull/2748#issuecomment-804487762


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 17s |  |  https://github.com/apache/hadoop/pull/2748 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2748 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2748/6/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Issue Time Tracking
---

Worklog Id: (was: 570156)
Time Spent: 2h  (was: 1h 50m)

> Exclude slow nodes when choose targets for blocks
> -
>
> Key: HDFS-15879
> URL: https://issues.apache.org/jira/browse/HDFS-15879
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Previously, we added monitoring for slow nodes in
> [HDFS-11194|https://issues.apache.org/jira/browse/HDFS-11194].
> We can use a thread to periodically collect these slow nodes into a set, then
> use that set to filter out slow nodes when choosing targets for blocks.
> This feature can be configured to be turned on when needed.
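
The collect-then-filter idea above can be sketched in a few lines. This is an illustrative sketch only; the class and method names below are hypothetical, not the API added by HDFS-15879.

```java
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;

// Hypothetical sketch of slow-node exclusion during target choice;
// names are illustrative, not the actual HDFS-15879 code.
public class SlowNodeFilterSketch {
  // In the proposal a background thread periodically refreshes this set
  // from the slow-node metrics (HDFS-11194); here it is filled by hand.
  private final Set<String> slowNodes = ConcurrentHashMap.newKeySet();

  void markSlow(String datanode) {
    slowNodes.add(datanode);
  }

  // Drop slow nodes from the candidate targets for a new block.
  List<String> filterTargets(List<String> candidates) {
    return candidates.stream()
        .filter(dn -> !slowNodes.contains(dn))
        .collect(Collectors.toList());
  }
}
```

A thread-safe set keeps the periodic collector and the block-placement path decoupled: the collector swaps in fresh membership while target choice only performs cheap `contains` lookups.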






[jira] [Work logged] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15911?focusedWorklogId=570082&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-570082
 ]

ASF GitHub Bot logged work on HDFS-15911:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 22:02
Start Date: 22/Mar/21 22:02
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2796:
URL: https://github.com/apache/hadoop/pull/2796#issuecomment-804425039


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  20m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 37s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   3m  1s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  17m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 226 unchanged - 4 
fixed = 226 total (was 230)  |
   | +1 :green_heart: |  mvnsite  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   3m  7s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 184m  9s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2796/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 288m 18s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2796/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2796 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux d9aed36e6dfc 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / dfb8d367bc307a4ad7d2f3bd676c4897b00bf7cf |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~18.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2796/3/testReport/ |
   | Max. process+thread count | 3145 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2796/3/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Issue Time Tracking
---

Worklog Id: (was: 570082)
Time Spent: 2.5h  (was: 2h 20m)

> Provide blocks moved count in Balancer iteration result
> 

[jira] [Work logged] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15911?focusedWorklogId=570058&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-570058
 ]

ASF GitHub Bot logged work on HDFS-15911:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 21:33
Start Date: 22/Mar/21 21:33
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2797:
URL: https://github.com/apache/hadoop/pull/2797#issuecomment-804410150


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   9m 18s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  27m  1s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  compile  |   1m  4s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 46s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 10s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  branch-3.2 passed  |
   | -1 :x: |  spotbugs  |   2m 48s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2797/2/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs in branch-3.2 has 4 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  15m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 229 unchanged - 4 
fixed = 229 total (was 233)  |
   | +1 :green_heart: |  mvnsite  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   2m 53s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 174m 47s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2797/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 254m 44s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestFsck |
   |   | hadoop.hdfs.server.namenode.TestRedudantBlocks |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2797/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2797 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux ee63a798434f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.2 / 72544e6273b377f41008706a3ef861e5ef5cc13e |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~18.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2797/2/testReport/ |
   | Max. process+thread count | 3241 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2797/2/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Work logged] (HDFS-15906) Close FSImage and FSNamesystem after formatting is complete

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15906?focusedWorklogId=570033&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-570033
 ]

ASF GitHub Bot logged work on HDFS-15906:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 20:40
Start Date: 22/Mar/21 20:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2800:
URL: https://github.com/apache/hadoop/pull/2800#issuecomment-804379198


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   9m 26s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ branch-3.2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  27m 22s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 45s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  mvnsite  |   1m  9s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  branch-3.2 passed  |
   | -1 :x: |  spotbugs  |   2m 47s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2800/1/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs in branch-3.2 has 4 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  14m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   2m 50s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 227m 15s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2800/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 307m 25s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
   |   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
   |   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshot |
   |   | hadoop.hdfs.TestEncryptedTransfer |
   |   | hadoop.hdfs.TestStateAlignmentContextWithHA |
   |   | hadoop.hdfs.server.datanode.TestCachingStrategy |
   |   | hadoop.hdfs.TestPread |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestDatanodeDeath |
   |   | hadoop.hdfs.TestDecommission |
   |   | hadoop.hdfs.TestSetrepDecreasing |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.server.datanode.TestBatchIbr |
   |   | hadoop.hdfs.server.namenode.TestEditLogRace |
   |   | hadoop.hdfs.TestErasureCodingPolicies |
   |   | hadoop.hdfs.TestAppendSnapshotTruncate |
   |   | hadoop.hdfs.TestDistributedFileSystemWithECFile |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2800/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2800 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 2e6acec7df56 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | 

[jira] [Work logged] (HDFS-15850) Superuser actions should be reported to external enforcers

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15850?focusedWorklogId=570021&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-570021
 ]

ASF GitHub Bot logged work on HDFS-15850:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 20:20
Start Date: 22/Mar/21 20:20
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on a change in pull request #2784:
URL: https://github.com/apache/hadoop/pull/2784#discussion_r599041257



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
##
@@ -7665,7 +7670,7 @@ void addCachePool(CachePoolInfo req, boolean 
logRetryCache)
 checkOperation(OperationCategory.WRITE);
 String poolInfoStr = null;
 try {
-  checkSuperuserPrivilege();
+  checkSuperuserPrivilege(operationName);

Review comment:
   Can we get it from other calls after Line 7670?
   String poolNameStr = "{poolName: " +
   (req == null ? null : req.getPoolName()) + "}";




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 570021)
Time Spent: 1.5h  (was: 1h 20m)

> Superuser actions should be reported to external enforcers
> --
>
> Key: HDFS-15850
> URL: https://issues.apache.org/jira/browse/HDFS-15850
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: security
>Affects Versions: 3.3.0
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-15850.v1.patch, HDFS-15850.v2.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Currently, HDFS superuser checks and actions are not reported to external 
> enforcers like Ranger, so the audit reports produced by such external enforcers 
> are incomplete and miss the superuser actions. To fix this, add a 
> new method to "AccessControlEnforcer" for all superuser checks. 
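The shape of the proposed change can be sketched as follows. This is a hypothetical illustration, not the actual Hadoop API: the method name `checkSuperUserPermission`, the `AuditingEnforcer` class, and the hard-coded superuser name are all assumptions; only the `AccessControlEnforcer` name and the idea of reporting superuser checks to an external enforcer come from the issue.

```java
// Hypothetical sketch: a new callback on an AccessControlEnforcer-style
// interface, so an external enforcer (e.g. Ranger) can both decide the
// check and record an audit entry. Names are illustrative only.
import java.util.ArrayList;
import java.util.List;

interface AccessControlEnforcer {
  // Proposed addition: invoked for every superuser check.
  void checkSuperUserPermission(String operationName, String user);
}

class AuditingEnforcer implements AccessControlEnforcer {
  final List<String> auditLog = new ArrayList<>();

  @Override
  public void checkSuperUserPermission(String operationName, String user) {
    auditLog.add(user + ":" + operationName); // report to the audit trail
    if (!"hdfs".equals(user)) {               // assumed superuser name
      throw new SecurityException(user + " is not a superuser");
    }
  }
}

public class SuperuserAuditSketch {
  public static void main(String[] args) {
    AuditingEnforcer enforcer = new AuditingEnforcer();
    enforcer.checkSuperUserPermission("addCachePool", "hdfs");
    System.out.println(enforcer.auditLog.get(0));
  }
}
```

With a single hook like this, every superuser check reaches the external enforcer even when the decision is allow, which is what makes the external audit report complete.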



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15911?focusedWorklogId=570007&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-570007
 ]

ASF GitHub Bot logged work on HDFS-15911:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 20:08
Start Date: 22/Mar/21 20:08
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2796:
URL: https://github.com/apache/hadoop/pull/2796#issuecomment-804359684


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  7s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 35s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 52s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 27s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   3m 16s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  19m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 43s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2796/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 225 unchanged 
- 4 fixed = 226 total (was 229)  |
   | +1 :green_heart: |  mvnsite  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   3m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 244m 28s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2796/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 335m 43s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints 
|
   |   | hadoop.hdfs.server.datanode.TestBPOfferService |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2796/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2796 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 50370699abf4 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 4e28e30c0777e75a8a05008d619c6034d52b5f8a |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~18.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2796/2/testReport/ |
   | Max. process+thread count | 2918 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2796/2/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Issue 

[jira] [Commented] (HDFS-15907) Reduce Memory Overhead of AclFeature by avoiding AtomicInteger

2021-03-22 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17306524#comment-17306524
 ] 

Hadoop QA commented on HDFS-15907:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 22m 
54s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red}{color} | {color:red} The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
42s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
19s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 24s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 24m 
49s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are 
enabled, using SpotBugs. {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  3m 
11s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
14s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 12s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 {color} |
| 

[jira] [Created] (HDFS-15912) Allow ProtobufRpcEngine to be extensible

2021-03-22 Thread Hector Sandoval Chaverri (Jira)
Hector Sandoval Chaverri created HDFS-15912:
---

 Summary: Allow ProtobufRpcEngine to be extensible
 Key: HDFS-15912
 URL: https://issues.apache.org/jira/browse/HDFS-15912
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs
Reporter: Hector Sandoval Chaverri


The ProtobufRpcEngine class doesn't allow new RpcEngine implementations to 
extend some of its inner classes (e.g. Invoker and Server.ProtoBufRpcInvoker). 
Also, some of its methods are long enough that overriding them would 
result in substantial code duplication (e.g. Invoker#invoke and 
Server.ProtoBufRpcInvoker#call).

When implementing a new RpcEngine, it would be helpful to reuse most of the 
code already in ProtobufRpcEngine. This would allow new fields to be added to 
the RPC header or message with minimal code changes.
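The kind of refactor being requested can be illustrated with a template-method sketch. Everything below is hypothetical (`BaseRpcInvoker`, `buildHeader`, `call` are invented stand-ins); only the names ProtobufRpcEngine, Invoker, and Server.ProtoBufRpcInvoker come from the issue itself.

```java
// Hypothetical illustration of the requested extensibility, not Hadoop code:
// keep the long invoke() logic in a base class and expose small protected
// hooks, so a subclass can add e.g. a new RPC header field without copying
// the whole method body.
class BaseRpcInvoker {
  public final String invoke(String method, String payload) {
    String header = buildHeader(method);           // overridable hook
    return header + "|" + call(method, payload);   // overridable hook
  }

  protected String buildHeader(String method) {
    return "hdr:" + method;
  }

  protected String call(String method, String payload) {
    return "result(" + payload + ")";
  }
}

class TracingInvoker extends BaseRpcInvoker {
  @Override
  protected String buildHeader(String method) {
    // A new header field added with minimal code changes, no duplication.
    return super.buildHeader(method) + ";traceId=1";
  }
}

public class RpcEngineExtensibilitySketch {
  public static void main(String[] args) {
    System.out.println(new TracingInvoker().invoke("echo", "hi"));
  }
}
```

Making the inner classes and their key methods protected (rather than private/package-private and monolithic) is what would let a new RpcEngine reuse ProtobufRpcEngine in this style.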






[jira] [Work logged] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15911?focusedWorklogId=569923&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569923
 ]

ASF GitHub Bot logged work on HDFS-15911:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 18:37
Start Date: 22/Mar/21 18:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2796:
URL: https://github.com/apache/hadoop/pull/2796#issuecomment-804300239


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  25m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 35s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   3m  3s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  18m  2s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 41s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2796/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 225 unchanged 
- 4 fixed = 226 total (was 229)  |
   | +1 :green_heart: |  mvnsite  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   3m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 241m 48s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2796/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 350m 17s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.datanode.TestBpServiceActorScheduler |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.server.datanode.TestBPOfferService |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2796/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2796 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 03baacb2655f 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 19aa3e0f3945e68bbb88c25501a1d7b071d2ad29 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~18.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2796/1/testReport/ |
   | Max. process+thread count | 2875 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2796/1/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Work logged] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15911?focusedWorklogId=569921&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569921
 ]

ASF GitHub Bot logged work on HDFS-15911:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 18:30
Start Date: 22/Mar/21 18:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2797:
URL: https://github.com/apache/hadoop/pull/2797#issuecomment-804295328


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   9m 14s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  27m 40s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  branch-3.2 passed  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  branch-3.2 passed  |
   | -1 :x: |  spotbugs  |   2m 47s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2797/1/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs in branch-3.2 has 4 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  14m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 40s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2797/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 229 unchanged 
- 4 fixed = 230 total (was 233)  |
   | +1 :green_heart: |  mvnsite  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   2m 49s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m  6s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 193m 16s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2797/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 273m 47s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.cli.TestHDFSCLI |
   |   | hadoop.fs.TestSymlinkHdfsFileContext |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.server.mover.TestMover |
   |   | hadoop.hdfs.TestHDFSFileSystemContract |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2797/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2797 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 77c47ab0fc97 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.2 / f75d967c7f3eed620922d179987e78008b6b9c2e |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~18.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2797/1/testReport/ |
   | Max. process+thread count | 3166 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2797/1/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by 

[jira] [Work logged] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15911?focusedWorklogId=569906&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569906
 ]

ASF GitHub Bot logged work on HDFS-15911:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 17:55
Start Date: 22/Mar/21 17:55
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2799:
URL: https://github.com/apache/hadoop/pull/2799#issuecomment-804271262


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  22m  8s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ branch-3.1 Compile Tests _ |
   | -1 :x: |  mvninstall  |   4m 44s |  root in branch-3.1 failed.  |
   | -1 :x: |  compile  |   0m 29s |  hadoop-hdfs in branch-3.1 failed.  |
   | -0 :warning: |  checkstyle  |   0m 14s |  The patch fails to run 
checkstyle in hadoop-hdfs  |
   | -1 :x: |  mvnsite  |   0m 10s |  hadoop-hdfs in branch-3.1 failed.  |
   | -1 :x: |  shadedclient  |   1m 15s |  branch has errors when building and 
testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 18s |  hadoop-hdfs in branch-3.1 failed.  |
   | +0 :ok: |  spotbugs  |   1m 45s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   0m 10s |  hadoop-hdfs in branch-3.1 failed.  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 11s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  compile  |   0m 10s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  javac  |   0m 10s |  hadoop-hdfs in the patch failed.  |
   | -0 :warning: |  checkstyle  |   0m  8s |  The patch fails to run 
checkstyle in hadoop-hdfs  |
   | -1 :x: |  mvnsite  |   0m 10s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | -1 :x: |  shadedclient  |   0m 55s |  patch has errors when building and 
testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 12s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  findbugs  |   0m 10s |  hadoop-hdfs in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 12s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  34m 50s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2799 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 754b90d93b18 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.1 / c9c471e |
   | Default Java | Oracle 
Corporation-9-internal+0-2016-04-14-195246.buildd.src |
   | mvninstall | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/3/artifact/out/branch-mvninstall-root.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/3/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/3/artifact/out/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | mvnsite | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/3/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | javadoc | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/3/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | findbugs | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/3/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | mvninstall | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/3/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/3/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | javac | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/3/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | checkstyle | 

[jira] [Work logged] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15911?focusedWorklogId=569890&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569890
 ]

ASF GitHub Bot logged work on HDFS-15911:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 17:34
Start Date: 22/Mar/21 17:34
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2799:
URL: https://github.com/apache/hadoop/pull/2799#issuecomment-804257123


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  18m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ branch-3.1 Compile Tests _ |
   | -1 :x: |  mvninstall  |   4m 41s |  root in branch-3.1 failed.  |
   | -1 :x: |  compile  |   0m 29s |  hadoop-hdfs in branch-3.1 failed.  |
   | -0 :warning: |  checkstyle  |   0m 17s |  The patch fails to run 
checkstyle in hadoop-hdfs  |
   | -1 :x: |  mvnsite  |   0m 12s |  hadoop-hdfs in branch-3.1 failed.  |
   | -1 :x: |  shadedclient  |   1m 17s |  branch has errors when building and 
testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 21s |  hadoop-hdfs in branch-3.1 failed.  |
   | +0 :ok: |  spotbugs  |   1m 52s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   0m 13s |  hadoop-hdfs in branch-3.1 failed.  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 12s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  compile  |   0m 12s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  javac  |   0m 12s |  hadoop-hdfs in the patch failed.  |
   | -0 :warning: |  checkstyle  |   0m 10s |  The patch fails to run 
checkstyle in hadoop-hdfs  |
   | -1 :x: |  mvnsite  |   0m 12s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | -1 :x: |  shadedclient  |   0m 49s |  patch has errors when building and 
testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 13s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  findbugs  |   0m 13s |  hadoop-hdfs in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 12s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  31m 15s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2799 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 33a9860df7b5 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.1 / c9c471e |
   | Default Java | Oracle 
Corporation-9-internal+0-2016-04-14-195246.buildd.src |
   | mvninstall | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/2/artifact/out/branch-mvninstall-root.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/2/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/2/artifact/out/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | mvnsite | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/2/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | javadoc | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/2/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | findbugs | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/2/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | mvninstall | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/2/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | javac | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | checkstyle | 

[jira] [Work logged] (HDFS-15906) Close FSImage and FSNamesystem after formatting is complete

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15906?focusedWorklogId=569887&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569887
 ]

ASF GitHub Bot logged work on HDFS-15906:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 17:31
Start Date: 22/Mar/21 17:31
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on pull request #2800:
URL: https://github.com/apache/hadoop/pull/2800#issuecomment-804254973


   +1, pending Jenkins.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 569887)
Time Spent: 2.5h  (was: 2h 20m)

> Close FSImage and FSNamesystem after formatting is complete
> ---
>
> Key: HDFS-15906
> URL: https://issues.apache.org/jira/browse/HDFS-15906
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Close FSImage and FSNamesystem after formatting is complete. 
> org.apache.hadoop.hdfs.server.namenode#format.
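
The change requested here — closing the FSImage and FSNamesystem once formatting finishes — is an instance of the standard try-with-resources pattern. A minimal, self-contained sketch with stand-in resource classes (not Hadoop's actual FSImage/FSNamesystem):

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

public class FormatCloseSketch {
  // Stand-in for a closeable NameNode resource; records whether it was closed.
  static class TrackedResource implements Closeable {
    final AtomicBoolean closed = new AtomicBoolean(false);
    @Override
    public void close() {
      closed.set(true);
    }
  }

  static final TrackedResource fsImage = new TrackedResource();
  static final TrackedResource fsNamesystem = new TrackedResource();

  static void format() throws IOException {
    // try-with-resources closes both stand-ins once the body completes,
    // whether it returns normally or throws
    try (TrackedResource image = fsImage;
         TrackedResource namesystem = fsNamesystem) {
      // ... formatting work would happen here ...
    }
  }

  public static void main(String[] args) throws IOException {
    format();
    System.out.println(fsImage.closed.get() && fsNamesystem.closed.get());
  }
}
```

This guarantees cleanup on both the success and failure paths of the format operation.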



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15908) Possible Resource Leak in org.apache.hadoop.hdfs.qjournal.server.Journal

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15908?focusedWorklogId=569884&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569884
 ]

ASF GitHub Bot logged work on HDFS-15908:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 17:30
Start Date: 22/Mar/21 17:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2790:
URL: https://github.com/apache/hadoop/pull/2790#issuecomment-804254342


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 13s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 37s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 340m  5s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2790/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 432m 37s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestBlockScanner |
   |   | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
   |   | hadoop.hdfs.TestViewDistributedFileSystemWithMountLinks |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   |   | hadoop.hdfs.TestPersistBlocks |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
   |   | hadoop.hdfs.server.namenode.ha.TestHASafeMode |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2790/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2790 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell 

[jira] [Work logged] (HDFS-15759) EC: Verify EC reconstruction correctness on DataNode

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15759?focusedWorklogId=569879&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569879
 ]

ASF GitHub Bot logged work on HDFS-15759:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 17:27
Start Date: 22/Mar/21 17:27
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on a change in pull request #2585:
URL: https://github.com/apache/hadoop/pull/2585#discussion_r598925928



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/DecodingValidator.java
##
@@ -0,0 +1,187 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.io.erasurecode.rawcoder;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import 
org.apache.hadoop.thirdparty.com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.io.erasurecode.ECChunk;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+
+/**
+ * A utility class to validate decoding.
+ */
+@InterfaceAudience.Private
+public class DecodingValidator {
+
+  private final RawErasureDecoder decoder;
+  private ByteBuffer buffer;
+  private int[] newValidIndexes;
+  private int newErasedIndex;
+
+  public DecodingValidator(RawErasureDecoder decoder) {
+this.decoder = decoder;
+  }
+
+  /**
+   * Validate outputs decoded from inputs, by decoding an input back from
+   * the outputs and comparing it with the original one.
+   *
+   * For instance, in RS (6, 3), let (d0, d1, d2, d3, d4, d5) be sources
+   * and (p0, p1, p2) be parities, and assume
+   *  inputs = [d0, null (d1), d2, d3, d4, d5, null (p0), p1, null (p2)];
+   *  erasedIndexes = [1, 6];
+   *  outputs = [d1, p1].

Review comment:
   ```suggestion
  *  outputs = [d1, p0].
   ```
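
The validation scheme this javadoc describes — decode the erased units, then decode one of the original inputs back from the outputs and compare it with the original — can be illustrated with a toy single-parity XOR code. The real DecodingValidator operates on RS-coded ECChunks; everything below is a simplified stand-in:

```java
import java.util.Arrays;

public class DecodeValidateSketch {
  // XOR two equal-length byte arrays; with a single parity unit p = d0 ^ d1,
  // any one erased unit can be recovered by XOR-ing the two survivors.
  static byte[] xor(byte[] a, byte[] b) {
    byte[] out = new byte[a.length];
    for (int i = 0; i < a.length; i++) {
      out[i] = (byte) (a[i] ^ b[i]);
    }
    return out;
  }

  public static void main(String[] args) {
    byte[] d0 = {1, 2, 3};
    byte[] d1 = {4, 5, 6};
    byte[] p = xor(d0, d1);          // parity unit

    byte[] decodedD0 = xor(d1, p);   // recover the "erased" d0 from survivors
    // Validation step: decode survivor d1 back from (decodedD0, p) and
    // compare with the original d1 we still hold.
    byte[] recheckD1 = xor(decodedD0, p);
    System.out.println(Arrays.equals(recheckD1, d1));
  }
}
```

If the decoder had produced a corrupt `decodedD0`, the re-decoded `recheckD1` would not match the original survivor, which is exactly the mismatch the validator is meant to catch.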

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/DecodingValidator.java
##
@@ -0,0 +1,187 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.io.erasurecode.rawcoder;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import 
org.apache.hadoop.thirdparty.com.google.common.annotations.VisibleForTesting;
+import org.apache.hadoop.io.erasurecode.ECChunk;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+
+/**
+ * A utility class to validate decoding.
+ */
+@InterfaceAudience.Private
+public class DecodingValidator {
+
+  private final RawErasureDecoder decoder;
+  private ByteBuffer buffer;
+  private int[] newValidIndexes;
+  private int newErasedIndex;
+
+  public DecodingValidator(RawErasureDecoder decoder) {
+this.decoder = decoder;
+  }
+
+  /**
+   * Validate outputs decoded from inputs, by decoding an input back from
+   * the outputs and comparing it with the original one.
+   *
+   * For instance, in RS (6, 3), let (d0, d1, d2, d3, d4, d5) be sources
+   * and (p0, p1, p2) be parities, and assume
+   *  inputs = [d0, null (d1), d2, d3, d4, d5, null (p0), p1, null (p2)];
+   *  erasedIndexes = [1, 6];
+   *  outputs = [d1, p1].
+   * Then
+   *  1. Create new inputs, erasedIndexes and outputs for validation so that
+   * the inputs could contain the decoded outputs, and decode them:
+   *  newInputs = [d1, d2, d3, d4, d5, p1]

Review comment:
   ```suggestion
  *  

[jira] [Work logged] (HDFS-15879) Exclude slow nodes when choose targets for blocks

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15879?focusedWorklogId=569845&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569845
 ]

ASF GitHub Bot logged work on HDFS-15879:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 16:45
Start Date: 22/Mar/21 16:45
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2748:
URL: https://github.com/apache/hadoop/pull/2748#issuecomment-804220194


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 19s |  |  
https://github.com/apache/hadoop/pull/2748 does not apply to trunk. Rebase 
required? Wrong Branch? See 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  
|
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2748 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2748/5/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 569845)
Time Spent: 1h 50m  (was: 1h 40m)

> Exclude slow nodes when choose targets for blocks
> -
>
> Key: HDFS-15879
> URL: https://issues.apache.org/jira/browse/HDFS-15879
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Previously, we added monitoring of slow nodes in 
> [HDFS-11194|https://issues.apache.org/jira/browse/HDFS-11194].
> We can use a thread to periodically collect these slow nodes into a set, then 
> use the set to filter out slow nodes when choosing targets for blocks.
> This feature can be enabled via configuration when needed.
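
The approach described above can be sketched in a few lines: a concurrent set, refreshed by a background thread in the real design, is consulted when choosing targets. All class and field names here are hypothetical, not the actual HDFS-15879 implementation:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;

public class SlowNodeFilterSketch {
  // In the real design a monitoring thread would periodically rebuild this
  // set from slow-peer reports; here it is populated directly for brevity.
  static final Set<String> slowNodes = ConcurrentHashMap.newKeySet();

  // Drop any candidate DataNode currently flagged as slow.
  static List<String> chooseTargets(List<String> candidates) {
    return candidates.stream()
        .filter(dn -> !slowNodes.contains(dn))
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    slowNodes.add("dn2");
    System.out.println(chooseTargets(Arrays.asList("dn1", "dn2", "dn3")));
  }
}
```

A concurrent set keeps the filter cheap on the write path while the background refresh mutates it.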



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15911?focusedWorklogId=569841&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569841
 ]

ASF GitHub Bot logged work on HDFS-15911:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 16:36
Start Date: 22/Mar/21 16:36
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on pull request #2794:
URL: https://github.com/apache/hadoop/pull/2794#issuecomment-804213263


   Thanks @virajjasani, can you take care of the checkstyle complaints?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 569841)
Time Spent: 1h 20m  (was: 1h 10m)

> Provide blocks moved count in Balancer iteration result
> ---
>
> Key: HDFS-15911
> URL: https://issues.apache.org/jira/browse/HDFS-15911
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> The Balancer provides a Result for each iteration, containing info such as 
> exitStatus, bytesLeftToMove, and bytesBeingMoved. We should also provide the 
> blocksMoved count from NameNodeConnector and print it with the rest of the 
> details in Result#print().
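
A hypothetical sketch of the described change: carry a blocksMoved counter in the per-iteration Result and include it in print(). Field and method names mirror the description, not the actual Balancer source:

```java
public class BalancerResultSketch {
  static final class Result {
    final int exitStatus;
    final long bytesLeftToMove;
    final long bytesBeingMoved;
    final long blocksMoved; // newly surfaced counter from the connector

    Result(int exitStatus, long bytesLeftToMove, long bytesBeingMoved,
           long blocksMoved) {
      this.exitStatus = exitStatus;
      this.bytesLeftToMove = bytesLeftToMove;
      this.bytesBeingMoved = bytesBeingMoved;
      this.blocksMoved = blocksMoved;
    }

    // Render all iteration details, including the new blocksMoved count.
    String print() {
      return String.format(
          "exitStatus=%d bytesLeftToMove=%d bytesBeingMoved=%d blocksMoved=%d",
          exitStatus, bytesLeftToMove, bytesBeingMoved, blocksMoved);
    }
  }

  public static void main(String[] args) {
    System.out.println(new Result(0, 1024, 512, 7).print());
  }
}
```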



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15911?focusedWorklogId=569824&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569824
 ]

ASF GitHub Bot logged work on HDFS-15911:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 16:06
Start Date: 22/Mar/21 16:06
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2794:
URL: https://github.com/apache/hadoop/pull/2794#issuecomment-804190199


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 55s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2794/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 234 unchanged 
- 0 fixed = 237 total (was 234)  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 40s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 338m 39s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2794/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 431m 52s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS |
   |   | hadoop.hdfs.server.datanode.TestIncrementalBrVariations |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
   |   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
   |   | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.TestPersistBlocks |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 

[jira] [Work logged] (HDFS-15908) Possible Resource Leak in org.apache.hadoop.hdfs.qjournal.server.Journal

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15908?focusedWorklogId=569814&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569814
 ]

ASF GitHub Bot logged work on HDFS-15908:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 15:54
Start Date: 22/Mar/21 15:54
Worklog Time Spent: 10m 
  Work Description: Nargeshdb commented on a change in pull request #2790:
URL: https://github.com/apache/hadoop/pull/2790#discussion_r598850886



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java
##
@@ -264,9 +264,9 @@ void format(NamespaceInfo nsInfo, boolean force) throws 
IOException {
*/
   @Override // Closeable
   public void close() throws IOException {
-storage.close();

Review comment:
   Yes, exactly. If it throws an exception, then committedTxnId and 
curSegment are never closed.
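
The fix being discussed amounts to a close-all helper: attempt every close even when an earlier one throws, and surface the first failure afterwards. A self-contained sketch of that pattern — the field names below echo the discussion but are stand-ins, not Hadoop's actual Journal code:

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class CloseAllSketch {
  // Close each resource in turn; a throwing close() does not prevent the
  // remaining resources from being closed. The first exception is returned.
  static IOException closeAll(Closeable... resources) {
    IOException first = null;
    for (Closeable c : resources) {
      try {
        if (c != null) {
          c.close();
        }
      } catch (IOException e) {
        if (first == null) {
          first = e; // remember the first failure, keep closing the rest
        }
      }
    }
    return first;
  }

  public static void main(String[] args) {
    List<String> closed = new ArrayList<>();
    Closeable storage = () -> {
      closed.add("storage");
      throw new IOException("close failed");
    };
    Closeable committedTxnId = () -> closed.add("committedTxnId");
    Closeable curSegment = () -> closed.add("curSegment");

    IOException err = closeAll(storage, committedTxnId, curSegment);
    // All three closes were attempted despite the first one throwing.
    System.out.println(closed.size() + " " + (err != null));
  }
}
```

Hadoop's utility classes provide similar best-effort cleanup helpers; the point here is only the shape of the fix: no close call should be skipped because an earlier one failed.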




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 569814)
Time Spent: 40m  (was: 0.5h)

> Possible Resource Leak in org.apache.hadoop.hdfs.qjournal.server.Journal
> 
>
> Key: HDFS-15908
> URL: https://issues.apache.org/jira/browse/HDFS-15908
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Narges Shadab
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> We noticed a possible resource leak 
> [here|https://github.com/apache/hadoop/blob/cd44e917d0b331a2d1e1fa63fdd498eac01ae323/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java#L266].
>  The call to close on {{storage}} at line 267 can throw an exception. If it 
> occurs, then {{committedTxnId}} and {{curSegment}} are never closed.
> I'll submit a pull request to fix it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15900) RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15900?focusedWorklogId=569809&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569809
 ]

ASF GitHub Bot logged work on HDFS-15900:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 15:48
Start Date: 22/Mar/21 15:48
Worklog Time Spent: 10m 
  Work Description: goiri commented on a change in pull request #2787:
URL: https://github.com/apache/hadoop/pull/2787#discussion_r598845045



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/TestStateStoreMembershipState.java
##
@@ -473,6 +478,53 @@ public void testRegistrationExpiredAndDeletion()
 }, 100, 3000);
   }
 
+  @Test
+  public void testNamespaceInfoWithUnavailableNameNodeRegistration() throws 
IOException {
+// Populate the state store with one ACTIVE NameNode entry and one 
UNAVAILABLE NameNode entry
+// 1) ns0:nn0 - ACTIVE
+// 2) ns0:nn1 - UNAVAILABLE
+List<MembershipState> registrationList = new ArrayList<>();
+String router = ROUTERS[0];
+String ns = NAMESERVICES[0];
+String rpcAddress = "testrpcaddress";
+String serviceAddress = "testserviceaddress";
+String lifelineAddress = "testlifelineaddress";
+String blockPoolId = "testblockpool";
+String clusterId = "testcluster";
+String webScheme = "http";
+String webAddress = "testwebaddress";
+boolean safemode = false;
+
+MembershipState record = MembershipState.newInstance(
+router, ns, NAMENODES[0], clusterId, blockPoolId,
+rpcAddress, serviceAddress, lifelineAddress, webScheme,
+webAddress, FederationNamenodeServiceState.ACTIVE, safemode);
+registrationList.add(record);
+
+// Set empty clusterId and blockPoolId for UNAVAILABLE NameNode
+record = MembershipState.newInstance(
+router, ns, NAMENODES[1], "", "",
+rpcAddress, serviceAddress, lifelineAddress, webScheme,
+webAddress, FederationNamenodeServiceState.UNAVAILABLE, safemode);
+registrationList.add(record);
+
+registerAndLoadRegistrations(registrationList);
+
+GetNamespaceInfoRequest request = GetNamespaceInfoRequest.newInstance();
+GetNamespaceInfoResponse response = 
membershipStore.getNamespaceInfo(request);
+Set<FederationNamespaceInfo> namespaces = response.getNamespaceInfo();
+
+// Verify only one namespace is registered
+assertThat(namespaces).hasSize(1);
+
+// Verify the registered namespace has a valid pair of clusterId and 
blockPoolId derived from ACTIVE NameNode
+for (FederationNamespaceInfo namespace : namespaces) {

Review comment:
   Given that we've already checked for the size to be 1, could we just get 
the element first?
It is a little counter-intuitive to see this for loop iterating over all 
the elements.
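
The reviewer's suggestion can be sketched as a small helper: once the size-1 check has passed, fetch the single element directly instead of looping. Names below are illustrative, not from the patch:

```java
import java.util.Collections;
import java.util.Set;

public class SingleElementSketch {
  // Return the sole element of a set, failing loudly if the size assumption
  // does not hold.
  static String onlyElement(Set<String> set) {
    if (set.size() != 1) {
      throw new IllegalStateException("expected exactly one element, got " + set.size());
    }
    return set.iterator().next();
  }

  public static void main(String[] args) {
    Set<String> namespaces = Collections.singleton("ns0");
    System.out.println(onlyElement(namespaces));
  }
}
```

This makes the test's intent explicit: there is exactly one namespace, and its fields are then asserted directly.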




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 569809)
Time Spent: 1h 40m  (was: 1.5h)

> RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode
> ---
>
> Key: HDFS-15900
> URL: https://issues.apache.org/jira/browse/HDFS-15900
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Harunobu Daikoku
>Assignee: Harunobu Daikoku
>Priority: Major
>  Labels: pull-request-available
> Attachments: image.png
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> We observed that when a NameNode becomes UNAVAILABLE, the corresponding 
> blockpool id in MembershipStoreImpl#activeNamespaces on dfsrouter is 
> unintentionally reset to its initial value, the empty string.
>  !image.png|height=250!
> As a result of this, concat operations through dfsrouter fail with the 
> following error as it cannot resolve the block id in the recognized active 
> namespaces.
> {noformat}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RemoteException): 
> Cannot locate a nameservice for block pool BP-...
> {noformat}
> A possible fix is to ignore UNAVAILABLE NameNode registrations, and set 
> proper namespace information obtained from available NameNode registrations 
> when constructing the cache of active namespaces.
>  
> [https://github.com/apache/hadoop/blob/rel/release-3.3.0/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MembershipStoreImpl.java#L207-L221]
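
The proposed fix — skip UNAVAILABLE registrations when building the active-namespace cache, so an empty blockpool id never overwrites a valid one — can be sketched as follows. The enum and record names are stand-ins, not the actual MembershipStoreImpl types:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class ActiveNamespaceSketch {
  enum State { ACTIVE, STANDBY, UNAVAILABLE }

  static final class Registration {
    final String blockPoolId;
    final State state;
    Registration(String blockPoolId, State state) {
      this.blockPoolId = blockPoolId;
      this.state = state;
    }
  }

  // Build the cache only from registrations that are not UNAVAILABLE, so a
  // stale entry with an empty blockpool id cannot leak into the result.
  static Set<String> activeBlockPools(List<Registration> regs) {
    return regs.stream()
        .filter(r -> r.state != State.UNAVAILABLE)
        .map(r -> r.blockPoolId)
        .collect(Collectors.toSet());
  }

  public static void main(String[] args) {
    List<Registration> regs = Arrays.asList(
        new Registration("BP-123", State.ACTIVE),
        new Registration("", State.UNAVAILABLE)); // empty initial value
    System.out.println(activeBlockPools(regs));
  }
}
```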



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, 

[jira] [Work logged] (HDFS-15900) RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15900?focusedWorklogId=569808&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569808
 ]

ASF GitHub Bot logged work on HDFS-15900:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 15:47
Start Date: 22/Mar/21 15:47
Worklog Time Spent: 10m 
  Work Description: goiri commented on a change in pull request #2787:
URL: https://github.com/apache/hadoop/pull/2787#discussion_r598843870



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/TestStateStoreMembershipState.java
##
@@ -473,6 +478,53 @@ public void testRegistrationExpiredAndDeletion()
 }, 100, 3000);
   }
 
+  @Test
+  public void testNamespaceInfoWithUnavailableNameNodeRegistration() throws 
IOException {
+// Populate the state store with one ACTIVE NameNode entry and one 
UNAVAILABLE NameNode entry
+// 1) ns0:nn0 - ACTIVE
+// 2) ns0:nn1 - UNAVAILABLE
+List<MembershipState> registrationList = new ArrayList<>();
+String router = ROUTERS[0];
+String ns = NAMESERVICES[0];
+String rpcAddress = "testrpcaddress";
+String serviceAddress = "testserviceaddress";
+String lifelineAddress = "testlifelineaddress";
+String blockPoolId = "testblockpool";
+String clusterId = "testcluster";
+String webScheme = "http";
+String webAddress = "testwebaddress";
+boolean safemode = false;
+
+MembershipState record = MembershipState.newInstance(
+router, ns, NAMENODES[0], clusterId, blockPoolId,
+rpcAddress, serviceAddress, lifelineAddress, webScheme,
+webAddress, FederationNamenodeServiceState.ACTIVE, safemode);
+registrationList.add(record);
+
+// Set empty clusterId and blockPoolId for UNAVAILABLE NameNode
+record = MembershipState.newInstance(
+router, ns, NAMENODES[1], "", "",
+rpcAddress, serviceAddress, lifelineAddress, webScheme,
+webAddress, FederationNamenodeServiceState.UNAVAILABLE, safemode);
+registrationList.add(record);
+
+registerAndLoadRegistrations(registrationList);
+
+GetNamespaceInfoRequest request = GetNamespaceInfoRequest.newInstance();
+GetNamespaceInfoResponse response = 
membershipStore.getNamespaceInfo(request);
+Set<FederationNamespaceInfo> namespaces = response.getNamespaceInfo();
+
+// Verify only one namespace is registered
+assertThat(namespaces).hasSize(1);

Review comment:
   Why is this better than assertEquals(1, namespaces.size())?
   Is the error message friendlier?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 569808)
Time Spent: 1.5h  (was: 1h 20m)

> RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode
> ---
>
> Key: HDFS-15900
> URL: https://issues.apache.org/jira/browse/HDFS-15900
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.3.0
>Reporter: Harunobu Daikoku
>Assignee: Harunobu Daikoku
>Priority: Major
>  Labels: pull-request-available
> Attachments: image.png
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> We observed that when a NameNode becomes UNAVAILABLE, the corresponding 
> blockpool id in MembershipStoreImpl#activeNamespaces on dfsrouter is 
> unintentionally reset to its initial value, the empty string.
>  !image.png|height=250!
> As a result of this, concat operations through dfsrouter fail with the 
> following error as it cannot resolve the block id in the recognized active 
> namespaces.
> {noformat}
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RemoteException): 
> Cannot locate a nameservice for block pool BP-...
> {noformat}
> A possible fix is to ignore UNAVAILABLE NameNode registrations, and set 
> proper namespace information obtained from available NameNode registrations 
> when constructing the cache of active namespaces.
>  
> [https://github.com/apache/hadoop/blob/rel/release-3.3.0/hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/impl/MembershipStoreImpl.java#L207-L221]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15911?focusedWorklogId=569806&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569806
 ]

ASF GitHub Bot logged work on HDFS-15911:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 15:45
Start Date: 22/Mar/21 15:45
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2799:
URL: https://github.com/apache/hadoop/pull/2799#issuecomment-804169499


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  13m  8s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ branch-3.1 Compile Tests _ |
   | -1 :x: |  mvninstall  |   4m 39s |  root in branch-3.1 failed.  |
   | -1 :x: |  compile  |   0m 29s |  hadoop-hdfs in branch-3.1 failed.  |
   | -0 :warning: |  checkstyle  |   0m 18s |  The patch fails to run 
checkstyle in hadoop-hdfs  |
   | -1 :x: |  mvnsite  |   0m 13s |  hadoop-hdfs in branch-3.1 failed.  |
   | -1 :x: |  shadedclient  |   1m 20s |  branch has errors when building and 
testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 22s |  hadoop-hdfs in branch-3.1 failed.  |
   | +0 :ok: |  spotbugs  |   1m 56s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   0m 13s |  hadoop-hdfs in branch-3.1 failed.  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 12s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  compile  |   0m 12s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  javac  |   0m 12s |  hadoop-hdfs in the patch failed.  |
   | -0 :warning: |  checkstyle  |   0m 10s |  The patch fails to run 
checkstyle in hadoop-hdfs  |
   | -1 :x: |  mvnsite  |   0m 12s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | -1 :x: |  shadedclient  |   0m 50s |  patch has errors when building and 
testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 12s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  findbugs  |   0m 13s |  hadoop-hdfs in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 13s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  25m 56s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2799 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux bbbfed1f0615 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.1 / c9c471e |
   | Default Java | Oracle 
Corporation-9-internal+0-2016-04-14-195246.buildd.src |
   | mvninstall | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/1/artifact/out/branch-mvninstall-root.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/1/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/1/artifact/out/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | mvnsite | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/1/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | javadoc | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/1/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | findbugs | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/1/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | mvninstall | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/1/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | javac | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2799/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   | checkstyle | 

[jira] [Commented] (HDFS-15903) Refactor X-Platform library

2021-03-22 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-15903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17306300#comment-17306300
 ] 

Íñigo Goiri commented on HDFS-15903:


Merged PR https://github.com/apache/hadoop/pull/2783
Thanks [~gautham] for the refactor.

> Refactor X-Platform library
> ---
>
> Key: HDFS-15903
> URL: https://issues.apache.org/jira/browse/HDFS-15903
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.2.2
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> X-Platform started out as a utility to help in writing cross-platform code in 
> Hadoop. As its scope is expanding to cover various scenarios, it is necessary 
> to refactor it at an early stage to allow proper organization and growth of 
> the X-Platform library.






[jira] [Work logged] (HDFS-15903) Refactor X-Platform library

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15903?focusedWorklogId=569802&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569802
 ]

ASF GitHub Bot logged work on HDFS-15903:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 15:41
Start Date: 22/Mar/21 15:41
Worklog Time Spent: 10m 
  Work Description: goiri merged pull request #2783:
URL: https://github.com/apache/hadoop/pull/2783


   




Issue Time Tracking
---

Worklog Id: (was: 569802)
Time Spent: 1h  (was: 50m)

> Refactor X-Platform library
> ---
>
> Key: HDFS-15903
> URL: https://issues.apache.org/jira/browse/HDFS-15903
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.2.2
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> X-Platform started out as a utility to help in writing cross-platform code in 
> Hadoop. As its scope is expanding to cover various scenarios, it is necessary 
> to refactor it at an early stage to allow proper organization and growth of 
> the X-Platform library.






[jira] [Resolved] (HDFS-15903) Refactor X-Platform library

2021-03-22 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-15903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HDFS-15903.

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Refactor X-Platform library
> ---
>
> Key: HDFS-15903
> URL: https://issues.apache.org/jira/browse/HDFS-15903
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: libhdfs++
>Affects Versions: 3.2.2
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> X-Platform started out as a utility to help in writing cross-platform code in 
> Hadoop. As its scope is expanding to cover various scenarios, it is necessary 
> to refactor it at an early stage to allow proper organization and growth of 
> the X-Platform library.






[jira] [Work logged] (HDFS-15906) Close FSImage and FSNamesystem after formatting is complete

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15906?focusedWorklogId=569797&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569797
 ]

ASF GitHub Bot logged work on HDFS-15906:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 15:34
Start Date: 22/Mar/21 15:34
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #2788:
URL: https://github.com/apache/hadoop/pull/2788#issuecomment-804157399


   Thanks @tasanuma for your merge. I submitted a 
[PR](https://github.com/apache/hadoop/pull/2800) for branch-3.2.




Issue Time Tracking
---

Worklog Id: (was: 569797)
Time Spent: 2h 20m  (was: 2h 10m)

> Close FSImage and FSNamesystem after formatting is complete
> ---
>
> Key: HDFS-15906
> URL: https://issues.apache.org/jira/browse/HDFS-15906
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Close FSImage and FSNamesystem after formatting is complete. 
> org.apache.hadoop.hdfs.server.namenode#format.






[jira] [Work logged] (HDFS-15906) Close FSImage and FSNamesystem after formatting is complete

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15906?focusedWorklogId=569795&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569795
 ]

ASF GitHub Bot logged work on HDFS-15906:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 15:32
Start Date: 22/Mar/21 15:32
Worklog Time Spent: 10m 
  Work Description: tomscut opened a new pull request #2800:
URL: https://github.com/apache/hadoop/pull/2800


   Close FSImage and FSNamesystem after formatting is complete. For branch-3.2.
   




Issue Time Tracking
---

Worklog Id: (was: 569795)
Time Spent: 2h 10m  (was: 2h)

> Close FSImage and FSNamesystem after formatting is complete
> ---
>
> Key: HDFS-15906
> URL: https://issues.apache.org/jira/browse/HDFS-15906
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Close FSImage and FSNamesystem after formatting is complete. 
> org.apache.hadoop.hdfs.server.namenode#format.






[jira] [Updated] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-22 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HDFS-15911:

Status: Patch Available  (was: In Progress)

> Provide blocks moved count in Balancer iteration result
> ---
>
> Key: HDFS-15911
> URL: https://issues.apache.org/jira/browse/HDFS-15911
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Balancer provides a Result for each iteration, containing info such as 
> exitStatus, bytesLeftToMove, bytesBeingMoved, etc. We should also provide the 
> blocksMoved count from NameNodeConnector and print it with the rest of the 
> details in Result#print().
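The proposed change could be sketched roughly as below. This is a hypothetical illustration (field and method names are invented, not the actual Balancer API): carry a blocksMoved counter alongside the existing fields and include it in the printed summary.

```java
// Illustrative sketch of an iteration result extended with blocksMoved.
class IterationResult {
    final int exitStatus;
    final long bytesLeftToMove;
    final long bytesBeingMoved;
    final long blocksMoved; // newly surfaced from NameNodeConnector

    IterationResult(int exitStatus, long bytesLeftToMove,
                    long bytesBeingMoved, long blocksMoved) {
        this.exitStatus = exitStatus;
        this.bytesLeftToMove = bytesLeftToMove;
        this.bytesBeingMoved = bytesBeingMoved;
        this.blocksMoved = blocksMoved;
    }

    // Summary line printed at the end of each iteration.
    String print() {
        return String.format(
            "exitStatus=%d bytesLeftToMove=%d bytesBeingMoved=%d blocksMoved=%d",
            exitStatus, bytesLeftToMove, bytesBeingMoved, blocksMoved);
    }
}
```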






[jira] [Work logged] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15911?focusedWorklogId=569784&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569784
 ]

ASF GitHub Bot logged work on HDFS-15911:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 15:18
Start Date: 22/Mar/21 15:18
Worklog Time Spent: 10m 
  Work Description: virajjasani opened a new pull request #2799:
URL: https://github.com/apache/hadoop/pull/2799


   




Issue Time Tracking
---

Worklog Id: (was: 569784)
Time Spent: 50m  (was: 40m)

> Provide blocks moved count in Balancer iteration result
> ---
>
> Key: HDFS-15911
> URL: https://issues.apache.org/jira/browse/HDFS-15911
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Balancer provides a Result for each iteration, containing info such as 
> exitStatus, bytesLeftToMove, bytesBeingMoved, etc. We should also provide the 
> blocksMoved count from NameNodeConnector and print it with the rest of the 
> details in Result#print().






[jira] [Work logged] (HDFS-15855) Solve the problem of incorrect EC progress when loading FsImage

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15855?focusedWorklogId=569772&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569772
 ]

ASF GitHub Bot logged work on HDFS-15855:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 15:11
Start Date: 22/Mar/21 15:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2741:
URL: https://github.com/apache/hadoop/pull/2741#issuecomment-804139203


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 46s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 20s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 396m 17s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2741/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 490m 23s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   |   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
   |   | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
   |   | hadoop.hdfs.server.datanode.TestBlockScanner |
   |   | hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeWithHdfsScheme |
   |   | hadoop.hdfs.TestViewDistributedFileSystemWithMountLinks |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.TestPersistBlocks |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   |   | hadoop.hdfs.TestWriteConfigurationToDFS |
   |   | hadoop.hdfs.TestLeaseRecovery2 |
   |   | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints |
   |   | hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS |
   |   | hadoop.hdfs.server.namenode.TestFileTruncate |
   |   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
   
   
   | 

[jira] [Updated] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-22 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HDFS-15911:

Target Version/s: 3.3.1, 3.4.0, 3.1.5, 3.2.3  (was: 3.3.1, 3.4.0, 3.1.5, 
2.10.2, 3.2.3)

> Provide blocks moved count in Balancer iteration result
> ---
>
> Key: HDFS-15911
> URL: https://issues.apache.org/jira/browse/HDFS-15911
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Balancer provides a Result for each iteration, containing info such as 
> exitStatus, bytesLeftToMove, bytesBeingMoved, etc. We should also provide the 
> blocksMoved count from NameNodeConnector and print it with the rest of the 
> details in Result#print().






[jira] [Work logged] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15911?focusedWorklogId=569724&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569724
 ]

ASF GitHub Bot logged work on HDFS-15911:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 14:10
Start Date: 22/Mar/21 14:10
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2794:
URL: https://github.com/apache/hadoop/pull/2794#issuecomment-804092518


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m  5s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m  9s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 53s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2794/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 234 unchanged 
- 0 fixed = 237 total (was 234)  |
   | +1 :green_heart: |  mvnsite  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m  2s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 230m 30s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2794/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 315m 19s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.TestDFSUpgradeFromImage |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2794/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2794 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 00919340c109 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / aff8392b2b23b288f63ec5870ef4a4a01780c44f |
   

[jira] [Work logged] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15911?focusedWorklogId=569710&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569710
 ]

ASF GitHub Bot logged work on HDFS-15911:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 13:49
Start Date: 22/Mar/21 13:49
Worklog Time Spent: 10m 
  Work Description: virajjasani opened a new pull request #2797:
URL: https://github.com/apache/hadoop/pull/2797


   




Issue Time Tracking
---

Worklog Id: (was: 569710)
Time Spent: 0.5h  (was: 20m)

> Provide blocks moved count in Balancer iteration result
> ---
>
> Key: HDFS-15911
> URL: https://issues.apache.org/jira/browse/HDFS-15911
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Balancer provides a Result for each iteration, containing info such as 
> exitStatus, bytesLeftToMove, bytesBeingMoved, etc. We should also provide the 
> blocksMoved count from NameNodeConnector and print it with the rest of the 
> details in Result#print().






[jira] [Commented] (HDFS-15893) Logs are flooded when dfs.ha.tail-edits.in-progress set to true or dfs.ha.tail-edits.period to 0ms

2021-03-22 Thread JiangHua Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17306200#comment-17306200
 ] 

JiangHua Zhu commented on HDFS-15893:
-

[~Sushma_28], I have noticed this problem. I think we should check whether the 
value of dfs.ha.tail-edits.period is less than or equal to 0, and if so, fall 
back to the default value.


> Logs are flooded when dfs.ha.tail-edits.in-progress set to true or 
> dfs.ha.tail-edits.period to 0ms
> --
>
> Key: HDFS-15893
> URL: https://issues.apache.org/jira/browse/HDFS-15893
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Ravuri Sushma sree
>Assignee: Ravuri Sushma sree
>Priority: Major
> Attachments: HDFS-15893.001.patch
>
>
> When we set dfs.ha.tail-edits.in-progress to true or dfs.ha.tail-edits.period 
> to 0ms, the standby and observer NameNodes emit a huge volume of log messages 
> that drowns out the useful ones.
> We can adjust the log level of a few of these logs to debug while the observer 
> node is in operation.






[jira] [Commented] (HDFS-15160) ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl methods should use datanode readlock

2021-03-22 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17306193#comment-17306193
 ] 

Stephen O'Donnell commented on HDFS-15160:
--

Are there any objections to backporting this change into branch-3.3 before the 
3.3.1 release is created? We already have the DN RW lock in 3.3.0, but all 
accesses to it are via the write lock, making it exclusive. It needs this 
change to see any improvements.

From the above comments, there are some community reports that this change has 
greatly helped reduce DN contention.

> ReplicaMap, Disk Balancer, Directory Scanner and various FsDatasetImpl 
> methods should use datanode readlock
> ---
>
> Key: HDFS-15160
> URL: https://issues.apache.org/jira/browse/HDFS-15160
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.3.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15160.001.patch, HDFS-15160.002.patch, 
> HDFS-15160.003.patch, HDFS-15160.004.patch, HDFS-15160.005.patch, 
> HDFS-15160.006.patch, HDFS-15160.007.patch, HDFS-15160.008.patch, 
> image-2020-04-10-17-18-08-128.png, image-2020-04-10-17-18-55-938.png
>
>
> Now we have HDFS-15150, we can start to move some DN operations to use the 
> read lock rather than the write lock to improve concurrency. The first step 
> is to make the changes to ReplicaMap, as many other methods make calls to it.
> This Jira switches read operations against the volume map to use the readLock 
> rather than the write lock.
> Additionally, some methods make a call to replicaMap.replicas() (eg 
> getBlockReports, getFinalizedBlocks, deepCopyReplica) and only use the result 
> in a read only fashion, so they can also be switched to using a readLock.
> Next is the directory scanner and disk balancer, which only require a read 
> lock.
> Finally (for this Jira) are various "low hanging fruit" items in BlockSender 
> and FsDatasetImpl where it is fairly obvious they only need a read lock.
> For now, I have avoided changing anything which looks too risky, as I think 
> it's better to do any larger refactoring or risky changes each in their own 
> Jira.
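The locking pattern described above can be illustrated with a toy map (this is a sketch, not the actual ReplicaMap): read-only lookups take the shared read lock so they can proceed concurrently, while mutations keep the exclusive write lock.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class ReplicaMapSketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final Map<Long, String> replicas = new HashMap<>();

    void add(long blockId, String replicaInfo) {
        lock.writeLock().lock(); // exclusive: blocks all readers and writers
        try {
            replicas.put(blockId, replicaInfo);
        } finally {
            lock.writeLock().unlock();
        }
    }

    String get(long blockId) {
        lock.readLock().lock(); // shared: many readers may hold this at once
        try {
            return replicas.get(blockId);
        } finally {
            lock.readLock().unlock();
        }
    }
}
```

The gain comes from read-heavy paths (block reports, scanners, balancer queries) no longer serializing behind one exclusive lock.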



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15439) Setting dfs.mover.retry.max.attempts to negative value will retry forever.

2021-03-22 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15439:
-
Fix Version/s: 3.2.3
   3.1.5
   3.3.1

> Setting dfs.mover.retry.max.attempts to negative value will retry forever.
> --
>
> Key: HDFS-15439
> URL: https://issues.apache.org/jira/browse/HDFS-15439
> Project: Hadoop HDFS
>  Issue Type: Bug
> Components: balancer & mover
>Reporter: AMC-team
>Assignee: AMC-team
>Priority: Major
> Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>
> Attachments: HDFS-15439.000.patch, HDFS-15439.001.patch, 
> HDFS-15439.002.patch
>
>
> Configuration parameter "dfs.mover.retry.max.attempts" defines the 
> maximum number of retries before the mover considers the move failed. There is 
> no validation code, so this parameter can accept any int value.
> Theoretically, setting this value to <= 0 should mean no retries at all. 
> However, if you set it to a negative value, the retry-exhausted condition is 
> never satisfied, because the check is "*if 
> (retryCount.get() == retryMaxAttempts)*". The retry count is always incremented 
> via retryCount.incrementAndGet() after a failure, but it never *equals* a 
> negative *retryMaxAttempts*.
> {code:java}
> private Result processNamespace() throws IOException {
>   ... //wait for pending move to finish and retry the failed migration
>   if (hasFailed && !hasSuccess) {
> if (retryCount.get() == retryMaxAttempts) {
>   result.setRetryFailed();
>   LOG.error("Failed to move some block's after "
>   + retryMaxAttempts + " retries.");
>   return result;
> } else {
>   retryCount.incrementAndGet();
> }
>   } else {
> // Reset retry count if no failure.
> retryCount.set(0);
>   }
>   ...
> }
> {code}
> *How to fix*
> Add validation so that "dfs.mover.retry.max.attempts" accepts only 
> non-negative values, or change the if condition to trigger when the retry 
> count meets or exceeds the max attempts.
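A minimal sketch of the second fix option, changing the termination check so that a misconfigured negative limit still stops the retry loop. The helper method below is illustrative only, not the committed patch:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: with "==", a negative retryMaxAttempts can never be reached and
// the mover retries forever; ">=" makes it give up immediately instead.
public class RetrySketch {
    static boolean shouldGiveUp(AtomicInteger retryCount, int retryMaxAttempts) {
        return retryCount.get() >= retryMaxAttempts;
    }

    public static void main(String[] args) {
        AtomicInteger count = new AtomicInteger(0);
        // Misconfigured negative limit: terminate rather than loop forever.
        System.out.println(shouldGiveUp(count, -1));
        // Sane limit of 3: the first failure does not give up yet.
        System.out.println(shouldGiveUp(count, 3));
    }
}
```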



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15443) Setting dfs.datanode.max.transfer.threads to a very small value can cause strange failure.

2021-03-22 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-15443:
-
Fix Version/s: 3.3.1

> Setting dfs.datanode.max.transfer.threads to a very small value can cause 
> strange failure.
> --
>
> Key: HDFS-15443
> URL: https://issues.apache.org/jira/browse/HDFS-15443
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: AMC-team
>Assignee: AMC-team
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HDFS-15443.000.patch, HDFS-15443.001.patch, 
> HDFS-15443.002.patch, HDFS-15443.003.patch
>
>
> Configuration parameter dfs.datanode.max.transfer.threads specifies the 
> maximum number of threads to use for transferring data in and out of the DN. 
> This is a vital parameter that needs to be tuned carefully. 
> {code:java}
> // DataXceiverServer.java
> // Make sure the xceiver count is not exceeded
> int curXceiverCount = datanode.getXceiverCount();
> if (curXceiverCount > maxXceiverCount) {
>   throw new IOException("Xceiver count " + curXceiverCount
>       + " exceeds the limit of concurrent xceivers: "
>       + maxXceiverCount);
> }
> {code}
> Many issues have been caused by not setting this param to an appropriate 
> value. However, there is no check code to restrict the parameter. 
> Although having a hard-and-fast rule is difficult because we need to consider 
> the number of cores, main memory etc., *we can prevent users from setting this 
> value to an obviously wrong value by accident* (e.g. a negative value that 
> totally breaks the availability of the datanode).
> *How to fix:*
> Add proper check code for the parameter.
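One possible shape for such a check, sketched as a standalone class. The validation helper here is hypothetical; only the property name and its usual default are taken from HDFS:

```java
// Sketch of the proposed guard: reject non-positive thread limits at
// configuration time instead of failing obscurely at runtime.
public class TransferThreadsCheck {
    static final String KEY = "dfs.datanode.max.transfer.threads";
    static final int DEFAULT = 4096; // the usual hdfs-default.xml value

    // Hypothetical validation helper, not actual DataXceiverServer code.
    static int validate(int configured) {
        if (configured <= 0) {
            throw new IllegalArgumentException(
                KEY + " must be positive, but was " + configured);
        }
        return configured;
    }

    public static void main(String[] args) {
        System.out.println(validate(DEFAULT));
        try {
            validate(-1);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```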



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15911?focusedWorklogId=569678=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569678
 ]

ASF GitHub Bot logged work on HDFS-15911:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 12:46
Start Date: 22/Mar/21 12:46
Worklog Time Spent: 10m 
  Work Description: virajjasani opened a new pull request #2796:
URL: https://github.com/apache/hadoop/pull/2796


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 569678)
Time Spent: 20m  (was: 10m)

> Provide blocks moved count in Balancer iteration result
> ---
>
> Key: HDFS-15911
> URL: https://issues.apache.org/jira/browse/HDFS-15911
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Balancer provides Result for iteration and it contains info like exitStatus, 
> bytesLeftToMove, bytesBeingMoved etc. We should also provide blocksMoved 
> count from NameNodeConnector and print it with rest of details in 
> Result#print().



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15906) Close FSImage and FSNamesystem after formatting is complete

2021-03-22 Thread Takanobu Asanuma (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma resolved HDFS-15906.
-
Fix Version/s: 3.4.0
   3.3.1
   Resolution: Fixed

> Close FSImage and FSNamesystem after formatting is complete
> ---
>
> Key: HDFS-15906
> URL: https://issues.apache.org/jira/browse/HDFS-15906
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Close FSImage and FSNamesystem after formatting is complete. 
> org.apache.hadoop.hdfs.server.namenode#format.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15906) Close FSImage and FSNamesystem after formatting is complete

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15906?focusedWorklogId=569653=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569653
 ]

ASF GitHub Bot logged work on HDFS-15906:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 11:48
Start Date: 22/Mar/21 11:48
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on pull request #2788:
URL: https://github.com/apache/hadoop/pull/2788#issuecomment-804001805


   Merged to trunk and cherry-picked to branch-3.3.
   
   @tomscut There is a small conflict with branch-3.2. Could you create another 
PR for branch-3.2 if necessary?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 569653)
Time Spent: 2h  (was: 1h 50m)

> Close FSImage and FSNamesystem after formatting is complete
> ---
>
> Key: HDFS-15906
> URL: https://issues.apache.org/jira/browse/HDFS-15906
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Close FSImage and FSNamesystem after formatting is complete. 
> org.apache.hadoop.hdfs.server.namenode#format.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15907) Reduce Memory Overhead of AclFeature by avoiding AtomicInteger

2021-03-22 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17306134#comment-17306134
 ] 

Stephen O'Donnell commented on HDFS-15907:
--

Let me re-trigger Yetus / check the results. The last run looks pretty bad!

> Reduce Memory Overhead of AclFeature by avoiding AtomicInteger
> --
>
> Key: HDFS-15907
> URL: https://issues.apache.org/jira/browse/HDFS-15907
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-15907.001.patch
>
>
> In HDFS-15792 we made some changes to the AclFeature and ReferenceCountedMap 
> classes to address a rare bug when loading the FSImage in parallel.
> One change we made was to replace an int inside AclFeature with an 
> AtomicInteger to avoid synchronising the methods in AclFeature.
> Discussing this change with [~weichiu], he pointed out that while the 
> AclFeature cache is intended to reduce the count of AclFeature objects, on a 
> large cluster, it is possible for there to be many millions of AclFeature 
> objects.
> Previously, the int would have taken 4 bytes of heap.
> By moving to an AtomicInteger, we probably have an overhead of:
>  4 bytes (or 8 if the heap is over 32GB) for a reference to the AtomicInteger 
> object
>  12 bytes of overhead for the Java object header
>  4 bytes inside the AtomicInteger to store an int.
>  
> So the total heap overhead has gone from 4 bytes to 20 bytes just to use an 
> AtomicInteger.
> Therefore I think it makes sense to remove the AtomicInteger and just 
> synchronise the methods of AclFeature where the value is incremented / 
> decremented / retrieved.
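The proposed alternative can be sketched like this: a plain int field guarded by synchronized methods stays inline in the object (4 bytes), avoiding the separate AtomicInteger object entirely. This is an illustrative reduction of AclFeature, not the actual patch:

```java
// Sketch: the counter is a plain int (4 bytes inline in the object) and
// the methods are synchronized, rather than holding a reference to a
// separately allocated AtomicInteger (~20 bytes total per instance).
public class RefCountSketch {
    private int refCount = 0;

    public synchronized int incrementAndGetRefCount() {
        return ++refCount;
    }

    public synchronized int decrementAndGetRefCount() {
        return --refCount;
    }

    public synchronized int getRefCount() {
        return refCount;
    }

    public static void main(String[] args) {
        RefCountSketch f = new RefCountSketch();
        f.incrementAndGetRefCount();
        f.incrementAndGetRefCount();
        f.decrementAndGetRefCount();
        System.out.println(f.getRefCount());
    }
}
```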



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15907) Reduce Memory Overhead of AclFeature by avoiding AtomicInteger

2021-03-22 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17306132#comment-17306132
 ] 

Ayush Saxena commented on HDFS-15907:
-

Makes sense.
+1

> Reduce Memory Overhead of AclFeature by avoiding AtomicInteger
> --
>
> Key: HDFS-15907
> URL: https://issues.apache.org/jira/browse/HDFS-15907
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-15907.001.patch
>
>
> In HDFS-15792 we made some changes to the AclFeature and ReferenceCountedMap 
> classes to address a rare bug when loading the FSImage in parallel.
> One change we made was to replace an int inside AclFeature with an 
> AtomicInteger to avoid synchronising the methods in AclFeature.
> Discussing this change with [~weichiu], he pointed out that while the 
> AclFeature cache is intended to reduce the count of AclFeature objects, on a 
> large cluster, it is possible for there to be many millions of AclFeature 
> objects.
> Previously, the int would have taken 4 bytes of heap.
> By moving to an AtomicInteger, we probably have an overhead of:
>  4 bytes (or 8 if the heap is over 32GB) for a reference to the AtomicInteger 
> object
>  12 bytes of overhead for the Java object header
>  4 bytes inside the AtomicInteger to store an int.
>  
> So the total heap overhead has gone from 4 bytes to 20 bytes just to use an 
> AtomicInteger.
> Therefore I think it makes sense to remove the AtomicInteger and just 
> synchronise the methods of AclFeature where the value is incremented / 
> decremented / retrieved.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15906) Close FSImage and FSNamesystem after formatting is complete

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15906?focusedWorklogId=569648=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569648
 ]

ASF GitHub Bot logged work on HDFS-15906:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 11:29
Start Date: 22/Mar/21 11:29
Worklog Time Spent: 10m 
  Work Description: tasanuma merged pull request #2788:
URL: https://github.com/apache/hadoop/pull/2788


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 569648)
Time Spent: 1h 50m  (was: 1h 40m)

> Close FSImage and FSNamesystem after formatting is complete
> ---
>
> Key: HDFS-15906
> URL: https://issues.apache.org/jira/browse/HDFS-15906
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Close FSImage and FSNamesystem after formatting is complete. 
> org.apache.hadoop.hdfs.server.namenode#format.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15906) Close FSImage and FSNamesystem after formatting is complete

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15906?focusedWorklogId=569647=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569647
 ]

ASF GitHub Bot logged work on HDFS-15906:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 11:29
Start Date: 22/Mar/21 11:29
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on pull request #2788:
URL: https://github.com/apache/hadoop/pull/2788#issuecomment-803990889


   @tomscut Thanks for your confirmation. I also confirmed the failed tests are 
not related. Will merge it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 569647)
Time Spent: 1h 40m  (was: 1.5h)

> Close FSImage and FSNamesystem after formatting is complete
> ---
>
> Key: HDFS-15906
> URL: https://issues.apache.org/jira/browse/HDFS-15906
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Close FSImage and FSNamesystem after formatting is complete. 
> org.apache.hadoop.hdfs.server.namenode#format.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15907) Reduce Memory Overhead of AclFeature by avoiding AtomicInteger

2021-03-22 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17306125#comment-17306125
 ] 

Stephen O'Donnell commented on HDFS-15907:
--

Yea, Shiv was concerned about the memory overhead of ConcurrentHashMap, but I 
cannot see why it is a problem.

The ConcurrentHashMap implementation is an object which contains a number of 
HashMaps under the covers. It simply stores the keys by hashing them across the 
number of maps it is using internally, and it synchronises at the sub-map 
level, somewhat like a "striped lock". It will be slightly slower for put and 
get due to this extra indirection, but the overhead is tiny.

Does it have a higher memory overhead than a HashMap? Yes, but if you are 
storing thousands or millions of keys this will not matter. The extra overhead 
is not a "per entry" overhead; it's a static number driven by the Java object 
overhead. If the memory overhead of a HashMap is 32 bytes (picking a number out 
of the air, I have not checked this) then the overhead of a ConcurrentHashMap 
is approx 32 * 32, as I think it creates 32 sub-maps by default.

I feel that this overhead is worth it, as it will provide better concurrency 
than synchronising the entire map for all keys.
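As a rough illustration of why per-stripe synchronisation helps, here is a hypothetical reference-count map built on ConcurrentHashMap: updates to different keys contend only within their bin, not on the whole map. This is not the real ReferenceCountMap, just a sketch of the idea.

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch: ConcurrentHashMap.merge/computeIfPresent update each key's bin
// atomically, so counts for unrelated keys never block each other.
public class StripedCountSketch {
    private final ConcurrentHashMap<String, Integer> counts =
        new ConcurrentHashMap<>();

    public int increment(String key) {
        return counts.merge(key, 1, Integer::sum);
    }

    public int decrement(String key) {
        // Returning null from the remapping function removes the entry,
        // so a count that drops to zero frees its slot.
        Integer v = counts.computeIfPresent(key,
            (k, c) -> c == 1 ? null : c - 1);
        return v == null ? 0 : v;
    }

    public static void main(String[] args) {
        StripedCountSketch m = new StripedCountSketch();
        System.out.println(m.increment("acl"));
        System.out.println(m.increment("acl"));
        System.out.println(m.decrement("acl"));
    }
}
```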

> Reduce Memory Overhead of AclFeature by avoiding AtomicInteger
> --
>
> Key: HDFS-15907
> URL: https://issues.apache.org/jira/browse/HDFS-15907
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-15907.001.patch
>
>
> In HDFS-15792 we made some changes to the AclFeature and ReferenceCountedMap 
> classes to address a rare bug when loading the FSImage in parallel.
> One change we made was to replace an int inside AclFeature with an 
> AtomicInteger to avoid synchronising the methods in AclFeature.
> Discussing this change with [~weichiu], he pointed out that while the 
> AclFeature cache is intended to reduce the count of AclFeature objects, on a 
> large cluster, it is possible for there to be many millions of AclFeature 
> objects.
> Previously, the int would have taken 4 bytes of heap.
> By moving to an AtomicInteger, we probably have an overhead of:
>  4 bytes (or 8 if the heap is over 32GB) for a reference to the AtomicInteger 
> object
>  12 bytes of overhead for the Java object header
>  4 bytes inside the AtomicInteger to store an int.
>  
> So the total heap overhead has gone from 4 bytes to 20 bytes just to use an 
> AtomicInteger.
> Therefore I think it makes sense to remove the AtomicInteger and just 
> synchronise the methods of AclFeature where the value is incremented / 
> decremented / retrieved.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15907) Reduce Memory Overhead of AclFeature by avoiding AtomicInteger

2021-03-22 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17306107#comment-17306107
 ] 

Ayush Saxena commented on HDFS-15907:
-

I think he was talking about changing {{ConcurrentHashMap}} as well and having 
the methods accessing it synchronized? From this:
{quote}But we should avoid using ConcurrentHashMap. It is known to have 
performance issues and adds a lot of memory overhead. So whoever is using ACLs 
heavily will have larger namespace requirements - very bad for large clusters. 
 Would prefer proper synchronization of the methods in ReferenceCountMap.
{quote}
 

I didn't follow the previous Jira, so I'm not very sure whether that is 
addressed or is something unrelated; just give it a check before you folks 
conclude. Nothing blocking as such from my side.

> Reduce Memory Overhead of AclFeature by avoiding AtomicInteger
> --
>
> Key: HDFS-15907
> URL: https://issues.apache.org/jira/browse/HDFS-15907
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-15907.001.patch
>
>
> In HDFS-15792 we made some changes to the AclFeature and ReferenceCountedMap 
> classes to address a rare bug when loading the FSImage in parallel.
> One change we made was to replace an int inside AclFeature with an 
> AtomicInteger to avoid synchronising the methods in AclFeature.
> Discussing this change with [~weichiu], he pointed out that while the 
> AclFeature cache is intended to reduce the count of AclFeature objects, on a 
> large cluster, it is possible for there to be many millions of AclFeature 
> objects.
> Previously, the int would have taken 4 bytes of heap.
> By moving to an AtomicInteger, we probably have an overhead of:
>  4 bytes (or 8 if the heap is over 32GB) for a reference to the AtomicInteger 
> object
>  12 bytes of overhead for the Java object header
>  4 bytes inside the AtomicInteger to store an int.
>  
> So the total heap overhead has gone from 4 bytes to 20 bytes just to use an 
> AtomicInteger.
> Therefore I think it makes sense to remove the AtomicInteger and just 
> synchronise the methods of AclFeature where the value is incremented / 
> decremented / retrieved.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15901) Solve the problem of DN repeated block reports occupying too many RPCs during Safemode

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15901?focusedWorklogId=569637=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569637
 ]

ASF GitHub Bot logged work on HDFS-15901:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 10:56
Start Date: 22/Mar/21 10:56
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2782:
URL: https://github.com/apache/hadoop/pull/2782#issuecomment-803970958


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 52s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 16s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 55s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2782/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 6 new + 164 unchanged 
- 0 fixed = 170 total (was 164)  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 19s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 28s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 343m 47s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2782/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 435m 56s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   |   | hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
   |   | hadoop.hdfs.server.datanode.TestIncrementalBrVariations |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.TestPersistBlocks |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   |   | hadoop.hdfs.server.datanode.TestBlockScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2782/2/artifact/out/Dockerfile
 |
   | GITHUB PR | 

[jira] [Commented] (HDFS-15907) Reduce Memory Overhead of AclFeature by avoiding AtomicInteger

2021-03-22 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17306102#comment-17306102
 ] 

Xiaoqiao He commented on HDFS-15907:


[~ayushtkn] would you mind giving another check? I just noticed that you 
mentioned HDFS-15792; it seems [^HDFS-15907.001.patch] is the same as shv's 
comment there. Thanks.

> Reduce Memory Overhead of AclFeature by avoiding AtomicInteger
> --
>
> Key: HDFS-15907
> URL: https://issues.apache.org/jira/browse/HDFS-15907
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-15907.001.patch
>
>
> In HDFS-15792 we made some changes to the AclFeature and ReferenceCountedMap 
> classes to address a rare bug when loading the FSImage in parallel.
> One change we made was to replace an int inside AclFeature with an 
> AtomicInteger to avoid synchronising the methods in AclFeature.
> Discussing this change with [~weichiu], he pointed out that while the 
> AclFeature cache is intended to reduce the count of AclFeature objects, on a 
> large cluster, it is possible for there to be many millions of AclFeature 
> objects.
> Previously, the int would have taken 4 bytes of heap.
> By moving to an AtomicInteger, we probably have an overhead of:
>  4 bytes (or 8 if the heap is over 32GB) for a reference to the AtomicInteger 
> object
>  12 bytes of overhead for the Java object header
>  4 bytes inside the AtomicInteger to store an int.
>  
> So the total heap overhead has gone from 4 bytes to 20 bytes just to use an 
> AtomicInteger.
> Therefore I think it makes sense to remove the AtomicInteger and just 
> synchronise the methods of AclFeature where the value is incremented / 
> decremented / retrieved.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15787) Remove unnecessary Lease Renew in FSNamesystem#internalReleaseLease

2021-03-22 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17306096#comment-17306096
 ] 

Xiaoqiao He edited comment on HDFS-15787 at 3/22/21, 10:42 AM:
---

Committed to trunk only. Thanks [~leosun08] for your work! Thanks [~ayushtkn] 
for your reviews and reminders!
BTW, I just changed the issue type from `Bug` to `Improvement` since it is not 
wrong logic, only a duplicated invocation. Please let me know if this is not 
proper. Thanks again. 


was (Author: hexiaoqiao):
Committed to trunk only. Thanks [~leosun08] for your works! Thanks [~ayushtkn] 
for your reviews and reminders!

> Remove unnecessary Lease Renew  in FSNamesystem#internalReleaseLease
> 
>
> Key: HDFS-15787
> URL: https://issues.apache.org/jira/browse/HDFS-15787
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15787.001.patch, HDFS-15787.002.patch
>
>
> The method of FSNamesystem#internalReleaseLease() as follow:
>  
> {code:java}
> boolean internalReleaseLease(Lease lease, String src, INodesInPath iip,
> String recoveryLeaseHolder) throws IOException {
>   ...
> // Start recovery of the last block for this file
> // Only do so if there is no ongoing recovery for this block,
> // or the previous recovery for this block timed out.
> if (blockManager.addBlockRecoveryAttempt(lastBlock)) {
>   long blockRecoveryId = nextGenerationStamp(
>   blockManager.isLegacyBlock(lastBlock));
>   if(copyOnTruncate) {
> lastBlock.setGenerationStamp(blockRecoveryId);
>   } else if(truncateRecovery) {
> recoveryBlock.setGenerationStamp(blockRecoveryId);
>   }
>   uc.initializeBlockRecovery(lastBlock, blockRecoveryId, true);
>   // Cannot close file right now, since the last block requires recovery.
>   // This may potentially cause infinite loop in lease recovery
>   // if there are no valid replicas on data-nodes.
>   NameNode.stateChangeLog.warn(
>   "DIR* NameSystem.internalReleaseLease: " +
>   "File " + src + " has not been closed." +
>   " Lease recovery is in progress. " +
>   "RecoveryId = " + blockRecoveryId + " for block " + lastBlock);
> }
> lease = reassignLease(lease, src, recoveryLeaseHolder, pendingFile);
> leaseManager.renewLease(lease);
> break;
>   }
>   return false;
> }
> {code}
>  LeaseManager#renewLease is already called inside 
> FSNamesystem#reassignLease => FSNamesystem#reassignLeaseInternal.
>  So there is no need to call LeaseManager#renewLease again after 
> reassignLease.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15787) Remove unnecessary Lease Renew in FSNamesystem#internalReleaseLease

2021-03-22 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-15787:
---
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk only. Thanks [~leosun08] for your work! Thanks [~ayushtkn] 
for your reviews and reminders!

> Remove unnecessary Lease Renew  in FSNamesystem#internalReleaseLease
> 
>
> Key: HDFS-15787
> URL: https://issues.apache.org/jira/browse/HDFS-15787
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15787.001.patch, HDFS-15787.002.patch
>
>
> The relevant code in FSNamesystem#internalReleaseLease() is as follows:
>  
> {code:java}
> boolean internalReleaseLease(Lease lease, String src, INodesInPath iip,
> String recoveryLeaseHolder) throws IOException {
>   ...
> // Start recovery of the last block for this file
> // Only do so if there is no ongoing recovery for this block,
> // or the previous recovery for this block timed out.
> if (blockManager.addBlockRecoveryAttempt(lastBlock)) {
>   long blockRecoveryId = nextGenerationStamp(
>   blockManager.isLegacyBlock(lastBlock));
>   if(copyOnTruncate) {
> lastBlock.setGenerationStamp(blockRecoveryId);
>   } else if(truncateRecovery) {
> recoveryBlock.setGenerationStamp(blockRecoveryId);
>   }
>   uc.initializeBlockRecovery(lastBlock, blockRecoveryId, true);
>   // Cannot close file right now, since the last block requires recovery.
>   // This may potentially cause infinite loop in lease recovery
>   // if there are no valid replicas on data-nodes.
>   NameNode.stateChangeLog.warn(
>   "DIR* NameSystem.internalReleaseLease: " +
>   "File " + src + " has not been closed." +
>   " Lease recovery is in progress. " +
>   "RecoveryId = " + blockRecoveryId + " for block " + lastBlock);
> }
> lease = reassignLease(lease, src, recoveryLeaseHolder, pendingFile);
> leaseManager.renewLease(lease);
> break;
>   }
>   return false;
> }
> {code}
> LeaseManager#renewLease is already called inside
> FSNamesystem#reassignLease => FSNamesystem#reassignLeaseInternal,
> so there is no need to call LeaseManager#renewLease again after
> reassignLease returns.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15787) Remove unnecessary Lease Renew in FSNamesystem#internalReleaseLease

2021-03-22 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-15787:
---
Issue Type: Improvement  (was: Bug)

> Remove unnecessary Lease Renew  in FSNamesystem#internalReleaseLease
> 
>
> Key: HDFS-15787
> URL: https://issues.apache.org/jira/browse/HDFS-15787
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-15787.001.patch, HDFS-15787.002.patch
>
>
> The relevant code in FSNamesystem#internalReleaseLease() is as follows:
>  
> {code:java}
> boolean internalReleaseLease(Lease lease, String src, INodesInPath iip,
> String recoveryLeaseHolder) throws IOException {
>   ...
> // Start recovery of the last block for this file
> // Only do so if there is no ongoing recovery for this block,
> // or the previous recovery for this block timed out.
> if (blockManager.addBlockRecoveryAttempt(lastBlock)) {
>   long blockRecoveryId = nextGenerationStamp(
>   blockManager.isLegacyBlock(lastBlock));
>   if(copyOnTruncate) {
> lastBlock.setGenerationStamp(blockRecoveryId);
>   } else if(truncateRecovery) {
> recoveryBlock.setGenerationStamp(blockRecoveryId);
>   }
>   uc.initializeBlockRecovery(lastBlock, blockRecoveryId, true);
>   // Cannot close file right now, since the last block requires recovery.
>   // This may potentially cause infinite loop in lease recovery
>   // if there are no valid replicas on data-nodes.
>   NameNode.stateChangeLog.warn(
>   "DIR* NameSystem.internalReleaseLease: " +
>   "File " + src + " has not been closed." +
>   " Lease recovery is in progress. " +
>   "RecoveryId = " + blockRecoveryId + " for block " + lastBlock);
> }
> lease = reassignLease(lease, src, recoveryLeaseHolder, pendingFile);
> leaseManager.renewLease(lease);
> break;
>   }
>   return false;
> }
> {code}
> LeaseManager#renewLease is already called inside
> FSNamesystem#reassignLease => FSNamesystem#reassignLeaseInternal,
> so there is no need to call LeaseManager#renewLease again after
> reassignLease returns.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-22 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-15911:

Issue Type: Improvement  (was: Task)

> Provide blocks moved count in Balancer iteration result
> ---
>
> Key: HDFS-15911
> URL: https://issues.apache.org/jira/browse/HDFS-15911
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Balancer provides a Result for each iteration, containing info such as 
> exitStatus, bytesLeftToMove, bytesBeingMoved, etc. We should also provide the 
> blocksMoved count from NameNodeConnector and print it with the rest of the 
> details in Result#print().
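A hypothetical sketch (these are not the actual Balancer classes or fields) of how a blocksMoved counter could ride along with the existing iteration-result fields and appear in the printed summary:

```java
public class BalancerResultSketch {
    static class Result {
        final int exitStatus;
        final long bytesLeftToMove;
        final long bytesBeingMoved;
        final long blocksMoved;   // the new field proposed in the issue

        Result(int exitStatus, long bytesLeftToMove,
               long bytesBeingMoved, long blocksMoved) {
            this.exitStatus = exitStatus;
            this.bytesLeftToMove = bytesLeftToMove;
            this.bytesBeingMoved = bytesBeingMoved;
            this.blocksMoved = blocksMoved;
        }

        // Prints all fields together, blocksMoved included.
        String print() {
            return String.format(
                "exit=%d leftToMove=%d beingMoved=%d blocksMoved=%d",
                exitStatus, bytesLeftToMove, bytesBeingMoved, blocksMoved);
        }
    }

    public static void main(String[] args) {
        System.out.println(new Result(0, 1024, 512, 7).print());
    }
}
```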



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15787) Remove unnecessary Lease Renew in FSNamesystem#internalReleaseLease

2021-03-22 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17306086#comment-17306086
 ] 

Ayush Saxena edited comment on HDFS-15787 at 3/22/21, 10:21 AM:


+1

[~hexiaoqiao] IMO we should keep it trunk-only as of now, and backport later if 
we don't see any issues for some time.


was (Author: ayushtkn):
+1

> Remove unnecessary Lease Renew  in FSNamesystem#internalReleaseLease
> 
>
> Key: HDFS-15787
> URL: https://issues.apache.org/jira/browse/HDFS-15787
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-15787.001.patch, HDFS-15787.002.patch
>
>
> The relevant code in FSNamesystem#internalReleaseLease() is as follows:
>  
> {code:java}
> boolean internalReleaseLease(Lease lease, String src, INodesInPath iip,
> String recoveryLeaseHolder) throws IOException {
>   ...
> // Start recovery of the last block for this file
> // Only do so if there is no ongoing recovery for this block,
> // or the previous recovery for this block timed out.
> if (blockManager.addBlockRecoveryAttempt(lastBlock)) {
>   long blockRecoveryId = nextGenerationStamp(
>   blockManager.isLegacyBlock(lastBlock));
>   if(copyOnTruncate) {
> lastBlock.setGenerationStamp(blockRecoveryId);
>   } else if(truncateRecovery) {
> recoveryBlock.setGenerationStamp(blockRecoveryId);
>   }
>   uc.initializeBlockRecovery(lastBlock, blockRecoveryId, true);
>   // Cannot close file right now, since the last block requires recovery.
>   // This may potentially cause infinite loop in lease recovery
>   // if there are no valid replicas on data-nodes.
>   NameNode.stateChangeLog.warn(
>   "DIR* NameSystem.internalReleaseLease: " +
>   "File " + src + " has not been closed." +
>   " Lease recovery is in progress. " +
>   "RecoveryId = " + blockRecoveryId + " for block " + lastBlock);
> }
> lease = reassignLease(lease, src, recoveryLeaseHolder, pendingFile);
> leaseManager.renewLease(lease);
> break;
>   }
>   return false;
> }
> {code}
> LeaseManager#renewLease is already called inside
> FSNamesystem#reassignLease => FSNamesystem#reassignLeaseInternal,
> so there is no need to call LeaseManager#renewLease again after
> reassignLease returns.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15787) Remove unnecessary Lease Renew in FSNamesystem#internalReleaseLease

2021-03-22 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17306086#comment-17306086
 ] 

Ayush Saxena commented on HDFS-15787:
-

+1

> Remove unnecessary Lease Renew  in FSNamesystem#internalReleaseLease
> 
>
> Key: HDFS-15787
> URL: https://issues.apache.org/jira/browse/HDFS-15787
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-15787.001.patch, HDFS-15787.002.patch
>
>
> The relevant code in FSNamesystem#internalReleaseLease() is as follows:
>  
> {code:java}
> boolean internalReleaseLease(Lease lease, String src, INodesInPath iip,
> String recoveryLeaseHolder) throws IOException {
>   ...
> // Start recovery of the last block for this file
> // Only do so if there is no ongoing recovery for this block,
> // or the previous recovery for this block timed out.
> if (blockManager.addBlockRecoveryAttempt(lastBlock)) {
>   long blockRecoveryId = nextGenerationStamp(
>   blockManager.isLegacyBlock(lastBlock));
>   if(copyOnTruncate) {
> lastBlock.setGenerationStamp(blockRecoveryId);
>   } else if(truncateRecovery) {
> recoveryBlock.setGenerationStamp(blockRecoveryId);
>   }
>   uc.initializeBlockRecovery(lastBlock, blockRecoveryId, true);
>   // Cannot close file right now, since the last block requires recovery.
>   // This may potentially cause infinite loop in lease recovery
>   // if there are no valid replicas on data-nodes.
>   NameNode.stateChangeLog.warn(
>   "DIR* NameSystem.internalReleaseLease: " +
>   "File " + src + " has not been closed." +
>   " Lease recovery is in progress. " +
>   "RecoveryId = " + blockRecoveryId + " for block " + lastBlock);
> }
> lease = reassignLease(lease, src, recoveryLeaseHolder, pendingFile);
> leaseManager.renewLease(lease);
> break;
>   }
>   return false;
> }
> {code}
> LeaseManager#renewLease is already called inside
> FSNamesystem#reassignLease => FSNamesystem#reassignLeaseInternal,
> so there is no need to call LeaseManager#renewLease again after
> reassignLease returns.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15907) Reduce Memory Overhead of AclFeature by avoiding AtomicInteger

2021-03-22 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17306082#comment-17306082
 ] 

Xiaoqiao He commented on HDFS-15907:


 [^HDFS-15907.001.patch] LGTM. +1 from my side. Will commit it shortly if no 
other comments.

> Reduce Memory Overhead of AclFeature by avoiding AtomicInteger
> --
>
> Key: HDFS-15907
> URL: https://issues.apache.org/jira/browse/HDFS-15907
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-15907.001.patch
>
>
> In HDFS-15792 we made some changes to the AclFeature and ReferenceCountedMap 
> classes to address a rare bug when loading the FSImage in parallel.
> One change we made was to replace an int inside AclFeature with an 
> AtomicInteger to avoid synchronising the methods in AclFeature.
> Discussing this change with [~weichiu], he pointed out that while the 
> AclFeature cache is intended to reduce the count of AclFeature objects, on a 
> large cluster, it is possible for there to be many millions of AclFeature 
> objects.
> Previously, the int would have taken 4 bytes of heap.
> By moving to an AtomicInteger, we probably add an overhead of:
>  4 bytes (or 8 if the heap is over 32GB) for a reference to the 
> AtomicInteger object
>  12 bytes of Java object header overhead
>  4 bytes inside the AtomicInteger to store the int.
>  
> So the total heap overhead has gone from 4 bytes to 20 bytes just to use an 
> AtomicInteger.
> Therefore I think it makes sense to remove the AtomicInteger and just 
> synchronise the methods of AclFeature where the value is incremented / 
> decremented / retrieved.
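A minimal sketch of the change being proposed: a plain int guarded by synchronized accessors instead of an AtomicInteger field. The class and method names here are illustrative, not the actual AclFeature code; the behavior (thread-safe increment/decrement/get) matches what the AtomicInteger provided, while the per-object counter cost drops back to the 4-byte int.

```java
public class SyncCounterSketch {
    static class RefCounted {
        // A plain int: 4 bytes in the containing object, no extra
        // AtomicInteger object and no extra reference.
        private int refCount;

        synchronized int incrementAndGet() { return ++refCount; }
        synchronized int decrementAndGet() { return --refCount; }
        synchronized int get()             { return refCount; }
    }

    public static void main(String[] args) {
        RefCounted rc = new RefCounted();
        rc.incrementAndGet();
        rc.incrementAndGet();
        rc.decrementAndGet();
        System.out.println(rc.get()); // 1
    }
}
```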



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15787) Remove unnecessary Lease Renew in FSNamesystem#internalReleaseLease

2021-03-22 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17306081#comment-17306081
 ] 

Xiaoqiao He commented on HDFS-15787:


Thanks [~leosun08]. After reviewing [^HDFS-15787.002.patch] again, I think it 
is safe to push into the codebase. I would like to commit it to trunk shortly 
if no objections. cc [~ayushtkn]

> Remove unnecessary Lease Renew  in FSNamesystem#internalReleaseLease
> 
>
> Key: HDFS-15787
> URL: https://issues.apache.org/jira/browse/HDFS-15787
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-15787.001.patch, HDFS-15787.002.patch
>
>
> The relevant code in FSNamesystem#internalReleaseLease() is as follows:
>  
> {code:java}
> boolean internalReleaseLease(Lease lease, String src, INodesInPath iip,
> String recoveryLeaseHolder) throws IOException {
>   ...
> // Start recovery of the last block for this file
> // Only do so if there is no ongoing recovery for this block,
> // or the previous recovery for this block timed out.
> if (blockManager.addBlockRecoveryAttempt(lastBlock)) {
>   long blockRecoveryId = nextGenerationStamp(
>   blockManager.isLegacyBlock(lastBlock));
>   if(copyOnTruncate) {
> lastBlock.setGenerationStamp(blockRecoveryId);
>   } else if(truncateRecovery) {
> recoveryBlock.setGenerationStamp(blockRecoveryId);
>   }
>   uc.initializeBlockRecovery(lastBlock, blockRecoveryId, true);
>   // Cannot close file right now, since the last block requires recovery.
>   // This may potentially cause infinite loop in lease recovery
>   // if there are no valid replicas on data-nodes.
>   NameNode.stateChangeLog.warn(
>   "DIR* NameSystem.internalReleaseLease: " +
>   "File " + src + " has not been closed." +
>   " Lease recovery is in progress. " +
>   "RecoveryId = " + blockRecoveryId + " for block " + lastBlock);
> }
> lease = reassignLease(lease, src, recoveryLeaseHolder, pendingFile);
> leaseManager.renewLease(lease);
> break;
>   }
>   return false;
> }
> {code}
> LeaseManager#renewLease is already called inside
> FSNamesystem#reassignLease => FSNamesystem#reassignLeaseInternal,
> so there is no need to call LeaseManager#renewLease again after
> reassignLease returns.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15906) Close FSImage and FSNamesystem after formatting is complete

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15906?focusedWorklogId=569610&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569610
 ]

ASF GitHub Bot logged work on HDFS-15906:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 10:03
Start Date: 22/Mar/21 10:03
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #2788:
URL: https://github.com/apache/hadoop/pull/2788#issuecomment-803935750


   Failed junit tests:
   hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots 
   hadoop.hdfs.server.datanode.TestIncrementalBrVariations 
   hadoop.hdfs.TestPersistBlocks 
   hadoop.hdfs.server.namenode.TestDecommissioningStatus 
   hadoop.hdfs.TestDFSShell 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.datanode.TestBlockScanner 
   hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean 
   hadoop.hdfs.server.namenode.ha.TestEditLogTailer 
   hadoop.hdfs.server.namenode.ha.TestBootstrapStandby 
   
   Hi @tasanuma, those failed unit tests are unrelated to this change, and 
they all pass locally.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 569610)
Time Spent: 1.5h  (was: 1h 20m)

> Close FSImage and FSNamesystem after formatting is complete
> ---
>
> Key: HDFS-15906
> URL: https://issues.apache.org/jira/browse/HDFS-15906
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Close the FSImage and FSNamesystem after formatting is complete, in 
> org.apache.hadoop.hdfs.server.namenode#format.
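A hedged sketch of the cleanup pattern this issue asks for: making sure both resources are closed once the formatting work finishes, even if the work (or one close) throws. The names below are illustrative stand-ins, not the real FSImage/FSNamesystem types; try-with-resources guarantees both close() calls run, in reverse declaration order.

```java
import java.io.Closeable;
import java.io.IOException;

public class FormatCloseSketch {
    // Runs a stand-in "format" and returns how many resources were closed.
    static int runFormat() throws IOException {
        final int[] closed = {0};
        Closeable fsImage = () -> closed[0]++;       // stand-in for FSImage
        Closeable fsNamesystem = () -> closed[0]++;  // stand-in for FSNamesystem
        // try-with-resources closes both, even if the body throws.
        try (Closeable ns = fsNamesystem; Closeable img = fsImage) {
            // ... the actual format(...) work would happen here ...
        }
        return closed[0];
    }

    public static void main(String[] args) throws IOException {
        System.out.println(runFormat()); // 2: both resources closed
    }
}
```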



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15906) Close FSImage and FSNamesystem after formatting is complete

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15906?focusedWorklogId=569589&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569589
 ]

ASF GitHub Bot logged work on HDFS-15906:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 09:40
Start Date: 22/Mar/21 09:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2788:
URL: https://github.com/apache/hadoop/pull/2788#issuecomment-803917713


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 18s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 41s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 25s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 328m 45s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2788/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 421m 47s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots |
   |   | hadoop.hdfs.server.datanode.TestIncrementalBrVariations |
   |   | hadoop.hdfs.TestPersistBlocks |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.server.datanode.TestBlockScanner |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2788/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2788 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 4a9ee8cce8ab 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 

[jira] [Work started] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-22 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-15911 started by Viraj Jasani.
---
> Provide blocks moved count in Balancer iteration result
> ---
>
> Key: HDFS-15911
> URL: https://issues.apache.org/jira/browse/HDFS-15911
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Balancer provides a Result for each iteration, containing info such as 
> exitStatus, bytesLeftToMove, bytesBeingMoved, etc. We should also provide the 
> blocksMoved count from NameNodeConnector and print it with the rest of the 
> details in Result#print().



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-22 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HDFS-15911:

Target Version/s: 3.3.1, 3.4.0, 3.1.5, 2.10.2, 3.2.3

> Provide blocks moved count in Balancer iteration result
> ---
>
> Key: HDFS-15911
> URL: https://issues.apache.org/jira/browse/HDFS-15911
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Balancer provides a Result for each iteration, containing info such as 
> exitStatus, bytesLeftToMove, bytesBeingMoved, etc. We should also provide the 
> blocksMoved count from NameNodeConnector and print it with the rest of the 
> details in Result#print().



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15908) Possible Resource Leak in org.apache.hadoop.hdfs.qjournal.server.Journal

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15908?focusedWorklogId=569584&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569584
 ]

ASF GitHub Bot logged work on HDFS-15908:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 09:06
Start Date: 22/Mar/21 09:06
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on a change in pull request #2790:
URL: https://github.com/apache/hadoop/pull/2790#discussion_r598532643



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java
##
@@ -264,9 +264,9 @@ void format(NamespaceInfo nsInfo, boolean force) throws 
IOException {
*/
   @Override // Closeable
   public void close() throws IOException {
-storage.close();

Review comment:
   This is because it can throw IOException?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 569584)
Time Spent: 0.5h  (was: 20m)

> Possible Resource Leak in org.apache.hadoop.hdfs.qjournal.server.Journal
> 
>
> Key: HDFS-15908
> URL: https://issues.apache.org/jira/browse/HDFS-15908
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Narges Shadab
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> We noticed a possible resource leak 
> [here|https://github.com/apache/hadoop/blob/cd44e917d0b331a2d1e1fa63fdd498eac01ae323/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/Journal.java#L266].
>  The call to close on {{storage}} at line 267 can throw an exception. If it 
> occurs, then {{committedTxnId}} and {{curSegment}} are never closed.
> I'll submit a pull request to fix it.
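The leak shape described above can be sketched without the real Journal code: if the resources are closed with sequential bare calls and the first close() throws, the later ones never run. One well-known remedy (among others, such as Hadoop's IOUtils cleanup helpers) is try-with-resources, which attempts every close even when one throws. The resource names below mirror the issue but the types are illustrative.

```java
import java.io.Closeable;
import java.io.IOException;

public class JournalCloseSketch {
    // Returns the order in which close() was attempted.
    static String closeAll() {
        StringBuilder order = new StringBuilder();
        Closeable storage = () -> {
            order.append("storage,");
            throw new IOException("storage close failed");
        };
        Closeable committedTxnId = () -> order.append("txn,");
        Closeable curSegment = () -> order.append("seg,");
        // try-with-resources closes in reverse declaration order and still
        // closes the remaining resources when one close() throws.
        try (Closeable s = storage; Closeable t = committedTxnId;
             Closeable c = curSegment) {
            // ... journal in use ...
        } catch (IOException e) {
            // the failure from storage.close() surfaces here, after the
            // other two resources were already closed
        }
        return order.toString();
    }

    public static void main(String[] args) {
        System.out.println(closeAll()); // seg,txn,storage,
    }
}
```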



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-15911:
--
Labels: pull-request-available  (was: )

> Provide blocks moved count in Balancer iteration result
> ---
>
> Key: HDFS-15911
> URL: https://issues.apache.org/jira/browse/HDFS-15911
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Balancer provides a Result for each iteration, containing info such as 
> exitStatus, bytesLeftToMove, bytesBeingMoved, etc. We should also provide the 
> blocksMoved count from NameNodeConnector and print it with the rest of the 
> details in Result#print().



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15911?focusedWorklogId=569581&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569581
 ]

ASF GitHub Bot logged work on HDFS-15911:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 08:53
Start Date: 22/Mar/21 08:53
Worklog Time Spent: 10m 
  Work Description: virajjasani opened a new pull request #2794:
URL: https://github.com/apache/hadoop/pull/2794


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 569581)
Remaining Estimate: 0h
Time Spent: 10m

> Provide blocks moved count in Balancer iteration result
> ---
>
> Key: HDFS-15911
> URL: https://issues.apache.org/jira/browse/HDFS-15911
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Balancer provides a Result for each iteration, containing info such as 
> exitStatus, bytesLeftToMove, bytesBeingMoved, etc. We should also provide the 
> blocksMoved count from NameNodeConnector and print it with the rest of the 
> details in Result#print().



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15911) Provide blocks moved count in Balancer iteration result

2021-03-22 Thread Viraj Jasani (Jira)
Viraj Jasani created HDFS-15911:
---

 Summary: Provide blocks moved count in Balancer iteration result
 Key: HDFS-15911
 URL: https://issues.apache.org/jira/browse/HDFS-15911
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Viraj Jasani
Assignee: Viraj Jasani


Balancer provides a Result for each iteration, containing info such as 
exitStatus, bytesLeftToMove, bytesBeingMoved, etc. We should also provide the 
blocksMoved count from NameNodeConnector and print it with the rest of the 
details in Result#print().



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15472) Erasure Coding: Support fallback read when zero copy is not supported

2021-03-22 Thread dzcxzl (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17305983#comment-17305983
 ] 

dzcxzl commented on HDFS-15472:
---

Can you review this patch when you are free? [~ayushtkn] [~weichiu]

> Erasure Coding: Support fallback read when zero copy is not supported
> -
>
> Key: HDFS-15472
> URL: https://issues.apache.org/jira/browse/HDFS-15472
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: dzcxzl
>Priority: Trivial
> Attachments: HDFS-15472.000.patch, HDFS-15472.001.patch
>
>
> Per [HDFS-8203|https://issues.apache.org/jira/browse/HDFS-8203], EC does not 
> support zero-copy read, and there is currently no fallback to a regular read, 
> so it throws an exception:
> {code:java}
> Caused by: java.lang.UnsupportedOperationException: Not support enhanced byte 
> buffer access.
> at 
> org.apache.hadoop.hdfs.DFSStripedInputStream.read(DFSStripedInputStream.java:524)
> at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:188)
> at 
> org.apache.hadoop.hive.shims.ZeroCopyShims$ZeroCopyAdapter.readBuffer(ZeroCopyShims.java:79)
> {code}
>  
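The fallback this issue asks for can be sketched as: try the enhanced (zero-copy) read, and when the stream reports it is unsupported, fall back to a plain ByteBuffer read instead of propagating the exception. The `Stream` interface and method names below are illustrative stand-ins, not the real DFSStripedInputStream/FSDataInputStream API.

```java
import java.nio.ByteBuffer;

public class ZeroCopyFallbackSketch {
    // Stand-in for a stream that may not support enhanced byte buffer access.
    interface Stream {
        ByteBuffer readZeroCopy(int maxLength); // may throw UnsupportedOperationException
        int read(ByteBuffer buf);               // plain read, always available
    }

    static int readWithFallback(Stream in, ByteBuffer buf) {
        try {
            ByteBuffer zc = in.readZeroCopy(buf.remaining());
            int n = zc.remaining();
            buf.put(zc);
            return n;
        } catch (UnsupportedOperationException e) {
            // fall back to an ordinary copy-based read instead of failing
            return in.read(buf);
        }
    }

    // Simulates an EC stream whose zero-copy path is unsupported.
    static int demo() {
        Stream ecStream = new Stream() {
            public ByteBuffer readZeroCopy(int maxLength) {
                throw new UnsupportedOperationException(
                    "Not support enhanced byte buffer access.");
            }
            public int read(ByteBuffer buf) {
                buf.put((byte) 1);
                return 1;
            }
        };
        return readWithFallback(ecStream, ByteBuffer.allocate(8));
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 1: fell back to the plain read
    }
}
```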



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-15855) Solve the problem of incorrect EC progress when loading FsImage

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15855?focusedWorklogId=569554=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569554
 ]

ASF GitHub Bot logged work on HDFS-15855:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 07:03
Start Date: 22/Mar/21 07:03
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on pull request #2741:
URL: https://github.com/apache/hadoop/pull/2741#issuecomment-803815902


   @Hexiaoqiao , @jojochuang , I have submitted a new solution.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 569554)
Time Spent: 1h 10m  (was: 1h)

> Solve the problem of incorrect EC progress when loading FsImage
> ---
>
> Key: HDFS-15855
> URL: https://issues.apache.org/jira/browse/HDFS-15855
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: ec_progress.jpg
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> When loading the FsImage, if erasure-coding information exists, the EC loading 
> progress is displayed incorrectly during processing.
> FSImageFormatProtobuf#loadInternal():
> case ERASURE_CODING:
>     Step step = new Step(StepType.ERASURE_CODING_POLICIES);
>     prog.beginStep(Phase.LOADING_FSIMAGE, step);
>     loadErasureCodingSection(in);
>     prog.endStep(Phase.LOADING_FSIMAGE, step);
>     break;
> StartupProgress is not used here to record EC-related information.
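The fix idea, sketched with tiny stand-in classes (the real ones are Hadoop's StartupProgress/Step, not these), is to set a total for the erasure-coding step and bump a counter per policy loaded, rather than only begin/end-ing the step:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only: a tiny progress tracker showing the fix idea --
// record a total and increment a counter while loading the erasure-coding
// section, so the step reports real progress. Not the actual Hadoop API.
class ProgressTracker {
    private final Map<String, long[]> steps = new LinkedHashMap<>(); // {count, total}

    void beginStep(String step, long total) { steps.put(step, new long[]{0, total}); }
    void increment(String step)             { steps.get(step)[0]++; }
    double percent(String step) {
        long[] s = steps.get(step);
        return s[1] == 0 ? 1.0 : (double) s[0] / s[1];
    }
}

public class EcProgressSketch {
    public static void main(String[] args) {
        ProgressTracker prog = new ProgressTracker();
        int policies = 4; // pretend the section holds 4 EC policies
        prog.beginStep("ERASURE_CODING_POLICIES", policies);
        for (int i = 0; i < policies; i++) {
            // loadErasureCodingPolicy(...) would happen here
            prog.increment("ERASURE_CODING_POLICIES");
        }
        System.out.println(prog.percent("ERASURE_CODING_POLICIES")); // prints 1.0
    }
}
```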






[jira] [Work logged] (HDFS-15900) RBF: empty blockpool id on dfsrouter caused by UNAVAILABLE NameNode

2021-03-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15900?focusedWorklogId=569543=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-569543
 ]

ASF GitHub Bot logged work on HDFS-15900:
-

Author: ASF GitHub Bot
Created on: 22/Mar/21 06:34
Start Date: 22/Mar/21 06:34
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2787:
URL: https://github.com/apache/hadoop/pull/2787#issuecomment-803801256


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 37s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 52s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 24s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 18s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2787/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 5 new + 0 
unchanged - 0 fixed = 5 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m  7s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  17m 59s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2787/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  94m 56s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2787/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2787 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 36bd6387d6af 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c51d8f07a68648bd6dc0c52c5a77cad7b8c02f05 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 

[jira] [Work started] (HDFS-15901) Solve the problem of DN repeated block reports occupying too many RPCs during Safemode

2021-03-22 Thread JiangHua Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-15901 started by JiangHua Zhu.
---
> Solve the problem of DN repeated block reports occupying too many RPCs during 
> Safemode
> --
>
> Key: HDFS-15901
> URL: https://issues.apache.org/jira/browse/HDFS-15901
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> When the cluster exceeds thousands of nodes and the NameNode service is 
> restarted, all DataNodes send full block reports to the NameNode. During 
> SafeMode, some DataNodes may send their block reports multiple times, which 
> takes up too many RPCs. In fact, this is unnecessary.
> In this case, some block report leases will fail or time out, and in extreme 
> cases, the NameNode will always stay in Safe Mode.
> 2021-03-14 08:16:25,873 [78438700] - INFO  [Block report 
> processor:BlockManager@2158] - BLOCK* processReport 0xe: discarded 
> non-initial block report from DatanodeRegistration(:port, 
> datanodeUuid=, infoPort=, infoSecurePort=, 
> ipcPort=, storageInfo=lv=;nsid=;c=0) because namenode 
> still in startup phase
> 2021-03-14 08:16:31,521 [78444348] - INFO  [Block report 
> processor:BlockManager@2158] - BLOCK* processReport 0xe: discarded 
> non-initial block report from DatanodeRegistration(, 
> datanodeUuid=, infoPort=, infoSecurePort=, 
> ipcPort=, storageInfo=lv=;nsid=;c=0) because namenode 
> still in startup phase
> 2021-03-13 18:35:38,200 [29191027] - WARN  [Block report 
> processor:BlockReportLeaseManager@311] - BR lease 0x is not valid for 
> DN , because the DN is not in the pending set.
> 2021-03-13 18:36:08,143 [29220970] - WARN  [Block report 
> processor:BlockReportLeaseManager@311] - BR lease 0x is not valid for 
> DN , because the DN is not in the pending set.
> 2021-03-13 18:36:08,143 [29220970] - WARN  [Block report 
> processor:BlockReportLeaseManager@317] - BR lease 0x is not valid for 
> DN , because the lease has expired.
> 2021-03-13 18:36:08,145 [29220972] - WARN  [Block report 
> processor:BlockReportLeaseManager@317] - BR lease 0x is not valid for 
> DN , because the lease has expired.
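One way to picture the proposed behavior: during startup, accept the first full block report from each DataNode and discard repeats so they do not consume RPC handler time. The sketch below is a toy stand-in for that idea, not the actual BlockManager or BlockReportLeaseManager code.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch only: while the NameNode is still in the startup
// phase, process the first full block report per DataNode and discard
// duplicates, mirroring the "discarded non-initial block report" logs above.
class StartupReportFilter {
    private final Set<String> reported = new HashSet<>();

    /** @return true if this report should be processed, false if discarded. */
    boolean accept(String datanodeUuid) {
        return reported.add(datanodeUuid); // add() returns false on a repeat
    }
}

public class SafemodeReportSketch {
    public static void main(String[] args) {
        StartupReportFilter filter = new StartupReportFilter();
        System.out.println(filter.accept("dn-1")); // true  (first report)
        System.out.println(filter.accept("dn-1")); // false (repeat, discarded)
        System.out.println(filter.accept("dn-2")); // true
    }
}
```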


