[jira] [Work logged] (HDFS-16208) [FGL] Implement Delete API with FGL

2022-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16208?focusedWorklogId=784683&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-784683
 ]

ASF GitHub Bot logged work on HDFS-16208:
-

Author: ASF GitHub Bot
Created on: 24/Jun/22 18:31
Start Date: 24/Jun/22 18:31
Worklog Time Spent: 10m 
  Work Description: prasad-acit commented on PR #3376:
URL: https://github.com/apache/hadoop/pull/3376#issuecomment-1165831454

   > For delete using FGL, it may be safer to add child locks before fsn.removeLeasesAndINodes().
   
   Thanks @jianghuazhu for bringing up this issue.
   This is a very important task that we had missed. Recently, a similar issue was reported in our version, and we fixed it in a better way along with the child locks. I will update the patch soon. Thanks once again for the review and feedback.




Issue Time Tracking
---

Worklog Id: (was: 784683)
Time Spent: 50m  (was: 40m)

> [FGL] Implement Delete API with FGL
> ---
>
> Key: HDFS-16208
> URL: https://issues.apache.org/jira/browse/HDFS-16208
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Replace all global locks for file / directory deletion with FGL.
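As a rough illustration of the fine-grained locking pattern discussed in this issue, the sketch below takes write locks on the parent path and on each child before removing leases and inodes, in the spirit of the review comment about adding child locks before fsn.removeLeasesAndINodes(). The INodeLockManager, the path-keyed locks, and the method names are hypothetical stand-ins, not the actual HDFS-16208 implementation.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Illustrative sketch of fine-grained locking (FGL) for delete: instead of a
 * single global write lock, take write locks on the parent path and on every
 * child in the subtree before removing leases and inodes. All names here are
 * hypothetical and do not match the actual HDFS-16208 patch.
 */
public class FglDeleteSketch {

  /** One lock per path; a stand-in for per-inode locks. */
  static class INodeLockManager {
    private final Map<String, ReentrantReadWriteLock> locks = new ConcurrentHashMap<>();

    ReentrantReadWriteLock lockFor(String path) {
      return locks.computeIfAbsent(path, p -> new ReentrantReadWriteLock());
    }
  }

  private final INodeLockManager lockManager = new INodeLockManager();

  /** Delete a path, locking the parent and its children instead of the whole namespace. */
  public void delete(String parentPath, String[] childPaths) {
    ReentrantReadWriteLock parentLock = lockManager.lockFor(parentPath);
    parentLock.writeLock().lock();
    try {
      // Acquire child locks before touching leases/inodes, mirroring the
      // review comment that child locks should be held before
      // fsn.removeLeasesAndINodes().
      for (String child : childPaths) {
        lockManager.lockFor(child).writeLock().lock();
      }
      try {
        removeLeasesAndINodes(parentPath, childPaths); // placeholder for the real work
      } finally {
        for (String child : childPaths) {
          lockManager.lockFor(child).writeLock().unlock();
        }
      }
    } finally {
      parentLock.writeLock().unlock();
    }
  }

  private void removeLeasesAndINodes(String parent, String[] children) {
    // In the real NameNode this would drop leases and unlink inodes; here it is a no-op.
  }

  public static void main(String[] args) {
    new FglDeleteSketch().delete("/user/foo", new String[] {"/user/foo/a", "/user/foo/b"});
    System.out.println("delete completed under parent + child locks");
  }
}
{code}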






[jira] [Work logged] (HDFS-16641) [HDFS] Add RPC ReQueue Metrics

2022-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16641?focusedWorklogId=784665&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-784665
 ]

ASF GitHub Bot logged work on HDFS-16641:
-

Author: ASF GitHub Bot
Created on: 24/Jun/22 17:02
Start Date: 24/Jun/22 17:02
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4496:
URL: https://github.com/apache/hadoop/pull/4496#issuecomment-1165763929

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:--------:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 34s |  |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |  16m 11s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4496/1/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |  29m 24s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |  22m 46s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m  3s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m  4s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   3m  5s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   6m 54s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 52s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m  9s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |  24m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  21m 33s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 26s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4496/1/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 2 new + 195 unchanged - 0 fixed = 197 total (was 
195)  |
   | +1 :green_heart: |  mvnsite  |   3m 57s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 55s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   7m  8s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  27m 15s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 22s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 365m 40s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4496/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 615m 14s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestObserverNode |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4496/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4496 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 48fb58339e14 

[jira] [Work logged] (HDFS-16642) [HDFS] Moving selecting inputstream from JN in EditlogTailer out of FSNLock

2022-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16642?focusedWorklogId=784661&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-784661
 ]

ASF GitHub Bot logged work on HDFS-16642:
-

Author: ASF GitHub Bot
Created on: 24/Jun/22 16:26
Start Date: 24/Jun/22 16:26
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4497:
URL: https://github.com/apache/hadoop/pull/4497#issuecomment-1165735132

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:--------:|
   | +0 :ok: |  reexec  |   1m  4s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 57s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 43s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 48s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 41s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 31s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 46s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 408m 16s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4497/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  3s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 526m  2s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4497/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4497 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 30718586a920 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 9ce6b2ae5a7c8f4dabe15bf6559ef8f3b2f43a3d |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 

[jira] [Work logged] (HDFS-16593) Correct inaccurate BlocksRemoved metric on DataNode side

2022-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16593?focusedWorklogId=784617&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-784617
 ]

ASF GitHub Bot logged work on HDFS-16593:
-

Author: ASF GitHub Bot
Created on: 24/Jun/22 14:17
Start Date: 24/Jun/22 14:17
Worklog Time Spent: 10m 
  Work Description: ZanderXu commented on PR #4353:
URL: https://github.com/apache/hadoop/pull/4353#issuecomment-1165621559

   The failed UT 
`org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl.testConcurrentWriteAndDeleteBlock`
 has nothing to do with this modification and works fine locally.




Issue Time Tracking
---

Worklog Id: (was: 784617)
Time Spent: 50m  (was: 40m)

> Correct inaccurate BlocksRemoved metric on DataNode side
> 
>
> Key: HDFS-16593
> URL: https://issues.apache.org/jira/browse/HDFS-16593
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> While tracing the root cause of a production issue, I found that the
> BlocksRemoved metric on the DataNode side was inaccurate.
> {code:java}
> case DatanodeProtocol.DNA_INVALIDATE:
>   //
>   // Some local block(s) are obsolete and can be 
>   // safely garbage-collected.
>   //
>   Block toDelete[] = bcmd.getBlocks();
>   try {
> // using global fsdataset
> dn.getFSDataset().invalidate(bcmd.getBlockPoolId(), toDelete);
>   } catch(IOException e) {
> // Exceptions caught here are not expected to be disk-related.
> throw e;
>   }
>   dn.metrics.incrBlocksRemoved(toDelete.length);
>   break;
> {code}
> The count can be wrong because, even if the invalidate method throws an exception,
> some blocks may already have been deleted successfully internally.
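A minimal sketch of one way to keep the metric accurate under partial failure is shown below: count each block as it is successfully invalidated instead of incrementing by the whole batch size afterwards. The BlockStore interface, Block class, and AtomicLong counter are simplified stand-ins, not the DataNode's FsDataset or metrics classes.

{code:java}
import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;

/**
 * Sketch of counting removed blocks per successful deletion rather than by the
 * whole batch size, so a partial failure still updates the metric accurately.
 * BlockStore, Block, and the counter are simplified stand-ins for the
 * DataNode's FsDataset and metrics classes.
 */
public class BlocksRemovedMetricSketch {

  static class Block {
    final long id;
    Block(long id) { this.id = id; }
  }

  /** Stand-in for the dataset; deleting an individual block may fail. */
  interface BlockStore {
    void invalidate(Block b) throws IOException;
  }

  private final AtomicLong blocksRemoved = new AtomicLong();

  /** Invalidate a batch, incrementing the metric only for blocks actually removed. */
  public void invalidateAll(BlockStore store, Block[] toDelete) throws IOException {
    IOException firstFailure = null;
    for (Block b : toDelete) {
      try {
        store.invalidate(b);
        blocksRemoved.incrementAndGet(); // count this block even if a later one fails
      } catch (IOException e) {
        if (firstFailure == null) {
          firstFailure = e;
        }
      }
    }
    if (firstFailure != null) {
      throw firstFailure; // still surface the error, but the metric stays accurate
    }
  }

  public long getBlocksRemoved() {
    return blocksRemoved.get();
  }

  public static void main(String[] args) {
    BlocksRemovedMetricSketch sketch = new BlocksRemovedMetricSketch();
    // A store that fails on odd block ids, to show the metric still counts the others.
    BlockStore store = b -> {
      if (b.id % 2 != 0) {
        throw new IOException("disk error for block " + b.id);
      }
    };
    try {
      sketch.invalidateAll(store, new Block[] {new Block(0), new Block(1), new Block(2)});
    } catch (IOException expected) {
      // the failure is still reported to the caller
    }
    System.out.println("blocksRemoved = " + sketch.getBlocksRemoved()); // 2
  }
}
{code}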






[jira] [Updated] (HDFS-16453) Upgrade okhttp from 2.7.5 to 4.9.3

2022-06-24 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-16453:
--
Release Note: okhttp has been updated to address CVE-2021-0341  (was: 
okhttp has been updated to address CVE-2018-20200 and CVE-2018-20200)

> Upgrade okhttp from 2.7.5 to 4.9.3
> --
>
> Key: HDFS-16453
> URL: https://issues.apache.org/jira/browse/HDFS-16453
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.3.1
>Reporter: Ivan Viaznikov
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {{org.apache.hadoop:hadoop-hdfs-client}} comes with 
> {{com.squareup.okhttp:okhttp:2.7.5}} as a dependency, which is vulnerable to 
> an information disclosure issue due to how the contents of sensitive headers, 
> such as the {{Authorization}} header, can be logged when an 
> {{IllegalArgumentException}} is thrown.
> This issue could allow an attacker or malicious user who has access to the 
> logs to obtain the sensitive contents of the affected headers which could 
> facilitate further attacks.
> Fixed in {{5.0.0-alpha3}} by 
> [this|https://github.com/square/okhttp/commit/dcc6483b7dc6d9c0b8e03ff7c30c13f3c75264a5]
>  commit. The fix was cherry-picked and backported into {{4.9.2}} with 
> [this|https://github.com/square/okhttp/commit/1fd7c0afdc2cee9ba982b07d49662af7f60e1518]
>  commit.
> Please clarify whether this dependency will be updated to a fixed version in
> upcoming releases.






[jira] [Updated] (HDFS-16453) Upgrade okhttp from 2.7.5 to 4.9.3

2022-06-24 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-16453:
--
Release Note: okhttp has been updated to address CVE-2018-20200 and 
CVE-2018-20200

> Upgrade okhttp from 2.7.5 to 4.9.3
> --
>
> Key: HDFS-16453
> URL: https://issues.apache.org/jira/browse/HDFS-16453
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.3.1
>Reporter: Ivan Viaznikov
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {{org.apache.hadoop:hadoop-hdfs-client}} comes with 
> {{com.squareup.okhttp:okhttp:2.7.5}} as a dependency, which is vulnerable to 
> an information disclosure issue due to how the contents of sensitive headers, 
> such as the {{Authorization}} header, can be logged when an 
> {{IllegalArgumentException}} is thrown.
> This issue could allow an attacker or malicious user who has access to the 
> logs to obtain the sensitive contents of the affected headers which could 
> facilitate further attacks.
> Fixed in {{5.0.0-alpha3}} by 
> [this|https://github.com/square/okhttp/commit/dcc6483b7dc6d9c0b8e03ff7c30c13f3c75264a5]
>  commit. The fix was cherry-picked and backported into {{4.9.2}} with 
> [this|https://github.com/square/okhttp/commit/1fd7c0afdc2cee9ba982b07d49662af7f60e1518]
>  commit.
> Please clarify whether this dependency will be updated to a fixed version in
> upcoming releases.






[jira] [Updated] (HDFS-16453) Upgrade okhttp from 2.7.5 to 4.9.3

2022-06-24 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-16453:
--
Fix Version/s: 3.3.4
   (was: 3.3.9)

> Upgrade okhttp from 2.7.5 to 4.9.3
> --
>
> Key: HDFS-16453
> URL: https://issues.apache.org/jira/browse/HDFS-16453
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 3.3.1
>Reporter: Ivan Viaznikov
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.4
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {{org.apache.hadoop:hadoop-hdfs-client}} comes with 
> {{com.squareup.okhttp:okhttp:2.7.5}} as a dependency, which is vulnerable to 
> an information disclosure issue due to how the contents of sensitive headers, 
> such as the {{Authorization}} header, can be logged when an 
> {{IllegalArgumentException}} is thrown.
> This issue could allow an attacker or malicious user who has access to the 
> logs to obtain the sensitive contents of the affected headers which could 
> facilitate further attacks.
> Fixed in {{5.0.0-alpha3}} by 
> [this|https://github.com/square/okhttp/commit/dcc6483b7dc6d9c0b8e03ff7c30c13f3c75264a5]
>  commit. The fix was cherry-picked and backported into {{4.9.2}} with 
> [this|https://github.com/square/okhttp/commit/1fd7c0afdc2cee9ba982b07d49662af7f60e1518]
>  commit.
> Please clarify whether this dependency will be updated to a fixed version in
> upcoming releases.






[jira] [Resolved] (HDFS-16633) Reserved Space For Replicas is not released on some cases

2022-06-24 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph resolved HDFS-16633.
--
Fix Version/s: 3.4.0
   Resolution: Fixed

> Reserved Space For Replicas is not released on some cases
> -
>
> Key: HDFS-16633
> URL: https://issues.apache.org/jira/browse/HDFS-16633
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Prabhu Joseph
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> We have found that the reserved space for replicas is not released in some cases in a
> Cx production cluster. There are a few fixes such as HDFS-9530 and HDFS-8072, but the
> issue is still not completely fixed. We have tried to debug the root cause, but this
> will take a lot of time since it is a Cx production cluster.
> However, there is an easier way to fix the issue completely: release any remaining
> reserved space of the replica back to the volume. DataXceiver#writeBlock will finally
> call BlockReceiver#close, which checks whether the ReplicaInfo still has any reserved
> space and, if so, releases it back to the volume.
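A rough, self-contained sketch of that safety net is shown below: whatever bytes are still reserved for the replica when the receiver is closed are handed back to the volume. Replica and Volume here are simplified stand-ins for ReplicaInPipeline and FsVolumeImpl, not the actual BlockReceiver code.

{code:java}
/**
 * Sketch of the "release any leftover reservation on close" idea: whatever
 * bytes are still reserved for a replica when the receiver is closed are
 * handed back to the volume. Replica and Volume are simplified stand-ins.
 */
public class ReservedSpaceReleaseSketch implements AutoCloseable {

  static class Volume {
    private long reservedForReplicas;

    synchronized void reserve(long bytes) { reservedForReplicas += bytes; }

    synchronized void release(long bytes) { reservedForReplicas -= bytes; }

    synchronized long getReserved() { return reservedForReplicas; }
  }

  static class Replica {
    private long bytesReserved;
    Replica(long bytesReserved) { this.bytesReserved = bytesReserved; }

    long getBytesReserved() { return bytesReserved; }

    void setBytesReserved(long bytes) { bytesReserved = bytes; }
  }

  private final Volume volume;
  private final Replica replica;

  ReservedSpaceReleaseSketch(Volume volume, Replica replica) {
    this.volume = volume;
    this.replica = replica;
    volume.reserve(replica.getBytesReserved());
  }

  /** Mirrors the proposal: on close, release any reservation the replica still holds. */
  @Override
  public void close() {
    long remaining = replica.getBytesReserved();
    if (remaining > 0) {
      volume.release(remaining);
      replica.setBytesReserved(0);
    }
  }

  public static void main(String[] args) {
    Volume v = new Volume();
    try (ReservedSpaceReleaseSketch r =
        new ReservedSpaceReleaseSketch(v, new Replica(128 * 1024 * 1024))) {
      // write path aborted before the reservation was consumed
    }
    System.out.println("reserved after close = " + v.getReserved()); // 0
  }
}
{code}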






[jira] [Work logged] (HDFS-16633) Reserved Space For Replicas is not released on some cases

2022-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16633?focusedWorklogId=784593&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-784593
 ]

ASF GitHub Bot logged work on HDFS-16633:
-

Author: ASF GitHub Bot
Created on: 24/Jun/22 13:05
Start Date: 24/Jun/22 13:05
Worklog Time Spent: 10m 
  Work Description: PrabhuJoseph merged PR #4452:
URL: https://github.com/apache/hadoop/pull/4452




Issue Time Tracking
---

Worklog Id: (was: 784593)
Time Spent: 1h 20m  (was: 1h 10m)

> Reserved Space For Replicas is not released on some cases
> -
>
> Key: HDFS-16633
> URL: https://issues.apache.org/jira/browse/HDFS-16633
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Prabhu Joseph
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> We have found that the reserved space for replicas is not released in some cases in a
> Cx production cluster. There are a few fixes such as HDFS-9530 and HDFS-8072, but the
> issue is still not completely fixed. We have tried to debug the root cause, but this
> will take a lot of time since it is a Cx production cluster.
> However, there is an easier way to fix the issue completely: release any remaining
> reserved space of the replica back to the volume. DataXceiver#writeBlock will finally
> call BlockReceiver#close, which checks whether the ReplicaInfo still has any reserved
> space and, if so, releases it back to the volume.






[jira] [Updated] (HDFS-16633) Reserved Space For Replicas is not released on some cases

2022-06-24 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HDFS-16633:
-
Description: 
We have found that the reserved space for replicas is not released in some cases in a
Cx production cluster. There are a few fixes such as HDFS-9530 and HDFS-8072, but the
issue is still not completely fixed. We have tried to debug the root cause, but this
will take a lot of time since it is a Cx production cluster.

However, there is an easier way to fix the issue completely: release any remaining
reserved space of the replica back to the volume. DataXceiver#writeBlock will finally
call BlockReceiver#close, which checks whether the ReplicaInfo still has any reserved
space and, if so, releases it back to the volume.



  was:
We have found that the reserved space for replicas is not released in some cases in a
Cx production cluster. There are a few fixes such as HDFS-9530 and HDFS-8072, but the
issue is still not completely fixed. We have tried to debug the root cause, but this
will take a lot of time since it is a Cx production cluster.

However, there is an easier way to fix the issue completely: release any remaining
reserved space in BlockReceiver#close, which is ultimately invoked from
DataXceiver#writeBlock.




> Reserved Space For Replicas is not released on some cases
> -
>
> Key: HDFS-16633
> URL: https://issues.apache.org/jira/browse/HDFS-16633
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Prabhu Joseph
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We have found that the reserved space for replicas is not released in some cases in a
> Cx production cluster. There are a few fixes such as HDFS-9530 and HDFS-8072, but the
> issue is still not completely fixed. We have tried to debug the root cause, but this
> will take a lot of time since it is a Cx production cluster.
> However, there is an easier way to fix the issue completely: release any remaining
> reserved space of the replica back to the volume. DataXceiver#writeBlock will finally
> call BlockReceiver#close, which checks whether the ReplicaInfo still has any reserved
> space and, if so, releases it back to the volume.






[jira] [Work logged] (HDFS-16593) Correct inaccurate BlocksRemoved metric on DataNode side

2022-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16593?focusedWorklogId=784589&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-784589
 ]

ASF GitHub Bot logged work on HDFS-16593:
-

Author: ASF GitHub Bot
Created on: 24/Jun/22 12:55
Start Date: 24/Jun/22 12:55
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4353:
URL: https://github.com/apache/hadoop/pull/4353#issuecomment-1165546598

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:--------:|
   | +0 :ok: |  reexec  |   0m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  1s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 46s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 51s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m  0s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 28s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 19s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 27s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 243m 18s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4353/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 14s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 355m 16s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4353/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4353 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 6f7361013a1b 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 
23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / da80e1f49def1062b6f12cb53fa799bb42e9ea7f |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 

[jira] [Work logged] (HDFS-16633) Reserved Space For Replicas is not released on some cases

2022-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16633?focusedWorklogId=784451&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-784451
 ]

ASF GitHub Bot logged work on HDFS-16633:
-

Author: ASF GitHub Bot
Created on: 24/Jun/22 08:16
Start Date: 24/Jun/22 08:16
Worklog Time Spent: 10m 
  Work Description: PrabhuJoseph commented on PR #4452:
URL: https://github.com/apache/hadoop/pull/4452#issuecomment-1165324098

   Thanks @ashutoshcipher for the patch. Looks good to me, +1. Will commit it by the end of the day.




Issue Time Tracking
---

Worklog Id: (was: 784451)
Time Spent: 1h 10m  (was: 1h)

> Reserved Space For Replicas is not released on some cases
> -
>
> Key: HDFS-16633
> URL: https://issues.apache.org/jira/browse/HDFS-16633
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Prabhu Joseph
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We have found that the reserved space for replicas is not released in some cases in a
> Cx production cluster. There are a few fixes such as HDFS-9530 and HDFS-8072, but the
> issue is still not completely fixed. We have tried to debug the root cause, but this
> will take a lot of time since it is a Cx production cluster.
> However, there is an easier way to fix the issue completely: release any remaining
> reserved space in BlockReceiver#close, which is ultimately invoked from
> DataXceiver#writeBlock.






[jira] [Updated] (HDFS-16642) [HDFS] Moving selecting inputstream from JN in EditlogTailer out of FSNLock

2022-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16642:
--
Labels: pull-request-available  (was: )

> [HDFS] Moving selecting inputstream from JN in EditlogTailer out of FSNLock
> ---
>
> Key: HDFS-16642
> URL: https://issues.apache.org/jira/browse/HDFS-16642
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> EditLogTailer spends a long time selecting input streams from the JournalNodes while
> holding the write lock of the FSNLock.
> During this period, the 8020 handlers of the Observer NameNode are blocked by the
> FSN lock.
> In theory, selecting input streams from the JournalNodes does not involve changing
> in-memory state in the NameNode, so we can move the selection out of the FSN lock,
> which can improve the throughput of the Observer NameNode.
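A simplified sketch of the proposed reordering is shown below: the potentially slow stream selection from the JournalNodes happens without the lock, and the write lock is held only while the edits are applied to in-memory state. The lock, stream, and method names are generic placeholders, not the actual EditLogTailer API.

{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Sketch of moving edit-log stream selection out of the namesystem write lock:
 * the (potentially slow) call to the JournalNodes happens unlocked, and the
 * lock is held only while edits are applied to in-memory state. The selection
 * and apply methods are placeholders, not the real EditLogTailer API.
 */
public class TailerLockScopeSketch {

  private final ReentrantReadWriteLock fsnLock = new ReentrantReadWriteLock();

  /** Before: selection and apply both run under the write lock. */
  public void tailEditsCoarse() {
    fsnLock.writeLock().lock();
    try {
      List<String> streams = selectInputStreamsFromJournalNodes(); // slow remote call under the lock
      applyEdits(streams);
    } finally {
      fsnLock.writeLock().unlock();
    }
  }

  /** After: select streams first, then lock only for the in-memory update. */
  public void tailEditsFineGrained() {
    List<String> streams = selectInputStreamsFromJournalNodes(); // no lock held here
    fsnLock.writeLock().lock();
    try {
      applyEdits(streams);
    } finally {
      fsnLock.writeLock().unlock();
    }
  }

  private List<String> selectInputStreamsFromJournalNodes() {
    return Arrays.asList("edits_0000001-0000100"); // placeholder for the JournalNode call
  }

  private void applyEdits(List<String> streams) {
    // placeholder for loading the edits into the namespace
  }

  public static void main(String[] args) {
    new TailerLockScopeSketch().tailEditsFineGrained();
    System.out.println("edits applied with the lock held only for the in-memory update");
  }
}
{code}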






[jira] [Work logged] (HDFS-16642) [HDFS] Moving selecting inputstream from JN in EditlogTailer out of FSNLock

2022-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16642?focusedWorklogId=784441&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-784441
 ]

ASF GitHub Bot logged work on HDFS-16642:
-

Author: ASF GitHub Bot
Created on: 24/Jun/22 07:39
Start Date: 24/Jun/22 07:39
Worklog Time Spent: 10m 
  Work Description: ZanderXu opened a new pull request, #4497:
URL: https://github.com/apache/hadoop/pull/4497

   ### Description of PR
   In our production environment, the 8020 handlers of the Observer NameNode are also blocked by EditLogTailer.
   
   EditLogTailer spends a long time selecting input streams from the JournalNodes while holding the FSNLock.
   
   During this period, the 8020 handlers of the Observer NameNode are blocked by the FSN lock.
   
   In theory, selecting input streams from the JournalNodes does not involve changing in-memory state in the NameNode, so we can move the selection out of the FSN lock, which can improve the throughput of the Observer NameNode.
   
   After reading the relevant code in depth, I found that it is feasible to move the input stream selection out of the FSNLock.




Issue Time Tracking
---

Worklog Id: (was: 784441)
Remaining Estimate: 0h
Time Spent: 10m

> [HDFS] Moving selecting inputstream from JN in EditlogTailer out of FSNLock
> ---
>
> Key: HDFS-16642
> URL: https://issues.apache.org/jira/browse/HDFS-16642
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> EditLogTailer spends a long time selecting input streams from the JournalNodes while
> holding the write lock of the FSNLock.
> During this period, the 8020 handlers of the Observer NameNode are blocked by the
> FSN lock.
> In theory, selecting input streams from the JournalNodes does not involve changing
> in-memory state in the NameNode, so we can move the selection out of the FSN lock,
> which can improve the throughput of the Observer NameNode.






[jira] [Created] (HDFS-16642) [HDFS] Moving selecting inputstream from JN in EditlogTailer out of FSNLock

2022-06-24 Thread ZanderXu (Jira)
ZanderXu created HDFS-16642:
---

 Summary: [HDFS] Moving selecting inputstream from JN in 
EditlogTailer out of FSNLock
 Key: HDFS-16642
 URL: https://issues.apache.org/jira/browse/HDFS-16642
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: ZanderXu
Assignee: ZanderXu


EditLogTailer spends a long time selecting input streams from the JournalNodes while
holding the write lock of the FSNLock.
During this period, the 8020 handlers of the Observer NameNode are blocked by the
FSN lock.

In theory, selecting input streams from the JournalNodes does not involve changing
in-memory state in the NameNode, so we can move the selection out of the FSN lock,
which can improve the throughput of the Observer NameNode.






[jira] [Updated] (HDFS-16641) [HDFS] Add RPC ReQueue Metrics

2022-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16641:
--
Labels: pull-request-available  (was: )

> [HDFS] Add RPC ReQueue Metrics
> --
>
> Key: HDFS-16641
> URL: https://issues.apache.org/jira/browse/HDFS-16641
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Add RPC ReQueue metrics to make it easy to locate the abnormal case in which the
> Observer NameNode has a low RPCProcessingTime but a high RPCQueueTime.
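A minimal sketch of such a counter is shown below, using a plain AtomicLong in place of Hadoop's metrics2 MutableCounterLong and hypothetical method names; the counter is bumped whenever a call is put back on the queue, for example when the Observer has not yet caught up to the client's state id.

{code:java}
import java.util.concurrent.atomic.AtomicLong;

/**
 * Sketch of an RPC re-queue counter: every time a call is put back on the call
 * queue instead of being processed, the counter is bumped. A plain AtomicLong
 * stands in for a Hadoop metrics2 MutableCounterLong; the surrounding types
 * and method names are hypothetical.
 */
public class RpcRequeueMetricSketch {

  private final AtomicLong rpcRequeueCalls = new AtomicLong();

  /** True if the server must defer the call, e.g. the Observer has not caught up yet. */
  private boolean shouldRequeue(long clientStateId, long serverStateId) {
    return serverStateId < clientStateId;
  }

  /** Returns true if the call was processed, false if it was re-queued (and counted). */
  public boolean handleCall(long clientStateId, long serverStateId) {
    if (shouldRequeue(clientStateId, serverStateId)) {
      rpcRequeueCalls.incrementAndGet(); // the new metric: number of re-queued RPCs
      return false;                      // caller puts the call back on the queue
    }
    return true;                         // process normally
  }

  public long getRpcRequeueCalls() {
    return rpcRequeueCalls.get();
  }

  public static void main(String[] args) {
    RpcRequeueMetricSketch metrics = new RpcRequeueMetricSketch();
    metrics.handleCall(10, 8);  // server behind the client's state id -> re-queued
    metrics.handleCall(10, 12); // server caught up -> processed
    System.out.println("rpcRequeueCalls = " + metrics.getRpcRequeueCalls()); // 1
  }
}
{code}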






[jira] [Work logged] (HDFS-16641) [HDFS] Add RPC ReQueue Metrics

2022-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16641?focusedWorklogId=784432&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-784432
 ]

ASF GitHub Bot logged work on HDFS-16641:
-

Author: ASF GitHub Bot
Created on: 24/Jun/22 06:46
Start Date: 24/Jun/22 06:46
Worklog Time Spent: 10m 
  Work Description: ZanderXu opened a new pull request, #4496:
URL: https://github.com/apache/hadoop/pull/4496

   ### Description of PR
   In our production environment, the Observer NameNode also shows a high RPC queue time while its processing time is low.
   After tracing, we found that the root cause is RPC re-queueing.
   So I think we should add a metric to quantify the number of re-queued RPCs; it will help us locate this abnormal case.
   
   




Issue Time Tracking
---

Worklog Id: (was: 784432)
Remaining Estimate: 0h
Time Spent: 10m

> [HDFS] Add RPC ReQueue Metrics
> --
>
> Key: HDFS-16641
> URL: https://issues.apache.org/jira/browse/HDFS-16641
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: ZanderXu
>Assignee: ZanderXu
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Add RPC ReQueue metrics to make it easy to locate the abnormal case in which the
> Observer NameNode has a low RPCProcessingTime but a high RPCQueueTime.






[jira] [Created] (HDFS-16641) [HDFS] Add RPC ReQueue Metrics

2022-06-24 Thread ZanderXu (Jira)
ZanderXu created HDFS-16641:
---

 Summary: [HDFS] Add RPC ReQueue Metrics
 Key: HDFS-16641
 URL: https://issues.apache.org/jira/browse/HDFS-16641
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: ZanderXu
Assignee: ZanderXu


Add RPC ReQueue metrics to make it easy to locate the abnormal case in which the
Observer NameNode has a low RPCProcessingTime but a high RPCQueueTime.






[jira] [Work logged] (HDFS-16633) Reserved Space For Replicas is not released on some cases

2022-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16633?focusedWorklogId=784419=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-784419
 ]

ASF GitHub Bot logged work on HDFS-16633:
-

Author: ASF GitHub Bot
Created on: 24/Jun/22 06:32
Start Date: 24/Jun/22 06:32
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4452:
URL: https://github.com/apache/hadoop/pull/4452#issuecomment-1165248137

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:--------:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  1s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 55s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 44s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 51s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 23s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javac  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  the patch passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 34s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 26s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 334m 23s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  1s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 451m 34s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4452/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4452 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 1e3512c7e17e 4.15.0-175-generic #184-Ubuntu SMP Thu Mar 24 
17:48:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / dc43534a20d00cd90c45863aa69c6023d820b9c7 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4452/4/testReport/ |
   | Max. process+thread count | 2020 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4452/4/console |
   | versions |