[jira] [Work logged] (HDFS-16477) [SPS]: Add metric PendingSPSPaths for getting the number of paths to be processed by SPS

2022-03-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16477?focusedWorklogId=736544&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-736544
 ]

ASF GitHub Bot logged work on HDFS-16477:
-

Author: ASF GitHub Bot
Created on: 04/Mar/22 09:28
Start Date: 04/Mar/22 09:28
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #4009:
URL: https://github.com/apache/hadoop/pull/4009#issuecomment-1058991178


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  12m 14s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 28s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 46s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  19m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   3m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 27s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   5m 55s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |  23m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  23m 19s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 25s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   3m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   7m 32s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  28m 28s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  17m  2s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4009/3/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | -1 :x: |  unit  |   0m 34s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4009/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +0 :ok: |  asflicense  |   0m 43s |  |  ASF License check generated no 
output?  |
   |  |   | 246m 27s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.metrics2.source.TestJvmMetrics |
   |   | hadoop.fs.viewfs.TestViewFSOverloadSchemeCentralMountTableConfig |
   |   | hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4009/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4009 |
   | Optional Tests | dupname asflicense mvnsite codespell markdownlint compile 
javac javadoc mvninstall unit shadedclient spotbugs checkstyle |
   | uname | Linux 754b3ee36d57 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x

[jira] [Work logged] (HDFS-16477) [SPS]: Add metric PendingSPSPaths for getting the number of paths to be processed by SPS

2022-03-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16477?focusedWorklogId=736546&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-736546
 ]

ASF GitHub Bot logged work on HDFS-16477:
-

Author: ASF GitHub Bot
Created on: 04/Mar/22 09:37
Start Date: 04/Mar/22 09:37
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #4009:
URL: https://github.com/apache/hadoop/pull/4009#issuecomment-1058998161


   The unit tests failed because of OOM; the failures are unrelated to these changes.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 736546)
Time Spent: 1h 20m  (was: 1h 10m)

> [SPS]: Add metric PendingSPSPaths for getting the number of paths to be 
> processed by SPS
> 
>
> Key: HDFS-16477
> URL: https://issues.apache.org/jira/browse/HDFS-16477
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently we have no idea how many paths are waiting to be processed when 
> using the SPS feature. We should add a PendingSPSPaths metric to the NameNode 
> to expose the number of paths still to be processed by SPS.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16494) Removed reuse of AvailableSpaceVolumeChoosingPolicy#initLocks()

2022-03-04 Thread JiangHua Zhu (Jira)
JiangHua Zhu created HDFS-16494:
---

 Summary: Removed reuse of 
AvailableSpaceVolumeChoosingPolicy#initLocks()
 Key: HDFS-16494
 URL: https://issues.apache.org/jira/browse/HDFS-16494
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.9.2, 3.4.0
Reporter: JiangHua Zhu


When building the AvailableSpaceVolumeChoosingPolicy, if the default 
constructor is used, initLocks() is invoked twice, which is unnecessary.
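
To make the redundancy concrete, here is a minimal sketch of the constructor-chaining pattern described above; the class below is a simplified stand-in, and its names and bodies are assumptions rather than the actual AvailableSpaceVolumeChoosingPolicy source.

{code:java}
import java.util.Random;

// Simplified, illustrative stand-in for AvailableSpaceVolumeChoosingPolicy;
// names and bodies are assumptions, not the actual Hadoop source.
class ExampleVolumeChoosingPolicy {
  private final Random random;
  private Object[] syncLocks;

  ExampleVolumeChoosingPolicy(Random random) {
    this.random = random;
    initLocks();            // locks are initialized here ...
  }

  ExampleVolumeChoosingPolicy() {
    this(new Random());     // ... so the delegated constructor already ran initLocks()
    initLocks();            // ... and this second call is the redundant one
  }

  private void initLocks() {
    syncLocks = new Object[2];
    for (int i = 0; i < syncLocks.length; i++) {
      syncLocks[i] = new Object();
    }
  }
}
{code}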






[jira] [Assigned] (HDFS-16494) Removed reuse of AvailableSpaceVolumeChoosingPolicy#initLocks()

2022-03-04 Thread JiangHua Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JiangHua Zhu reassigned HDFS-16494:
---

Assignee: JiangHua Zhu

> Removed reuse of AvailableSpaceVolumeChoosingPolicy#initLocks()
> ---
>
> Key: HDFS-16494
> URL: https://issues.apache.org/jira/browse/HDFS-16494
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.9.2, 3.4.0
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>
> When building the AvailableSpaceVolumeChoosingPolicy, if the default 
> constructor is used, initLocks() is invoked twice, which is unnecessary.






[jira] [Updated] (HDFS-16494) Removed reuse of AvailableSpaceVolumeChoosingPolicy#initLocks()

2022-03-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16494:
--
Labels: pull-request-available  (was: )

> Removed reuse of AvailableSpaceVolumeChoosingPolicy#initLocks()
> ---
>
> Key: HDFS-16494
> URL: https://issues.apache.org/jira/browse/HDFS-16494
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.9.2, 3.4.0
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When building the AvailableSpaceVolumeChoosingPolicy, if the default 
> constructor is used, initLocks() is invoked twice, which is unnecessary.






[jira] [Work logged] (HDFS-16494) Removed reuse of AvailableSpaceVolumeChoosingPolicy#initLocks()

2022-03-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16494?focusedWorklogId=736574&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-736574
 ]

ASF GitHub Bot logged work on HDFS-16494:
-

Author: ASF GitHub Bot
Created on: 04/Mar/22 10:40
Start Date: 04/Mar/22 10:40
Worklog Time Spent: 10m 
  Work Description: jianghuazhu opened a new pull request #4048:
URL: https://github.com/apache/hadoop/pull/4048


   
   ### Description of PR
   When using the default constructor to build the 
AvailableSpaceVolumeChoosingPolicy, initLocks() is invoked twice, which is 
unnecessary. The purpose of this PR is to avoid that.
   Details: HDFS-16494
   
   ### How was this patch tested?
   No new tests were added, because this class is relatively mature.
   




Issue Time Tracking
---

Worklog Id: (was: 736574)
Remaining Estimate: 0h
Time Spent: 10m

> Removed reuse of AvailableSpaceVolumeChoosingPolicy#initLocks()
> ---
>
> Key: HDFS-16494
> URL: https://issues.apache.org/jira/browse/HDFS-16494
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.9.2, 3.4.0
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When building the AvailableSpaceVolumeChoosingPolicy, if the default 
> constructor is used, initLocks() is invoked twice, which is unnecessary.






[jira] [Work started] (HDFS-16494) Removed reuse of AvailableSpaceVolumeChoosingPolicy#initLocks()

2022-03-04 Thread JiangHua Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-16494 started by JiangHua Zhu.
---
> Removed reuse of AvailableSpaceVolumeChoosingPolicy#initLocks()
> ---
>
> Key: HDFS-16494
> URL: https://issues.apache.org/jira/browse/HDFS-16494
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.9.2, 3.4.0
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When building the AvailableSpaceVolumeChoosingPolicy, if the default 
> constructor is used, initLocks() is invoked twice, which is unnecessary.






[jira] [Work logged] (HDFS-16155) Allow configurable exponential backoff in DFSInputStream refetchLocations

2022-03-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16155?focusedWorklogId=736716&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-736716
 ]

ASF GitHub Bot logged work on HDFS-16155:
-

Author: ASF GitHub Bot
Created on: 04/Mar/22 15:21
Start Date: 04/Mar/22 15:21
Worklog Time Spent: 10m 
  Work Description: bbeaudreault commented on pull request #3271:
URL: https://github.com/apache/hadoop/pull/3271#issuecomment-1059256787


   Thanks for the approval @Hexiaoqiao. Is there a downside to just merging 
this? It's been open for over 6 months, so I doubt anyone else will be jumping 
in any time soon.




Issue Time Tracking
---

Worklog Id: (was: 736716)
Time Spent: 3h 20m  (was: 3h 10m)

> Allow configurable exponential backoff in DFSInputStream refetchLocations
> -
>
> Key: HDFS-16155
> URL: https://issues.apache.org/jira/browse/HDFS-16155
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: dfsclient
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> The retry policy in 
> [DFSInputStream#refetchLocations|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java#L1018-L1040]
>  was first written many years ago. It allows configuration of the base time 
> window, but subsequent retries double in an un-configurable way. This retry 
> strategy makes sense in some clusters as it's very conservative and will 
> avoid DDOSing the namenode in certain systemic failure modes – for example, 
> if a file is being read by a large Hadoop job and the underlying blocks are 
> moved by the balancer. In this case, enough datanodes would be added to the 
> deadNodes list and all hadoop tasks would simultaneously try to refetch the 
> blocks. The 3s doubling with random factor helps break up that stampeding 
> herd.
> However, not all cluster use-cases are created equal, so there are other 
> cases where a more aggressive initial backoff is preferred. For example in a 
> low-latency single reader scenario. In this case, if the balancer moves 
> enough blocks, the reader hits this 3s backoff which is way too long for a 
> low latency use-case.
> One could configure the window very low (10ms), but then you can hit 
> other systemic failure modes which would result in readers DDOSing the 
> namenode again. For example, if blocks went missing due to truly dead 
> datanodes. In this case, many readers might be refetching locations for 
> different files with retry backoffs like 10ms, 20ms, 40ms, etc. It takes a 
> while to backoff enough to avoid impacting the namenode with that strategy.
> I suggest adding a configurable multiplier to the backoff strategy so that 
> operators can tune this as they see fit for their use-case. In the above low 
> latency case, one could set the base very low (say 2ms) and the multiplier 
> very high (say 50). This gives an aggressive first retry that very quickly 
> backs off.
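
As a rough illustration of the proposal, here is a minimal sketch of a backoff with a configurable base window and multiplier; the class name, fields, and jitter formula are assumptions for illustration, not the DFSInputStream implementation.

{code:java}
import java.util.concurrent.ThreadLocalRandom;

// Illustrative sketch of a configurable exponential backoff; names and the
// exact jitter factor are assumptions, not the DFSInputStream code.
final class RefetchBackoff {
  private final double baseWindowMs;  // e.g. 3000 today, or 2 in the low-latency case
  private final double multiplier;    // effectively fixed at 2 today; proposed to be configurable

  RefetchBackoff(double baseWindowMs, double multiplier) {
    this.baseWindowMs = baseWindowMs;
    this.multiplier = multiplier;
  }

  /** Wait before retry number {@code failures} (0-based), with jitter. */
  long waitMs(int failures) {
    double window = baseWindowMs * Math.pow(multiplier, failures);
    // random factor in [0.5, 1.5) keeps readers from retrying in lockstep
    return (long) (window * (0.5 + ThreadLocalRandom.current().nextDouble()));
  }
}
{code}

With a base of 2 ms and a multiplier of 50, the retries wait roughly 2 ms, 100 ms, and 5 s, which matches the aggressive-then-conservative behaviour described above.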






[jira] [Work logged] (HDFS-11107) TestStartup#testStorageBlockContentsStaleAfterNNRestart fails intermittently

2022-03-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-11107?focusedWorklogId=736747&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-736747
 ]

ASF GitHub Bot logged work on HDFS-11107:
-

Author: ASF GitHub Bot
Created on: 04/Mar/22 15:54
Start Date: 04/Mar/22 15:54
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on pull request #3862:
URL: https://github.com/apache/hadoop/pull/3862#issuecomment-1059284831


   Thanks @ayushtkn for the pointers, I'll keep trying to fix this.




Issue Time Tracking
---

Worklog Id: (was: 736747)
Time Spent: 1h  (was: 50m)

> TestStartup#testStorageBlockContentsStaleAfterNNRestart fails intermittently
> 
>
> Key: HDFS-11107
> URL: https://issues.apache.org/jira/browse/HDFS-11107
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Xiaobing Zhou
>Assignee: Ajith S
>Priority: Minor
>  Labels: flaky-test, pull-request-available, unit-test
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> It's noticed that this failed in the last Jenkins run of HDFS-11085, but it's 
> not reproducible and passed with and without the patch.
> {noformat}
> Error Message
> expected:<0> but was:<2>
> Stacktrace
> java.lang.AssertionError: expected:<0> but was:<2>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestStartup.testStorageBlockContentsStaleAfterNNRestart(TestStartup.java:726)
> {noformat}






[jira] [Resolved] (HDFS-16481) Provide support to set Http and Rpc ports in MiniJournalCluster

2022-03-04 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HDFS-16481.
-
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Provide support to set Http and Rpc ports in MiniJournalCluster
> ---
>
> Key: HDFS-16481
> URL: https://issues.apache.org/jira/browse/HDFS-16481
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> We should provide support for clients to set Http and Rpc ports of 
> JournalNodes in MiniJournalCluster.
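
For illustration, the intended usage would look roughly like the sketch below; the setHttpPorts/setRpcPorts builder method names are assumptions and may differ from what the merged PR actually added.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.qjournal.MiniJournalCluster;

// Hypothetical usage sketch: setHttpPorts/setRpcPorts are assumed method
// names for the feature described above, not confirmed against the merged PR.
class MiniJournalClusterFixedPortsExample {
  static MiniJournalCluster startWithFixedPorts(Configuration conf) throws Exception {
    return new MiniJournalCluster.Builder(conf)
        .numJournalNodes(3)
        .setHttpPorts(8481, 8482, 8483)  // one fixed HTTP port per JournalNode
        .setRpcPorts(8485, 8486, 8487)   // one fixed RPC port per JournalNode
        .build();
  }
}
{code}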






[jira] [Work logged] (HDFS-16481) Provide support to set Http and Rpc ports in MiniJournalCluster

2022-03-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16481?focusedWorklogId=736779&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-736779
 ]

ASF GitHub Bot logged work on HDFS-16481:
-

Author: ASF GitHub Bot
Created on: 04/Mar/22 16:48
Start Date: 04/Mar/22 16:48
Worklog Time Spent: 10m 
  Work Description: ayushtkn merged pull request #4028:
URL: https://github.com/apache/hadoop/pull/4028


   




Issue Time Tracking
---

Worklog Id: (was: 736779)
Time Spent: 6h  (was: 5h 50m)

> Provide support to set Http and Rpc ports in MiniJournalCluster
> ---
>
> Key: HDFS-16481
> URL: https://issues.apache.org/jira/browse/HDFS-16481
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> We should provide support for clients to set Http and Rpc ports of 
> JournalNodes in MiniJournalCluster.






[jira] [Commented] (HDFS-16481) Provide support to set Http and Rpc ports in MiniJournalCluster

2022-03-04 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17501433#comment-17501433
 ] 

Ayush Saxena commented on HDFS-16481:
-

Committed to trunk.
Thanx [~vjasani] for the contribution!!!

> Provide support to set Http and Rpc ports in MiniJournalCluster
> ---
>
> Key: HDFS-16481
> URL: https://issues.apache.org/jira/browse/HDFS-16481
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> We should provide support for clients to set Http and Rpc ports of 
> JournalNodes in MiniJournalCluster.






[jira] [Work logged] (HDFS-16481) Provide support to set Http and Rpc ports in MiniJournalCluster

2022-03-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16481?focusedWorklogId=736780&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-736780
 ]

ASF GitHub Bot logged work on HDFS-16481:
-

Author: ASF GitHub Bot
Created on: 04/Mar/22 16:48
Start Date: 04/Mar/22 16:48
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on pull request #4028:
URL: https://github.com/apache/hadoop/pull/4028#issuecomment-1059331962


   Thanx @tomscut & @virajjasani 




Issue Time Tracking
---

Worklog Id: (was: 736780)
Time Spent: 6h 10m  (was: 6h)

> Provide support to set Http and Rpc ports in MiniJournalCluster
> ---
>
> Key: HDFS-16481
> URL: https://issues.apache.org/jira/browse/HDFS-16481
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> We should provide support for clients to set Http and Rpc ports of 
> JournalNodes in MiniJournalCluster.






[jira] [Commented] (HDFS-16493) [SBN Read]When fast path tail enabled, standby or observer namenode may read uncommitted data

2022-03-04 Thread Erik Krogen (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17501435#comment-17501435
 ] 

Erik Krogen commented on HDFS-16493:


Thanks for reporting [~liutongwei]! I guess this is a continuation of [your 
comment on 
HDFS-13150|https://issues.apache.org/jira/browse/HDFS-13150?focusedCommentId=17408479&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17408479],
 is that correct?

As I said there, I don't personally have the bandwidth to dig deep into this, but 
from your detailed explanation it does seem to be a valid issue. I will let 
[~shv] take a closer look.

> [SBN Read]When fast path tail enabled, standby or observer namenode may read 
> uncommitted data
> -
>
> Key: HDFS-16493
> URL: https://issues.apache.org/jira/browse/HDFS-16493
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: journal-node, namanode
>Reporter: liutongwei
>Priority: Critical
> Attachments: example.patch
>
>
> Although fast path tailing uses a quorum read to pull the edit log, it seems it 
> can read uncommitted data in some corner cases.
> Here is an example. Suppose we have three JNs, whose initial state is:
>  
> {code:java}
> epoch 1
> JN1 [1-3](in-progress)
> JN2 [1-3](in-progress)
> JN3 [1-4](in-progress)
> Note that in epoch 1, txids 1-3 were committed and txid 4 was not.
> {code}
> When a failover occurs, a new writer that cannot contact JN3 due to a network 
> partition may finish the recovery stage and write a new txid 4 in epoch 2, 
> whose value is not equal to JN3's.
>  
> {code:java}
> epoch 2
> JN1 [1-3](finalized) [4-4](inprogress)
> JN2 [1-3](finalized) [4-4](inprogress)
> JN3 [1-4](inprogress)
> Note that JN3's value for txid 4 is not equal to the other JNs'.
> {code}
>  
> Now a reading namenode pulls edits; it contacts JN3 and JN2 and gets a 
> majority response. But it receives logs of the same length with different 
> content, and has no further information to decide which log is right. If we 
> choose JN3, we get metadata corruption.
> There is a test example patch [^example.patch] for running and debugging.
> To fix it, I think we should add the finalized state to 
> {{GetJournaledEditsResponseProto}}, so we can discard the faulty log.
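
To sketch one reading of that proposal: with a finalized flag on each response, the reader could discard an in-progress segment whose content conflicts with finalized state. The Response type and the selection rule below are hypothetical, not actual QJM code.

{code:java}
import java.util.List;

// Hypothetical sketch of the selection rule suggested above; the Response
// class and its fields are illustrative, not actual Hadoop/QJM types.
final class EditResponseSelector {

  static final class Response {
    final long lastTxId;
    final boolean finalized;   // the flag the issue proposes to expose
    Response(long lastTxId, boolean finalized) {
      this.lastTxId = lastTxId;
      this.finalized = finalized;
    }
  }

  /** Prefer responses backed by finalized segments; among equals, the longer one. */
  static Response pick(List<Response> quorum) {
    Response best = null;
    for (Response r : quorum) {
      if (best == null
          || (r.finalized && !best.finalized)
          || (r.finalized == best.finalized && r.lastTxId > best.lastTxId)) {
        best = r;
      }
    }
    // A response backed only by an in-progress segment (like JN3's, with its
    // conflicting txid 4) loses to one backed by finalized state, so the
    // faulty log can be discarded instead of silently chosen.
    return best;
  }
}
{code}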






[jira] [Work logged] (HDFS-16477) [SPS]: Add metric PendingSPSPaths for getting the number of paths to be processed by SPS

2022-03-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16477?focusedWorklogId=736786&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-736786
 ]

ASF GitHub Bot logged work on HDFS-16477:
-

Author: ASF GitHub Bot
Created on: 04/Mar/22 16:59
Start Date: 04/Mar/22 16:59
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on a change in pull request #4009:
URL: https://github.com/apache/hadoop/pull/4009#discussion_r819740564



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/sps/StoragePolicySatisfyManager.java
##
@@ -292,4 +292,13 @@ public boolean isEnabled() {
   public StoragePolicySatisfierMode getMode() {
 return mode;
   }
+
+  /**
+   * @return the number of paths to be processed by storage policy satisfier.
+   */
+  public int getPendingSPSPaths() {
+synchronized (pathsToBeTraversed) {
+  return pathsToBeTraversed.size();
+}

Review comment:
   This is synchronised and will be called a bunch of times. What impact is 
it going to have on normal SPS performance?

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
##
@@ -5524,6 +5524,11 @@ public int getBlockCapacity() {
 return blockManager.getCapacity();
   }
 
+  @Metric({"PendingSPSPaths", "The number of paths to be processed by storage 
policy satisfier"})
+  public int getPendingSPSPaths() {

Review comment:
   Do you not plan to add this method to ``FSNamesystemMBean``?
   If there is no specific reason not to, we should add it there; this is quite specific to FSN.
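
For readers following along, the suggestion would look roughly like the sketch below (assumed placement, not the merged patch): declare the getter on FSNamesystemMBean so the value is published alongside the other FSN bean metrics, with FSNamesystem's getPendingSPSPaths() from the diff above serving as the implementation.

{code:java}
// Sketch of the reviewer's suggestion; placement and Javadoc are assumptions.
public interface FSNamesystemMBean {
  // ... existing metric getters ...

  /** @return the number of paths still to be processed by the storage policy satisfier. */
  int getPendingSPSPaths();
}
{code}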






Issue Time Tracking
---

Worklog Id: (was: 736786)
Time Spent: 1.5h  (was: 1h 20m)

> [SPS]: Add metric PendingSPSPaths for getting the number of paths to be 
> processed by SPS
> 
>
> Key: HDFS-16477
> URL: https://issues.apache.org/jira/browse/HDFS-16477
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Currently we have no idea how many paths are waiting to be processed when 
> using the SPS feature. We should add a PendingSPSPaths metric to the NameNode 
> to expose the number of paths still to be processed by SPS.






[jira] [Work logged] (HDFS-16494) Removed reuse of AvailableSpaceVolumeChoosingPolicy#initLocks()

2022-03-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16494?focusedWorklogId=736963&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-736963
 ]

ASF GitHub Bot logged work on HDFS-16494:
-

Author: ASF GitHub Bot
Created on: 04/Mar/22 23:34
Start Date: 04/Mar/22 23:34
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #4048:
URL: https://github.com/apache/hadoop/pull/4048#issuecomment-1059604940


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  17m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 58s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 25s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 28s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 29s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 325m 43s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4048/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 450m 30s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.mover.TestMover |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4048/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4048 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux e83832001c7d 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 
11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 57207da5948665cf4f7653bf24a812abd2e9073b |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4048/2/

[jira] [Created] (HDFS-16495) RBF should prepend the client ip rather than append it.

2022-03-04 Thread Owen O'Malley (Jira)
Owen O'Malley created HDFS-16495:


 Summary: RBF should prepend the client ip rather than append it.
 Key: HDFS-16495
 URL: https://issues.apache.org/jira/browse/HDFS-16495
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Owen O'Malley
Assignee: Owen O'Malley


Currently the Routers append the client IP to the caller context if and only if 
it is not already set. This would allow a user to fake their IP by setting the 
caller context themselves. It is much better to prepend it unconditionally.

The NN must be able to trust the client IP from the caller context.
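
A minimal sketch of the difference, using plain string handling rather than the actual Router/CallerContext code; the "clientIp:" key name and the comma separator are assumptions for illustration.

{code:java}
// Illustrative only: the real Router works with CallerContext objects; the
// "clientIp:" key and comma separator here are assumed for the example.
final class CallerContextClientIp {
  private static final String CLIENT_IP_KEY = "clientIp:";

  // Current behaviour (simplified): append only when the key is absent, which
  // lets a client pre-set a fake clientIp that downstream code then trusts.
  static String appendIfAbsent(String userContext, String realClientIp) {
    if (userContext != null && userContext.contains(CLIENT_IP_KEY)) {
      return userContext;                                  // user-supplied value wins
    }
    String prefix = (userContext == null || userContext.isEmpty()) ? "" : userContext + ",";
    return prefix + CLIENT_IP_KEY + realClientIp;
  }

  // Proposed behaviour: always prepend the Router-observed address, so the
  // first clientIp entry is always the one the Router itself saw.
  static String prepend(String userContext, String realClientIp) {
    String routerEntry = CLIENT_IP_KEY + realClientIp;
    return (userContext == null || userContext.isEmpty())
        ? routerEntry
        : routerEntry + "," + userContext;
  }
}
{code}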






[jira] [Work logged] (HDFS-16476) Increase the number of metrics used to record PendingRecoveryBlocks

2022-03-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16476?focusedWorklogId=736979&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-736979
 ]

ASF GitHub Bot logged work on HDFS-16476:
-

Author: ASF GitHub Bot
Created on: 05/Mar/22 00:15
Start Date: 05/Mar/22 00:15
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #4010:
URL: https://github.com/apache/hadoop/pull/4010#issuecomment-1059618600


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m  8s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 24s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  19m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   3m 45s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 16s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   4m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   7m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 21s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  cc  |  21m 40s | 
[/results-compile-cc-root-jdkUbuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4010/4/artifact/out/results-compile-cc-root-jdkUbuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04.txt)
 |  root-jdkUbuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04 with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04 generated 13 new + 310 unchanged - 13 
fixed = 323 total (was 323)  |
   | +1 :green_heart: |  javac  |  21m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  cc  |  19m 34s | 
[/results-compile-cc-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4010/4/artifact/out/results-compile-cc-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 36 new + 287 
unchanged - 36 fixed = 323 total (was 323)  |
   | +1 :green_heart: |  javac  |  19m 34s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 42s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4010/4/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 1 new + 255 unchanged - 1 fixed = 256 total (was 
256)  |
   | +1 :green_heart: |  mvnsite  |   4m 17s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 56s | 
[/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4010/4/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04.txt)
 |  hadoop-hdfs-rbf in the patch failed with JDK 
Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04.  |
   | +1 :green_heart: |  javadoc  |   4m 44s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   7m 56s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 14s |  |  patch has no errors 
when building and testing our client artifacts. 

[jira] [Work logged] (HDFS-16462) Make HDFS get tool cross platform

2022-03-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16462?focusedWorklogId=737023&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-737023
 ]

ASF GitHub Bot logged work on HDFS-16462:
-

Author: ASF GitHub Bot
Created on: 05/Mar/22 05:40
Start Date: 05/Mar/22 05:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #4003:
URL: https://github.com/apache/hadoop/pull/4003#issuecomment-1059696850


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  43m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 38s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 23s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  61m 29s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 21s |  |  the patch passed  |
   | +1 :green_heart: |  cc  |   3m 21s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   3m 21s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m  7s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  99m 14s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 233m 15s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4003/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4003 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit 
codespell golang |
   | uname | Linux 87488cbcc0d8 4.15.0-153-generic #160-Ubuntu SMP Thu Jul 29 
06:54:29 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 40e9eaae34deebc94be7a3a39378cfe5deeac96f |
   | Default Java | Red Hat, Inc.-1.8.0_322-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4003/11/testReport/ |
   | Max. process+thread count | 573 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4003/11/console |
   | versions | git=2.9.5 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




Issue Time Tracking
---

Worklog Id: (was: 737023)
Time Spent: 1.5h  (was: 1h 20m)

> Make HDFS get tool cross platform
> -
>
> Key: HDFS-16462
> URL: https://issues.apache.org/jira/browse/HDFS-16462
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, libhdfs++, tools
>Affects Versions: 3.4.0
> Environment: Centos 7, Centos 8, Debian 10, Ubuntu Focal
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: libhdfscpp, pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The source files for *hdfs_get* use *getopt* for parsing the command line 
> arguments. getopt is available only on Linux and thus isn't cross-platform. 
> We need to replace getopt with *boost

[jira] [Work logged] (HDFS-16494) Removed reuse of AvailableSpaceVolumeChoosingPolicy#initLocks()

2022-03-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16494?focusedWorklogId=737034&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-737034
 ]

ASF GitHub Bot logged work on HDFS-16494:
-

Author: ASF GitHub Bot
Created on: 05/Mar/22 06:57
Start Date: 05/Mar/22 06:57
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on pull request #4048:
URL: https://github.com/apache/hadoop/pull/4048#issuecomment-1059709044


   Here are some unit tests that are failing, e.g.:
   org.apache.hadoop.hdfs.server.namenode.TestFileTruncate
   org.apache.hadoop.hdfs.server.mover.TestMover
   
   It looks like these failures have little to do with the change I submitted.
   Could you help review this PR, @aajisaka @virajjasani @tomscut?
   Thank you very much.
   




Issue Time Tracking
---

Worklog Id: (was: 737034)
Time Spent: 0.5h  (was: 20m)

> Removed reuse of AvailableSpaceVolumeChoosingPolicy#initLocks()
> ---
>
> Key: HDFS-16494
> URL: https://issues.apache.org/jira/browse/HDFS-16494
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.9.2, 3.4.0
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When building the AvailableSpaceVolumeChoosingPolicy, if the default 
> constructor is used, initLocks() is invoked twice, which is unnecessary.


