[jira] [Commented] (HDFS-16195) Fix log message when choosing storage groups for block movement in balancer

2021-09-03 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409786#comment-17409786
 ] 

Hadoop QA commented on HDFS-16195:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red}{color} | {color:red} The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
36s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 26s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 20m 
50s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are 
enabled, using SpotBugs. {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  3m  
4s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
15s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
15s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
10s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 46s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private 

[jira] [Commented] (HDFS-16195) Fix log message when choosing storage groups for block movement in balancer

2021-09-03 Thread Preeti (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409750#comment-17409750
 ] 

Preeti commented on HDFS-16195:
---

[~prasad-acit] [~vjasani] I have updated the patch with adjusted line length 
and formatting. As far as I can see, the only formatting change required was the 
indentation. Was there anything else you wanted me to change? The problem is that 
I am unable to run checkstyle locally to find this myself.

> Fix log message when choosing storage groups for block movement in balancer
> ---
>
> Key: HDFS-16195
> URL: https://issues.apache.org/jira/browse/HDFS-16195
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: Preeti
>Priority: Major
> Attachments: HADOOP-16195.001.patch, HADOOP-16195.002.patch, 
> HADOOP-16195.003.patch
>
>
> Correct the log message in line with the logic associated with
> moving blocks in chooseStorageGroups() in the balancer. All log lines should 
> correctly indicate from which storage source the blocks are being moved, to 
> avoid ambiguity. Right now one of the log lines is incorrect: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java#L555]
>  which indicates that storage blocks are moved from underUtilized to 
> aboveAvgUtilized nodes, while it is actually the other way around in the code.
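
To make the intended correction concrete, here is a minimal, hypothetical Java 
sketch of a fixed log statement (the class, method, and variable names are 
illustrative and are not the actual Balancer.java code): the message should name 
the more-utilized group as the source and the less-utilized group as the target 
of the block movement.

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    // Illustrative sketch only -- not the actual Balancer code or the patch.
    // The point of the fix: the log line must state the real direction of the
    // block movement (from the more-utilized source group to the less-utilized
    // target group) instead of the reverse.
    public class ChooseStorageGroupsLogSketch {
      private static final Logger LOG =
          LoggerFactory.getLogger(ChooseStorageGroupsLogSketch.class);

      static void logChoice(String sourceGroup, String targetGroup) {
        // e.g. sourceGroup = "aboveAvgUtilized", targetGroup = "underUtilized"
        LOG.info("chooseStorageGroups: moving blocks from {} (source) to {} (target)",
            sourceGroup, targetGroup);
      }

      public static void main(String[] args) {
        logChoice("aboveAvgUtilized", "underUtilized");
      }
    }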






[jira] [Updated] (HDFS-16195) Fix log message when choosing storage groups for block movement in balancer

2021-09-03 Thread Preeti (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Preeti updated HDFS-16195:
--
Attachment: HADOOP-16195.003.patch
Status: Patch Available  (was: Open)

> Fix log message when choosing storage groups for block movement in balancer
> ---
>
> Key: HDFS-16195
> URL: https://issues.apache.org/jira/browse/HDFS-16195
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: Preeti
>Priority: Major
> Attachments: HADOOP-16195.001.patch, HADOOP-16195.002.patch, 
> HADOOP-16195.003.patch
>
>
> Correct the log message in line with the logic associated with
> moving blocks in chooseStorageGroups() in the balancer. All log lines should 
> correctly indicate from which storage source the blocks are being moved, to 
> avoid ambiguity. Right now one of the log lines is incorrect: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java#L555]
>  which indicates that storage blocks are moved from underUtilized to 
> aboveAvgUtilized nodes, while it is actually the other way around in the code.






[jira] [Updated] (HDFS-16195) Fix log message when choosing storage groups for block movement in balancer

2021-09-03 Thread Preeti (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Preeti updated HDFS-16195:
--
Status: Open  (was: Patch Available)

> Fix log message when choosing storage groups for block movement in balancer
> ---
>
> Key: HDFS-16195
> URL: https://issues.apache.org/jira/browse/HDFS-16195
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: Preeti
>Priority: Major
> Attachments: HADOOP-16195.001.patch, HADOOP-16195.002.patch, 
> HADOOP-16195.003.patch
>
>
> Correct the log message in line with the logic associated with
> moving blocks in chooseStorageGroups() in the balancer. All log lines should 
> correctly indicate from which storage source the blocks are being moved, to 
> avoid ambiguity. Right now one of the log lines is incorrect: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java#L555]
>  which indicates that storage blocks are moved from underUtilized to 
> aboveAvgUtilized nodes, while it is actually the other way around in the code.






[jira] [Work logged] (HDFS-16186) Datanode kicks out hard disk logic optimization

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16186?focusedWorklogId=646443=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646443
 ]

ASF GitHub Bot logged work on HDFS-16186:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 19:21
Start Date: 03/Sep/21 19:21
Worklog Time Spent: 10m 
  Work Description: prasad-acit commented on pull request #3334:
URL: https://github.com/apache/hadoop/pull/3334#issuecomment-912758502


   > @jianghuazhu Hello, I’m a novice. I’m not sure whether the patch failure is 
related to my code; can you help me?
   
   The checkstyle issues are caused by the new code. Take a look at the report - 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3334/7/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt
   
   You can run the failed tests locally with & without the patch changes; they 
look like they are impacted by the patch.
   




Issue Time Tracking
---

Worklog Id: (was: 646443)
Time Spent: 1h 20m  (was: 1h 10m)

> Datanode kicks out hard disk logic optimization
> ---
>
> Key: HDFS-16186
> URL: https://issues.apache.org/jira/browse/HDFS-16186
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.1.2
> Environment: In the Hadoop cluster, a hard disk in one of the DataNodes 
> developed a problem, but the HDFS DataNode did not kick out the bad disk in 
> time, causing that DataNode to become a slow node
>Reporter: yanbin.zhang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> 2021-08-24 08:56:10,456 WARN datanode.DataNode 
> (BlockSender.java:readChecksum(681)) - Could not read or failed to verify 
> checksum for data at offset 113115136 for block 
> BP-1801371083-x.x.x.x-1603704063698:blk_5635828768_4563943709
> java.io.IOException: Input/output error
>  at java.io.FileInputStream.readBytes(Native Method)
>  at java.io.FileInputStream.read(FileInputStream.java:255)
>  at 
> org.apache.hadoop.hdfs.server.datanode.FileIoProvider$WrappedFileInputStream.read(FileIoProvider.java:876)
>  at java.io.FilterInputStream.read(FilterInputStream.java:133)
>  at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
>  at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
>  at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
>  at java.io.DataInputStream.read(DataInputStream.java:149)
>  at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:210)
>  at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.ReplicaInputStreams.readChecksumFully(ReplicaInputStreams.java:90)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.readChecksum(BlockSender.java:679)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:588)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:803)
>  at 
> org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:750)
>  at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.scanBlock(VolumeScanner.java:448)
>  at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.runLoop(VolumeScanner.java:558)
>  at 
> org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:633)
> 2021-08-24 08:56:11,121 WARN datanode.VolumeScanner 
> (VolumeScanner.java:handle(292)) - Reporting bad 
> BP-1801371083-x.x.x.x-1603704063698:blk_5635828768_4563943709 on 
> /data11/hdfs/data
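
As a purely hypothetical illustration of the "kick out the bad disk sooner" 
behaviour this issue asks for (the class name, threshold, and callback below are 
assumptions, not the actual DataNode/VolumeScanner code or the patch under 
review), the idea can be sketched as tracking I/O errors per volume and taking 
the volume out of service once a small threshold is crossed, instead of letting 
a flaky disk keep serving reads and slowing the whole node down.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicInteger;

    // Hypothetical sketch of the proposed optimization -- not the real code.
    public class VolumeFailureTrackerSketch {
      private static final int MAX_IO_ERRORS_BEFORE_REMOVAL = 3;  // assumed threshold

      private final Map<String, AtomicInteger> ioErrorsPerVolume = new ConcurrentHashMap<>();

      /** Called whenever a read or checksum verification on a volume fails. */
      public boolean onIoError(String volumePath) {
        int errors = ioErrorsPerVolume
            .computeIfAbsent(volumePath, v -> new AtomicInteger())
            .incrementAndGet();
        if (errors >= MAX_IO_ERRORS_BEFORE_REMOVAL) {
          // In a real DataNode this would mean marking the volume as failed and
          // removing it from service so its replicas get re-replicated elsewhere.
          System.out.println("Removing failing volume from service: " + volumePath);
          return true;
        }
        return false;
      }
    }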






[jira] [Work logged] (HDFS-16199) Resolve log placeholders in NamenodeBeanMetrics

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16199?focusedWorklogId=646442=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646442
 ]

ASF GitHub Bot logged work on HDFS-16199:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 19:15
Start Date: 03/Sep/21 19:15
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3362:
URL: https://github.com/apache/hadoop/pull/3362#issuecomment-912755390


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 58s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 11s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 36s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 42s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m  4s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  34m 21s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3362/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 123m 18s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3362/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3362 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 580b0cefb209 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 23b02a9d59d494705916338722ae437b4d8e3687 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 

[jira] [Work started] (HDFS-16208) [FGL] Implement Delete API with FGL

2021-09-03 Thread Renukaprasad C (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-16208 started by Renukaprasad C.
-
> [FGL] Implement Delete API with FGL
> ---
>
> Key: HDFS-16208
> URL: https://issues.apache.org/jira/browse/HDFS-16208
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Replace all global locks for file / directory deletion with FGL.
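
Purely as an illustration of the fine-grained locking (FGL) idea, and not the 
actual namenode FGL design or this patch, the change can be pictured as taking 
the write lock of only the partition that owns the affected path, instead of one 
global namespace lock for every delete (all names below are hypothetical):

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Hypothetical sketch of fine-grained locking for delete -- not the real code.
    public class FineGrainedDeleteSketch {
      private static final int NUM_PARTITIONS = 16;  // assumed partition count

      private final ConcurrentHashMap<Integer, ReentrantReadWriteLock> partitionLocks =
          new ConcurrentHashMap<>();

      private ReentrantReadWriteLock lockFor(String path) {
        int partition = Math.floorMod(path.hashCode(), NUM_PARTITIONS);
        return partitionLocks.computeIfAbsent(partition, p -> new ReentrantReadWriteLock());
      }

      public void delete(String path) {
        ReentrantReadWriteLock lock = lockFor(path);
        lock.writeLock().lock();       // only this partition is blocked;
        try {                          // deletes in other partitions proceed in parallel
          // ... remove the inode(s) under 'path' ...
        } finally {
          lock.writeLock().unlock();
        }
      }
    }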






[jira] [Work logged] (HDFS-16191) [FGL] Fix FSImage loading issues on dynamic partitions

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16191?focusedWorklogId=646434=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646434
 ]

ASF GitHub Bot logged work on HDFS-16191:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 18:40
Start Date: 03/Sep/21 18:40
Worklog Time Spent: 10m 
  Work Description: prasad-acit commented on pull request #3351:
URL: https://github.com/apache/hadoop/pull/3351#issuecomment-912736057


   @shvachko In org.apache.hadoop.hdfs.server.namenode.INode#indexOf(), the index 
is calculated based on the static partition count. Will it have any impact on 
dynamic partitions? I couldn't get this part; please correct me if I am wrong.




Issue Time Tracking
---

Worklog Id: (was: 646434)
Time Spent: 0.5h  (was: 20m)

> [FGL] Fix FSImage loading issues on dynamic partitions
> --
>
> Key: HDFS-16191
> URL: https://issues.apache.org/jira/browse/HDFS-16191
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When new partitions get added into PartitionGSet, the iterator does not 
> consider them; it always iterates over the static partition count (see the 
> sketch after the log excerpt below). This leads to a flood of warn messages 
> such as the ones below.
> 2021-08-28 03:23:19,420 WARN namenode.FSImageFormatPBINode: Fail to find 
> inode 139780 when saving the leases.
> 2021-08-28 03:23:19,420 WARN namenode.FSImageFormatPBINode: Fail to find 
> inode 139781 when saving the leases.
> 2021-08-28 03:23:19,420 WARN namenode.FSImageFormatPBINode: Fail to find 
> inode 139784 when saving the leases.
> 2021-08-28 03:23:19,420 WARN namenode.FSImageFormatPBINode: Fail to find 
> inode 139785 when saving the leases.
> 2021-08-28 03:23:19,420 WARN namenode.FSImageFormatPBINode: Fail to find 
> inode 139786 when saving the leases.
> 2021-08-28 03:23:19,420 WARN namenode.FSImageFormatPBINode: Fail to find 
> inode 139788 when saving the leases.
> 2021-08-28 03:23:19,421 WARN namenode.FSImageFormatPBINode: Fail to find 
> inode 139789 when saving the leases.
> 2021-08-28 03:23:19,421 WARN namenode.FSImageFormatPBINode: Fail to find 
> inode 139790 when saving the leases.
> 2021-08-28 03:23:19,421 WARN namenode.FSImageFormatPBINode: Fail to find 
> inode 139791 when saving the leases.
> 2021-08-28 03:23:19,421 WARN namenode.FSImageFormatPBINode: Fail to find 
> inode 139793 when saving the leases.
> 2021-08-28 03:23:19,421 WARN namenode.FSImageFormatPBINode: Fail to find 
> inode 139795 when saving the leases.
> 2021-08-28 03:23:19,422 WARN namenode.FSImageFormatPBINode: Fail to find 
> inode 139796 when saving the leases.
> 2021-08-28 03:23:19,422 WARN namenode.FSImageFormatPBINode: Fail to find 
> inode 139797 when saving the leases.
> 2021-08-28 03:23:19,422 WARN namenode.FSImageFormatPBINode: Fail to find 
> inode 139800 when saving the leases.
> 2021-08-28 03:23:19,422 WARN namenode.FSImageFormatPBINode: Fail to find 
> inode 139801 when saving the leases.
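
A simplified, hypothetical model of the iterator problem described above (not 
the real PartitionGSet code): when the iterator is bound to a fixed partition 
count captured up front, entries living in partitions added later are never 
visited, which is why the FSImage saver cannot find those inodes. Iterating over 
whatever partitions exist at iteration time avoids that.

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    // Illustrative model only -- partition contents stand in for inode ids.
    public class PartitionIterationSketch {
      static final int STATIC_PARTITION_COUNT = 4;      // count captured at startup
      static final List<List<Long>> partitions = new ArrayList<>();

      /** Buggy behaviour: only the first STATIC_PARTITION_COUNT partitions are seen. */
      static Iterator<Long> staticCountIterator() {
        List<Long> visible = new ArrayList<>();
        for (int i = 0; i < STATIC_PARTITION_COUNT && i < partitions.size(); i++) {
          visible.addAll(partitions.get(i));
        }
        return visible.iterator();
      }

      /** Fixed behaviour: iterate whatever partitions exist at iteration time. */
      static Iterator<Long> allPartitionsIterator() {
        List<Long> visible = new ArrayList<>();
        for (List<Long> partition : partitions) {
          visible.addAll(partition);
        }
        return visible.iterator();
      }
    }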






[jira] [Work logged] (HDFS-16188) RBF: Router to support resolving monitored namenodes with DNS

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16188?focusedWorklogId=646403=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646403
 ]

ASF GitHub Bot logged work on HDFS-16188:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 17:26
Start Date: 03/Sep/21 17:26
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3346:
URL: https://github.com/apache/hadoop/pull/3346#issuecomment-912694724


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 28s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 10s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m  7s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 51s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 55s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   4m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   9m 55s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  22m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m 22s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 52s |  |  root: The patch generated 
0 new + 50 unchanged - 1 fixed = 50 total (was 51)  |
   | +1 :green_heart: |  mvnsite  |   5m  0s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   3m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   5m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |  11m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 54s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 13s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 37s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 345m 45s |  |  hadoop-hdfs in the patch 
passed.  |
   | -1 :x: |  unit  |  37m 25s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  1s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 624m 43s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3346/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3346 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 296d15a8f07c 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven 

[jira] [Work logged] (HDFS-16199) Resolve log placeholders in NamenodeBeanMetrics

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16199?focusedWorklogId=646356=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646356
 ]

ASF GitHub Bot logged work on HDFS-16199:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 15:45
Start Date: 03/Sep/21 15:45
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3362:
URL: https://github.com/apache/hadoop/pull/3362#issuecomment-912633897


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  20m 14s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  1s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 56s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 40s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m  2s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  35m  2s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3362/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 146m  1s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3362/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3362 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 1b40c8ec6e4a 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 23b02a9d59d494705916338722ae437b4d8e3687 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 

[jira] [Work logged] (HDFS-16138) BlockReportProcessingThread exit doesn't print the actual stack

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16138?focusedWorklogId=646337=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646337
 ]

ASF GitHub Bot logged work on HDFS-16138:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 14:41
Start Date: 03/Sep/21 14:41
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3244:
URL: https://github.com/apache/hadoop/pull/3244#issuecomment-912590710


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  0s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 40s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 343m 39s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3244/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 436m 19s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3244/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3244 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 048a4f8fdb03 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 
16:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 9eccce744e14d29ac566447e4451d5c58e3afe15 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 

[jira] [Work logged] (HDFS-16210) Add the option of refreshCallQueue to RouterAdmin

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16210?focusedWorklogId=646335=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646335
 ]

ASF GitHub Bot logged work on HDFS-16210:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 14:34
Start Date: 03/Sep/21 14:34
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3379:
URL: https://github.com/apache/hadoop/pull/3379#issuecomment-912586249


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 41s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 49s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 19s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 41s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 18s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  20m  4s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  93m 27s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3379/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3379 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux daed326725d4 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / de4f45fa8fc6d9b4486654cb2bba0a2afb0624a1 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3379/2/testReport/ |
   | Max. process+thread count | 2634 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3379/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
 

[jira] [Commented] (HDFS-16195) Fix log message when choosing storage groups for block movement in balancer

2021-09-03 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409524#comment-17409524
 ] 

Viraj Jasani commented on HDFS-16195:
-

+1 (non-binding) for the actual changes, thanks [~preetium]. Also, I agree with 
[~prasad-acit] that the formatting needs changes.

> Fix log message when choosing storage groups for block movement in balancer
> ---
>
> Key: HDFS-16195
> URL: https://issues.apache.org/jira/browse/HDFS-16195
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: Preeti
>Priority: Major
> Attachments: HADOOP-16195.001.patch, HADOOP-16195.002.patch
>
>
> Correct the log message in line with the logic associated with
> moving blocks in chooseStorageGroups() in the balancer. All log lines should 
> correctly indicate from which storage source the blocks are being moved, to 
> avoid ambiguity. Right now one of the log lines is incorrect: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java#L555]
>  which indicates that storage blocks are moved from underUtilized to 
> aboveAvgUtilized nodes, while it is actually the other way around in the code.






[jira] [Work logged] (HDFS-16091) WebHDFS should support getSnapshotDiffReportListing

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16091?focusedWorklogId=646311=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646311
 ]

ASF GitHub Bot logged work on HDFS-16091:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 13:55
Start Date: 03/Sep/21 13:55
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3374:
URL: https://github.com/apache/hadoop/pull/3374#issuecomment-912558267


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  17m 39s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   4m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 15s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 53s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 12s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   7m  3s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  javac  |   5m 15s | 
[/results-compile-javac-hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3374/3/artifact/out/results-compile-javac-hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 3 new + 652 unchanged - 1 
fixed = 655 total (was 653)  |
   | +1 :green_heart: |  compile  |   4m 45s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  javac  |   4m 45s | 
[/results-compile-javac-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3374/3/artifact/out/results-compile-javac-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 
with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 3 new + 
631 unchanged - 1 fixed = 634 total (was 632)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 10s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3374/3/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 2 new + 258 unchanged - 0 fixed = 
260 total (was 258)  |
   | +1 :green_heart: |  mvnsite  |   2m 38s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   7m 24s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 15s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 371m 53s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3374/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |  32m  4s | 

[jira] [Work logged] (HDFS-16199) Resolve log placeholders in NamenodeBeanMetrics

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16199?focusedWorklogId=646293=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646293
 ]

ASF GitHub Bot logged work on HDFS-16199:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 13:09
Start Date: 03/Sep/21 13:09
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on pull request #3362:
URL: https://github.com/apache/hadoop/pull/3362#issuecomment-912527131


   > It is a debug log. Can we not just pass e as a whole, rather than just 
printing the message? The trace might be more helpful while debugging.
   
   Nothing wrong with that; the only reason I kept `e.getMessage()` is that it was 
already in place. But yes, since the error message is not getting printed anyway, 
let's keep the entire stack trace. Sounds good.
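
For reference, a minimal sketch of the two logging patterns being discussed 
(illustrative names, not the actual NamenodeBeanMetrics code): with SLF4J, an 
argument without a matching {} placeholder is silently dropped, while passing 
the Throwable itself as the last argument logs the full stack trace.

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    // Minimal illustration of the placeholder/exception logging patterns.
    public class LogPlaceholderSketch {
      private static final Logger LOG = LoggerFactory.getLogger(LogPlaceholderSketch.class);

      static void handle(Exception e) {
        // Broken: no "{}" placeholder, so the message argument is silently dropped.
        LOG.debug("Failed to fetch namenode metrics", e.getMessage());

        // Better: passing the Throwable itself as the last argument makes SLF4J
        // log the full stack trace, which is more useful when debugging.
        LOG.debug("Failed to fetch namenode metrics", e);
      }
    }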




Issue Time Tracking
---

Worklog Id: (was: 646293)
Time Spent: 1h 10m  (was: 1h)

> Resolve log placeholders in NamenodeBeanMetrics
> ---
>
> Key: HDFS-16199
> URL: https://issues.apache.org/jira/browse/HDFS-16199
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> NamenodeBeanMetrics has some missing placeholders in logs. This Jira is to 
> fix them all.






[jira] [Work logged] (HDFS-16211) Complete some descriptions related to AuthToken

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16211?focusedWorklogId=646290=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646290
 ]

ASF GitHub Bot logged work on HDFS-16211:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 13:02
Start Date: 03/Sep/21 13:02
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3380:
URL: https://github.com/apache/hadoop/pull/3380#issuecomment-912521720


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 53s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  23m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 56s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  25m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  25m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 47s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  22m 47s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 57s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 44s |  |  hadoop-auth in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 186m  1s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3380/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3380 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 4a5df42b4704 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e464f7136dbfa6bc1e3c49ff053b1493e07c3cb8 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3380/1/testReport/ |
   | Max. process+thread count | 518 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-auth U: 
hadoop-common-project/hadoop-auth |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3380/1/console |
   | versions | 

[jira] [Work logged] (HDFS-16210) Add the option of refreshCallQueue to RouterAdmin

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16210?focusedWorklogId=646258=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646258
 ]

ASF GitHub Bot logged work on HDFS-16210:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 11:14
Start Date: 03/Sep/21 11:14
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3379:
URL: https://github.com/apache/hadoop/pull/3379#issuecomment-912460294


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  37m 42s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3379/1/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 24s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3379/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 4 new + 0 
unchanged - 0 fixed = 4 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 47s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  23m 46s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 116m  7s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3379/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3379 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 0f01098e0c26 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 75b2b043a1c9b67ef2e9466a5646252a9d6b6dcb |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 

[jira] [Work started] (HDFS-16211) Complete some descriptions related to AuthToken

2021-09-03 Thread JiangHua Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-16211 started by JiangHua Zhu.
---
> Complete some descriptions related to AuthToken
> ---
>
> Key: HDFS-16211
> URL: https://issues.apache.org/jira/browse/HDFS-16211
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In AuthToken, some description information is missing.
> The purpose of this jira is to complete some descriptions related to 
> AuthToken.
> /**
>  */
> public class AuthToken implements Principal {
>   ..
> }
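
A hedged sketch of the kind of class-level description being asked for; the Javadoc wording and the simplified body below are illustrative only, not the text that was committed:

```java
import java.security.Principal;

/**
 * Illustrative description: a token representing an authenticated principal.
 * It carries the user name, the authentication type and an expiry time, and
 * can be serialized to and parsed from a string so it can travel in the
 * authentication cookie exchanged with clients.
 */
public class AuthToken implements Principal {
  private final String userName;

  public AuthToken(String userName) {
    this.userName = userName;
  }

  /** @return the name of the authenticated user. */
  @Override
  public String getName() {
    return userName;
  }
}
```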



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16091) WebHDFS should support getSnapshotDiffReportListing

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16091?focusedWorklogId=646256=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646256
 ]

ASF GitHub Bot logged work on HDFS-16091:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 11:11
Start Date: 03/Sep/21 11:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3374:
URL: https://github.com/apache/hadoop/pull/3374#issuecomment-912458277


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 45s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m  1s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   5m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 23s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 19s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   7m 40s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m  9s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 45s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  javac  |   5m 45s | 
[/results-compile-javac-hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3374/2/artifact/out/results-compile-javac-hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-hdfs-project-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 3 new + 652 unchanged - 1 
fixed = 655 total (was 653)  |
   | +1 :green_heart: |  compile  |   4m 59s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  javac  |   4m 59s | 
[/results-compile-javac-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3374/2/artifact/out/results-compile-javac-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  hadoop-hdfs-project-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 
with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 3 new + 
631 unchanged - 1 fixed = 634 total (was 632)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  9s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3374/2/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 2 new + 230 unchanged - 0 fixed = 
232 total (was 230)  |
   | +1 :green_heart: |  mvnsite  |   2m 59s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   7m 48s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 15s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 30s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 243m 35s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3374/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |  23m 43s |  |  

[jira] [Created] (HDFS-16212) Flaky when creating multiple export point

2021-09-03 Thread Minyang Ye (Jira)
Minyang Ye created HDFS-16212:
-

 Summary: Flaky when creating multiple export point
 Key: HDFS-16212
 URL: https://issues.apache.org/jira/browse/HDFS-16212
 Project: Hadoop HDFS
  Issue Type: Test
  Components: hdfs
Affects Versions: 3.3.1
Reporter: Minyang Ye


The flaky test is 
org.apache.hadoop.hdfs.nfs.nfs3.TestExportsTable#testViewFsMultipleExportPoint.

When a new nfsServer is created with several similar URIs in the config (the IP 
address and port are the same but the folder differs), the constructor cannot 
distinguish the URIs.

The test also makes assumptions about the iteration order of a set.
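
A minimal, self-contained illustration of the second point (the export paths are hypothetical, not taken from TestExportsTable): an assertion tied to HashSet iteration order is flaky, while comparing whole sets is stable.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class OrderIndependentAssertion {
  public static void main(String[] args) {
    Set<String> exports = new HashSet<>(
        Arrays.asList("/export/a", "/export/b", "/export/c"));

    // Fragile: depends on HashSet iteration order, which is unspecified and
    // can differ between JDK versions or runs.
    // assertEquals("/export/a", exports.iterator().next());

    // Robust: compare against the expected *set* of export points.
    Set<String> expected = new HashSet<>(
        Arrays.asList("/export/a", "/export/b", "/export/c"));
    System.out.println(expected.equals(exports)); // true regardless of order
  }
}
```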



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16209) Set dfs.namenode.caching.enabled to false as default

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16209?focusedWorklogId=646255=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646255
 ]

ASF GitHub Bot logged work on HDFS-16209:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 11:08
Start Date: 03/Sep/21 11:08
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #3378:
URL: https://github.com/apache/hadoop/pull/3378#issuecomment-912456375


   > As @ayushtkn said, we faced the same problem. 
[HDFS-13820](https://issues.apache.org/jira/browse/HDFS-13820) added the ability 
to disable the feature, so you can also set it to false.
   > If you change the default value, it is an incompatible change, especially 
for upgrades where the feature is in use. That does not seem like a good idea.
   
   Thanks @ferhui for your comments. 
   
   Maybe we can add a release note for this change. New users may not know that 
this feature (Centralized Cache Management) exists, yet it already runs quietly 
in the background, which does not seem very elegant. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 646255)
Time Spent: 1h 20m  (was: 1h 10m)

> Set dfs.namenode.caching.enabled to false as default
> 
>
> Key: HDFS-16209
> URL: https://issues.apache.org/jira/browse/HDFS-16209
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Namenode config:
>  dfs.namenode.write-lock-reporting-threshold-ms=50ms
>  dfs.namenode.caching.enabled=true (default)
>  
> In fact, the caching feature is not used in our cluster, but this switch is 
> turned on by default (dfs.namenode.caching.enabled=true), incurring some 
> additional write lock overhead. We counted the number of write lock warnings 
> in a log file and found that rescan cache warnings account for about 32%, 
> which greatly affects the performance of the NameNode.
> !namenode-write-lock.jpg!
>  
> We should set 'dfs.namenode.caching.enabled' to false by default and turn it 
> on only when we want to use it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16209) Set dfs.namenode.caching.enabled to false as default

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16209?focusedWorklogId=646251=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646251
 ]

ASF GitHub Bot logged work on HDFS-16209:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 10:34
Start Date: 03/Sep/21 10:34
Worklog Time Spent: 10m 
  Work Description: ferhui commented on pull request #3378:
URL: https://github.com/apache/hadoop/pull/3378#issuecomment-912438441


   As @ayushtkn said, we faced the same problem. HDFS-13820 added the ability to 
disable the feature, so you can also set it to false.
   If you change the default value, it is an incompatible change, especially for 
upgrades where the feature is in use. That does not seem like a good idea. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 646251)
Time Spent: 1h 10m  (was: 1h)

> Set dfs.namenode.caching.enabled to false as default
> 
>
> Key: HDFS-16209
> URL: https://issues.apache.org/jira/browse/HDFS-16209
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Namenode config:
>  dfs.namenode.write-lock-reporting-threshold-ms=50ms
>  dfs.namenode.caching.enabled=true (default)
>  
> In fact, the caching feature is not used in our cluster, but this switch is 
> turned on by default (dfs.namenode.caching.enabled=true), incurring some 
> additional write lock overhead. We counted the number of write lock warnings 
> in a log file and found that rescan cache warnings account for about 32%, 
> which greatly affects the performance of the NameNode.
> !namenode-write-lock.jpg!
>  
> We should set 'dfs.namenode.caching.enabled' to false by default and turn it 
> on only when we want to use it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16209) Set dfs.namenode.caching.enabled to false as default

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16209?focusedWorklogId=646247=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646247
 ]

ASF GitHub Bot logged work on HDFS-16209:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 10:16
Start Date: 03/Sep/21 10:16
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3378:
URL: https://github.com/apache/hadoop/pull/3378#issuecomment-912427687


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  12m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 59s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 27s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 31s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m  7s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 249m  5s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3378/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 345m 57s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.TestEnhancedByteBufferAccess |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestPmemCacheRecovery |
   |   | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestCacheByPmemMappableBlockLoader |
   |   | hadoop.hdfs.server.datanode.TestFsDatasetCacheRevocation |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetCache |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3378/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3378 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml markdownlint |
   | uname | Linux 8f8fcb9e3d81 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | 

[jira] [Work logged] (HDFS-16194) Simplify the code with DatanodeID#getXferAddrWithHostname

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16194?focusedWorklogId=646244=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646244
 ]

ASF GitHub Bot logged work on HDFS-16194:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 10:05
Start Date: 03/Sep/21 10:05
Worklog Time Spent: 10m 
  Work Description: tasanuma commented on pull request #3354:
URL: https://github.com/apache/hadoop/pull/3354#issuecomment-912420849


   @Hexiaoqiao Could you review it again, and merge it?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 646244)
Time Spent: 1h 50m  (was: 1h 40m)

> Simplify the code with DatanodeID#getXferAddrWithHostname   
> 
>
> Key: HDFS-16194
> URL: https://issues.apache.org/jira/browse/HDFS-16194
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Simplify the code with DatanodeID#getXferAddrWithHostname.
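
A hedged sketch of the simplification the title refers to; the call sites below are hypothetical, the real ones are in the pull request.

```java
import org.apache.hadoop.hdfs.protocol.DatanodeID;

public class XferAddrExample {
  // Before: hostname and transfer port concatenated by hand at each call site.
  static String handRolled(DatanodeID dn) {
    return dn.getHostName() + ":" + dn.getXferPort();
  }

  // After: the single accessor the issue proposes to reuse.
  static String simplified(DatanodeID dn) {
    return dn.getXferAddrWithHostname();
  }
}
```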



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16211) Complete some descriptions related to AuthToken

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16211:
--
Labels: pull-request-available  (was: )

> Complete some descriptions related to AuthToken
> ---
>
> Key: HDFS-16211
> URL: https://issues.apache.org/jira/browse/HDFS-16211
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In AuthToken, some description information is missing.
> The purpose of this jira is to complete some descriptions related to 
> AuthToken.
> /**
>  */
> public class AuthToken implements Principal {
>   ..
> }



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16211) Complete some descriptions related to AuthToken

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16211?focusedWorklogId=646240=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646240
 ]

ASF GitHub Bot logged work on HDFS-16211:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 09:55
Start Date: 03/Sep/21 09:55
Worklog Time Spent: 10m 
  Work Description: jianghuazhu opened a new pull request #3380:
URL: https://github.com/apache/hadoop/pull/3380


   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 646240)
Remaining Estimate: 0h
Time Spent: 10m

> Complete some descriptions related to AuthToken
> ---
>
> Key: HDFS-16211
> URL: https://issues.apache.org/jira/browse/HDFS-16211
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In AuthToken, some description information is missing.
> The purpose of this jira is to complete some descriptions related to 
> AuthToken.
> /**
>  */
> public class AuthToken implements Principal {
>   ..
> }



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16204) Improve FSDirEncryptionZoneOp related parameter comments

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16204?focusedWorklogId=646239=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646239
 ]

ASF GitHub Bot logged work on HDFS-16204:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 09:50
Start Date: 03/Sep/21 09:50
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on pull request #3368:
URL: https://github.com/apache/hadoop/pull/3368#issuecomment-912411935


   Thanks @ayushtkn  for the comment.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 646239)
Time Spent: 50m  (was: 40m)

> Improve FSDirEncryptionZoneOp related parameter comments
> 
>
> Key: HDFS-16204
> URL: https://issues.apache.org/jira/browse/HDFS-16204
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> In FSDirEncryptionZoneOp, some parameter comments are too brief to be easily 
> understood. Improving them is the purpose of this jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16194) Simplify the code with DatanodeID#getXferAddrWithHostname

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16194?focusedWorklogId=646235=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646235
 ]

ASF GitHub Bot logged work on HDFS-16194:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 09:45
Start Date: 03/Sep/21 09:45
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #3354:
URL: https://github.com/apache/hadoop/pull/3354#issuecomment-912408663


   Those failed unit tests are unrelated to the change.
   
   @tasanuma Please take a look. Thank you.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 646235)
Time Spent: 1h 40m  (was: 1.5h)

> Simplify the code with DatanodeID#getXferAddrWithHostname   
> 
>
> Key: HDFS-16194
> URL: https://issues.apache.org/jira/browse/HDFS-16194
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Simplify the code with DatanodeID#getXferAddrWithHostname.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16203) Discover datanodes with unbalanced block pool usage by the standard deviation

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16203?focusedWorklogId=646232=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646232
 ]

ASF GitHub Bot logged work on HDFS-16203:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 09:35
Start Date: 03/Sep/21 09:35
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3366:
URL: https://github.com/apache/hadoop/pull/3366#issuecomment-912402783


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  jshint  |   0m  0s |  |  jshint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 57s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   4m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 21s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m  6s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   5m 35s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  3s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   4m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   4m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  5s |  |  hadoop-hdfs-project: The 
patch generated 0 new + 113 unchanged - 9 fixed = 113 total (was 122)  |
   | +1 :green_heart: |  mvnsite  |   2m  3s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   5m 42s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m  5s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 20s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 231m 52s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 346m  8s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3366/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3366 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell jshint |
   | uname | Linux b2667f9ca18e 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 58f0a2e2f44bacf4b98d236228b185df34501ae4 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 

[jira] [Assigned] (HDFS-16211) Complete some descriptions related to AuthToken

2021-09-03 Thread JiangHua Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JiangHua Zhu reassigned HDFS-16211:
---

Assignee: JiangHua Zhu

> Complete some descriptions related to AuthToken
> ---
>
> Key: HDFS-16211
> URL: https://issues.apache.org/jira/browse/HDFS-16211
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>
> In AuthToken, some description information is missing.
> The purpose of this jira is to complete some descriptions related to 
> AuthToken.
> /**
>  */
> public class AuthToken implements Principal {
>   ..
> }



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16211) Complete some descriptions related to AuthToken

2021-09-03 Thread JiangHua Zhu (Jira)
JiangHua Zhu created HDFS-16211:
---

 Summary: Complete some descriptions related to AuthToken
 Key: HDFS-16211
 URL: https://issues.apache.org/jira/browse/HDFS-16211
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: JiangHua Zhu


In AuthToken, some description information is missing.
The purpose of this jira is to complete some descriptions related to AuthToken.
/**
 */
public class AuthToken implements Principal {
  ..
}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16211) Complete some descriptions related to AuthToken

2021-09-03 Thread JiangHua Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JiangHua Zhu updated HDFS-16211:

Component/s: documentation

> Complete some descriptions related to AuthToken
> ---
>
> Key: HDFS-16211
> URL: https://issues.apache.org/jira/browse/HDFS-16211
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Reporter: JiangHua Zhu
>Priority: Major
>
> In AuthToken, some description information is missing.
> The purpose of this jira is to complete some descriptions related to 
> AuthToken.
> /**
>  */
> public class AuthToken implements Principal {
>   ..
> }



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15669) RBF: Incorrect GetQuota caused by different implementation of HashSet

2021-09-03 Thread Janus Chow (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409398#comment-17409398
 ] 

Janus Chow commented on HDFS-15669:
---

[~ayushtkn] Thanks for checking this ticket.

It's been quite a long time. I think this issue came up when we were looking 
into the failed test case "TestRouterQuota#testGetQuota". 

Java 7 is quite outdated, so the issue should no longer occur. Maybe I'll close 
this ticket then.

> RBF: Incorrect GetQuota caused by different implementation of HashSet 
> --
>
> Key: HDFS-15669
> URL: https://issues.apache.org/jira/browse/HDFS-15669
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Janus Chow
>Priority: Major
>
> In the Quota#getQuotaUsage method, the result can differ across versions of 
> Java. 
> The call chain is as follows:
> {code:java}
> Quota#getQuotaUsage
>   - Quota#getValidQuotaLocations
>     - Quota#getQuotaRemoteLocations
>       - RouterQuotaManager#getPaths{code}
> In RouterQuotaManager#getPaths, the paths are stored in a _HashSet_, and in 
> Quota#getValidQuotaLocations the values from that HashSet are used to check 
> whether the paths are in a parent-child relation. So the iteration order of 
> the HashSet can affect the result of getValidQuotaLocations as follows:
> {code:java}
> // Situation.1 Locations in HashSet
> [ns0->/testdir7, ns0->/testdir7/subdir, ns1->/testdir8]
> // Situation.1 getQuota results
> {ns0->/testdir7= 10 6 100 100 , ns1->/testdir8= 10 8 100 100}
> // Situation.2 Locations in HashSet
> [ns0->/testdir7/subdir, ns1->/testdir8, ns0->/testdir7]
> // Situation.2 getQuota results
> {ns0->/testdir7= 10 8 100 100 , ns0->/testdir7/subdir= 10 6 100 100 , 
> ns1->/testdir8= 10 8 100 100 }{code}
> Situation.1 and Situation.2 occur when the underlying implementation of 
> HashSet differs, that is, one situation occurs on Java 7 and the other on 
> Java 8 and later.
> This problem can be solved by sorting the results of 
> Quota#_getQuotaRemoteLocations_, but I'm not sure if we should do it.
> {code:java}
> /**
>  * Get all quota remote locations across subclusters under given
>  * federation path.
>  * @param path Federation path.
>  * @return List of quota remote locations.
>  * @throws IOException
>  */
> private List<RemoteLocation> getQuotaRemoteLocations(String path)
>     throws IOException {
>   List<RemoteLocation> locations = new LinkedList<>();
>   RouterQuotaManager manager = this.router.getQuotaManager();
>   if (manager != null) {
>     Set<String> childrenPaths = manager.getPaths(path);
>     for (String childPath : childrenPaths) {
>       locations.addAll(rpcServer.getLocationsForPath(childPath, true, false));
>     }
>   }
> ++  Collections.sort(locations);
>   return locations;
> }
> {code}
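
To make the order dependence concrete, here is a small self-contained illustration (hypothetical paths, not the RBF code) of how the parent-child filtering described above changes with encounter order:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

public class IterationOrderDemo {
  // Keep a path only if no already-kept path is its parent.
  static List<String> keepTopLevelOnly(Collection<String> paths) {
    List<String> kept = new ArrayList<>();
    for (String p : paths) {
      boolean coveredByKept = false;
      for (String k : kept) {
        if (p.startsWith(k + "/")) {
          coveredByKept = true;
          break;
        }
      }
      if (!coveredByKept) {
        kept.add(p);
      }
    }
    return kept;
  }

  public static void main(String[] args) {
    // Same elements, two different encounter orders.
    List<String> parentFirst = Arrays.asList("/testdir7", "/testdir7/subdir");
    List<String> childFirst = Arrays.asList("/testdir7/subdir", "/testdir7");
    System.out.println(keepTopLevelOnly(parentFirst)); // [/testdir7]
    System.out.println(keepTopLevelOnly(childFirst));  // [/testdir7/subdir, /testdir7]
  }
}
```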



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16194) Simplify the code with DatanodeID#getXferAddrWithHostname

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16194?focusedWorklogId=646231=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646231
 ]

ASF GitHub Bot logged work on HDFS-16194:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 09:21
Start Date: 03/Sep/21 09:21
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3354:
URL: https://github.com/apache/hadoop/pull/3354#issuecomment-912393183


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 35s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m  0s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   4m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 12s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 54s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 13s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   7m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m  9s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   5m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 47s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   4m 47s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 38s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   7m 19s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 54s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 15s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 333m 37s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3354/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |  31m 33s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3354/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 495m 30s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.mover.TestMover |
   |   | hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3354/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3354 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 81a761efa091 4.15.0-143-generic 

[jira] [Work logged] (HDFS-16210) Add the option of refreshCallQueue to RouterAdmin

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16210?focusedWorklogId=646230=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646230
 ]

ASF GitHub Bot logged work on HDFS-16210:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 09:16
Start Date: 03/Sep/21 09:16
Worklog Time Spent: 10m 
  Work Description: symious commented on pull request #3379:
URL: https://github.com/apache/hadoop/pull/3379#issuecomment-912390205


   @goiri Could you help to review this PR?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 646230)
Time Spent: 20m  (was: 10m)

> Add the option of refreshCallQueue to RouterAdmin
> -
>
> Key: HDFS-16210
> URL: https://issues.apache.org/jira/browse/HDFS-16210
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Janus Chow
>Assignee: Janus Chow
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We enabled FairCallQueue on the RouterRpcServer, but the Router cannot 
> refresh its call queue the way the NameNode does.
> This ticket is to enable refreshCallQueue for the Router so that we don't 
> have to restart the Routers when updating FairCallQueue configurations.
>  
> The option does not refresh the call queues of the NameNodes; it only 
> refreshes the call queue of the Router itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16210) Add the option of refreshCallQueue to RouterAdmin

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16210:
--
Labels: pull-request-available  (was: )

> Add the option of refreshCallQueue to RouterAdmin
> -
>
> Key: HDFS-16210
> URL: https://issues.apache.org/jira/browse/HDFS-16210
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Janus Chow
>Assignee: Janus Chow
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We enabled FairCallQueue on the RouterRpcServer, but the Router cannot 
> refresh its call queue the way the NameNode does.
> This ticket is to enable refreshCallQueue for the Router so that we don't 
> have to restart the Routers when updating FairCallQueue configurations.
>  
> The option does not refresh the call queues of the NameNodes; it only 
> refreshes the call queue of the Router itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16210) Add the option of refreshCallQueue to RouterAdmin

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16210?focusedWorklogId=646229=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646229
 ]

ASF GitHub Bot logged work on HDFS-16210:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 09:15
Start Date: 03/Sep/21 09:15
Worklog Time Spent: 10m 
  Work Description: symious opened a new pull request #3379:
URL: https://github.com/apache/hadoop/pull/3379


   ### Description of PR
   
   We enabled FairCallQueue on the RouterRpcServer, but the Router cannot 
refresh its call queue the way the NameNode does.
   
   This ticket is to enable refreshCallQueue for the Router so that we don't 
have to restart the Routers when updating FairCallQueue configurations.
   

   The option does not refresh the call queues of the NameNodes; it only 
refreshes the call queue of the Router itself.
   
   ### How was this patch tested?
   
   Unit test
   
   ### For code changes:
   
   - [x] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 646229)
Remaining Estimate: 0h
Time Spent: 10m

> Add the option of refreshCallQueue to RouterAdmin
> -
>
> Key: HDFS-16210
> URL: https://issues.apache.org/jira/browse/HDFS-16210
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Janus Chow
>Assignee: Janus Chow
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We enabled FairCallQueue on the RouterRpcServer, but the Router cannot 
> refresh its call queue the way the NameNode does.
> This ticket is to enable refreshCallQueue for the Router so that we don't 
> have to restart the Routers when updating FairCallQueue configurations.
>  
> The option does not refresh the call queues of the NameNodes; it only 
> refreshes the call queue of the Router itself.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16196) Namesystem#completeFile method will log incorrect path information when router to access

2021-09-03 Thread lei w (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409393#comment-17409393
 ] 

lei w commented on HDFS-16196:
--

Thanks [~ayushtkn] for the comment. I may not have made it clear. The issue is 
that when the complete method is invoked on the NameNode via the router, the 
NameNode logs "/" as the file path rather than the file's real path, which is 
not conducive to troubleshooting.
When the call goes through the router, the NameNode logs the path as follows:
2021-09-03 16:01:26,838 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /  is closed by DFSClient_attempt_***
When the client calls the NameNode directly, it logs the path as follows:
2021-09-03 16:01:26,803 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /home/* /* /   is closed by DFSClient_attempt_***
So I think we can use the fileId (a parameter of the complete method) to get 
the file's real path and log that instead.
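
A hedged sketch of that idea; pathForCompleteFileLog and the lookup function are hypothetical placeholders, not existing NameNode methods.

```java
import java.util.function.LongFunction;

public class CompleteFileLogPath {
  static String pathForCompleteFileLog(String src, long fileId,
      LongFunction<String> resolvePathByFileId) {
    if (!"/".equals(src)) {
      return src;                              // the client sent the real path
    }
    String resolved = resolvePathByFileId.apply(fileId);
    return resolved != null ? resolved : src;  // fall back to what we were given
  }

  public static void main(String[] args) {
    // A call through the router passes "/" plus the file id; the log line
    // should still show the real path (the lookup below is a stand-in).
    String logged = pathForCompleteFileLog("/", 16386L,
        id -> "/home/user/data/part-0000");
    System.out.println("DIR* completeFile: " + logged + " is closed by ...");
  }
}
```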

> Namesystem#completeFile method will log incorrect path information when 
> router to access
> 
>
> Key: HDFS-16196
> URL: https://issues.apache.org/jira/browse/HDFS-16196
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lei w
>Priority: Minor
>
> The router does not send the entire path information to the NameNode, because 
> the ClientProtocol#complete method identifies the file by its fileId 
> parameter. The NameNode then logs incorrect path information. This is very 
> confusing; should we let the router pass the path information, or fix the 
> path that the NameNode logs?
> The completeFile log looks as follows:
> StateChange: DIR* completeFile: / is closed by DFSClient_NONMAPREDUC_*



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16210) Add the option of refreshCallQueue to RouterAdmin

2021-09-03 Thread Janus Chow (Jira)
Janus Chow created HDFS-16210:
-

 Summary: Add the option of refreshCallQueue to RouterAdmin
 Key: HDFS-16210
 URL: https://issues.apache.org/jira/browse/HDFS-16210
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Janus Chow
Assignee: Janus Chow


We enabled FairCallQueue on the RouterRpcServer, but the Router cannot 
refresh its call queue the way the NameNode does.

This ticket is to enable refreshCallQueue for the Router so that we don't have 
to restart the Routers when updating FairCallQueue configurations.

 

The option does not refresh the call queues of the NameNodes; it only refreshes 
the call queue of the Router itself.
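
A hedged sketch of the mechanism being extended: RefreshCallQueueProtocol is the existing single-method refresh interface already served by the NameNode, and the idea here is for the Router's RPC server to serve it as well. The class below is illustrative only, not the actual RouterRpcServer change.

```java
import org.apache.hadoop.ipc.RefreshCallQueueProtocol;

public class IllustrativeRefreshTarget implements RefreshCallQueueProtocol {
  @Override
  public void refreshCallQueue() {
    // In the real router this would re-read the FairCallQueue settings and
    // swap in a freshly built call queue without a restart.
    System.out.println("call queue rebuilt from current configuration");
  }
}
```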



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16209) Set dfs.namenode.caching.enabled to false as default

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16209?focusedWorklogId=646215=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646215
 ]

ASF GitHub Bot logged work on HDFS-16209:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 08:12
Start Date: 03/Sep/21 08:12
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #3378:
URL: https://github.com/apache/hadoop/pull/3378#issuecomment-912349488


   > [HDFS-13820](https://issues.apache.org/jira/browse/HDFS-13820) added this 
configuration to disable the feature, but it was still left as true by default, 
presumably for compatibility reasons.
   > Folks using the cache feature would be impacted by this change, right? They 
would now have to enable it explicitly. There was a proposal on 
[HDFS-13820](https://issues.apache.org/jira/browse/HDFS-13820)
   > 
   > ```
   > Please implement a way to disable the CacheReplicationMonitor class if 
there are no paths specified. Adding the first cached path to the NameNode 
should kick off the CacheReplicationMonitor and when the last one is deleted, 
the CacheReplicationMonitor should be disabled again.
   > ```
   > 
   > Is something like this possible?
   
   Thanks @ayushtkn for your comments. 
   
   I have also seen 
[HDFS-13820](https://issues.apache.org/jira/browse/HDFS-13820), but that 
behaviour (automatically enabling or disabling the monitor) is not currently 
implemented. New users may not know that this feature (Centralized Cache 
Management) exists, yet it already runs quietly in the background and incurs 
performance overhead.
   
   IMO, if we need to use this feature, it makes sense to turn it on and 
specify the path. What do you think?
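
A hedged sketch of pinning the switch explicitly so behaviour does not depend on whichever default ships; the property name comes from the issue, and setting it in code is equivalent to setting it in hdfs-site.xml.

```java
import org.apache.hadoop.conf.Configuration;

public class CachingFlagExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Same effect as declaring the property in hdfs-site.xml:
    //   <name>dfs.namenode.caching.enabled</name> <value>false</value>
    conf.setBoolean("dfs.namenode.caching.enabled", false);
    System.out.println(conf.getBoolean("dfs.namenode.caching.enabled", true));
  }
}
```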


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 646215)
Time Spent: 50m  (was: 40m)

> Set dfs.namenode.caching.enabled to false as default
> 
>
> Key: HDFS-16209
> URL: https://issues.apache.org/jira/browse/HDFS-16209
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Namenode config:
>  dfs.namenode.write-lock-reporting-threshold-ms=50ms
>  dfs.namenode.caching.enabled=true (default)
>  
> In fact, the caching feature is not used in our cluster, but this switch is 
> turned on by default (dfs.namenode.caching.enabled=true), incurring some 
> additional write-lock overhead. We counted the write-lock warnings in a log 
> file and found that rescan-cache warnings account for about 32% of them, 
> which greatly affects the performance of the NameNode.
> !namenode-write-lock.jpg!
>  
> We should set 'dfs.namenode.caching.enabled' to false by default and turn it 
> on only when we want to use it.
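
For readers unfamiliar with the two settings quoted above, a minimal sketch of 
how they would be set programmatically (the keys are taken verbatim from the 
description; in practice they belong in the NameNode's hdfs-site.xml):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class NamenodeCachingConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new HdfsConfiguration();
    // Report write-lock holds longer than 50 ms (the threshold used above).
    conf.setLong("dfs.namenode.write-lock-reporting-threshold-ms", 50L);
    // The change proposed in this issue: leave caching off unless it is used.
    conf.setBoolean("dfs.namenode.caching.enabled", false);
    System.out.println(conf.get("dfs.namenode.caching.enabled"));
  }
}
{code}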



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16209) Set dfs.namenode.caching.enabled to false as default

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16209?focusedWorklogId=646204=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646204
 ]

ASF GitHub Bot logged work on HDFS-16209:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 07:56
Start Date: 03/Sep/21 07:56
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on pull request #3378:
URL: https://github.com/apache/hadoop/pull/3378#issuecomment-912338368


   HDFS-13820 added this configuration to disable the feature, but it was still 
left as true by default, I guess due to compatibility reasons.
   Folks using the Cache feature would be impacted by this change, right? They 
would now have to enable it explicitly. There was a proposal on HDFS-13820
   ```
   Please implement a way to disable the CacheReplicationMonitor class if there 
are no paths specified. Adding the first cached path to the NameNode should 
kick off the CacheReplicationMonitor and when the last one is deleted, the 
CacheReplicationMonitor should be disabled again.
   ``` 
   Is something like this possible?
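   
   To make the quoted proposal concrete, here is a rough, self-contained sketch 
of the idea (all names are illustrative stand-ins, not the actual NameNode 
classes): start the rescan thread lazily when the first cached path is added and 
stop it when the last one is removed.
   
   ```java
   import java.util.HashSet;
   import java.util.Set;
   
   class LazyCacheMonitorSketch {
     private final Set<String> cachedPaths = new HashSet<>();
     private Thread monitor; // stands in for CacheReplicationMonitor
   
     synchronized void addCachedPath(String path) {
       if (cachedPaths.add(path) && cachedPaths.size() == 1) {
         // First cached path: kick off the monitor.
         monitor = new Thread(() -> { /* periodic rescan would go here */ });
         monitor.setDaemon(true);
         monitor.start();
       }
     }
   
     synchronized void removeCachedPath(String path) {
       if (cachedPaths.remove(path) && cachedPaths.isEmpty() && monitor != null) {
         // Last cached path removed: disable the monitor again.
         monitor.interrupt();
         monitor = null;
       }
     }
   }
   ```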


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 646204)
Time Spent: 40m  (was: 0.5h)

> Set dfs.namenode.caching.enabled to false as default
> 
>
> Key: HDFS-16209
> URL: https://issues.apache.org/jira/browse/HDFS-16209
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Namenode config:
>  dfs.namenode.write-lock-reporting-threshold-ms=50ms
>  dfs.namenode.caching.enabled=true (default)
>  
> In fact, the caching feature is not used in our cluster, but this switch is 
> turned on by default (dfs.namenode.caching.enabled=true), incurring some 
> additional write-lock overhead. We counted the write-lock warnings in a log 
> file and found that rescan-cache warnings account for about 32% of them, 
> which greatly affects the performance of the NameNode.
> !namenode-write-lock.jpg!
>  
> We should set 'dfs.namenode.caching.enabled' to false by default and turn it 
> on only when we want to use it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15669) RBF: Incorrect GetQuota caused by different implementation of HashSet

2021-09-03 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409345#comment-17409345
 ] 

Ayush Saxena commented on HDFS-15669:
-

Couldn't decode much. Do you have a test to repro this?

> RBF: Incorrect GetQuota caused by different implementation of HashSet 
> --
>
> Key: HDFS-15669
> URL: https://issues.apache.org/jira/browse/HDFS-15669
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Janus Chow
>Priority: Major
>
> In the method Quota#getQuotaUsage, the result could differ across different 
> versions of Java.
> The call chain is as follows:
> {code:java}
> Quota#getQuotaUsage
>   - Quota#getValidQuotaLocations
>     - Quota#getQuotaRemoteLocations
>       - RouterQuotaManager#getPaths
> {code}
> In RouterQuotaManager#getPaths, the paths are stored in a _HashSet_. In 
> Quota#getValidQuotaLocations, the values in the HashSet are used to check 
> whether the paths are in a parent-child relation, so the iteration order of 
> the HashSet can affect the result of getValidQuotaLocations as follows:
> {code:java}
> // Situation.1 Locations in HashSet
> [ns0->/testdir7, ns0->/testdir7/subdir, ns1->/testdir8]
> // Situation.1 getQuota results
> {ns0->/testdir7= 10 6 100 100 , ns1->/testdir8= 10 8 100 100}
> // Situation.2 Locations in HashSet
> [ns0->/testdir7/subdir, ns1->/testdir8, ns0->/testdir7]
> // Situation.2 getQuota results
> {ns0->/testdir7= 10 8 100 100 , ns0->/testdir7/subdir= 10 6 100 100 , 
> ns1->/testdir8= 10 8 100 100 }{code}
> Situation.1 and Situation.2 happen when the underlying implementation of 
> HashSet differs; that is, one of them happens on Java 7 and the other on 
> Java 8 and later.
> This problem can be solved by sorting the results of 
> Quota#_getQuotaRemoteLocations_, but I'm not sure whether we should do it.
> {code:java}
> /**
>  * Get all quota remote locations across subclusters under given
>  * federation path.
>  * @param path Federation path.
>  * @return List of quota remote locations.
>  * @throws IOException
>  */
> private List<RemoteLocation> getQuotaRemoteLocations(String path)
>     throws IOException {
>   List<RemoteLocation> locations = new LinkedList<>();
>   RouterQuotaManager manager = this.router.getQuotaManager();
>   if (manager != null) {
>     Set<String> childrenPaths = manager.getPaths(path);
>     for (String childPath : childrenPaths) {
>       locations.addAll(rpcServer.getLocationsForPath(childPath, true, false));
>     }
>   }
> ++  Collections.sort(locations);
>   return locations;
> }
> {code}
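
To see why the HashSet iteration order matters at all, here is a small, 
self-contained illustration (a deliberately simplified, order-dependent 
parent/child filter, not the actual Quota#getValidQuotaLocations code):

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

public class OrderSensitiveFilterDemo {
  // Keep a path only if none of the already-kept paths is its parent.
  static List<String> filter(Collection<String> paths) {
    List<String> kept = new ArrayList<>();
    for (String p : paths) {
      boolean parentAlreadyKept = kept.stream().anyMatch(q -> p.startsWith(q + "/"));
      if (!parentAlreadyKept) {
        kept.add(p);
      }
    }
    return kept;
  }

  public static void main(String[] args) {
    // The same locations in the two encounter orders from the description.
    List<String> order1 = Arrays.asList("/testdir7", "/testdir7/subdir", "/testdir8");
    List<String> order2 = Arrays.asList("/testdir7/subdir", "/testdir8", "/testdir7");
    System.out.println(filter(order1)); // [/testdir7, /testdir8]
    System.out.println(filter(order2)); // [/testdir7/subdir, /testdir8, /testdir7]
  }
}
{code}

Sorting the locations first (as in the ++ line above) makes the encounter order 
deterministic, which is the fix being proposed.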



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16199) Resolve log placeholders in NamenodeBeanMetrics

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16199?focusedWorklogId=646198=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646198
 ]

ASF GitHub Bot logged work on HDFS-16199:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 07:33
Start Date: 03/Sep/21 07:33
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on pull request #3362:
URL: https://github.com/apache/hadoop/pull/3362#issuecomment-912323906


   FYI @aajisaka, Thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 646198)
Time Spent: 1h  (was: 50m)

> Resolve log placeholders in NamenodeBeanMetrics
> ---
>
> Key: HDFS-16199
> URL: https://issues.apache.org/jira/browse/HDFS-16199
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> NamenodeBeanMetrics has some missing placeholders in logs. This Jira is to 
> fix them all.
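
For context, a generic, self-contained example of the kind of mismatch meant by 
"missing placeholders" (not the actual NamenodeBeanMetrics lines):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PlaceholderDemo {
  private static final Logger LOG = LoggerFactory.getLogger(PlaceholderDemo.class);

  public static void main(String[] args) {
    String metric = "FilesTotal";
    Exception e = new RuntimeException("fetch failed");
    // Missing placeholder: SLF4J silently drops the extra argument.
    LOG.warn("Failed to fetch metric", metric);
    // Fixed: the placeholder is filled in and the exception keeps its stack trace.
    LOG.warn("Failed to fetch metric {}", metric, e);
  }
}
{code}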



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15528) Not able to list encryption zone with federation

2021-09-03 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409322#comment-17409322
 ] 

Ayush Saxena commented on HDFS-15528:
-

CryptoAdmin isn't supported with ViewFs yet; adding that support is being tracked at HDFS-14178.

> Not able to list encryption zone with federation
> 
>
> Key: HDFS-15528
> URL: https://issues.apache.org/jira/browse/HDFS-15528
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: encryption, federation
>Affects Versions: 3.0.0
>Reporter: Thangamani Murugasamy
>Priority: Major
>
>  hdfs crypto -listZones
> IllegalArgumentException: 'viewfs://cluster14' is not an HDFS URI.
>  
> --
> debug log
> 20/08/12 05:53:14 DEBUG util.Shell: setsid exited with exit code 0
> 20/08/12 05:53:14 DEBUG lib.MutableMetricsFactory: field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, 
> sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[Rate of 
> successful kerberos logins and latency (milliseconds)])
> 20/08/12 05:53:14 DEBUG lib.MutableMetricsFactory: field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, 
> sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[Rate of failed 
> kerberos logins and latency (milliseconds)])
> 20/08/12 05:53:14 DEBUG lib.MutableMetricsFactory: field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, 
> sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[GetGroups])
> 20/08/12 05:53:14 DEBUG lib.MutableMetricsFactory: field private 
> org.apache.hadoop.metrics2.lib.MutableGaugeLong 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal
>  with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, 
> sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[Renewal failures 
> since startup])
> 20/08/12 05:53:14 DEBUG lib.MutableMetricsFactory: field private 
> org.apache.hadoop.metrics2.lib.MutableGaugeInt 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures 
> with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, 
> sampleName=Ops, about=, type=DEFAULT, valueName=Time, value=[Renewal failures 
> since last successful login])
> 20/08/12 05:53:14 DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group 
> related metrics
> 20/08/12 05:53:14 DEBUG security.SecurityUtil: Setting 
> hadoop.security.token.service.use_ip to true
> 20/08/12 05:53:14 DEBUG security.Groups: Creating new Groups object
> 20/08/12 05:53:14 DEBUG security.Groups: Group mapping 
> impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; 
> cacheTimeout=30; warningDeltaMs=5000
> 20/08/12 05:53:14 DEBUG security.UserGroupInformation: hadoop login
> 20/08/12 05:53:14 DEBUG security.UserGroupInformation: hadoop login commit
> 20/08/12 05:53:14 DEBUG security.UserGroupInformation: using kerberos 
> user:h...@corp.epsilon.com
> 20/08/12 05:53:14 DEBUG security.UserGroupInformation: Using user: 
> "h...@corp.epsilon.com" with name h...@corp.epsilon.com
> 20/08/12 05:53:14 DEBUG security.UserGroupInformation: User entry: 
> "h...@corp.epsilon.com"
> 20/08/12 05:53:14 DEBUG security.UserGroupInformation: UGI 
> loginUser:h...@corp.epsilon.com (auth:KERBEROS)
> 20/08/12 05:53:14 DEBUG security.UserGroupInformation: Current time is 
> 1597233194735
> 20/08/12 05:53:14 DEBUG security.UserGroupInformation: Next refresh is 
> 1597261977000
> 20/08/12 05:53:14 DEBUG core.Tracer: sampler.classes = ; loaded no samplers
> 20/08/12 05:53:14 DEBUG core.Tracer: span.receiver.classes = ; loaded no span 
> receivers
> 20/08/12 05:53:14 DEBUG fs.FileSystem: Loading filesystems
> 20/08/12 05:53:14 DEBUG fs.FileSystem: s3n:// = class 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem from 
> /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/jars/hadoop-aws-3.0.0-cdh6.2.1.jar
> 20/08/12 05:53:14 DEBUG fs.FileSystem: gs:// = class 
> com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem from 
> /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/lib/hadoop/gcs-connector-hadoop3-1.9.10-cdh6.2.1-shaded.jar
> 20/08/12 05:53:14 DEBUG fs.FileSystem: file:// = class 
> org.apache.hadoop.fs.LocalFileSystem from 
> /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/jars/hadoop-common-3.0.0-cdh6.2.1.jar
> 20/08/12 05:53:14 DEBUG fs.FileSystem: viewfs:// = class 
> org.apache.hadoop.fs.viewfs.ViewFileSystem from 
> 

[jira] [Work logged] (HDFS-16207) Remove NN logs stack trace for non-existent xattr query

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16207?focusedWorklogId=646196=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646196
 ]

ASF GitHub Bot logged work on HDFS-16207:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 07:11
Start Date: 03/Sep/21 07:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3375:
URL: https://github.com/apache/hadoop/pull/3375#issuecomment-912311956


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 45s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 54s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 10s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   4m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 17s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 33s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   6m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 56s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   5m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   4m 40s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 14s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   6m  6s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 14s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 24s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 483m  8s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3375/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 605m 49s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestViewDistributedFileSystemContract |
   |   | hadoop.hdfs.TestSnapshotCommands |
   |   | hadoop.hdfs.TestLeaseRecovery |
   |   | 
hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeHdfsFileSystemContract |
   |   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
   |   | hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS |
   |   | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
   |   | hadoop.hdfs.TestHDFSFileSystemContract |
   |   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3375/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3375 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |

[jira] [Work logged] (HDFS-16209) Set dfs.namenode.caching.enabled to false as default

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16209?focusedWorklogId=646194=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646194
 ]

ASF GitHub Bot logged work on HDFS-16209:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 07:06
Start Date: 03/Sep/21 07:06
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #3378:
URL: https://github.com/apache/hadoop/pull/3378#issuecomment-912309108


   Hi @ayushtkn, could you please also take a look? Thank you.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 646194)
Time Spent: 0.5h  (was: 20m)

> Set dfs.namenode.caching.enabled to false as default
> 
>
> Key: HDFS-16209
> URL: https://issues.apache.org/jira/browse/HDFS-16209
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Namenode config:
>  dfs.namenode.write-lock-reporting-threshold-ms=50ms
>  dfs.namenode.caching.enabled=true (default)
>  
> In fact, the caching feature is not used in our cluster, but this switch is 
> turned on by default (dfs.namenode.caching.enabled=true), incurring some 
> additional write-lock overhead. We counted the write-lock warnings in a log 
> file and found that rescan-cache warnings account for about 32% of them, 
> which greatly affects the performance of the NameNode.
> !namenode-write-lock.jpg!
>  
> We should set 'dfs.namenode.caching.enabled' to false by default and turn it 
> on only when we want to use it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16049) Display HDFS rack info in HDFS Namenode ui

2021-09-03 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-16049:

Fix Version/s: (was: 3.2.0)

> Display HDFS rack info in HDFS Namenode ui
> --
>
> Key: HDFS-16049
> URL: https://issues.apache.org/jira/browse/HDFS-16049
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ui
>Affects Versions: 3.2.0
>Reporter: Huibo Peng
>Priority: Major
>
> In the cloud environment, we use rack awareness to place data replicas in 
> different racks to prevent data loss. In order to show how DataNodes are 
> distributed across racks, we need to display rack info in the DataNode UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16069) Remove locally stored files (edit log) when NameNode becomes Standby

2021-09-03 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409312#comment-17409312
 ] 

Ayush Saxena commented on HDFS-16069:
-

Do you mean the local edits? In case of HA they have little significance; I 
think HDFS-12733 is chasing disabling them altogether in an HA setup.

> Remove locally stored files (edit log) when NameNode becomes Standby
> 
>
> Key: HDFS-16069
> URL: https://issues.apache.org/jira/browse/HDFS-16069
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.9.2
>Reporter: JiangHua Zhu
>Priority: Minor
>
> When ZKFC is working, one of the NameNodes (the Active one) will transition to 
> the Standby state. Before the state change, this NameNode has saved some files 
> (edit logs) in the directories configured by 
> dfs.namenode.edits.dir/dfs.namenode.name.dir, and they will not disappear in 
> the short term, until this NameNode becomes Active again.
> These files (edit logs) are of little significance to the cluster.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16196) Namesystem#completeFile method will log incorrect path information when router to access

2021-09-03 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409311#comment-17409311
 ] 

Ayush Saxena commented on HDFS-16196:
-

What do you mean by the entire path? The path with respect to the Router, before 
mount point resolution? If so, we cannot pass that to the NameNode; the NameNode 
is least bothered with the client logic, and someone coming through ViewFs would 
also expect similar behaviour. Logs aren't for the end client; if they are for 
debugging, the admin would be smart enough and aware of the mount points to 
decode the path.

> Namesystem#completeFile method will log incorrect path information when 
> router to access
> 
>
> Key: HDFS-16196
> URL: https://issues.apache.org/jira/browse/HDFS-16196
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: lei w
>Priority: Minor
>
> The Router does not send the entire path information to the NameNode, because 
> the ClientProtocol#complete method's parameter carries the fileId. The NameNode 
> then logs incorrect path information. This is very confusing; should we let the 
> Router pass the path information, or modify the logged path on the NameNode?
> The completeFile log looks as follows:
> StateChange: DIR* completeFile: / is closed by DFSClient_NONMAPREDUCE_*
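
As a self-contained illustration of the "modify the logged path on the NameNode" 
option (simplified names, not the real FSNamesystem code): when complete() is 
driven by a fileId, the src string the Router forwards can be just "/", so the 
log could prefer a path resolved from the fileId.

{code:java}
import java.util.Collections;
import java.util.Map;

public class CompleteFileLogSketch {
  // Hypothetical helper: prefer the path resolved from the inode id over the
  // (possibly meaningless) src string, falling back to src when unknown.
  static String completeFileLogLine(String src, long fileId, String holder,
                                    Map<Long, String> pathsByInodeId) {
    String resolved = pathsByInodeId.getOrDefault(fileId, src);
    return "DIR* completeFile: " + resolved + " is closed by " + holder;
  }

  public static void main(String[] args) {
    Map<Long, String> pathsByInodeId =
        Collections.singletonMap(16386L, "/user/foo/part-0000");
    // What effectively reaches the NameNode via the Router today: src="/" plus a fileId.
    System.out.println(
        completeFileLogLine("/", 16386L, "DFSClient_NONMAPREDUCE_1", pathsByInodeId));
  }
}
{code}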



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16195) Fix log message when choosing storage groups for block movement in balancer

2021-09-03 Thread Renukaprasad C (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409310#comment-17409310
 ] 

Renukaprasad C commented on HDFS-16195:
---

Thanks [~preetium] for the patch; the line length still exceeds the threshold, 
which you can correct.

Also, the formatter is different; please follow the Hadoop formatting.

> Fix log message when choosing storage groups for block movement in balancer
> ---
>
> Key: HDFS-16195
> URL: https://issues.apache.org/jira/browse/HDFS-16195
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: Preeti
>Priority: Major
> Attachments: HADOOP-16195.001.patch, HADOOP-16195.002.patch
>
>
> Correct the log message so that it matches the logic associated with
> moving blocks in chooseStorageGroups() in the balancer. All log lines should 
> correctly indicate from which storage source the blocks are being moved, to 
> avoid ambiguity. Right now one of the log lines is incorrect: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java#L555]
>  which says that blocks are moved from underUtilized to 
> aboveAvgUtilized nodes, while it is actually the other way around in the code.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-16202) Use constants HdfsClientConfigKeys.Failover.PREFIX instead of "dfs.client.failover."

2021-09-03 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HDFS-16202.
-
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Use constants HdfsClientConfigKeys.Failover.PREFIX instead of 
> "dfs.client.failover."
> 
>
> Key: HDFS-16202
> URL: https://issues.apache.org/jira/browse/HDFS-16202
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Weison Wei
>Assignee: Weison Wei
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16202) Use constants HdfsClientConfigKeys.Failover.PREFIX instead of "dfs.client.failover."

2021-09-03 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409308#comment-17409308
 ] 

Ayush Saxena commented on HDFS-16202:
-

Committed to trunk.

Thanx [~weisonwei] for the contribution!!!
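
A small, self-contained sketch of the kind of cleanup this change makes (the 
specific key built below is illustrative, not necessarily one of the call sites 
touched by the patch):

{code:java}
import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;

public class FailoverPrefixExample {
  public static void main(String[] args) {
    String nameservice = "ns1"; // assumed nameservice id, for illustration only
    // Before: "dfs.client.failover." + "proxy.provider." + nameservice
    // After: build the key from the shared constant instead of a raw string.
    String key = HdfsClientConfigKeys.Failover.PREFIX + "proxy.provider." + nameservice;
    System.out.println(key); // dfs.client.failover.proxy.provider.ns1
  }
}
{code}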

> Use constants HdfsClientConfigKeys.Failover.PREFIX instead of 
> "dfs.client.failover."
> 
>
> Key: HDFS-16202
> URL: https://issues.apache.org/jira/browse/HDFS-16202
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Weison Wei
>Assignee: Weison Wei
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16202) Use constants HdfsClientConfigKeys.Failover.PREFIX instead of "dfs.client.failover."

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16202?focusedWorklogId=646189=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646189
 ]

ASF GitHub Bot logged work on HDFS-16202:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 06:36
Start Date: 03/Sep/21 06:36
Worklog Time Spent: 10m 
  Work Description: ayushtkn merged pull request #3367:
URL: https://github.com/apache/hadoop/pull/3367


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 646189)
Time Spent: 20m  (was: 10m)

> Use constants HdfsClientConfigKeys.Failover.PREFIX instead of 
> "dfs.client.failover."
> 
>
> Key: HDFS-16202
> URL: https://issues.apache.org/jira/browse/HDFS-16202
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Weison Wei
>Assignee: Weison Wei
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-16202) Use constants HdfsClientConfigKeys.Failover.PREFIX instead of "dfs.client.failover."

2021-09-03 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena reassigned HDFS-16202:
---

Assignee: Weison Wei

> Use constants HdfsClientConfigKeys.Failover.PREFIX instead of 
> "dfs.client.failover."
> 
>
> Key: HDFS-16202
> URL: https://issues.apache.org/jira/browse/HDFS-16202
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Weison Wei
>Assignee: Weison Wei
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16202) Use constants HdfsClientConfigKeys.Failover.PREFIX instead of "dfs.client.failover."

2021-09-03 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409301#comment-17409301
 ] 

Ayush Saxena commented on HDFS-16202:
-

Added weisonwei as HDFS contributor & assigned the ticket

> Use constants HdfsClientConfigKeys.Failover.PREFIX instead of 
> "dfs.client.failover."
> 
>
> Key: HDFS-16202
> URL: https://issues.apache.org/jira/browse/HDFS-16202
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Weison Wei
>Assignee: Weison Wei
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16209) Set dfs.namenode.caching.enabled to false as default

2021-09-03 Thread tomscut (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17409290#comment-17409290
 ] 

tomscut commented on HDFS-16209:


I can't upload the screenshot to the wiki.

{color:#de350b}*An internal error has occurred. Please contact your 
administrator.*{color}

> Set dfs.namenode.caching.enabled to false as default
> 
>
> Key: HDFS-16209
> URL: https://issues.apache.org/jira/browse/HDFS-16209
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Namenode config:
>  dfs.namenode.write-lock-reporting-threshold-ms=50ms
>  dfs.namenode.caching.enabled=true (default)
>  
> In fact, the caching feature is not used in our cluster, but this switch is 
> turned on by default (dfs.namenode.caching.enabled=true), incurring some 
> additional write-lock overhead. We counted the write-lock warnings in a log 
> file and found that rescan-cache warnings account for about 32% of them, 
> which greatly affects the performance of the NameNode.
> !namenode-write-lock.jpg!
>  
> We should set 'dfs.namenode.caching.enabled' to false by default and turn it 
> on only when we want to use it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16209) Set dfs.namenode.caching.enabled to false as default

2021-09-03 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16209?focusedWorklogId=646187=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646187
 ]

ASF GitHub Bot logged work on HDFS-16209:
-

Author: ASF GitHub Bot
Created on: 03/Sep/21 06:10
Start Date: 03/Sep/21 06:10
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #3378:
URL: https://github.com/apache/hadoop/pull/3378#issuecomment-912284182


   @tasanuma @jojochuang @Hexiaoqiao @ferhui  Please help review the change. 
Thanks a lot.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 646187)
Time Spent: 20m  (was: 10m)

> Set dfs.namenode.caching.enabled to false as default
> 
>
> Key: HDFS-16209
> URL: https://issues.apache.org/jira/browse/HDFS-16209
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Namenode config:
>  dfs.namenode.write-lock-reporting-threshold-ms=50ms
>  dfs.namenode.caching.enabled=true (default)
>  
> In fact, the caching feature is not used in our cluster, but this switch is 
> turned on by default (dfs.namenode.caching.enabled=true), incurring some 
> additional write-lock overhead. We counted the write-lock warnings in a log 
> file and found that rescan-cache warnings account for about 32% of them, 
> which greatly affects the performance of the NameNode.
> !namenode-write-lock.jpg!
>  
> We should set 'dfs.namenode.caching.enabled' to false by default and turn it 
> on only when we want to use it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16209) Set dfs.namenode.caching.enabled to false as default

2021-09-03 Thread tomscut (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tomscut updated HDFS-16209:
---
Description: 
Namenode config:
 dfs.namenode.write-lock-reporting-threshold-ms=50ms
 dfs.namenode.caching.enabled=true (default)

 

In fact, the caching feature is not used in our cluster, but this switch is 
turned on by default (dfs.namenode.caching.enabled=true), incurring some 
additional write-lock overhead. We counted the write-lock warnings in a log file 
and found that rescan-cache warnings account for about 32% of them, which 
greatly affects the performance of the NameNode.

!namenode-write-lock.jpg!

 

We should set 'dfs.namenode.caching.enabled' to false by default and turn it on 
only when we want to use it.

  was:
Namenode config:
 dfs.namenode.write-lock-reporting-threshold-ms=50ms
 dfs.namenode.caching.enabled=true (default)

 

In fact, the caching feature is not used in our cluster, but this switch is 
turned on by default (dfs.namenode.caching.enabled=true), incurring some 
additional write-lock overhead. We counted the write-lock warnings in a log file 
and found that rescan-cache warnings account for about 32% of them, which 
greatly affects the performance of the NameNode.

!namenode-write-lock.jpg|width=713,height=82!

 

We should set 'dfs.namenode.caching.enabled' to false by default and turn it on 
only when we want to use it.


> Set dfs.namenode.caching.enabled to false as default
> 
>
> Key: HDFS-16209
> URL: https://issues.apache.org/jira/browse/HDFS-16209
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: tomscut
>Assignee: tomscut
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Namenode config:
>  dfs.namenode.write-lock-reporting-threshold-ms=50ms
>  dfs.namenode.caching.enabled=true (default)
>  
> In fact, the caching feature is not used in our cluster, but this switch is 
> turned on by default (dfs.namenode.caching.enabled=true), incurring some 
> additional write-lock overhead. We counted the write-lock warnings in a log 
> file and found that rescan-cache warnings account for about 32% of them, 
> which greatly affects the performance of the NameNode.
> !namenode-write-lock.jpg!
>  
> We should set 'dfs.namenode.caching.enabled' to false by default and turn it 
> on only when we want to use it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org