[jira] [Work logged] (HDFS-16697) Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always prevent safe mode from being turned off

2022-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16697?focusedWorklogId=796070&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-796070
 ]

ASF GitHub Bot logged work on HDFS-16697:
-

Author: ASF GitHub Bot
Created on: 28/Jul/22 13:30
Start Date: 28/Jul/22 13:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4653:
URL: https://github.com/apache/hadoop/pull/4653#issuecomment-1198144710

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 59s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 40s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 44s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 26s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4653/2/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  compile  |   0m 26s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4653/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  javac  |   0m 26s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4653/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  compile  |   0m 26s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4653/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | -1 :x: |  javac  |   0m 26s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4653/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4653/2/artifact/out/blanks-eol.txt)
 |  The patch has 3 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   1m  1s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4653/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 4 new + 8 unchanged - 
0 fixed = 12 total (was 8)  |
   | -1 :x: |  mvnsite  |   0m 27s | 
[/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR

[jira] [Work logged] (HDFS-16697) Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always prevent safe mode from being turned off

2022-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16697?focusedWorklogId=796012&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-796012
 ]

ASF GitHub Bot logged work on HDFS-16697:
-

Author: ASF GitHub Bot
Created on: 28/Jul/22 09:47
Start Date: 28/Jul/22 09:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4653:
URL: https://github.com/apache/hadoop/pull/4653#issuecomment-1197916607

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 28s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 41s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 46s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 25s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4653/1/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  compile  |   0m 26s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4653/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  javac  |   0m 26s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4653/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  compile  |   0m 26s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4653/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | -1 :x: |  javac  |   0m 26s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4653/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4653/1/artifact/out/blanks-eol.txt)
 |  The patch has 3 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   1m  2s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4653/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 6 new + 8 unchanged - 
0 fixed = 14 total (was 8)  |
   | -1 :x: |  mvnsite  |   0m 27s | 
[/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR

[jira] [Work logged] (HDFS-16697) Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always prevent safe mode from being turned off

2022-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16697?focusedWorklogId=795980&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-795980
 ]

ASF GitHub Bot logged work on HDFS-16697:
-

Author: ASF GitHub Bot
Created on: 28/Jul/22 08:13
Start Date: 28/Jul/22 08:13
Worklog Time Spent: 10m 
  Work Description: Likkey closed pull request #4649:  HDFS-16697.Randomly 
setting “dfs.namenode.resource.checked.volumes.minimum” will always prevent 
safe mode from being turned off
URL: https://github.com/apache/hadoop/pull/4649




Issue Time Tracking
---

Worklog Id: (was: 795980)
Time Spent: 1.5h  (was: 1h 20m)

> Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always 
> prevent safe mode from being turned off
> 
>
> Key: HDFS-16697
> URL: https://issues.apache.org/jira/browse/HDFS-16697
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.3
> Environment: Linux version 4.15.0-142-generic 
> (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 
> 5.4.0-6ubuntu1~16.04.12))
> java version "1.8.0_162"
> Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
>Reporter: Jingxuan Fu
>Assignee: Jingxuan Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.1.3
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> {code:java}
> 
>   dfs.namenode.resource.checked.volumes.minimum
>   1
>   
>     The minimum number of redundant NameNode storage volumes required.
>   
> {code}
> I found that when the value of 
> “dfs.namenode.resource.checked.volumes.minimum” is set greater than the total 
> number of storage volumes in the NameNode, it becomes impossible to ever turn 
> off safe mode. While in safe mode, the file system only accepts read requests 
> and rejects delete, modify and other write requests, so functionality is 
> severely limited.
> The default value of this configuration item is 1; we set it to 2 as an 
> example. After starting HDFS, the NameNode log and the client show the 
> following messages.
> {code:java}
> 2022-07-27 17:37:31,772 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: NameNode low on 
> available disk space. Already in safe mode.
> 2022-07-27 17:37:31,772 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe 
> mode is ON.
> Resources are low on NN. Please add or free up more resourcesthen turn off 
> safe mode manually. NOTE:  If you turn off safe mode before adding resources, 
> the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode 
> leave" to turn safe mode off.
> {code}
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create 
> directory /hdfsapi/test. Name node is in safe mode.
> Resources are low on NN. Please add or free up more resourcesthen turn off 
> safe mode manually. NOTE:  If you turn off safe mode before adding resources, 
> the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode 
> leave" to turn safe mode off. NamenodeHostName:192.168.1.167
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1468)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1455)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3174)
>         at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1145)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:714)
>         at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
>         at java.base/java.security.AccessController.doPrivileged(Native 
> Method)
>         at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
>         at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2916){code}
> According to the prompt, it is believed that there is not enough resource 
> space to meet the corresponding cond

[jira] [Work logged] (HDFS-16697) Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always prevent safe mode from being turned off

2022-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16697?focusedWorklogId=795979&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-795979
 ]

ASF GitHub Bot logged work on HDFS-16697:
-

Author: ASF GitHub Bot
Created on: 28/Jul/22 08:13
Start Date: 28/Jul/22 08:13
Worklog Time Spent: 10m 
  Work Description: Likkey opened a new pull request, #4653:
URL: https://github.com/apache/hadoop/pull/4653

   
   
   ### Description of PR
   Add Preconditions.checkArgument() for minimumRedundantVolumes to ensure that 
the value does not exceed the number of NameNode storage volumes; otherwise safe 
mode can never be turned off afterwards.
   
   JIRA: [HDFS-16697](https://issues.apache.org/jira/browse/HDFS-16697)
   
   ### How was this patch tested?
   We found that “dfs.namenode.resource.checked.volumes.minimum” lacks a 
condition check and an associated exception handling mechanism, which makes it 
impossible to find the root cause when a misconfiguration occurs.
   This patch adds a check of the configuration item: it throws an 
IllegalArgumentException with a detailed error message when the value of 
"dfs.namenode.resource.checked.volumes.minimum" is set greater than the number 
of NameNode storage volumes, so the misconfiguration cannot affect the 
subsequent operation of the NameNode.
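   
   For illustration, a minimal sketch of the kind of Preconditions check 
described above. This is not the actual patch: the class name, the validate() 
helper, and the way the volume count is passed in are assumptions made for the 
example.
   
   ```java
   // Hypothetical sketch (not the patched NameNode code): reject a
   // dfs.namenode.resource.checked.volumes.minimum value that exceeds the
   // number of NameNode storage volumes, instead of letting the NameNode
   // enter a safe mode that can never be left.
   import org.apache.hadoop.conf.Configuration;
   import com.google.common.base.Preconditions;

   public class CheckedVolumesMinimumValidation {
     public static void validate(Configuration conf, int volumeCount) {
       int minimumRedundantVolumes =
           conf.getInt("dfs.namenode.resource.checked.volumes.minimum", 1);
       Preconditions.checkArgument(minimumRedundantVolumes <= volumeCount,
           "dfs.namenode.resource.checked.volumes.minimum (= %s) must not "
               + "exceed the number of NameNode storage volumes (= %s)",
           minimumRedundantVolumes, volumeCount);
     }
   }
   ```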




Issue Time Tracking
---

Worklog Id: (was: 795979)
Time Spent: 1h 20m  (was: 1h 10m)

> Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always 
> prevent safe mode from being turned off
> 
>
> Key: HDFS-16697
> URL: https://issues.apache.org/jira/browse/HDFS-16697
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.3
> Environment: Linux version 4.15.0-142-generic 
> (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 
> 5.4.0-6ubuntu1~16.04.12))
> java version "1.8.0_162"
> Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
>Reporter: Jingxuan Fu
>Assignee: Jingxuan Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.1.3
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {code:java}
> 
>   dfs.namenode.resource.checked.volumes.minimum
>   1
>   
>     The minimum number of redundant NameNode storage volumes required.
>   
> {code}
> I found that when the value of 
> “dfs.namenode.resource.checked.volumes.minimum” is set greater than the total 
> number of storage volumes in the NameNode, it becomes impossible to ever turn 
> off safe mode. While in safe mode, the file system only accepts read requests 
> and rejects delete, modify and other write requests, so functionality is 
> severely limited.
> The default value of this configuration item is 1; we set it to 2 as an 
> example. After starting HDFS, the NameNode log and the client show the 
> following messages.
> {code:java}
> 2022-07-27 17:37:31,772 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: NameNode low on 
> available disk space. Already in safe mode.
> 2022-07-27 17:37:31,772 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe 
> mode is ON.
> Resources are low on NN. Please add or free up more resourcesthen turn off 
> safe mode manually. NOTE:  If you turn off safe mode before adding resources, 
> the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode 
> leave" to turn safe mode off.
> {code}
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create 
> directory /hdfsapi/test. Name node is in safe mode.
> Resources are low on NN. Please add or free up more resourcesthen turn off 
> safe mode manually. NOTE:  If you turn off safe mode before adding resources, 
> the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode 
> leave" to turn safe mode off. NamenodeHostName:192.168.1.167
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1468)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1455)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3174)
>         at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1145)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:714)
>         at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod

[jira] [Work logged] (HDFS-16697) Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always prevent safe mode from being turned off

2022-07-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16697?focusedWorklogId=795963&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-795963
 ]

ASF GitHub Bot logged work on HDFS-16697:
-

Author: ASF GitHub Bot
Created on: 28/Jul/22 07:31
Start Date: 28/Jul/22 07:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4649:
URL: https://github.com/apache/hadoop/pull/4649#issuecomment-1197772721

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  0s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 52s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 40s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  compile  |   1m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  trunk passed with JDK 
Private Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m  8s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 51s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4649/1/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  compile  |   0m 55s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4649/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  javac  |   0m 55s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4649/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-11.0.15+10-Ubuntu-0ubuntu0.20.04.1.  |
   | -1 :x: |  compile  |   0m 50s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4649/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | -1 :x: |  javac  |   0m 50s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4649/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4649/1/artifact/out/blanks-eol.txt)
 |  The patch has 3 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   1m  1s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4649/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 4 new + 2 unchanged - 
0 fixed = 6 total (was 2)  |
   | -1 :x: |  mvnsite 

[jira] [Work logged] (HDFS-16697) Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always prevent safe mode from being turned off

2022-07-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16697?focusedWorklogId=795943&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-795943
 ]

ASF GitHub Bot logged work on HDFS-16697:
-

Author: ASF GitHub Bot
Created on: 28/Jul/22 05:52
Start Date: 28/Jul/22 05:52
Worklog Time Spent: 10m 
  Work Description: Likkey closed pull request #4641: HDFS-16697.Randomly 
setting “dfs.namenode.resource.checked.volumes.minimum” will always prevent 
safe mode from being turned off
URL: https://github.com/apache/hadoop/pull/4641




Issue Time Tracking
---

Worklog Id: (was: 795943)
Time Spent: 1h  (was: 50m)

> Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always 
> prevent safe mode from being turned off
> 
>
> Key: HDFS-16697
> URL: https://issues.apache.org/jira/browse/HDFS-16697
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.3
> Environment: Linux version 4.15.0-142-generic 
> (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 
> 5.4.0-6ubuntu1~16.04.12))
> java version "1.8.0_162"
> Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
>Reporter: Jingxuan Fu
>Assignee: Jingxuan Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.1.3
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code:java}
> 
>   dfs.namenode.resource.checked.volumes.minimum
>   1
>   
>     The minimum number of redundant NameNode storage volumes required.
>   
> {code}
> I found that when the value of 
> “dfs.namenode.resource.checked.volumes.minimum” is set greater than the total 
> number of storage volumes in the NameNode, it becomes impossible to ever turn 
> off safe mode. While in safe mode, the file system only accepts read requests 
> and rejects delete, modify and other write requests, so functionality is 
> severely limited.
> The default value of this configuration item is 1; we set it to 2 as an 
> example. After starting HDFS, the NameNode log and the client show the 
> following messages.
> {code:java}
> 2022-07-27 17:37:31,772 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: NameNode low on 
> available disk space. Already in safe mode.
> 2022-07-27 17:37:31,772 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe 
> mode is ON.
> Resources are low on NN. Please add or free up more resourcesthen turn off 
> safe mode manually. NOTE:  If you turn off safe mode before adding resources, 
> the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode 
> leave" to turn safe mode off.
> {code}
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create 
> directory /hdfsapi/test. Name node is in safe mode.
> Resources are low on NN. Please add or free up more resourcesthen turn off 
> safe mode manually. NOTE:  If you turn off safe mode before adding resources, 
> the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode 
> leave" to turn safe mode off. NamenodeHostName:192.168.1.167
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1468)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1455)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3174)
>         at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1145)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:714)
>         at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
>         at java.base/java.security.AccessController.doPrivileged(Native 
> Method)
>         at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
>         at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2916){code}
> According to the prompt, it is believed that there is not enough resource 
> space to meet the corresponding conditions t

[jira] [Work logged] (HDFS-16697) Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always prevent safe mode from being turned off

2022-07-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16697?focusedWorklogId=795942&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-795942
 ]

ASF GitHub Bot logged work on HDFS-16697:
-

Author: ASF GitHub Bot
Created on: 28/Jul/22 05:51
Start Date: 28/Jul/22 05:51
Worklog Time Spent: 10m 
  Work Description: Likkey opened a new pull request, #4649:
URL: https://github.com/apache/hadoop/pull/4649

   
   
   
   
   ### Description of PR
   Add Preconditions.checkArgument() for minimumRedundantVolumes to ensure that 
the value does not exceed the number of NameNode storage volumes; otherwise safe 
mode can never be turned off afterwards.
   
   JIRA: [HDFS-16697](https://issues.apache.org/jira/browse/HDFS-16697)
   
   ### How was this patch tested?
   We found that “dfs.namenode.resource.checked.volumes.minimum” lacks a 
condition check and an associated exception handling mechanism, which makes it 
impossible to find the root cause when a misconfiguration occurs.
   This patch adds a check of the configuration item: it throws an 
IllegalArgumentException with a detailed error message when the value of 
"dfs.namenode.resource.checked.volumes.minimum" is set greater than the number 
of NameNode storage volumes, so the misconfiguration cannot affect the 
subsequent operation of the NameNode.
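   
   Illustration only: a JUnit-style sketch of how the reported misconfiguration 
could be exercised. It uses the hypothetical validate() helper from the sketch 
in the earlier comment rather than the real NameNode classes, so the names and 
signatures here are assumptions, not the patch's actual test.
   
   ```java
   import static org.junit.Assert.assertThrows;

   import org.apache.hadoop.conf.Configuration;
   import org.junit.Test;

   public class TestCheckedVolumesMinimum {
     @Test
     public void misconfiguredMinimumIsRejected() {
       Configuration conf = new Configuration();
       // Reproduce the report: the default is 1, but it is set to 2 while the
       // NameNode only has a single storage volume.
       conf.setInt("dfs.namenode.resource.checked.volumes.minimum", 2);
       assertThrows(IllegalArgumentException.class,
           () -> CheckedVolumesMinimumValidation.validate(conf, 1));
     }
   }
   ```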
   
   
   
   




Issue Time Tracking
---

Worklog Id: (was: 795942)
Time Spent: 50m  (was: 40m)

> Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always 
> prevent safe mode from being turned off
> 
>
> Key: HDFS-16697
> URL: https://issues.apache.org/jira/browse/HDFS-16697
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.3
> Environment: Linux version 4.15.0-142-generic 
> (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 
> 5.4.0-6ubuntu1~16.04.12))
> java version "1.8.0_162"
> Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
>Reporter: Jingxuan Fu
>Assignee: Jingxuan Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.1.3
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code:java}
> 
>   dfs.namenode.resource.checked.volumes.minimum
>   1
>   
>     The minimum number of redundant NameNode storage volumes required.
>   
> {code}
> I found that when the value of 
> “dfs.namenode.resource.checked.volumes.minimum” is set greater than the total 
> number of storage volumes in the NameNode, it becomes impossible to ever turn 
> off safe mode. While in safe mode, the file system only accepts read requests 
> and rejects delete, modify and other write requests, so functionality is 
> severely limited.
> The default value of this configuration item is 1; we set it to 2 as an 
> example. After starting HDFS, the NameNode log and the client show the 
> following messages.
> {code:java}
> 2022-07-27 17:37:31,772 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: NameNode low on 
> available disk space. Already in safe mode.
> 2022-07-27 17:37:31,772 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe 
> mode is ON.
> Resources are low on NN. Please add or free up more resourcesthen turn off 
> safe mode manually. NOTE:  If you turn off safe mode before adding resources, 
> the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode 
> leave" to turn safe mode off.
> {code}
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create 
> directory /hdfsapi/test. Name node is in safe mode.
> Resources are low on NN. Please add or free up more resourcesthen turn off 
> safe mode manually. NOTE:  If you turn off safe mode before adding resources, 
> the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode 
> leave" to turn safe mode off. NamenodeHostName:192.168.1.167
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1468)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1455)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3174)
>         at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1145)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:714)
>         at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.ca

[jira] [Work logged] (HDFS-16697) Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always prevent safe mode from being turned off

2022-07-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16697?focusedWorklogId=795940&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-795940
 ]

ASF GitHub Bot logged work on HDFS-16697:
-

Author: ASF GitHub Bot
Created on: 28/Jul/22 05:41
Start Date: 28/Jul/22 05:41
Worklog Time Spent: 10m 
  Work Description: Likkey opened a new pull request, #4641:
URL: https://github.com/apache/hadoop/pull/4641

   
   
   ### Description of PR
   Add Preconditions.checkArgument() for minimumRedundantVolumes to ensure that 
the value does not exceed the number of NameNode storage volumes; otherwise 
safe mode can never be turned off afterwards.
   
   JIRA: [HDFS-16697](https://issues.apache.org/jira/browse/HDFS-16697)
   
   ### How was this patch tested?
   
   We found that “dfs.namenode.resource.checked.volumes.minimum” lacks a 
condition check and an associated exception handling mechanism, which makes it 
impossible to find the root cause when a misconfiguration occurs.
   This patch adds a check of the configuration item: it throws an 
IllegalArgumentException with a detailed error message when the value of 
"dfs.namenode.resource.checked.volumes.minimum" is set greater than the number 
of NameNode storage volumes, so the misconfiguration cannot affect the 
subsequent operation of the NameNode.




Issue Time Tracking
---

Worklog Id: (was: 795940)
Time Spent: 40m  (was: 0.5h)

> Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always 
> prevent safe mode from being turned off
> 
>
> Key: HDFS-16697
> URL: https://issues.apache.org/jira/browse/HDFS-16697
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.3
> Environment: Linux version 4.15.0-142-generic 
> (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 
> 5.4.0-6ubuntu1~16.04.12))
> java version "1.8.0_162"
> Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
>Reporter: Jingxuan Fu
>Assignee: Jingxuan Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.1.3
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {code:java}
> 
>   dfs.namenode.resource.checked.volumes.minimum
>   1
>   
>     The minimum number of redundant NameNode storage volumes required.
>   
> {code}
> I found that when the value of 
> “dfs.namenode.resource.checked.volumes.minimum” is set greater than the total 
> number of storage volumes in the NameNode, it becomes impossible to ever turn 
> off safe mode. While in safe mode, the file system only accepts read requests 
> and rejects delete, modify and other write requests, so functionality is 
> severely limited.
> The default value of this configuration item is 1; we set it to 2 as an 
> example. After starting HDFS, the NameNode log and the client show the 
> following messages.
> {code:java}
> 2022-07-27 17:37:31,772 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: NameNode low on 
> available disk space. Already in safe mode.
> 2022-07-27 17:37:31,772 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe 
> mode is ON.
> Resources are low on NN. Please add or free up more resourcesthen turn off 
> safe mode manually. NOTE:  If you turn off safe mode before adding resources, 
> the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode 
> leave" to turn safe mode off.
> {code}
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create 
> directory /hdfsapi/test. Name node is in safe mode.
> Resources are low on NN. Please add or free up more resourcesthen turn off 
> safe mode manually. NOTE:  If you turn off safe mode before adding resources, 
> the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode 
> leave" to turn safe mode off. NamenodeHostName:192.168.1.167
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1468)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1455)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3174)
>         at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1145)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:714)
>         at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(Cli

[jira] [Work logged] (HDFS-16697) Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always prevent safe mode from being turned off

2022-07-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16697?focusedWorklogId=795934&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-795934
 ]

ASF GitHub Bot logged work on HDFS-16697:
-

Author: ASF GitHub Bot
Created on: 28/Jul/22 05:28
Start Date: 28/Jul/22 05:28
Worklog Time Spent: 10m 
  Work Description: Likkey closed pull request #4641: HDFS-16697.Randomly 
setting “dfs.namenode.resource.checked.volumes.minimum” will always prevent 
safe mode from being turned off
URL: https://github.com/apache/hadoop/pull/4641




Issue Time Tracking
---

Worklog Id: (was: 795934)
Time Spent: 20m  (was: 10m)

> Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always 
> prevent safe mode from being turned off
> 
>
> Key: HDFS-16697
> URL: https://issues.apache.org/jira/browse/HDFS-16697
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.3
> Environment: Linux version 4.15.0-142-generic 
> (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 
> 5.4.0-6ubuntu1~16.04.12))
> java version "1.8.0_162"
> Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
>Reporter: Jingxuan Fu
>Assignee: Jingxuan Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.1.3
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code:java}
> 
>   dfs.namenode.resource.checked.volumes.minimum
>   1
>   
>     The minimum number of redundant NameNode storage volumes required.
>   
> {code}
> I found that when the value of 
> “dfs.namenode.resource.checked.volumes.minimum” is set greater than the total 
> number of storage volumes in the NameNode, it becomes impossible to ever turn 
> off safe mode. While in safe mode, the file system only accepts read requests 
> and rejects delete, modify and other write requests, so functionality is 
> severely limited.
> The default value of this configuration item is 1; we set it to 2 as an 
> example. After starting HDFS, the NameNode log and the client show the 
> following messages.
> {code:java}
> 2022-07-27 17:37:31,772 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: NameNode low on 
> available disk space. Already in safe mode.
> 2022-07-27 17:37:31,772 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe 
> mode is ON.
> Resources are low on NN. Please add or free up more resourcesthen turn off 
> safe mode manually. NOTE:  If you turn off safe mode before adding resources, 
> the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode 
> leave" to turn safe mode off.
> {code}
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create 
> directory /hdfsapi/test. Name node is in safe mode.
> Resources are low on NN. Please add or free up more resourcesthen turn off 
> safe mode manually. NOTE:  If you turn off safe mode before adding resources, 
> the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode 
> leave" to turn safe mode off. NamenodeHostName:192.168.1.167
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1468)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1455)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3174)
>         at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1145)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:714)
>         at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
>         at java.base/java.security.AccessController.doPrivileged(Native 
> Method)
>         at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
>         at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2916){code}
> According to the prompt, it is believed that there is not enough resource 
> space to meet the corresponding conditions

[jira] [Work logged] (HDFS-16697) Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always prevent safe mode from being turned off

2022-07-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16697?focusedWorklogId=795935&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-795935
 ]

ASF GitHub Bot logged work on HDFS-16697:
-

Author: ASF GitHub Bot
Created on: 28/Jul/22 05:28
Start Date: 28/Jul/22 05:28
Worklog Time Spent: 10m 
  Work Description: Likkey opened a new pull request, #4641:
URL: https://github.com/apache/hadoop/pull/4641

   
   
   ### Description of PR
   Add Preconditions.checkArgument() for minimumRedundantVolumes to ensure that 
the value does not exceed the number of NameNode storage volumes; otherwise 
safe mode can never be turned off afterwards.
   
   JIRA: [HDFS-16697](https://issues.apache.org/jira/browse/HDFS-16697)
   
   ### How was this patch tested?
   
   We found that “dfs.namenode.resource.checked.volumes.minimum” lacks a 
condition check and an associated exception handling mechanism, which makes it 
impossible to find the root cause when a misconfiguration occurs.
   This patch adds a check of the configuration item: it throws an 
IllegalArgumentException with a detailed error message when the value of 
"dfs.namenode.resource.checked.volumes.minimum" is set greater than the number 
of NameNode storage volumes, so the misconfiguration cannot affect the 
subsequent operation of the NameNode.




Issue Time Tracking
---

Worklog Id: (was: 795935)
Time Spent: 0.5h  (was: 20m)

> Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always 
> prevent safe mode from being turned off
> 
>
> Key: HDFS-16697
> URL: https://issues.apache.org/jira/browse/HDFS-16697
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.3
> Environment: Linux version 4.15.0-142-generic 
> (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 
> 5.4.0-6ubuntu1~16.04.12))
> java version "1.8.0_162"
> Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
>Reporter: Jingxuan Fu
>Assignee: Jingxuan Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.1.3
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {code:java}
> 
>   dfs.namenode.resource.checked.volumes.minimum
>   1
>   
>     The minimum number of redundant NameNode storage volumes required.
>   
> {code}
> I found that when the value of 
> “dfs.namenode.resource.checked.volumes.minimum” is set greater than the total 
> number of storage volumes in the NameNode, it becomes impossible to ever turn 
> off safe mode. While in safe mode, the file system only accepts read requests 
> and rejects delete, modify and other write requests, so functionality is 
> severely limited.
> The default value of this configuration item is 1; we set it to 2 as an 
> example. After starting HDFS, the NameNode log and the client show the 
> following messages.
> {code:java}
> 2022-07-27 17:37:31,772 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: NameNode low on 
> available disk space. Already in safe mode.
> 2022-07-27 17:37:31,772 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe 
> mode is ON.
> Resources are low on NN. Please add or free up more resourcesthen turn off 
> safe mode manually. NOTE:  If you turn off safe mode before adding resources, 
> the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode 
> leave" to turn safe mode off.
> {code}
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create 
> directory /hdfsapi/test. Name node is in safe mode.
> Resources are low on NN. Please add or free up more resourcesthen turn off 
> safe mode manually. NOTE:  If you turn off safe mode before adding resources, 
> the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode 
> leave" to turn safe mode off. NamenodeHostName:192.168.1.167
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1468)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1455)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3174)
>         at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1145)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:714)
>         at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(Cl

[jira] [Work logged] (HDFS-16697) Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always prevent safe mode from being turned off

2022-07-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16697?focusedWorklogId=795635&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-795635
 ]

ASF GitHub Bot logged work on HDFS-16697:
-

Author: ASF GitHub Bot
Created on: 27/Jul/22 12:30
Start Date: 27/Jul/22 12:30
Worklog Time Spent: 10m 
  Work Description: Likkey opened a new pull request, #4641:
URL: https://github.com/apache/hadoop/pull/4641

   
   
   ### Description of PR
   Add Preconditions.checkArgument() to check minimumRedundantVolumes and report 
a clear error when the value is greater than the number of NameNode storage 
volumes, so that safe mode does not become impossible to turn off afterwards.
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




Issue Time Tracking
---

Worklog Id: (was: 795635)
Remaining Estimate: 0h
Time Spent: 10m

> Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always 
> prevent safe mode from being turned off
> 
>
> Key: HDFS-16697
> URL: https://issues.apache.org/jira/browse/HDFS-16697
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.3
> Environment: Linux version 4.15.0-142-generic 
> (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 
> 5.4.0-6ubuntu1~16.04.12))
> java version "1.8.0_162"
> Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
>Reporter: Jingxuan Fu
>Assignee: Jingxuan Fu
>Priority: Major
> Fix For: 3.1.3
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:java}
> 
>   dfs.namenode.resource.checked.volumes.minimum
>   1
>   
>     The minimum number of redundant NameNode storage volumes required.
>   
> {code}
> I found that when the value of 
> “dfs.namenode.resource.checked.volumes.minimum” is set greater than the total 
> number of storage volumes in the NameNode, it becomes impossible to ever turn 
> off safe mode. While in safe mode, the file system only accepts read requests 
> and rejects delete, modify and other write requests, so functionality is 
> severely limited.
> The default value of this configuration item is 1; we set it to 2 as an 
> example. After starting HDFS, the NameNode log and the client show the 
> following messages.
> {code:java}
> 2022-07-27 17:37:31,772 WARN 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: NameNode low on 
> available disk space. Already in safe mode.
> 2022-07-27 17:37:31,772 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe 
> mode is ON.
> Resources are low on NN. Please add or free up more resourcesthen turn off 
> safe mode manually. NOTE:  If you turn off safe mode before adding resources, 
> the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode 
> leave" to turn safe mode off.
> {code}
> {code:java}
> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create 
> directory /hdfsapi/test. Name node is in safe mode.
> Resources are low on NN. Please add or free up more resourcesthen turn off 
> safe mode manually. NOTE:  If you turn off safe mode before adding resources, 
> the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode 
> leave" to turn safe mode off. NamenodeHostName:192.168.1.167
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1468)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1455)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3174)
>         at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1145)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:714)
>         at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at 
> org.apac