[jira] [Updated] (HDFS-17238) Setting the value of "dfs.blocksize" too large will cause HDFS to be unable to write to files

2023-10-26 Thread ECFuzz (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ECFuzz updated HDFS-17238:
--
Description: 
My Hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation setup.

core-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/Mutil_Component/tmp</value>
    </property>
</configuration>
{code}
hdfs-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>134217728</value>
    </property>
</configuration>
{code}
Then format the namenode and start HDFS. HDFS runs normally.
{code:java}
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs namenode -format
x(many info)
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
sbin/start-dfs.sh
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [hadoop-Standard-PC-i440FX-PIIX-1996] {code}
Finally, use dfs to place a file. 
{code:java}
bin/hdfs dfs -mkdir -p /user/hadoop
bin/hdfs dfs -mkdir input
bin/hdfs dfs -put etc/hadoop/*.xml input {code}
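A couple of stock commands can help narrow this down; the following is a diagnostic sketch rather than part of the original report, assuming the same hadoop-3.3.6 working directory.
{code:java}
# Print the effective block size the client resolves from hdfs-site.xml
bin/hdfs getconf -confKey dfs.blocksize

# Report configured and remaining capacity per datanode; a lone datanode with
# less free space than a single full block cannot accept the write
bin/hdfs dfsadmin -report
{code}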
Putting the files then fails with the following exception.
{code:java}
2023-10-19 14:56:34,603 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
/user/hadoop/input/capacity-scheduler.xml._COPYING_ could only be written to 0 
of the 1 minReplication nodes. There are 1 datanode(s) running and 1 node(s) 
are excluded in this operation.
        at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2350)
        at 
org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2989)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:912)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:595)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:621)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:589)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:573)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1227)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1094)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1017)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:3048)
        at 
org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1567)
        at org.apache.hadoop.ipc.Client.call(Client.java:1513)
        at org.apache.hadoop.ipc.Client.call(Client.java:1410)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:258)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:139)
        at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:531)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:433)
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:166)
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:158)
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:96)
        at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:362)
        at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
        at 
org.apache.hadoop.hdfs.DFSOutputStream.addBlock(DFSOutputStream.java:1088)
        at 
org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1915)
        at 
org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1717)
        at 

[jira] [Updated] (HDFS-17238) Setting the value of "dfs.blocksize" too large will cause HDFS to be unable to write to files

2023-10-26 Thread ECFuzz (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ECFuzz updated HDFS-17238:
--
Description: 
My Hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation setup.

core-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/Mutil_Component/tmp</value>
    </property>
</configuration>
{code}
hdfs-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>134217728</value>
    </property>
</configuration>
{code}
Then format the namenode and start HDFS.
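(The hdfs-site.xml above still shows the default 134217728, i.e. 128 MB. To hit the failure in the summary, dfs.blocksize would presumably be raised far beyond what the single test datanode can hold; the value below is a hypothetical illustration, not taken from the report.)
{code:java}
<property>
    <name>dfs.blocksize</name>
    <!-- hypothetical oversized value (1 TB); a single block this large cannot
         be placed on a small pseudo-distributed datanode -->
    <value>1099511627776</value>
</property>
{code}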

 

 

 

  was:
My Hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation setup.

core-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/Mutil_Component/tmp</value>
    </property>
</configuration>
{code}
hdfs-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>134217728</value>
    </property>
</configuration>
{code}




> Setting the value of "dfs.blocksize" too large will cause HDFS to be unable 
> to write to files
> -
>
> Key: HDFS-17238
> URL: https://issues.apache.org/jira/browse/HDFS-17238
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.3.6
>Reporter: ECFuzz
>Priority: Major
>
> My Hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation setup.
> core-site.xml is shown below.
> {code:java}
> <configuration>
>     <property>
>         <name>fs.defaultFS</name>
>         <value>hdfs://localhost:9000</value>
>     </property>
>     <property>
>         <name>hadoop.tmp.dir</name>
>         <value>/home/hadoop/Mutil_Component/tmp</value>
>     </property>
> </configuration>
> {code}
> hdfs-site.xml is shown below.
> {code:java}
> <configuration>
>     <property>
>         <name>dfs.replication</name>
>         <value>1</value>
>     </property>
>     <property>
>         <name>dfs.blocksize</name>
>         <value>134217728</value>
>     </property>
> </configuration>
> {code}
> Then format the namenode and start HDFS.
>  
>  
>  






[jira] [Updated] (HDFS-17069) The documentation and implementation of "dfs.blocksize" are inconsistent.

2023-10-26 Thread ECFuzz (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ECFuzz updated HDFS-17069:
--
Description: 
My Hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation setup.

core-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/Mutil_Component/tmp</value>
    </property>
</configuration>
{code}
hdfs-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>128k</value>
    </property>
</configuration>
{code}
Then format the namenode and start HDFS.
{code:java}
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs namenode -format
x(many info)
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
sbin/start-dfs.sh
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [hadoop-Standard-PC-i440FX-PIIX-1996]{code}
Finally, use dfs to put a file. Then I get a message which means 128k is less than 1M.

 
{code:java}
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir -p /user/hadoop
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir input
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -put etc/hadoop/hdfs-site.xml input
put: Specified block size is less than configured minimum value 
(dfs.namenode.fs-limits.min-block-size): 131072 < 1048576
{code}
But I find that in the documentation, dfs.blocksize can be set to values like 128k in hdfs-default.xml:
{code:java}
The default block size for new files, in bytes. You can use the following 
suffix (case insensitive): k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) 
to specify the size (such as 128k, 512m, 1g, etc.), Or provide complete size in 
bytes (such as 134217728 for 128 MB).{code}
So, should there be some issues with the documentation here? Or should users be advised to set this configuration to a value larger than 1M?
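For reference, a 128k block size is only accepted if the NameNode's enforced floor is lowered to match; a minimal hdfs-site.xml sketch, assuming the default dfs.namenode.fs-limits.min-block-size of 1048576 is overridden:
{code:java}
<property>
    <name>dfs.blocksize</name>
    <value>128k</value>
</property>
<property>
    <!-- NameNode-enforced minimum (bytes); must not exceed dfs.blocksize -->
    <name>dfs.namenode.fs-limits.min-block-size</name>
    <value>131072</value>
</property>
{code}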

 

Additionally, I start YARN and run the given MapReduce job.
{code:java}
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
sbin/start-yarn.sh 
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.6.jar grep 
input output 'dfs[a-z.]+'{code}
And the shell throws some exceptions like below.
{code:java}
2023-07-12 15:12:29,964 INFO client.DefaultNoHARMFailoverProxyProvider: 
Connecting to ResourceManager at /0.0.0.0:8032
2023-07-12 15:12:30,430 INFO mapreduce.JobResourceUploader: Disabling Erasure 
Coding for path: /tmp/hadoop-yarn/staging/hadoop/.staging/job_1689145947338_0001
2023-07-12 15:12:30,542 INFO mapreduce.JobSubmitter: Cleaning up the staging 
area /tmp/hadoop-yarn/staging/hadoop/.staging/job_1689145947338_0001
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Specified block 
size is less than configured minimum value 
(dfs.namenode.fs-limits.min-block-size): 131072 < 1048576
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2690)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2625)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:807)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:496)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:621)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:589)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:573)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1227)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1094)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1017)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:3048)
        at 
org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1567)
        at org.apache.hadoop.ipc.Client.call(Client.java:1513)
        at org.apache.hadoop.ipc.Client.call(Client.java:1410)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:258)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:139)
        at com.sun.proxy.$Proxy9.create(Unknown Source)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:383)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        

[jira] [Updated] (HDFS-17238) Setting the value of "dfs.blocksize" too large will cause HDFS to be unable to write to files

2023-10-26 Thread ECFuzz (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ECFuzz updated HDFS-17238:
--
Description: 
My Hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation setup.

core-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/Mutil_Component/tmp</value>
    </property>
</configuration>
{code}
hdfs-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>134217728</value>
    </property>
</configuration>
{code}



  was:
My Hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation setup.

core-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/Mutil_Component/tmp</value>
    </property>
</configuration>
{code}
hdfs-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>134217728</value>
    </property>
</configuration>
{code}


> Setting the value of "dfs.blocksize" too large will cause HDFS to be unable 
> to write to files
> -
>
> Key: HDFS-17238
> URL: https://issues.apache.org/jira/browse/HDFS-17238
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.3.6
>Reporter: ECFuzz
>Priority: Major
>
> My Hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation setup.
> core-site.xml is shown below.
> {code:java}
> <configuration>
>     <property>
>         <name>fs.defaultFS</name>
>         <value>hdfs://localhost:9000</value>
>     </property>
>     <property>
>         <name>hadoop.tmp.dir</name>
>         <value>/home/hadoop/Mutil_Component/tmp</value>
>     </property>
> </configuration>
> {code}
> hdfs-site.xml is shown below.
> {code:java}
> <configuration>
>     <property>
>         <name>dfs.replication</name>
>         <value>1</value>
>     </property>
>     <property>
>         <name>dfs.blocksize</name>
>         <value>134217728</value>
>     </property>
> </configuration>
> {code}






[jira] [Created] (HDFS-17238) Setting the value of "dfs.blocksize" too large will cause HDFS to be unable to write to files

2023-10-26 Thread ECFuzz (Jira)
ECFuzz created HDFS-17238:
-

 Summary: Setting the value of "dfs.blocksize" too large will cause 
HDFS to be unable to write to files
 Key: HDFS-17238
 URL: https://issues.apache.org/jira/browse/HDFS-17238
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.3.6
Reporter: ECFuzz


My Hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation setup.

core-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/Mutil_Component/tmp</value>
    </property>
</configuration>
{code}
hdfs-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>134217728</value>
    </property>
</configuration>
{code}






[jira] [Commented] (HDFS-17069) The documentation and implementation of "dfs.blocksize" are inconsistent.

2023-07-16 Thread ECFuzz (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17743610#comment-17743610
 ] 

ECFuzz commented on HDFS-17069:
---

Yes, you are right: the documentation clearly states the dependency between 
these two configuration items. So it's not a bug; I'm sorry for making such a 
mistake.
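For readers who hit the same message: the two values can be compared directly with getconf; a quick sketch, assuming a running 3.3.6 installation.
{code:java}
# The write path requires dfs.blocksize >= dfs.namenode.fs-limits.min-block-size
bin/hdfs getconf -confKey dfs.blocksize
bin/hdfs getconf -confKey dfs.namenode.fs-limits.min-block-size
{code}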

> The documentation and implementation of "dfs.blocksize" are inconsistent.
> -
>
> Key: HDFS-17069
> URL: https://issues.apache.org/jira/browse/HDFS-17069
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfs, documentation
>Affects Versions: 3.3.6
> Environment: Linux version 4.15.0-142-generic 
> (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 
> 5.4.0-6ubuntu1~16.04.12))
> java version "1.8.0_162"
> Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
>Reporter: ECFuzz
>Priority: Major
>  Labels: pull-request-available
>
> My Hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation setup.
> core-site.xml is shown below.
> {code:java}
> <configuration>
>     <property>
>         <name>fs.defaultFS</name>
>         <value>hdfs://localhost:9000</value>
>     </property>
>     <property>
>         <name>hadoop.tmp.dir</name>
>         <value>/home/hadoop/Mutil_Component/tmp</value>
>     </property>
> </configuration>
> {code}
> hdfs-site.xml is shown below.
> {code:java}
> <configuration>
>     <property>
>         <name>dfs.replication</name>
>         <value>1</value>
>     </property>
>     <property>
>         <name>dfs.blocksize</name>
>         <value>128k</value>
>     </property>
> </configuration>
> {code}
> Then format the namenode and start HDFS.
> {code:java}
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
> bin/hdfs namenode -format
> x(many info)
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
> sbin/start-dfs.sh
> Starting namenodes on [localhost]
> Starting datanodes
> Starting secondary namenodes [hadoop-Standard-PC-i440FX-PIIX-1996]{code}
> Finally, use dfs to put a file. Then I get the message which means 128k is 
> less than 1M.
>  
> {code:java}
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
> bin/hdfs dfs -mkdir -p /user/hadoop
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
> bin/hdfs dfs -mkdir input
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
> bin/hdfs dfs -put etc/hadoop/hdfs-site.xml input
> put: Specified block size is less than configured minimum value 
> (dfs.namenode.fs-limits.min-block-size): 131072 < 1048576
> {code}
> But I find that in the documentation, dfs.blocksize can be set to values 
> like 128k in hdfs-default.xml:
> {code:java}
> The default block size for new files, in bytes. You can use the following 
> suffix (case insensitive): k(kilo), m(mega), g(giga), t(tera), p(peta), 
> e(exa) to specify the size (such as 128k, 512m, 1g, etc.), Or provide 
> complete size in bytes (such as 134217728 for 128 MB).{code}
> So, should there be some issues with the documentation here? Or should users 
> be advised to set this configuration to a value larger than 1M?
>  
> Additionally, I start YARN and run the given MapReduce job.
> {code:java}
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
> sbin/start-yarn.sh 
> hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
> bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.6.jar 
> grep input output 'dfs[a-z.]+'{code}
> And the shell throws some exceptions like below.
> {code:java}
> 2023-07-12 15:12:29,964 INFO client.DefaultNoHARMFailoverProxyProvider: 
> Connecting to ResourceManager at /0.0.0.0:8032
> 2023-07-12 15:12:30,430 INFO mapreduce.JobResourceUploader: Disabling Erasure 
> Coding for path: 
> /tmp/hadoop-yarn/staging/hadoop/.staging/job_1689145947338_0001
> 2023-07-12 15:12:30,542 INFO mapreduce.JobSubmitter: Cleaning up the staging 
> area /tmp/hadoop-yarn/staging/hadoop/.staging/job_1689145947338_0001
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): Specified block 
> size is less than configured minimum value 
> (dfs.namenode.fs-limits.min-block-size): 131072 < 1048576
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2690)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2625)
>         at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:807)
>         at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:496)
>         at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at 
> 

[jira] [Updated] (HDFS-17069) The documentation and implementation of "dfs.blocksize" are inconsistent.

2023-07-12 Thread ECFuzz (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ECFuzz updated HDFS-17069:
--
Description: 
My Hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation setup.

core-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/Mutil_Component/tmp</value>
    </property>
</configuration>
{code}
hdfs-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>128k</value>
    </property>
</configuration>
{code}
Then format the namenode and start HDFS.
{code:java}
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs namenode -format
x(many info)
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
sbin/start-dfs.sh
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [hadoop-Standard-PC-i440FX-PIIX-1996]{code}
Finally, use dfs to put a file. Then I get the message which means 128k is less 
than 1M.

 
{code:java}
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir -p /user/hadoop
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir input
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -put etc/hadoop/hdfs-site.xml input
put: Specified block size is less than configured minimum value 
(dfs.namenode.fs-limits.min-block-size): 131072 < 1048576
{code}
But I find that in the documentation, dfs.blocksize can be set to values like 
128k in hdfs-default.xml:
{code:java}
The default block size for new files, in bytes. You can use the following 
suffix (case insensitive): k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) 
to specify the size (such as 128k, 512m, 1g, etc.), Or provide complete size in 
bytes (such as 134217728 for 128 MB).{code}
So, should there be some issues with the documentation here? Or should users be 
advised to set this configuration to a value larger than 1M?
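(As the excerpt notes, the suffixes are case-insensitive shorthand for byte counts, so 128m parses to the same 134217728 quoted above; a sketch of the suffixed form:)
{code:java}
<property>
    <name>dfs.blocksize</name>
    <!-- equivalent to 134217728 bytes; well above the 1048576-byte floor -->
    <value>128m</value>
</property>
{code}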

 

Additionally, I start YARN and run the given MapReduce job.
{code:java}
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
sbin/start-yarn.sh 
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.6.jar grep 
input output 'dfs[a-z.]+'{code}

And the shell throws some exceptions like below.
{code:java}
2023-07-12 15:12:29,964 INFO client.DefaultNoHARMFailoverProxyProvider: 
Connecting to ResourceManager at /0.0.0.0:8032
2023-07-12 15:12:30,430 INFO mapreduce.JobResourceUploader: Disabling Erasure 
Coding for path: /tmp/hadoop-yarn/staging/hadoop/.staging/job_1689145947338_0001
2023-07-12 15:12:30,542 INFO mapreduce.JobSubmitter: Cleaning up the staging 
area /tmp/hadoop-yarn/staging/hadoop/.staging/job_1689145947338_0001
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Specified block 
size is less than configured minimum value 
(dfs.namenode.fs-limits.min-block-size): 131072 < 1048576
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2690)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2625)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:807)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:496)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:621)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:589)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:573)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1227)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1094)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1017)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:3048)
        at 
org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1567)
        at org.apache.hadoop.ipc.Client.call(Client.java:1513)
        at org.apache.hadoop.ipc.Client.call(Client.java:1410)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:258)
        at 

[jira] [Updated] (HDFS-17069) The documentation and implementation of "dfs.blocksize" are inconsistent.

2023-07-12 Thread ECFuzz (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ECFuzz updated HDFS-17069:
--
Description: 
My Hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation setup.

core-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/Mutil_Component/tmp</value>
    </property>
</configuration>
{code}
hdfs-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>128k</value>
    </property>
</configuration>
{code}
Then format the namenode and start HDFS.
{code:java}
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs namenode -format
x(many info)
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
sbin/start-dfs.sh
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [hadoop-Standard-PC-i440FX-PIIX-1996]{code}
Finally, use dfs to put a file. Then I get the message which means 128k is less 
than 1M.

 
{code:java}
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir -p /user/hadoop
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir input
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -put etc/hadoop/hdfs-site.xml input
put: Specified block size is less than configured minimum value 
(dfs.namenode.fs-limits.min-block-size): 131072 < 1048576
{code}
But I find that in the documentation, dfs.blocksize can be set to values like 
128k in hdfs-default.xml:
{code:java}
The default block size for new files, in bytes. You can use the following 
suffix (case insensitive): k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) 
to specify the size (such as 128k, 512m, 1g, etc.), Or provide complete size in 
bytes (such as 134217728 for 128 MB).{code}
So, should there be some issues with the documentation here? Or should users be 
advised to set this configuration to a value larger than 1M?

 

Additionally, 
 

  was:
My Hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation setup.

core-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/Mutil_Component/tmp</value>
    </property>
</configuration>
{code}
hdfs-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>128k</value>
    </property>
</configuration>
{code}
Then format the namenode and start HDFS.
{code:java}
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs namenode -format
x(many info)
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
sbin/start-dfs.sh
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [hadoop-Standard-PC-i440FX-PIIX-1996]{code}
Finally, use dfs to put a file. Then I get the message which means 128k is less 
than 1M.

 
{code:java}
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir -p /user/hadoop
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir input
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -put etc/hadoop/hdfs-site.xml input
put: Specified block size is less than configured minimum value 
(dfs.namenode.fs-limits.min-block-size): 131072 < 1048576
{code}
But I find that in the documentation, dfs.blocksize can be set to values like 
128k in hdfs-default.xml:
{code:java}
The default block size for new files, in bytes. You can use the following 
suffix (case insensitive): k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) 
to specify the size (such as 128k, 512m, 1g, etc.), Or provide complete size in 
bytes (such as 134217728 for 128 MB).{code}
So, should there be some issues with the documentation here? Or should users be 
advised to set this configuration to a value larger than 1M?


> The documentation and implementation of "dfs.blocksize" are inconsistent.
> -
>
> Key: HDFS-17069
> URL: https://issues.apache.org/jira/browse/HDFS-17069
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfs, documentation
>Affects Versions: 3.3.6
> Environment: Linux version 4.15.0-142-generic 
> (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 
> 5.4.0-6ubuntu1~16.04.12))
> java version "1.8.0_162"
> Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
>Reporter: ECFuzz
>Priority: Major
>  Labels: pull-request-available
>
> My hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation.
> core-site.xml like below.
> {code:java}
> 

[jira] [Updated] (HDFS-17069) The documentation and implementation of "dfs.blocksize" are inconsistent.

2023-07-05 Thread ECFuzz (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ECFuzz updated HDFS-17069:
--
Description: 
My Hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation setup.

core-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/Mutil_Component/tmp</value>
    </property>
</configuration>
{code}
hdfs-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>128k</value>
    </property>
</configuration>
{code}
Then format the namenode and start HDFS.
{code:java}
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs namenode -format
x(many info)
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
sbin/start-dfs.sh
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [hadoop-Standard-PC-i440FX-PIIX-1996]{code}
Finally, use dfs to put a file. Then I get the message which means 128k is less 
than 1M.

 
{code:java}
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir -p /user/hadoop
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir input
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -put etc/hadoop/hdfs-site.xml input
put: Specified block size is less than configured minimum value 
(dfs.namenode.fs-limits.min-block-size): 131072 < 1048576
{code}
But I find that in the documentation, dfs.blocksize can be set to values like 
128k in hdfs-default.xml:
{code:java}
The default block size for new files, in bytes. You can use the following 
suffix (case insensitive): k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) 
to specify the size (such as 128k, 512m, 1g, etc.), Or provide complete size in 
bytes (such as 134217728 for 128 MB).{code}
So, should there be some issues with the documentation here? Or should users be 
advised to set this configuration to a value larger than 1M?

  was:
My Hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation setup.

core-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/Mutil_Component/tmp</value>
    </property>
</configuration>
{code}
hdfs-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>128k</value>
    </property>
</configuration>
{code}
Then format the namenode and start HDFS.
{code:java}

hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs namenode -format
x(many info)
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
sbin/start-dfs.sh
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [hadoop-Standard-PC-i440FX-PIIX-1996]{code}
Finally, use dfs to put a file. Then I get the message which means 128k is less 
than 1M.

 
{code:java}
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir -p /user/hadoop
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir input
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -put etc/hadoop/hdfs-site.xml input
put: Specified block size is less than configured minimum value 
(dfs.namenode.fs-limits.min-block-size): 131072 < 1048576
{code}
But I find that in the documentation, dfs.blocksize can be set to values like 
128k in hdfs-default.xml:
{code:java}
The default block size for new files, in bytes. You can use the following 
suffix (case insensitive): k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) 
to specify the size (such as 128k, 512m, 1g, etc.), Or provide complete size in 
bytes (such as 134217728 for 128 MB).{code}
So, should there be some issues with the documentation here? Or should users be 
advised to set this configuration to a value larger than 1M?


> The documentation and implementation of "dfs.blocksize" are inconsistent.
> -
>
> Key: HDFS-17069
> URL: https://issues.apache.org/jira/browse/HDFS-17069
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfs, documentation
>Affects Versions: 3.3.6
> Environment: Linux version 4.15.0-142-generic 
> (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 
> 5.4.0-6ubuntu1~16.04.12))
> java version "1.8.0_162"
> Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
>Reporter: ECFuzz
>Priority: Major
>
> My hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation.
> core-site.xml like below.
> {code:java}
> <configuration>
>     <property>
>         <name>fs.defaultFS</name>
>         <value>hdfs://localhost:9000</value>
>     </property>

[jira] [Updated] (HDFS-17069) The documentation and implementation of "dfs.blocksize" are inconsistent.

2023-07-05 Thread ECFuzz (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ECFuzz updated HDFS-17069:
--
Description: 
My Hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation setup.

core-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/Mutil_Component/tmp</value>
    </property>
</configuration>
{code}
hdfs-site.xml is shown below.
{code:java}
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>128k</value>
    </property>
</configuration>
{code}
Then format the namenode and start HDFS.
{code:java}

hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs namenode -format
x(many info)
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
sbin/start-dfs.sh
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [hadoop-Standard-PC-i440FX-PIIX-1996]{code}
Finally, use dfs to put a file. Then I get the message which means 128k is less 
than 1M.

 
{code:java}
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir -p /user/hadoop
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir input
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -put etc/hadoop/hdfs-site.xml input
put: Specified block size is less than configured minimum value 
(dfs.namenode.fs-limits.min-block-size): 131072 < 1048576
{code}
But I find that in the documentation, dfs.blocksize can be set to values like 
128k in hdfs-default.xml:
{code:java}
The default block size for new files, in bytes. You can use the following 
suffix (case insensitive): k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) 
to specify the size (such as 128k, 512m, 1g, etc.), Or provide complete size in 
bytes (such as 134217728 for 128 MB).{code}
So, should there be some issues with the documentation here? Or should users be 
advised to set this configuration to a value larger than 1M?

  was:
My Hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation setup.

`core-site.xml` is shown below.
{quote}```shell
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/Mutil_Component/tmp</value>
    </property>
</configuration>
```
{quote}
`hdfs-site.xml` is shown below.

```shell
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>128k</value>
    </property>
</configuration>
```

Then format the namenode and start HDFS.

```shell
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs namenode -format
x(many info)
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
sbin/start-dfs.sh
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [hadoop-Standard-PC-i440FX-PIIX-1996]

```

Finally, use dfs to put a file. Then I get the message which means 128k is less 
than 1M.

```shell

hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir -p /user/hadoop
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir input

hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -put etc/hadoop/hdfs-site.xml input
put: Specified block size is less than configured minimum value 
(dfs.namenode.fs-limits.min-block-size): 131072 < 1048576

```

 

But I find that in the documentation, dfs.blocksize can be set to values like 
128k in `hdfs-default.xml`:

```shell

The default block size for new files, in bytes. You can use the following 
suffix (case insensitive): k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) 
to specify the size (such as 128k, 512m, 1g, etc.), Or provide complete size in 
bytes (such as 134217728 for 128 MB).

```

So, should there be some issues with the documentation here? Or should users be 
advised to set this configuration to a value larger than 1M?


> The documentation and implementation of "dfs.blocksize" are inconsistent.
> -
>
> Key: HDFS-17069
> URL: https://issues.apache.org/jira/browse/HDFS-17069
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfs, documentation
>Affects Versions: 3.3.6
> Environment: Linux version 4.15.0-142-generic 
> (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 
> 5.4.0-6ubuntu1~16.04.12))
> java version "1.8.0_162"
> Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
>Reporter: ECFuzz
>Priority: Major
>
> My hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation.
> core-site.xml like below.
> {code:java}
> <configuration>
>     <property>
>         <name>fs.defaultFS</name>
>         

[jira] [Updated] (HDFS-17069) The documentation and implementation of "dfs.blocksize" are inconsistent.

2023-07-05 Thread ECFuzz (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ECFuzz updated HDFS-17069:
--
Description: 
My Hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation setup.

`core-site.xml` is shown below.
{quote}```shell
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/Mutil_Component/tmp</value>
    </property>
</configuration>
```
{quote}
`hdfs-site.xml` is shown below.

```shell
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>128k</value>
    </property>
</configuration>
```

Then format the namenode and start HDFS.

```shell
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs namenode -format
x(many info)
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
sbin/start-dfs.sh
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [hadoop-Standard-PC-i440FX-PIIX-1996]

```

Finally, use dfs to put a file. Then I get the message which means 128k is less 
than 1M.

```shell

hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir -p /user/hadoop
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir input

hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -put etc/hadoop/hdfs-site.xml input
put: Specified block size is less than configured minimum value 
(dfs.namenode.fs-limits.min-block-size): 131072 < 1048576

```

 

But I find that in the documentation, dfs.blocksize can be set to values like 
128k in `hdfs-default.xml`:

```shell

The default block size for new files, in bytes. You can use the following 
suffix (case insensitive): k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) 
to specify the size (such as 128k, 512m, 1g, etc.), Or provide complete size in 
bytes (such as 134217728 for 128 MB).

```

So, should there be some issues with the documentation here? Or should users be 
advised to set this configuration to a value larger than 1M?

  was:
My Hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation setup.

`core-site.xml` is shown below.

```shell
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/Mutil_Component/tmp</value>
    </property>
</configuration>
```

`hdfs-site.xml` is shown below.

```shell
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>128k</value>
    </property>
</configuration>
```

Then format the namenode and start HDFS.

```shell
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs namenode -format
x(many info)
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
sbin/start-dfs.sh
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [hadoop-Standard-PC-i440FX-PIIX-1996]

```

Finally, use dfs to put a file. Then I get the message which means 128k is less 
than 1M.

```shell

hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir -p /user/hadoop
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir input

hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -put etc/hadoop/hdfs-site.xml input
put: Specified block size is less than configured minimum value 
(dfs.namenode.fs-limits.min-block-size): 131072 < 1048576

```

 

But I find that in the documentation, dfs.blocksize can be set to values like 
128k in `hdfs-default.xml`:

```shell

The default block size for new files, in bytes. You can use the following 
suffix (case insensitive): k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) 
to specify the size (such as 128k, 512m, 1g, etc.), Or provide complete size in 
bytes (such as 134217728 for 128 MB).

```

So, should there be some issues with the documentation here? Or should users be 
advised to set this configuration to a value larger than 1M?


> The documentation and implementation of "dfs.blocksize" are inconsistent.
> -
>
> Key: HDFS-17069
> URL: https://issues.apache.org/jira/browse/HDFS-17069
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: dfs, documentation
>Affects Versions: 3.3.6
> Environment: Linux version 4.15.0-142-generic 
> (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 
> 5.4.0-6ubuntu1~16.04.12))
> java version "1.8.0_162"
> Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
>Reporter: ECFuzz
>Priority: Major
>
> My hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation.
> `core-site.xml` like below.
> {quote}```shell
> <configuration>
>     <property>
>         <name>fs.defaultFS</name>
>         

[jira] [Created] (HDFS-17069) The documentation and implementation of "dfs.blocksize" are inconsistent.

2023-07-05 Thread ECFuzz (Jira)
ECFuzz created HDFS-17069:
-

 Summary: The documentation and implementation of "dfs.blocksize" 
are inconsistent.
 Key: HDFS-17069
 URL: https://issues.apache.org/jira/browse/HDFS-17069
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: dfs, documentation
Affects Versions: 3.3.6
 Environment: Linux version 4.15.0-142-generic (buildd@lgw01-amd64-039) 
(gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.12))

java version "1.8.0_162"
Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
Reporter: ECFuzz


My Hadoop version is 3.3.6, and I use the Pseudo-Distributed Operation setup.

`core-site.xml` is shown below.

```shell
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/Mutil_Component/tmp</value>
    </property>
</configuration>
```

`hdfs-site.xml` is shown below.

```shell
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>128k</value>
    </property>
</configuration>
```

Then format the namenode and start HDFS.

```shell
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs namenode -format
x(many info)
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
sbin/start-dfs.sh
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [hadoop-Standard-PC-i440FX-PIIX-1996]

```

Finally, use dfs to put a file. Then I get the message which means 128k is less 
than 1M.

```shell

hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir -p /user/hadoop
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -mkdir input

hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ 
bin/hdfs dfs -put etc/hadoop/hdfs-site.xml input
put: Specified block size is less than configured minimum value 
(dfs.namenode.fs-limits.min-block-size): 131072 < 1048576

```

 

But I find that, in the documentation in `hdfs-default.xml`, dfs.blocksize can 
be set to values like 128k.

```shell

The default block size for new files, in bytes. You can use the following 
suffix (case insensitive): k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa) 
to specify the size (such as 128k, 512m, 1g, etc.), Or provide complete size in 
bytes (such as 134217728 for 128 MB).

```

So, is there an issue with the documentation here? Or should users be warned to 
set this configuration to a value larger than 1M?
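
To make the mismatch concrete, here is a minimal, self-contained sketch 
(illustrative only, not the Hadoop source: the parsing loosely mirrors the 
suffix handling described in `hdfs-default.xml`, and the constant mirrors the 
default 1 MiB value of dfs.namenode.fs-limits.min-block-size):

```java
public class BlockSizeMismatch {

    // Mirrors the default of dfs.namenode.fs-limits.min-block-size (1 MiB).
    static final long MIN_BLOCK_SIZE = 1048576L;

    // Simplified suffix parsing; the real client also accepts t, p and e.
    static long parseSize(String value) {
        char last = Character.toLowerCase(value.charAt(value.length() - 1));
        long multiplier;
        switch (last) {
            case 'k': multiplier = 1L << 10; break;
            case 'm': multiplier = 1L << 20; break;
            case 'g': multiplier = 1L << 30; break;
            default:  return Long.parseLong(value); // plain byte count
        }
        return Long.parseLong(value.substring(0, value.length() - 1)) * multiplier;
    }

    public static void main(String[] args) {
        long blockSize = parseSize("128k"); // 131072 bytes: valid syntax per the docs
        if (blockSize < MIN_BLOCK_SIZE) {   // but rejected by the NameNode at write time
            System.out.printf(
                "Specified block size is less than configured minimum value: %d < %d%n",
                blockSize, MIN_BLOCK_SIZE);
        }
    }
}
```

Assuming the defaults, setting dfs.blocksize to 1m or larger, or lowering 
dfs.namenode.fs-limits.min-block-size on the NameNode, should avoid the error; 
the point is that the dfs.blocksize documentation never mentions this minimum.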



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16697) Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always prevent safe mode from being turned off

2023-05-18 Thread ECFuzz (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ECFuzz updated HDFS-16697:
--
Issue Type: Bug  (was: Improvement)

> Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always 
> prevent safe mode from being turned off
> 
>
> Key: HDFS-16697
> URL: https://issues.apache.org/jira/browse/HDFS-16697
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.3
> Environment: Linux version 4.15.0-142-generic 
> (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 
> 5.4.0-6ubuntu1~16.04.12))
> java version "1.8.0_162"
> Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
>Reporter: ECFuzz
>Assignee: ECFuzz
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>

[jira] [Updated] (HDFS-16697) Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always prevent safe mode from being turned off

2023-05-18 Thread ECFuzz (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ECFuzz updated HDFS-16697:
--
Priority: Minor  (was: Major)

> Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always 
> prevent safe mode from being turned off
> 
>
> Key: HDFS-16697
> URL: https://issues.apache.org/jira/browse/HDFS-16697
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.1.3
> Environment: Linux version 4.15.0-142-generic 
> (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 
> 5.4.0-6ubuntu1~16.04.12))
> java version "1.8.0_162"
> Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
>Reporter: ECFuzz
>Assignee: ECFuzz
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>

[jira] [Updated] (HDFS-16697) Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always prevent safe mode from being turned off

2023-05-18 Thread ECFuzz (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ECFuzz updated HDFS-16697:
--
Issue Type: Improvement  (was: Bug)

> Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always 
> prevent safe mode from being turned off
> 
>
> Key: HDFS-16697
> URL: https://issues.apache.org/jira/browse/HDFS-16697
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namenode
>Affects Versions: 3.1.3
> Environment: Linux version 4.15.0-142-generic 
> (buildd@lgw01-amd64-039) (gcc version 5.4.0 20160609 (Ubuntu 
> 5.4.0-6ubuntu1~16.04.12))
> java version "1.8.0_162"
> Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
> Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
>Reporter: ECFuzz
>Assignee: ECFuzz
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>

[jira] [Updated] (HDFS-16697) Randomly setting “dfs.namenode.resource.checked.volumes.minimum” will always prevent safe mode from being turned off

2023-05-16 Thread ECFuzz (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ECFuzz updated HDFS-16697:
--
Description: 
{code:java}

  dfs.namenode.resource.checked.volumes.minimum
  1
  
    The minimum number of redundant NameNode storage volumes required.
  
{code}
I found that when the value of 
“dfs.namenode.resource.checked.volumes.minimum” is set greater than the total 
number of storage volumes in the NameNode, safe mode can never be turned off. 
While in safe mode, the file system accepts only read requests and rejects 
delete, modify, and other write requests, so its functionality is greatly 
limited.

The default value of this configuration item is 1. We set it to 2 as an 
example; after starting HDFS, the logs and the client show the following 
messages.
{code:java}
2022-07-27 17:37:31,772 WARN 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: NameNode low on available 
disk space. Already in safe mode.
2022-07-27 17:37:31,772 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe 
mode is ON.
Resources are low on NN. Please add or free up more resourcesthen turn off safe 
mode manually. NOTE:  If you turn off safe mode before adding resources, the NN 
will immediately return to safe mode. Use "hdfs dfsadmin -safemode leave" to 
turn safe mode off.
{code}
{code:java}
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create 
directory /hdfsapi/test. Name node is in safe mode.
Resources are low on NN. Please add or free up more resourcesthen turn off safe 
mode manually. NOTE:  If you turn off safe mode before adding resources, the NN 
will immediately return to safe mode. Use "hdfs dfsadmin -safemode leave" to 
turn safe mode off. NamenodeHostName:192.168.1.167
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1468)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1455)
        at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3174)
        at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1145)
        at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:714)
        at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1036)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1000)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:928)
        at java.base/java.security.AccessController.doPrivileged(Native Method)
        at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2916){code}
The message suggests that there is not enough resource space to meet the 
conditions for leaving safe mode, but even after adding or freeing more 
resources and lowering the resource threshold 
"dfs.namenode.resource.du.reserved", safe mode still cannot be turned off and 
the same message is thrown.

According to the source code, the NameNode enters safe mode whenever the number 
of storage volumes with redundant space is less than the minimum set by 
"dfs.namenode.resource.checked.volumes.minimum". After debugging, *we found 
that the current NameNode storage volumes have abundant space, but because the 
total number of NameNode storage volumes is less than the configured value, the 
number of volumes with redundant space must also be less than that value, so 
the NameNode always enters safe mode.*
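
To make this concrete, here is a minimal sketch of the condition (illustrative 
only; modeled loosely on the NameNodeResourceChecker logic, not the actual 
Hadoop source):
{code:java}
// With 1 total volume and a minimum of 2, the check below can never pass,
// so the NameNode never leaves safe mode.
public class CheckedVolumesDemo {
    static boolean hasAvailableDiskSpace(int volumesWithSpace, int minimumRedundantVolumes) {
        return volumesWithSpace >= minimumRedundantVolumes;
    }

    public static void main(String[] args) {
        int totalVolumes = 1;            // e.g. a pseudo-distributed NameNode
        int minimumRedundantVolumes = 2; // dfs.namenode.resource.checked.volumes.minimum
        // volumesWithSpace is bounded by totalVolumes, so the check always fails:
        for (int withSpace = 0; withSpace <= totalVolumes; withSpace++) {
            System.out.println(hasAvailableDiskSpace(withSpace, minimumRedundantVolumes));
        }
    }
}
{code}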

In summary, this configuration item lacks a validity check and an associated 
exception-handling mechanism, which makes it impossible to find the root cause 
when a misconfiguration occurs.

The solution I propose is to add a check on the value of this configuration 
item that prints a warning in the log when the value is greater than the number 
of NameNode storage volumes, so the problem can be solved in time and the 
misconfiguration does not affect subsequent operation of the program.
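
For illustration, a hypothetical sketch of such a check (the class, method, and 
logger here are assumptions for the example, not the actual patch; an SLF4J 
logger is assumed):
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CheckedVolumesValidator {
    private static final Logger LOG = LoggerFactory.getLogger(CheckedVolumesValidator.class);

    // Warn at startup when the configured minimum can never be satisfied.
    static void warnIfUnsatisfiable(int minimumRedundantVolumes, int totalVolumes) {
        if (minimumRedundantVolumes > totalVolumes) {
            LOG.warn("dfs.namenode.resource.checked.volumes.minimum ({}) is greater than the "
                + "number of NameNode storage volumes ({}); safe mode can never be turned off.",
                minimumRedundantVolumes, totalVolumes);
        }
    }
}
{code}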


[jira] [Updated] (HDFS-16653) Safe mode related operations cannot be performed when “dfs.client.mmap.cache.size” is set to a negative number

2023-04-18 Thread ECFuzz (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ECFuzz updated HDFS-16653:
--
Description: 
 
{code:java}

  dfs.client.mmap.cache.size
  256
  
    When zero-copy reads are used, the DFSClient keeps a cache of recently used
    memory mapped regions.  This parameter controls the maximum number of
    entries that we will keep in that cache.
    The larger this number is, the more file descriptors we will potentially
    use for memory-mapped files.  mmaped files also use virtual address space.
    You may need to increase your ulimit virtual address space limits before
    increasing the client mmap cache size.
    
    Note that you can still do zero-copy reads when this size is set to 0.
  

{code}
When the configuration item “dfs.client.mmap.cache.size” is set to a negative 
number, all of the operation options provided by hdfs dfsadmin -safemode, 
including enter, leave, get, wait, and forceExit, become invalid: the terminal 
just returns "safemode: null" and no exception is surfaced.

In summary, I think we need to improve the validation of this configuration 
item: *add an error message to the Precondition check of maxEvictableMmapedSize 
(the field backed by "dfs.client.mmap.cache.size"), and give a clear indication 
when the configuration is abnormal, so the problem can be found in time and the 
impact on safe-mode operations is reduced.*

The details are as follows.

The constructor of the ShortCircuitCache class in ShortCircuitCache.java 
already uses Preconditions.checkArgument() to check whether the configuration 
value is greater than or equal to zero. So when the value is set to a negative 
number, creation of the ShortCircuitCache object in ClientContext.java fails.

But because that Preconditions.checkArgument() call carries no error message, 
using the hdfs dfsadmin script in the terminal looks like this:
{code:java}
hadoop@ljq1:~/hadoop-3.1.3-work/sbin$ hdfs dfsadmin -safemode leave 
safemode: null
Usage: hdfs dfsadmin [-safemode enter | leave | get | wait | forceExit]
hadoop@ljq1:~/hadoop-3.1.3-work/sbin$ hdfs dfsadmin -safemode enter
safemode: null
Usage: hdfs dfsadmin [-safemode enter | leave | get | wait | forceExit]
hadoop@ljq1:~/hadoop-3.1.3-work/sbin$ hdfs dfsadmin -safemode get
safemode: null
Usage: hdfs dfsadmin [-safemode enter | leave | get | wait | forceExit]
hadoop@ljq1:~/hadoop-3.1.3-work/sbin$ hdfs dfsadmin -safemode forceExit
safemode: null
Usage: hdfs dfsadmin [-safemode enter | leave | get | wait | forceExit]{code}
Neither the HDFS logs nor the terminal show the exception that was thrown.

Therefore, after adding an error message to the original 
Preconditions.checkArgument(), the cause can be identified directly, as follows:
{code:java}
hadoop@ljq1:~/hadoop-3.1.3-work/sbin$ hdfs dfsadmin -safemode leave
safemode: Invalid argument: dfs.client.mmap.cache.size must be greater than 
zero.
Usage: hdfs dfsadmin [-safemode enter | leave | get | wait | forceExit]{code}
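
To make the difference concrete, here is a minimal sketch (illustrative only, 
assuming Guava's Preconditions; not the actual Hadoop code):
{code:java}
import com.google.common.base.Preconditions;

public class PreconditionMessageDemo {
    public static void main(String[] args) {
        int mmapCacheSize = -1; // dfs.client.mmap.cache.size misconfigured

        // A bare checkArgument() throws IllegalArgumentException with a null
        // message, which surfaces to the user as "safemode: null".
        try {
            Preconditions.checkArgument(mmapCacheSize >= 0);
        } catch (IllegalArgumentException e) {
            System.out.println("safemode: " + e.getMessage()); // safemode: null
        }

        // With an error message, the root cause is visible immediately.
        try {
            Preconditions.checkArgument(mmapCacheSize >= 0,
                "dfs.client.mmap.cache.size must be greater than or equal to zero.");
        } catch (IllegalArgumentException e) {
            System.out.println("safemode: " + e.getMessage());
        }
    }
}
{code}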

 


[jira] [Updated] (HDFS-16653) Safe mode related operations cannot be performed when “dfs.client.mmap.cache.size” is set to a negative number

2023-04-18 Thread ECFuzz (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ECFuzz updated HDFS-16653:
--
Description: 
 


> Safe mode related operations cannot be performed when 
> “dfs.client.mmap.cache.size” is set to a negative number
> --
>
> Key: HDFS-16653
> URL: https://issues.apache.org/jira/browse/HDFS-16653
> Project: Hadoop HDFS
>  Issue