[jira] [Created] (HDFS-17238) Setting the value of "dfs.blocksize" too large will cause HDFS to be unable to write to files

2023-10-26 Thread ECFuzz (Jira)
ECFuzz created HDFS-17238:
-

 Summary: Setting the value of "dfs.blocksize" too large will cause 
HDFS to be unable to write to files
 Key: HDFS-17238
 URL: https://issues.apache.org/jira/browse/HDFS-17238
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Affects Versions: 3.3.6
Reporter: ECFuzz


My Hadoop version is 3.3.6, and I run it in pseudo-distributed operation.

The core-site.xml is as follows.
{code:xml}
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/Mutil_Component/tmp</value>
    </property>
</configuration>
{code}
The hdfs-site.xml is as follows.
{code:xml}
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>134217728</value>
    </property>
</configuration>
{code}
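For reference, 134217728 bytes is the stock default (128 MiB), and the NameNode also enforces a lower bound on this setting. A quick arithmetic sanity check (plain Python, nothing Hadoop-specific; the minimum shown is the default of dfs.namenode.fs-limits.min-block-size):

```python
# dfs.blocksize is expressed in bytes; 134217728 is the stock default.
block_size = 134217728
assert block_size == 128 * 1024 * 1024  # 128 MiB

# The NameNode rejects writes when the block size falls below
# dfs.namenode.fs-limits.min-block-size (default 1048576, i.e. 1 MiB).
min_block_size = 1048576
assert block_size >= min_block_size
```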



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-17069) The documentation and implementation of "dfs.blocksize" are inconsistent.

2023-07-05 Thread ECFuzz (Jira)
ECFuzz created HDFS-17069:
-

 Summary: The documentation and implementation of "dfs.blocksize" 
are inconsistent.
 Key: HDFS-17069
 URL: https://issues.apache.org/jira/browse/HDFS-17069
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: dfs, documentation
Affects Versions: 3.3.6
 Environment: Linux version 4.15.0-142-generic (buildd@lgw01-amd64-039) 
(gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.12))

java version "1.8.0_162"
Java(TM) SE Runtime Environment (build 1.8.0_162-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.162-b12, mixed mode)
Reporter: ECFuzz


My Hadoop version is 3.3.6, and I run it in pseudo-distributed operation.

The `core-site.xml` is as follows.

```xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/Mutil_Component/tmp</value>
    </property>
</configuration>
```

The `hdfs-site.xml` is as follows.

```xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>128k</value>
    </property>
</configuration>
```

 

Then format the NameNode and start HDFS.

```shell
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ bin/hdfs namenode -format
...(many lines of output)
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ sbin/start-dfs.sh
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [hadoop-Standard-PC-i440FX-PIIX-1996]
```

Finally, use dfs to put a file. I then get an error message saying that the specified block size (128k = 131072 bytes) is less than the configured minimum of 1M (1048576 bytes).

```shell
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ bin/hdfs dfs -mkdir -p /user/hadoop
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ bin/hdfs dfs -mkdir input
hadoop@hadoop-Standard-PC-i440FX-PIIX-1996:~/Mutil_Component/hadoop-3.3.6$ bin/hdfs dfs -put etc/hadoop/hdfs-site.xml input
put: Specified block size is less than configured minimum value (dfs.namenode.fs-limits.min-block-size): 131072 < 1048576
```
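The rejection above is a plain lower-bound check. A minimal sketch of the comparison behind that message (illustrative Python, not Hadoop's actual code; the variable names are mine):

```python
# Values taken from the error message above.
requested = 128 * 1024   # "128k" from hdfs-site.xml -> 131072 bytes
minimum = 1048576        # dfs.namenode.fs-limits.min-block-size default (1 MiB)

if requested < minimum:
    print("put: Specified block size is less than configured minimum value "
          f"(dfs.namenode.fs-limits.min-block-size): {requested} < {minimum}")
```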

 

However, the documentation in `hdfs-default.xml` says that dfs.blocksize can be set to suffixed values such as 128k:

```text
The default block size for new files, in bytes. You can use the following
suffix (case insensitive): k(kilo), m(mega), g(giga), t(tera), p(peta), e(exa)
to specify the size (such as 128k, 512m, 1g, etc.), Or provide complete size in
bytes (such as 134217728 for 128 MB).
```
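The suffix handling that the documentation describes can be sketched like this (my own illustrative parser, not Hadoop's implementation):

```python
# Case-insensitive binary suffixes as listed in hdfs-default.xml.
SUFFIXES = {"k": 1024, "m": 1024**2, "g": 1024**3,
            "t": 1024**4, "p": 1024**5, "e": 1024**6}

def parse_size(value: str) -> int:
    """Parse a '128k'-style size, or a plain byte count."""
    value = value.strip()
    suffix = value[-1].lower()
    if suffix in SUFFIXES:
        return int(value[:-1]) * SUFFIXES[suffix]
    return int(value)

# "128k" is a valid value per the docs, but parses to 131072 bytes,
# which is below the 1 MiB minimum -- hence the put error above.
assert parse_size("128k") == 131072
assert parse_size("134217728") == 128 * 1024 * 1024
assert parse_size("1m") == 1048576
```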

So, is there an issue with the documentation here? Or should it warn users to set this configuration to a value larger than 1M (the default of dfs.namenode.fs-limits.min-block-size)?


