ECFuzz created HDFS-17238:
-----------------------------

             Summary: Setting "dfs.blocksize" to an excessively large value makes HDFS unable to write files
                 Key: HDFS-17238
                 URL: https://issues.apache.org/jira/browse/HDFS-17238
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: hdfs
    Affects Versions: 3.3.6
            Reporter: ECFuzz


My Hadoop version is 3.3.6, and I am using the Pseudo-Distributed Operation (single-node) setup.

core-site.xml is configured as follows.
{code:xml}
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/Mutil_Component/tmp</value>
    </property>
</configuration>{code}
hdfs-site.xml is configured as follows; "dfs.blocksize" is set to 1342177280000 bytes (10000 × the 128 MiB default, roughly 1.22 TiB).
{code:xml}
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>1342177280000</value>
    </property>
</configuration>{code}
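A minimal write sketch with the standard HDFS Java client, assuming the configuration above is deployed on the pseudo-distributed cluster and the NameNode/DataNode are running. The class name, target path, and payload are illustrative only; fs.create() simply picks up the oversized default block size from hdfs-site.xml, so this is the kind of write that the summary describes as failing.
{code:java}
// Illustrative reproduction sketch: write a small file to the cluster configured above.
// The target path and payload are arbitrary; the block size comes from dfs.blocksize.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.nio.charset.StandardCharsets;

public class BlocksizeRepro {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://localhost:9000");

        try (FileSystem fs = FileSystem.get(conf)) {
            Path path = new Path("/tmp/blocksize-repro.txt");
            // fs.create() uses the cluster's default block size (1342177280000 bytes here).
            try (FSDataOutputStream out = fs.create(path, true)) {
                out.write("hello hdfs".getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("Wrote " + path + ", block size = "
                    + fs.getFileStatus(path).getBlockSize());
        }
    }
}
{code}
With the default 128 MiB block size the same program completes normally; with the 1342177280000-byte value above, the write does not succeed.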


