[
https://issues.apache.org/jira/browse/MAPREDUCE-7460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
ECFuzz updated MAPREDUCE-7460:
------------------------------
Description:
My Hadoop version is 3.3.6, and I run it in Pseudo-Distributed Operation.
If the values of yarn.nodemanager.resource.memory-mb and mapreduce.map.memory.mb violate their constraint relationship (the per-map container request is larger than the NodeManager's total memory), the MapReduce sample program blocks.
core-site.xml is configured as shown below.
{code:java}
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/lfl/Mutil_Component/tmp</value>
</property>
</configuration>{code}
hdfs-site.xml is configured as shown below.
{noformat}
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
{noformat}
Then we format the NameNode and start HDFS. HDFS is running normally.
{noformat}
lfl@LAPTOP-QR7GJ7B1:~/Mutil_Component/hadoop-3.3.6$ ./bin/hdfs namenode -format
lfl@LAPTOP-QR7GJ7B1:~/Mutil_Component/hadoop-3.3.6$ ./sbin/start-dfs.sh{noformat}
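The report does not show how the "running normally" state was checked; a minimal verification sketch (not part of the original report, assuming the same single-node setup) could be:
{noformat}
# hypothetical verification commands (not in the original report)
lfl@LAPTOP-QR7GJ7B1:~/Mutil_Component/hadoop-3.3.6$ jps
lfl@LAPTOP-QR7GJ7B1:~/Mutil_Component/hadoop-3.3.6$ ./bin/hdfs dfsadmin -report{noformat}
Here jps should list the NameNode, DataNode and SecondaryNameNode processes, and dfsadmin -report should show non-zero configured capacity.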
We add yarn.nodemanager.resource.memory-mb to yarn-site.xml as shown below.
{noformat}
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>1024</value>
</property>{noformat}
We also add mapreduce.map.memory.mb to mapred-site.xml as shown below.
{code:java}
<property>
<name>mapreduce.map.memory.mb</name>
<value>2048</value>
</property> {code}
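The report does not name the exact sample job that blocks; a minimal reproduction sketch, assuming YARN is started with start-yarn.sh and the sample is the pi job from the examples jar bundled with the 3.3.6 distribution, could be:
{noformat}
# hypothetical reproduction commands (the original report does not show this step)
lfl@LAPTOP-QR7GJ7B1:~/Mutil_Component/hadoop-3.3.6$ ./sbin/start-yarn.sh
lfl@LAPTOP-QR7GJ7B1:~/Mutil_Component/hadoop-3.3.6$ ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.6.jar pi 2 10{noformat}
With these values the job presumably hangs because the requested map container (mapreduce.map.memory.mb = 2048 MB) can never fit into the NodeManager's total resource memory (yarn.nodemanager.resource.memory-mb = 1024 MB), so the request stays pending instead of being rejected with a clear error. Keeping mapreduce.map.memory.mb (and the ApplicationMaster memory) no larger than yarn.nodemanager.resource.memory-mb should let the same sample job finish.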
was:
My Hadoop version is 3.3.6, and I run it in Pseudo-Distributed Operation.
core-site.xml is configured as shown below.
{code:java}
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/lfl/Mutil_Component/tmp</value>
</property>
</configuration>{code}
hdfs-site.xml is configured as shown below.
{noformat}
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
{noformat}
Then we format the NameNode and start HDFS. HDFS is running normally.
{noformat}
lfl@LAPTOP-QR7GJ7B1:~/Mutil_Component/hadoop-3.3.6$ ./bin/hdfs namenode -format
lfl@LAPTOP-QR7GJ7B1:~/Mutil_Component/hadoop-3.3.6$ ./sbin/start-dfs.sh{noformat}
> When "yarn.nodemanager.resource.memory-mb" and "mapreduce.map.memory.mb" work
> together, the mapreduce sample program blocks
> ----------------------------------------------------------------------------------------------------------------------------
>
> Key: MAPREDUCE-7460
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-7460
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Components: yarn
> Affects Versions: 3.3.6
> Reporter: ECFuzz
> Priority: Major
>
> My Hadoop version is 3.3.6, and I run it in Pseudo-Distributed Operation.
> If the values of yarn.nodemanager.resource.memory-mb and mapreduce.map.memory.mb violate their constraint relationship (the per-map container request is larger than the NodeManager's total memory), the MapReduce sample program blocks.
> core-site.xml is configured as shown below.
> {code:java}
> <configuration>
> <property>
> <name>fs.defaultFS</name>
> <value>hdfs://localhost:9000</value>
> </property>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/home/lfl/Mutil_Component/tmp</value>
> </property>
>
> </configuration>{code}
> hdfs-site.xml is configured as shown below.
> {noformat}
> <configuration>
> <property>
> <name>dfs.replication</name>
> <value>1</value>
> </property>
> </configuration>
> {noformat}
> Then we format the NameNode and start HDFS. HDFS is running normally.
> {noformat}
> lfl@LAPTOP-QR7GJ7B1:~/Mutil_Component/hadoop-3.3.6$ ./bin/hdfs namenode -format
> lfl@LAPTOP-QR7GJ7B1:~/Mutil_Component/hadoop-3.3.6$ ./sbin/start-dfs.sh{noformat}
>
> We add yarn.nodemanager.resource.memory-mb to yarn-site.xml as shown below.
> {noformat}
> <property>
> <name>yarn.nodemanager.resource.memory-mb</name>
> <value>1024</value>
> </property>{noformat}
> We also add mapreduce.map.memory.mb to mapred-site.xml as shown below.
> {code:java}
> <property>
> <name>mapreduce.map.memory.mb</name>
> <value>2048</value>
> </property> {code}
>
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]