But he didn't say he had a "5120MB Available Node Size." He said he had 512MiB (i.e., half a GiB) of RAM per node.

On 8/15/19 7:50 AM, Prabhu Josephraj wrote:
YARN allocates based on what the user has configured in yarn.nodemanager.resource.memory-mb. It allocated the 1536MB AM container because that fits within the 5120MB advertised node size.

yarn.nodemanager.pmem-check-enabled will kill the container if the physical memory usage of the container process rises above 1536MB. The MR ApplicationMaster for a pi job is lightweight and doesn't need that much memory, so it did not get killed.
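
As a hedged illustration of that allocation-versus-usage distinction (the jar path and values below are assumptions, not taken from this thread), the AM request can be lowered per job so the scheduler never promises more memory than a 512MiB node physically has:

    # Sketch: submit the pi example with a smaller ApplicationMaster
    # request so the default 1536MB ask never exceeds the node's real RAM.
    # Jar path and values are illustrative.
    hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.1.jar pi \
      -Dyarn.app.mapreduce.am.resource.mb=256 \
      -Dyarn.app.mapreduce.am.command-opts=-Xmx200m \
      1 10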



On Thu, Aug 15, 2019 at 4:02 PM . . <writeme...@googlemail.com.invalid> wrote:

    Correct: I set 'yarn.nodemanager.resource.memory-mb' to ten times
    the node's physical memory (512MB) and was able to successfully
    execute a 'pi 1 10' mapreduce job.

    Since the default 'yarn.app.mapreduce.am.resource.mb' value is
    1536MB, I expected the job to never start / never be allocated,
    and I have no valid explanation.
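
    One way to see the mismatch directly (a sketch using standard YARN
    CLI commands; the node ID is a placeholder): with
    yarn.nodemanager.resource.memory-mb=5120, the ResourceManager
    reports 5GB of capacity per node regardless of the 512MB physically
    present.

        yarn node -list
        yarn node -status <node-id>   # prints Memory-Used and Memory-Capacity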



        On Wed, Aug 14, 2019 at 8:31 PM Jeff Hubbs <jhubbsl...@att.net> wrote:

            To make sure I understand...you've allocated /ten times/
            your physical RAM for containers? If so, I think that's
            your issue.

            For reference, under Hadoop 3.x I didn't have a cluster
            that would really do anything until its worker nodes had
            at least 8GiB of RAM each.

            On 8/14/19 12:10 PM, . . wrote:
            Hi all,

            I installed a basic 3-node Hadoop 2.9.1 cluster and am
            playing with YARN settings.
            Each of the 3 nodes has the following configuration:
            1 CPU / 1 core / 512MB RAM

            I wonder how I was able to configure yarn-site.xml with the
            following settings (higher than the node physical limits)
            and still successfully run a mapreduce 'pi 1 10' job:

            quote...
            <property>
                <name>yarn.resourcemanager.scheduler.class</name>
                <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
            </property>

            <property>
                <name>yarn.nodemanager.resource.memory-mb</name>
                <value>5120</value>
                <description>Amount of physical memory, in MB, that can
                be allocated for containers. If set to -1 and
                yarn.nodemanager.resource.detect-hardware-capabilities
                is true, it is automatically calculated. In other cases,
                the default is 8192MB.</description>
            </property>

            <property>
                <name>yarn.nodemanager.resource.cpu-vcores</name>
                <value>6</value>
                <description>Number of CPU cores that can be allocated
                for containers.</description>
            </property>
            ...unquote

            Can anyone provide an explanation please?

            Shouldn't the 'yarn.nodemanager.vmem-check-enabled' and
            'yarn.nodemanager.pmem-check-enabled' properties (set to
            'true' by default) check that my YARN settings are higher
            than the physical limits?
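
            For reference, a minimal yarn-site.xml sketch of the two
            properties in question ('true' is the documented default in
            2.9.x). They limit what a running container may actually
            use, not what the NodeManager may advertise:

            <property>
                <name>yarn.nodemanager.pmem-check-enabled</name>
                <value>true</value>
            </property>
            <property>
                <name>yarn.nodemanager.vmem-check-enabled</name>
                <value>true</value>
            </property>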

            Which mapreduce 'pi' job settings can I use to 'force'
            containers to use more than the node's physical resources?
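
            For example (a hedged sketch; the jar path and values are
            illustrative), the per-job memory requests can be raised so
            the map tasks both ask for and actually try to use more
            than 512MB:

            hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.1.jar pi \
              -Dmapreduce.map.memory.mb=4096 \
              -Dmapreduce.map.java.opts=-Xmx3686m \
              1 10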

            Many thanks in advance!
            Guido


