Hi Anu,

I think this is most likely an Ansible issue.
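
In the meantime I'll double-check the heap flags and the kernel overcommit
settings on that box, roughly along these lines (the hadoop-env.sh path is
the stock one, ours may differ):

    # any fixed -Xms/-Xmx or HADOOP_HEAPSIZE in the Hadoop env?
    grep -E 'Xm[sx]|HEAPSIZE' /etc/hadoop/conf/hadoop-env.sh
    # 2 here means strict overcommit accounting, so a large allocation can
    # fail even if the memory would never actually be touched
    cat /proc/sys/vm/overcommit_memory
    # headroom on the box at the moment of the run
    free -g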

Thanks for the info...

- Dmitry

On Tue, Feb 19, 2019 at 2:40 PM Anu Engineer <aengin...@hortonworks.com>
wrote:

> I don’t know of any Python process in the Hadoop path that gobbles up that
> much memory.
>
> Would it be possible that you have some kind of memory flags (-Xms/-Xmx) in
> the NameNode options (probably in HADOOP_OPTS),
>
> such that the min and max are set to the same value, say 32GB, so that the
> NameNode reserves that much memory when it boots up?
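>
> For example (illustrative values only, I obviously don't know what is in
> your hadoop-env.sh), an entry like this would make the JVM commit that
> whole heap as soon as the process starts:
>
>     # hadoop-env.sh (hypothetical): -Xms equal to -Xmx commits the full
>     # heap up front, for every Hadoop command that sources this file
>     export HADOOP_OPTS="-Xms32g -Xmx32g ${HADOOP_OPTS}"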
>
>
>
> Thanks
>
> Anu
>
>
>
>
>
> From: Dmitry Goldenberg <dgoldenb...@hexastax.com>
> Date: Tuesday, February 19, 2019 at 11:25 AM
> To: "user@hadoop.apache.org" <user@hadoop.apache.org>
> Subject: Memory error during hdfs dfs -format
>
>
>
> Hi,
>
>
>
> We've got an Ansible task which returns a MemoryError during HDFS
> installation, on a box with 64 GB of memory total, about 30 GB of it free
> at the moment.
>
>
>
> It appears that during the execution of the hdfs dfs -format command, a
> Python process is spawned which gobbles up roughly 32 GB of memory, and
> then the Ansible deploy fails.
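>
> (We're just watching the usual tools on the target while the play runs,
> nothing fancier than:
>
>     # largest memory consumers on the box at the time of the failure
>     ps aux --sort=-rss | head -n 15
>     free -m
>
> which is where the ~32 GB figure comes from.)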
>
>
>
> Any ideas as to how we could better curtail or manage memory consumption?
>
>
>
> Thanks
>
