Hi Suresh!

YARN's accounting of memory on each node is completely different from the
Linux kernel's accounting of memory used. For example, I could launch a
MapReduce task that in reality allocates just 100 MB, but tell YARN to give
it 8 GB. The kernel would show the memory requested by the task and the
resident memory (~100 MB), while the NodeManager page would show 8 GB used.
Please see
https://yahooeng.tumblr.com/post/147408435396/moving-the-utilization-needle-with-hadoop
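
In case it helps to see the two views side by side, here is a minimal
sketch (Python; rm-host is a placeholder for your ResourceManager) that
pulls per-node figures from the RM REST API, so you can compare them with
what free reports on the node itself:

    # Compare YARN's per-node accounting with the kernel's view.
    # Assumes the RM web UI is reachable at rm-host:8088 (adjust as needed).
    import json
    import urllib.request

    url = "http://rm-host:8088/ws/v1/cluster/nodes"
    with urllib.request.urlopen(url) as resp:
        nodes = json.load(resp)["nodes"]["node"]

    for node in nodes:
        # usedMemoryMB sums container allocations, not resident memory,
        # so it can dwarf what 'free -m' reports on the same node.
        print(f"{node['id']}: used={node['usedMemoryMB']} MB, "
              f"avail={node['availMemoryMB']} MB")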

HTH
Ravi

On Mon, Aug 15, 2016 at 5:58 AM, Sunil Govind <sunil.gov...@gmail.com>
wrote:

> Hi Suresh
>
> "This 'memory used' would be the memory used by all containers running on
> that node"
> >> "Memory Used" in Nodes page indicates how memory is used in all the
> node managers with respect to the corresponding demand made to RM. For eg,
> if application has asked for 4GB resource and if its really using only 2GB,
> then this kind of difference can be shown (one possibility). Which means
> 4GB will be displayed in Node page.
>
> As Ray mentioned, if the AM itself demands more resources, or if the JVM
> size for containers is configured high (through java opts), containers may
> take more than you intended and the UI will display a higher value.
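>
> A toy sketch of that accounting (numbers are made up) may make it
> concrete: the Nodes page charges each container's full request against
> the node, while the kernel counts only resident pages.
>
>     # Each tuple: (MB requested from the RM, MB actually resident).
>     containers = [(4096, 2048), (8192, 100)]
>
>     yarn_used = sum(req for req, _ in containers)    # Nodes page view
>     kernel_rss = sum(rss for _, rss in containers)   # free/top view
>
>     print(f"Nodes page 'Memory Used': {yarn_used} MB")   # 12288 MB
>     print(f"Kernel resident memory:   {kernel_rss} MB")  # 2148 MB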
>
> Thanks
> Sunil
>
> On Sun, Aug 14, 2016 at 6:35 AM Suresh V <verdi...@gmail.com> wrote:
>
>> Hello Ray,
>>
>> I'm referring to the nodes of the cluster page, which shows the
>> individual nodes and the total memory available in each node and the memory
>> used in each node.
>>
>> This 'memory used' would be the memory used by all containers running on
>> that node; however, if I run the free command on the node, there is a
>> significant difference. I'm unable to understand this...
>>
>> I'd appreciate any light on this. I agree the main RM page shows the
>> total container memory utilization across nodes, which matches the sum of
>> the memory used on each node as displayed in the 'Nodes of the cluster'
>> page...
>>
>> Thank you
>> Suresh.
>>
>>
>> Suresh V
>> http://www.justbirds.in
>>
>>
>> On Sat, Aug 13, 2016 at 12:44 PM, Ray Chiang <rchi...@apache.org> wrote:
>>
>>> The RM page will show the combined container memory usage.  If you have
>>> a significant difference between any or all of
>>>
>>> 1) actual process memory usage
>>> 2) JVM heap size
>>> 3) container maximum
>>>
>>> then you will have significant memory underutilization.
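>>>
>>> A rough sketch with example values (the property names are standard
>>> MapReduce settings; the figures are only illustrative):
>>>
>>>     container_max_mb = 8192  # mapreduce.map.memory.mb: what YARN charges
>>>     jvm_heap_mb = 6144       # -Xmx set via mapreduce.map.java.opts
>>>     actual_rss_mb = 1500     # resident usage seen via top/ps
>>>
>>>     print(f"slack inside the heap:  {jvm_heap_mb - actual_rss_mb} MB")
>>>     print(f"slack in the container: {container_max_mb - jvm_heap_mb} MB")
>>>     print(f"YARN vs kernel delta:   {container_max_mb - actual_rss_mb} MB")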
>>>
>>> -Ray
>>>
>>>
>>> On 20160813 6:31 AM, Suresh V wrote:
>>>
>>> Hello,
>>>
>>> In our cluster, when an MR job is running, the 'Nodes of the cluster'
>>> page shows the memory used as 84GB out of the 87GB allocated to the YARN
>>> NodeManagers.
>>> However, when I actually run the top or free command while logged in to
>>> the node, it shows only 23GB used and about 95GB or more free.
>>>
>>> I would imagine the memory used displayed in the YARN web UI should
>>> match the memory used shown by the top or free command on the node.
>>>
>>> Please advise whether this thinking is right, or am I missing something?
>>>
>>> Thank you
>>> Suresh.
>>>
>>>
>>>
>>>
>>
