I forgot to add one thing: according to the ResourceManager web UI, all of the memory (120 GB) is now in use, and 20 GB is additionally shown as reserved.

Apps Submitted:       2
Apps Pending:         1
Apps Running:         1
Apps Completed:       0
Containers Running:   60
Memory Used:          120 GB
Memory Total:         120 GB
Memory Reserved:      20 GB
VCores Used:          60
VCores Total:         80
VCores Reserved:      10
Active Nodes:         10
Decommissioned Nodes: 0
Lost Nodes:           0
Unhealthy Nodes:      0
Rebooted Nodes:       0
Furthermore, 10 VCores are also shown as reserved. I don't know what that means.
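
To make the numbers concrete, here is the arithmetic I'm looking at (just a rough sketch based on the metrics above and the configuration quoted below, not output from any YARN API; the per-container figure is simply Memory Used divided by Containers Running, so it may not be exactly what the scheduler allocates):

```
# Rough arithmetic only, based on the cluster metrics above and the quoted
# configuration below; not output from any YARN API or tool.
nodes = 10                    # Active Nodes
nm_memory_mb = 12288          # yarn.nodemanager.resource.memory-mb
map_container_mb = 1536       # mapreduce.map.memory.mb

total_mb = nodes * nm_memory_mb                     # 122880 MB = 120 GB ("Memory Total")
expected_containers = total_mb // map_container_mb  # 80, what I expected

containers_running = 60                             # "Containers Running"
memory_used_mb = 120 * 1024                         # "Memory Used" = 120 GB
per_container_mb = memory_used_mb / containers_running  # 2048.0 MB, not 1536 MB

print(expected_containers, per_container_mb)        # 80 2048.0
```

So from these numbers it looks like each running container accounts for about 2048 MB rather than the 1536 MB I configured, which would explain 60 containers instead of 80, but I'm not sure whether that is really what the scheduler is doing.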


2015-01-28 16:47 GMT+09:00 임정택 <kabh...@gmail.com>:

> Hello all!
>
> I'm new to YARN, so this could be a beginner question.
> (I've been using MRv1 and only just switched.)
>
> I'm using HBase with 3 masters and 10 slaves - CDH 5.2 (Hadoop 2.5.0).
> In order to migrate from MRv1 to YARN, I read several docs and changed the
> configuration.
>
> ```
> yarn.nodemanager.resource.memory-mb: 12288
> yarn.scheduler.minimum-allocation-mb: 512
> mapreduce.map.memory.mb: 1536
> mapreduce.reduce.memory.mb: 1536
> mapreduce.map.java.opts: -Xmx1024m -Dfile.encoding=UTF-8
> -Dfile.client.encoding=UTF-8 -Dclient.encoding.override=UTF-8
> mapreduce.reduce.java.opts: -Xmx1024m -Dfile.encoding=UTF-8
> -Dfile.client.encoding=UTF-8 -Dclient.encoding.override=UTF-8
> ```
>
> I was expecting 80 containers to run concurrently, but in reality it's 60
> containers. (59 maps ran concurrently; maybe the remaining 1 is the
> ApplicationMaster.)
>
> Every YarnChild's VIRT is above 1.5 GB and below 2 GB right now, so I suspect
> that is related. But I'd like to make it clear, so I can understand YARN better.
>
> Any help & explanation is really appreciated.
> Thanks!
>
> Best regards.
> Jungtaek Lim (HeartSaVioR)
>
>


-- 
Name : 임 정택
Blog : http://www.heartsavior.net / http://dev.heartsavior.net
Twitter : http://twitter.com/heartsavior
LinkedIn : http://www.linkedin.com/in/heartsavior
