[ https://issues.apache.org/jira/browse/OOZIE-2810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15892730#comment-15892730 ]

Peter Cseh commented on OOZIE-2810:
-----------------------------------

If you go to the RM's web interface on an address like 
http://localhost:8088/cluster you'll see a table at the top like this:

||Apps Submitted||Apps Pending||Apps Running||Apps Completed||Containers Running||Memory Used||Memory Total||Memory Reserved||VCores Used||VCores Total||VCores Reserved||
|7|0|0|7|0|0 B|24 GB|0 B|0|6|0|
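The same figures are exposed by the ResourceManager's REST API at /ws/v1/cluster/metrics (a standard Hadoop endpoint), so the check can be scripted instead of eyeballed. A minimal sketch; the localhost:8088 address and the headroom helper are assumptions matching the example above, not part of Oozie:

```python
import json
import urllib.request

def cluster_headroom(metrics):
    """Given the 'clusterMetrics' dict returned by the RM REST API
    (/ws/v1/cluster/metrics), return (free memory in MB, free vcores)."""
    free_mb = metrics["totalMB"] - metrics["allocatedMB"]
    free_vcores = metrics["totalVirtualCores"] - metrics["allocatedVirtualCores"]
    return free_mb, free_vcores

def fits(metrics, container_mb, container_vcores=1):
    """True if at least one more container of the given size can start."""
    free_mb, free_vcores = cluster_headroom(metrics)
    return free_mb >= container_mb and free_vcores >= container_vcores

# Fetching live metrics from a running RM (address is an assumption):
# with urllib.request.urlopen("http://localhost:8088/ws/v1/cluster/metrics") as r:
#     metrics = json.load(r)["clusterMetrics"]
#     print(cluster_headroom(metrics))
```

If `fits(metrics, container_mb)` is False for the container size your job requests, new containers will sit pending exactly as described below.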

If VCores Used or Memory Used is at or near the cluster total, there is no
headroom left to start another container. Either add resources to the cluster
or tune the YARN properties so it spawns smaller containers.
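Concretely, "smaller containers" means lowering the scheduler's minimum allocation and the per-task memory requests. A sketch of the relevant properties; the 256 MB / 204 MB values are illustrative, not recommendations, and the JVM heap (`-Xmx`) should stay roughly 80% of the container size:

```xml
<!-- yarn-site.xml: the smallest container the scheduler will grant -->
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>256</value>
</property>

<!-- mapred-site.xml: per-task container requests, kept in step -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>256</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx204m</value>
</property>
```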



> RM job was stuck when running with oozie
> ----------------------------------------
>
>                 Key: OOZIE-2810
>                 URL: https://issues.apache.org/jira/browse/OOZIE-2810
>             Project: Oozie
>          Issue Type: Improvement
>          Components: action, core, HA, workflow
>    Affects Versions: 4.3.0
>         Environment: hadoop2.7.2,centos7*3
>            Reporter: yangsongjie
>              Labels: newbie
>             Fix For: 4.3.0
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> I'm running a MapReduce wordcount job through Oozie. Two jobs were submitted 
> to YARN; the monitoring (launcher) job got stuck at 99% progress while the 
> wordcount job stayed at 0%.
> When I kill the monitor job, the wordcount job runs smoothly.
> I use a cluster of 3 virtual machines, configuration is as follows:
>  Profile per VM: cores=2 memory=2048MB reserved=0GB usableMem=0GB disks=1
>  Num Container=3
>  Container Ram=640MB
>  Used Ram=1GB
>  Unused Ram=0GB
>  yarn.scheduler.minimum-allocation-mb=640
>  yarn.scheduler.maximum-allocation-mb=1920
>  yarn.nodemanager.resource.memory-mb=1920
>  mapreduce.map.memory.mb=640
>  mapreduce.map.java.opts=-Xmx512m
>  mapreduce.reduce.memory.mb=1280
>  mapreduce.reduce.java.opts=-Xmx1024m
>  yarn.app.mapreduce.am.resource.mb=640
>  yarn.app.mapreduce.am.command-opts=-Xmx512m
>  mapreduce.task.io.sort.mb=256
> Is there any way to solve this, so that the two jobs run concurrently and 
> both finish?
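For reference, the memory arithmetic behind the quoted settings can be checked directly. This sketch assumes the Oozie launcher runs as one AM plus one map task, and the wordcount job as one AM, one map, and one reduce; those container shapes are assumptions, not stated in the report:

```python
# Cluster capacity from the quoted settings (3 nodes).
NODES = 3
NODE_MB = 1920                       # yarn.nodemanager.resource.memory-mb
TOTAL_MB = NODES * NODE_MB           # 5760 MB across the cluster

MIN_ALLOC_MB = 640                   # yarn.scheduler.minimum-allocation-mb
per_node_slots = NODE_MB // MIN_ALLOC_MB   # smallest containers per node

# Assumed container shapes (MB), sized per the quoted mapreduce.* settings:
launcher = [640, 640]                # Oozie launcher: AM + one map task
wordcount = [640, 640, 1280]         # wordcount: AM + map + reduce

needed = sum(launcher) + sum(wordcount)
print(per_node_slots, TOTAL_MB, needed)   # 3 5760 3840
```

On paper everything fits (3840 MB needed out of 5760 MB), which is why the comment above points at the live Memory Used / VCores Used figures: if those are pinned at the cluster total, the launcher is holding containers the child job needs.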



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
