Another question: why does the map task progress go back down after it reaches
100%?




On Tue, Dec 3, 2013 at 10:07 AM, ch huang <justlo...@gmail.com> wrote:

> hi, maillist:
>               I run a job on my CDH4.4 YARN framework. Its map tasks
> finish very fast, but the reduce is very slow. Checking with the ps command,
> I found its working heap size is 200m, so I tried to increase the heap size
> used by the reduce task by adding "YARN_OPTS="$YARN_OPTS
> -Dmapreduce.reduce.java.opts=-Xmx1024m -verbose:gc -XX:+PrintGCDetails
> -XX:+PrintGCDateStamps
> -Xloggc:$YARN_LOG_DIR/gc-$(hostname)-resourcemanager.log
> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=15M
> -XX:-UseGCOverheadLimit" in the yarn-env.sh file, but when I restart the
> nodemanager, I find new reduce tasks still use a 200m heap. Why?
>
> # jps
> 2853 DataNode
> 19533 Jps
> 10949 YarnChild
> 10661 NodeManager
> 15130 HRegionServer
> # ps -ef|grep 10949
> yarn     10949 10661 99 09:52 ?        00:19:31
> /usr/java/jdk1.7.0_45/bin/java -Djava.net.preferIPv4Stack=true
> -Dhadoop.metrics.log.level=WARN -Xmx200m
> -Djava.io.tmpdir=/data/1/mrlocal/yarn/local/usercache/hdfs/appcache/application_1385983958793_0022/container_1385983958793_0022_01_005650/tmp
> -Dlog4j.configuration=container-log4j.properties
> -Dyarn.app.mapreduce.container.log.dir=/data/2/mrlocal/yarn/logs/application_1385983958793_0022/container_1385983958793_0022_01_005650
> -Dyarn.app.mapreduce.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA
> org.apache.hadoop.mapred.YarnChild 192.168.11.10 48936
> attempt_1385983958793_0022_r_000000_14 5650
>
>
>
>
>
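For reference: on MRv2 the reduce-task heap is normally taken from the job
configuration rather than from yarn-env.sh, since YARN_OPTS is read by the
YARN daemons themselves, not by the YarnChild container JVMs they launch.
A minimal sketch of the usual way to set it, assuming a standard CDH4
mapred-site.xml; the 1024m/1536 values here are only illustrative, not a
recommendation for this particular cluster:

    <!-- mapred-site.xml: JVM opts and container size for reduce tasks -->
    <property>
      <name>mapreduce.reduce.java.opts</name>
      <value>-Xmx1024m</value>
    </property>
    <property>
      <!-- the container must be at least as large as the heap plus overhead -->
      <name>mapreduce.reduce.memory.mb</name>
      <value>1536</value>
    </property>

The same property can also be passed per job, e.g.
"hadoop jar job.jar MainClass -Dmapreduce.reduce.java.opts=-Xmx1024m ...",
provided the job parses generic options via ToolRunner.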
