RE: How to build Hadoop from source using Ant

2013-07-15 Thread Chuan Liu
If you want to build a release tarball, you can use the ant target 'tar'. If you want the native libraries built, you need to set the 'compile.native' flag to true. 'forrest.home' needs to be set to the Apache Forrest location in order to build the Java docs. So you will have a command like the following: >ant -D
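The command above is cut off in the archive; a plausible full invocation for a Hadoop 1.x source tree, using the properties named in the reply and a hypothetical Forrest install path, would look like:

    # Build the release tarball with native libraries and Forrest-built docs.
    # /opt/apache-forrest is only an example; point forrest.home at your own
    # Apache Forrest installation.
    ant -Dcompile.native=true \
        -Dforrest.home=/opt/apache-forrest \
        tar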

RE: Parameter 'yarn.nodemanager.resource.cpu-cores' does not work

2013-07-03 Thread Chuan Liu
I think you need to change the following configurations in yarn-site.xml to enable CPU resource limits. 'yarn.nodemanager.container-monitor.resource-calculator.class' 'org.apache.hadoop.yarn.util.resource.DominantResourceCalculator' 'yarn.nodemanager.container-executor.class' 'org.apache.hadoop
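The property list above is truncated by the archive; as a rough sketch (names and values taken from the reply as quoted, the last value left unfilled), the yarn-site.xml entries being described would take this shape:

    <!-- Sketch of the yarn-site.xml changes described above; the final value
         is truncated in the archive and is not filled in here. -->
    <property>
      <name>yarn.nodemanager.container-monitor.resource-calculator.class</name>
      <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
    </property>
    <property>
      <name>yarn.nodemanager.container-executor.class</name>
      <value><!-- truncated in the archive: org.apache.hadoop... --></value>
    </property>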

RE: Containers and CPU

2013-07-02 Thread Chuan Liu
I believe this is the default behavior. By default, only the memory limit is enforced. The capacity scheduler also uses DefaultResourceCalculator to compute resource allocations for containers by default, and that calculator does not take CPU into account. -Chuan
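For context, a sketch of how the capacity scheduler can be pointed away from the memory-only DefaultResourceCalculator; the property name below is the one used in capacity-scheduler.xml in Hadoop 2.x and may not match the exact release discussed in this thread:

    <!-- capacity-scheduler.xml: have the scheduler consider CPU as well as
         memory when computing container allocations. -->
    <property>
      <name>yarn.scheduler.capacity.resource-calculator</name>
      <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
    </property>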

RE: Submitting jobs programmatically (even from Windows, IDE...)

2013-06-20 Thread Chuan Liu
Hi Vjeran, First, thanks for the blog post! As for Windows support, we are actively working to make Hadoop run smoothly on Windows. The following JIRAs may provide some background and a general outline of this effort. https://issues.apache.org/jira/browse/HADOOP-8079 https://issues.apache

RE: How to configure container capacity?

2013-05-31 Thread Chuan Liu
Bcc'd dev mailing list. Hi Andrew, The memory allocated will always be an integral multiple of the minimum allocation unit, which is configured via the property "yarn.scheduler.minimum-allocation-mb". The default value is 1024. If you change the config to 512, the container m
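To make the rounding behavior concrete, here is a sketch of the setting and its effect; the 1300 MB request below is purely an illustrative number:

    <!-- yarn-site.xml: container memory is handed out in multiples of this. -->
    <property>
      <name>yarn.scheduler.minimum-allocation-mb</name>
      <value>512</value>
    </property>
    <!-- With a 512 MB unit, a request for 1300 MB is rounded up to the next
         multiple, 1536 MB (3 x 512); with the default 1024 it would round up
         to 2048 MB. -->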