Adding to what Varun has said, the ResourceManager log will also help confirm
this.

The code snippet you have mentioned is correct, but note the guard in it: if
the number of active applications is less than 1, enforcement is skipped so
that at least one application can start. And it seems you have only one
application.

- Sunil



On Wed, Jun 15, 2016 at 12:27 PM Varun saxena <varun.sax...@huawei.com>
wrote:

> Can you open the ResourceManager (RM) UI and share a screenshot of the main
> RM page? We can check the cluster resources there. Most probably the cluster
> does not have enough resources.
>
> How much memory and how many vcores does your AM need?
>
> RM UI can be accessed at http://localhost:8088/
>
>
>
> - Varun Saxena.
>
>
>
> *From:* Phillip Wu [mailto:phillip...@unsw.edu.au]
> *Sent:* 15 June 2016 14:42
> *To:* user@hadoop.apache.org
> *Cc:* Sunil Govind
> *Subject:* RE: maximum-am-resource-percent is insufficient to start a
> single application
>
>
>
> Sunil,
>
>
>
> Thanks for your email.
>
>
>
> 1. I don’t think anything on the cluster is being used – see below.
>
> I’m not sure how to get my “total cluster resource size” – please advise
> how to get this?
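[Editorial aside, not part of the original thread: the total cluster size is visible on the RM UI front page, and also over the RM REST API at `http://localhost:8088/ws/v1/cluster/metrics`, whose `clusterMetrics` object carries `totalMB` and `totalVirtualCores`. A minimal sketch of reading the totals out of that JSON; the numeric values in the sample payload below are invented for illustration:]

```python
import json

# Shape of the response from the RM REST endpoint /ws/v1/cluster/metrics.
# Field names follow the YARN REST API; the numbers here are made up.
sample = '''
{"clusterMetrics": {
   "totalMB": 8192, "availableMB": 8192, "allocatedMB": 0,
   "totalVirtualCores": 8, "availableVirtualCores": 8}}
'''

def cluster_totals(payload):
    """Return (total MB, total vcores) from a cluster-metrics response."""
    m = json.loads(payload)["clusterMetrics"]
    return m["totalMB"], m["totalVirtualCores"]

print(cluster_totals(sample))  # (8192, 8)
```

In practice you would fetch the payload with `curl http://localhost:8088/ws/v1/cluster/metrics` and feed it to a parser like this.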
>
> After doing the hive insert I get this:
>
> hduser@ip-10-118-112-182:/$ hadoop queue -info default -showJobs
>
> 16/06/10 02:24:49 INFO client.RMProxy: Connecting to ResourceManager at /127.0.0.1:8050
>
> ======================
>
> Queue Name : default
>
> Queue State : running
>
> Scheduling Info : Capacity: 100.0, MaximumCapacity: 100.0,
> CurrentCapacity: 0.0
>
> Total jobs:1
>
> JobId: job_1465523894946_0001   State: PREP   StartTime: 1465524072194
> UserName: hduser   Queue: default   Priority: NORMAL
> UsedContainers: 0   RsvdContainers: 0
> UsedMem: 0M   RsvdMem: 0M   NeededMem: 0M
> AM info: http://localhost:8088/proxy/application_1465523894946_0001/
>
>
>
> hduser@ip-10-118-112-182:/$ mapred job -status  job_1465523894946_0001
>
> Job: job_1465523894946_0001
>
> Job File:
> /tmp/hadoop-yarn/staging/hduser/.staging/job_1465523894946_0001/job.xml
>
> Job Tracking URL :
> http://localhost:8088/proxy/application_1465523894946_0001/
>
> Uber job : false
>
> Number of maps: 0
>
> Number of reduces: 0
>
> map() completion: 0.0
>
> reduce() completion: 0.0
>
> Job state: PREP
>
> retired: false
>
> reason for failure:
>
> Counters: 0
>
> 2. There are no other applications running, apart from ZooKeeper.
>
> 3. There is only one user.
>
>
>
> For your assistance, this seems to be the code generating the error
> message […yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java]:
>
> if (!Resources.lessThanOrEqual(
>     resourceCalculator, lastClusterResource, userAmIfStarted,
>     userAMLimit)) {
>   if (getNumActiveApplications() < 1) {
>     LOG.warn("maximum-am-resource-percent is insufficient to start a" +
>       " single application in queue for user, it is likely set too low." +
>       " skipping enforcement to allow at least one application to start");
>   } else {
>     LOG.info("not starting application as amIfStarted exceeds " +
>       "userAmLimit");
>     continue;
>   }
> }
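[Editorial aside, not part of the original thread: to make the guard above concrete, the scheduler compares the AM resource the user would hold if this application started (`userAmIfStarted`) against a per-user AM limit. A simplified back-of-envelope sketch follows; the real LeafQueue computation also involves the resource calculator and per-user limits, so treat the limit formula as illustrative only:]

```python
def am_limit_mb(cluster_mb, queue_capacity, max_am_percent):
    """Rough AM headroom for a queue: cluster size scaled by the queue's
    capacity and by maximum-am-resource-percent (both fractions 0-1).
    Simplified; not the exact LeafQueue math."""
    return cluster_mb * queue_capacity * max_am_percent

def admits_am(am_if_started_mb, limit_mb, num_active_apps):
    """Mirror the guard quoted above: if the AM demand exceeds the limit,
    admit anyway (with a warning) when no application is active yet."""
    if am_if_started_mb <= limit_mb:
        return True
    return num_active_apps < 1  # skip enforcement for the first app

# With the default max-am-resource-percent of 0.1 on a small cluster,
# even a single 2 GB AM exceeds the limit:
limit = am_limit_mb(8192, 1.0, 0.1)  # 819.2 MB of AM headroom
print(admits_am(2048, limit, 0))     # True  (first app is let through)
print(admits_am(2048, limit, 1))     # False (subsequent apps are held)
```

This is why the warning says enforcement is "skipped" for the first application: with zero active applications the branch logs the warning but does not `continue`.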
>
>
>
> Any ideas?
>
>
>
> Phillip
>
> *From:* Sunil Govind [mailto:sunil.gov...@gmail.com
> <sunil.gov...@gmail.com>]
> *Sent:* Wednesday, 15 June 2016 4:24 PM
> *To:* Phillip Wu; user@hadoop.apache.org
> *Subject:* Re: maximum-am-resource-percent is insufficient to start a
> single application
>
>
>
> Hi Phillip,
>
>
>
> A higher maximum-am-resource-percent value (0–1) allows more resources to be
> allocated to the ApplicationMaster containers of your YARN applications (MR
> jobs here), but this also depends on the capacity configured for the queue.
> You have mentioned that there is only the default queue here, so that won’t
> be a problem. A few questions:
>
>     - How much is your total cluster resource size, and how much of the
> cluster’s resources is in use now?
>
>     - Is any other application running in the cluster, and was it taking the
> full cluster resources? This is a possibility, since you have now given the
> whole queue’s capacity to AM containers.
>
>     - Do you have multiple users in your cluster who run applications other
> than this Hive job? If so,
> yarn.scheduler.capacity.<queue-path>.minimum-user-limit-percent will have an
> impact on the AM resource usage limit. I think you can double-check this.
>
>
>
>
>
> - Sunil
>
>
>
> On Wed, Jun 15, 2016 at 8:47 AM Phillip Wu <phillip...@unsw.edu.au> wrote:
>
> Hi,
>
>
>
> I'm new to Hadoop and Hive.
>
>
>
> I'm using Hadoop 2.6.4 and Hive 2.0.1 (binaries I downloaded from the
> internet).
>
> I can create a database and table in hive.
>
>
>
> However, when I try to insert a record into a previously created table I
> get:
>
> "org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue:
> maximum-am-resource-percent is insufficient to start a single application
> in queue"
>
>
>
> yarn-site.xml
>
> <property>
>
>       <name>yarn.resourcemanager.scheduler.class</name>
>
>
> <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
>
> </property>
>
>
>
> capacity-scheduler.xml
>
> <property>
>
>     <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
>
>     <value>1.0</value>
>
>     <description>
>
>       Maximum percent of resources in the cluster which can be used to run
>
>       application masters i.e. controls number of concurrent running
>
>       applications.
>
>     </description>
>
>   </property>
>
>
>
> According to the documentation, this means I have allocated 100% to my one
> and only default scheduler queue.
>
> [
> https://hadoop.apache.org/docs/r2.6.4/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html
> ]
>
> "yarn.scheduler.capacity.maximum-am-resource-percent /
> yarn.scheduler.capacity.<queue-path>.maximum-am-resource-percent
>
> Maximum percent of resources in the cluster which can be used to run
> application masters - controls number of concurrent active applications.
>
> Limits on each queue are directly proportional to their queue capacities
> and user limits.
>
> Specified as a float - ie 0.5 = 50%. Default is 10%. This can be set for
> all queues with yarn.scheduler.capacity.maximum-am-resource-percent and can
> also be overridden on a per queue basis by setting
>
> yarn.scheduler.capacity.<queue-path>.maximum-am-resource-percent"
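[Editorial aside, not part of the original thread: the per-queue override the documentation describes would look like the following in capacity-scheduler.xml; the queue path `root.default` is an assumption, based on the thread mentioning only the default queue.]

```xml
<property>
  <name>yarn.scheduler.capacity.root.default.maximum-am-resource-percent</name>
  <value>1.0</value>
  <description>Per-queue override (assumed queue path root.default): allow
  up to 100% of this queue's capacity to be used by ApplicationMasters.
  </description>
</property>
```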
>
>
>
> Can someone tell me how to fix this?
>
>
