That is normal behavior: Myriad holds on to the resources so it can flex up a
node manager in case a job arrives within a few seconds, and then releases
them.  The INFO statement is arguably chatty and will probably move to DEBUG
in a future release.
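In the meantime, if the noise bothers you, you can quiet that logger yourself with a log4j override on the ResourceManager.  This is just a sketch assuming the stock log4j.properties setup; the logger name is taken from the log lines below:

```properties
# Raise the threshold for Myriad's per-offer capacity messages
# so the set/reset cycle no longer floods the RM log at INFO.
log4j.logger.org.apache.myriad.scheduler.fgs.YarnNodeCapacityManager=WARN
```

The matching AbstractYarnScheduler "Update resource on node" lines come from Hadoop itself, so you would need a similar override for that class if you want those gone too.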


On Fri, Jun 3, 2016 at 9:18 AM, Stephen Gran <stephen.g...@piksel.com>
wrote:

> Hi,
>
> Not sure if this is relevant, but I see this in the RM logs:
>
> 2016-06-03 13:06:55,466 INFO
> org.apache.myriad.scheduler.fgs.YarnNodeCapacityManager: Setting
> capacity for node slave1.testing.local to <memory:4637, vCores:6>
> 2016-06-03 13:06:55,467 INFO
>
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler:
> Update resource on node: slave1.testing.local from: <memory:0,
> vCores:0>, to: <memory:4637, vCores:6>
> 2016-06-03 13:06:55,467 INFO
> org.apache.myriad.scheduler.fgs.YarnNodeCapacityManager: Setting
> capacity for node slave1.testing.local to <memory:0, vCores:0>
> 2016-06-03 13:06:55,470 INFO
>
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler:
> Update resource on node: slave1.testing.local from: <memory:4637,
> vCores:6>, to: <memory:0, vCores:0>
>
>
> This is happening for each nodemanager, repeating every 5 or 6 seconds.
>   I'm assuming this will be the NM sending the actual capacity report to
> the RM, for use in updating YARN's view of available resource.  I don't
> know if it should be going back and forth like it is, though?
>
> Cheers,
>
> On 03/06/16 09:29, Stephen Gran wrote:
> > Hi,
> >
> > I'm trying to get fine grained scaling going on a test mesos cluster.  I
> > have a single master and 2 agents.  I am running 2 node managers with
> > the zero profile, one per agent.  I can see both of them in the RM UI
> > reporting correctly as having 0 resources.
> >
> > I'm getting stack traces when I try to launch a sample application,
> > though.  I feel like I'm just missing something obvious somewhere - can
> > anyone shed any light?
> >
> > This is on a build of yesterday's git head.
> >
> > Cheers,
> >
> > root@master:/srv/apps/hadoop# bin/yarn jar
> > share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar teragen 10000
> > /outDir
> > 16/06/03 08:23:33 INFO client.RMProxy: Connecting to ResourceManager at
> > master.testing.local/10.0.5.3:8032
> > 16/06/03 08:23:34 INFO terasort.TeraSort: Generating 10000 using 2
> > 16/06/03 08:23:34 INFO mapreduce.JobSubmitter: number of splits:2
> > 16/06/03 08:23:34 INFO mapreduce.JobSubmitter: Submitting tokens for
> > job: job_1464902078156_0001
> > 16/06/03 08:23:35 INFO mapreduce.JobSubmitter: Cleaning up the staging
> > area /tmp/hadoop-yarn/staging/root/.staging/job_1464902078156_0001
> > java.io.IOException:
> > org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException:
> > Invalid resource request, requested memory < 0, or requested memory >
> > max configured, requestedMemory=1536, maxMemory=0
> >          at
> >
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:268)
> >          at
> >
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:228)
> >          at
> >
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:236)
> >          at
> >
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:385)
> >          at
> >
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:329)
> >          at
> >
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:281)
> >          at
> >
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:580)
> >          at
> >
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:218)
> >          at
> >
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:419)
> >          at
> >
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> >          at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> >          at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
> >          at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
> >          at java.security.AccessController.doPrivileged(Native Method)
> >          at javax.security.auth.Subject.doAs(Subject.java:422)
> >          at
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> >          at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
> >
> >          at
> org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:306)
> >          at
> >
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:240)
> >          at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
> >          at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
> >          at java.security.AccessController.doPrivileged(Native Method)
> >          at javax.security.auth.Subject.doAs(Subject.java:422)
> >          at
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> >          at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
> >          at
> org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
> >          at
> org.apache.hadoop.examples.terasort.TeraGen.run(TeraGen.java:301)
> >          at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> >          at
> org.apache.hadoop.examples.terasort.TeraGen.main(TeraGen.java:305)
> >          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >          at
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >          at
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >          at java.lang.reflect.Method.invoke(Method.java:497)
> >          at
> >
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
> >          at
> org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
> >          at
> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
> >          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >          at
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >          at
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >          at java.lang.reflect.Method.invoke(Method.java:497)
> >          at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> >          at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> > Caused by:
> > org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException:
> > Invalid resource request, requested memory < 0, or requested memory >
> > max configured, requestedMemory=1536, maxMemory=0
> >          at
> >
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:268)
> >          at
> >
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:228)
> >          at
> >
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:236)
> >          at
> >
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:385)
> >          at
> >
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:329)
> >          at
> >
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:281)
> >          at
> >
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:580)
> >          at
> >
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:218)
> >          at
> >
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:419)
> >          at
> >
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> >          at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> >          at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
> >          at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
> >          at java.security.AccessController.doPrivileged(Native Method)
> >          at javax.security.auth.Subject.doAs(Subject.java:422)
> >          at
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> >          at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
> >
> >          at
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >          at
> >
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> >          at
> >
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >          at
> java.lang.reflect.Constructor.newInstance(Constructor.java:422)
> >          at
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
> >          at
> >
> org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:101)
> >          at
> >
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.submitApplication(ApplicationClientProtocolPBClientImpl.java:239)
> >          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >          at
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >          at
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >          at java.lang.reflect.Method.invoke(Method.java:497)
> >          at
> >
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
> >          at
> >
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> >          at com.sun.proxy.$Proxy13.submitApplication(Unknown Source)
> >          at
> >
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:253)
> >          at
> >
> org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:290)
> >          at
> org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:290)
> >          ... 24 more
> > Caused by:
> >
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException):
> > Invalid resource request, requested memory < 0, or requested memory >
> > max configured, requestedMemory=1536, maxMemory=0
> >          at
> >
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:268)
> >          at
> >
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:228)
> >          at
> >
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:236)
> >          at
> >
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:385)
> >          at
> >
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:329)
> >          at
> >
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:281)
> >          at
> >
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:580)
> >          at
> >
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:218)
> >          at
> >
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:419)
> >          at
> >
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
> >          at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> >          at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
> >          at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
> >          at java.security.AccessController.doPrivileged(Native Method)
> >          at javax.security.auth.Subject.doAs(Subject.java:422)
> >          at
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> >          at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
> >
> >          at org.apache.hadoop.ipc.Client.call(Client.java:1475)
> >          at org.apache.hadoop.ipc.Client.call(Client.java:1412)
> >          at
> >
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> >          at com.sun.proxy.$Proxy12.submitApplication(Unknown Source)
> >          at
> >
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.submitApplication(ApplicationClientProtocolPBClientImpl.java:236)
> >          ... 34 more
> >
> >
> > Cheers,
> > --
> > Stephen Gran
> > Senior Technical Architect
> >
> > picture the possibilities | piksel.com
>
