Wouldn't it likely be the opposite? Too much memory / too many cores being
requested relative to the resources that YARN makes available?
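
As a minimal sketch of what I mean (every host name and size below is a
made-up placeholder): if each worker only advertises, say, 8 GB and 4 cores,
a driver configured like this can never be scheduled, and the job hangs with
exactly that warning:

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class OverAllocated {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf()
                .setAppName("OverAllocated")
                .setMaster("spark://master-host:7077") // placeholder master URL
                .set("spark.executor.memory", "16g")   // exceeds any single worker
                .set("spark.cores.max", "64");         // exceeds the whole cluster
            JavaSparkContext sc = new JavaSparkContext(conf);
            // Any action now waits indefinitely, repeating "Initial job has
            // not accepted any resources" until the request is lowered.
            sc.stop();
        }
    }
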
On Nov 24, 2014 11:00 AM, "Akhil Das" <ak...@sigmoidanalytics.com> wrote:

> This can happen mainly because of the following:
>
> - A wrong master URL (make sure you use the master URL shown in the
> top-left corner of the web UI, which runs on port 8080)
>
> - Requesting more memory/cores than the cluster can offer when creating the
> SparkContext (see the sketch below)
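>
> A minimal sketch of a correctly wired driver (the spark:// URL and the
> resource sizes here are placeholders; copy the exact URL shown on your own
> web UI):
>
>     import org.apache.spark.SparkConf;
>     import org.apache.spark.api.java.JavaSparkContext;
>
>     public class MyApp {
>         public static void main(String[] args) {
>             SparkConf conf = new SparkConf()
>                 .setAppName("MyApp")
>                 .setMaster("spark://your-master:7077") // from the :8080 web UI
>                 .set("spark.executor.memory", "1g")    // <= what a worker offers
>                 .set("spark.cores.max", "2");          // <= free cores in cluster
>             JavaSparkContext sc = new JavaSparkContext(conf);
>             sc.stop();
>         }
>     }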
>
>
>
> Thanks
> Best Regards
>
> On Mon, Nov 24, 2014 at 4:13 PM, vdiwakar.malladi <
> vdiwakar.mall...@gmail.com> wrote:
>
>> Hi,
>>
>> When I try to execute the program from my laptop by connecting to the HDP
>> environment (on which Spark is also configured), I get the warning
>> ("Initial job has not accepted any resources; check your cluster UI to
>> ensure that workers are registered and have sufficient memory") and the
>> job is terminated. My console shows the following log statements.
>>
>> Note: I am able to run the same client program using the spark-submit
>> command. Whatever parameters I passed to the spark-submit command, I passed
>> the same to the SparkConf object, but I still get the same error. Any clue
>> on this?
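>>
>> For reference, this is the flag-to-property mapping I mean (the values are
>> placeholders, not my actual settings):
>>
>>     // spark-submit flag                       SparkConf equivalent
>>     SparkConf conf = new SparkConf()
>>         .setMaster("spark://host:7077")        // --master spark://host:7077
>>         .set("spark.executor.memory", "2g")    // --executor-memory 2g
>>         .set("spark.cores.max", "4");          // --total-executor-cores 4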
>>
>> 14/11/24 16:07:09 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from Stage 0 (MappedRDD[4] at map at JavaSchemaRDD.scala:42)
>> 14/11/24 16:07:09 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
>> 14/11/24 16:07:09 INFO client.AppClient$ClientActor: Executor updated: app-20141124023636-0004/0 is now EXITED (Command exited with code 1)
>> 14/11/24 16:07:09 INFO cluster.SparkDeploySchedulerBackend: Executor app-20141124023636-0004/0 removed: Command exited with code 1
>> 14/11/24 16:07:09 INFO client.AppClient$ClientActor: Executor added: app-20141124023636-0004/2 on worker-20141124021958-STI-SM-DEV-SYS4-51561 (STI-SM-DEV-SYS4:51561) with 4 cores
>> 14/11/24 16:07:09 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20141124023636-0004/2 on hostPort STI-SM-DEV-SYS4:51561 with 4 cores, 8.0 GB RAM
>> 14/11/24 16:07:09 INFO client.AppClient$ClientActor: Executor updated: app-20141124023636-0004/1 is now EXITED (Command exited with code 1)
>> 14/11/24 16:07:09 INFO cluster.SparkDeploySchedulerBackend: Executor app-20141124023636-0004/1 removed: Command exited with code 1
>> 14/11/24 16:07:09 INFO client.AppClient$ClientActor: Executor added: app-20141124023636-0004/3 on worker-20141124022001-STI-SM-DEV-SYS5-50404 (STI-SM-DEV-SYS5:50404) with 4 cores
>> 14/11/24 16:07:09 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20141124023636-0004/3 on hostPort STI-SM-DEV-SYS5:50404 with 4 cores, 8.0 GB RAM
>> 14/11/24 16:07:09 INFO client.AppClient$ClientActor: Executor updated: app-20141124023636-0004/2 is now RUNNING
>> 14/11/24 16:07:10 INFO client.AppClient$ClientActor: Executor updated: app-20141124023636-0004/3 is now RUNNING
>> 14/11/24 16:07:24 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
>> 14/11/24 16:07:39 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
>> 14/11/24 16:07:43 INFO client.AppClient$ClientActor: Executor updated: app-20141124023636-0004/3 is now EXITED (Command exited with code 1)
>> 14/11/24 16:07:43 INFO cluster.SparkDeploySchedulerBackend: Executor app-20141124023636-0004/3 removed: Command exited with code 1
>>
>> Thanks in advance.
>>
>
