The only UI I currently have access to is the Application Master UI (cluster
mode), which shows the following executor status:
Executors (3)

   - *Memory:* 0.0 B Used (3.7 GB Total)
   - *Disk:* 0.0 B Used

Executor ID | Address | RDD Blocks | Memory Used       | Disk Used | Active Tasks | Failed Tasks | Complete Tasks | Total Tasks | Task Time | Shuffle Read | Shuffle Write
1           | <add1>  | 0          | 0.0 B / 1766.4 MB | 0.0 B     | 0            | 0            | 0              | 0           | 0 ms      | 0.0 B        | 0.0 B
2           | <add2>  | 0          | 0.0 B / 1766.4 MB | 0.0 B     | 0            | 0            | 0              | 0           | 0 ms      | 0.0 B        | 0.0 B
<driver>    | <add3>  | 0          | 0.0 B / 294.6 MB  | 0.0 B     | 0            | 0            | 0              | 0           | 0 ms      | 0.0 B        | 0.0 B

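For what it's worth, below is a minimal sketch of how the same totals could be
dumped from the driver side (it assumes an already-created SparkContext named
sc):

    // getExecutorMemoryStatus maps each block manager ("host:port") to a pair of
    // (maximum storage memory, remaining storage memory), both in bytes.
    sc.getExecutorMemoryStatus.foreach { case (executor, (maxMem, remainingMem)) =>
      println(s"$executor: max=${maxMem / (1024 * 1024)} MB, " +
        s"remaining=${remainingMem / (1024 * 1024)} MB")
    }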

On Tue, Aug 5, 2014 at 11:32 AM, Akhil Das <ak...@sigmoidanalytics.com>
wrote:

> Are you able to see the job on the WebUI (8080)? If yes, how much memory
> are you seeing there specifically for this job?
>
> [image: Inline image 1]
>
> Here you can see I have 11.8 GB RAM on both workers, and my app is using
> 11 GB.
>
> 1. What memory totals are you seeing in your case?
> 2. Make sure your application is using the same Spark master URI (as shown in
> the top left of the web UI) when creating the SparkContext (see the sketch
> below).
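>
> For example, a minimal sketch (the host and port below are placeholders; use
> the exact spark:// URI shown at the top of your master web UI):
>
>     import org.apache.spark.{SparkConf, SparkContext}
>
>     val conf = new SparkConf()
>       .setMaster("spark://<master-host>:7077")  // must match the URI in the web UI
>       .setAppName("MyApp")                      // the app name is just an example
>     val sc = new SparkContext(conf)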
>
>
>
> Thanks
> Best Regards
>
>
> On Tue, Aug 5, 2014 at 11:38 PM, Sunny Khatri <sunny.k...@gmail.com>
> wrote:
>
>> Hi,
>>
>> I'm trying to run a Spark application with executor-memory set to 3G, but
>> I'm running into the following error:
>>
>> 14/08/05 18:02:58 INFO DAGScheduler: Submitting Stage 0 (MappedRDD[5] at map 
>> at KMeans.scala:123), which has no missing parents
>> 14/08/05 18:02:58 INFO DAGScheduler: Submitting 1 missing tasks from Stage 0 
>> (MappedRDD[5] at map at KMeans.scala:123)
>> 14/08/05 18:02:58 INFO YarnClusterScheduler: Adding task set 0.0 with 1 tasks
>> 14/08/05 18:02:59 INFO CoarseGrainedSchedulerBackend: Registered executor: 
>> Actor[akka.tcp://sparkexecu...@test-hadoop2.vpc.natero.com:54358/user/Executor#1670455157]
>>  with ID 2
>> 14/08/05 18:02:59 INFO BlockManagerInfo: Registering block manager 
>> test-hadoop2.vpc.natero.com:39156 with 1766.4 MB RAM
>> 14/08/05 18:03:13 WARN YarnClusterScheduler: Initial job has not accepted 
>> any resources; check your cluster UI to ensure that workers are registered 
>> and have sufficient memory
>> 14/08/05 18:03:28 WARN YarnClusterScheduler: Initial job has not accepted 
>> any resources; check your cluster UI to ensure that workers are registered 
>> and have sufficient memory
>> 14/08/05 18:03:43 WARN YarnClusterScheduler: Initial job has not accepted 
>> any resources; check your cluster UI to ensure that workers are registered 
>> and have sufficient memory
>> 14/08/05 18:03:58 WARN YarnClusterScheduler: Initial job has not accepted 
>> any resources; check your cluster UI to ensure that workers are registered 
>> and have sufficient memory
>>
>>
>> I tried tweaking executor-memory as well, but got the same result. It always
>> gets stuck right after registering the block manager.
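>>
>> For reference, the memory setting could also be pinned from inside the
>> application (a rough sketch; the app name is a placeholder and the 3g value
>> mirrors what I pass as --executor-memory):
>>
>>     import org.apache.spark.{SparkConf, SparkContext}
>>
>>     val conf = new SparkConf()
>>       .setAppName("MyApp")                 // placeholder name
>>       .set("spark.executor.memory", "3g")  // same value as --executor-memory
>>     val sc = new SparkContext(conf)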
>>
>>
>> Are there any other settings that need to be adjusted?
>>
>>
>> Thanks
>>
>> Sunny
>>
>>
>>
>
