Did you try setting the spark.executor.memory property to the amount of
memory you want per worker?

For example, spark.executor.memory=2g

http://spark.incubator.apache.org/docs/latest/configuration.html
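
A minimal sketch of setting it programmatically from the driver, assuming
the 0.8-era system-property style of configuration (newer builds can pass
the same key through SparkConf); the master URL and app name below are
placeholders:

    import org.apache.spark.SparkContext

    // Must be set before the SparkContext is created, or the
    // executors launch with the default 512m.
    System.setProperty("spark.executor.memory", "2g")

    val sc = new SparkContext("spark://master:7077", "ExampleApp")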


2014/1/2 Archit Thakur <archit279tha...@gmail.com>

> Needless to say, the workers can be seen on the UI.
>
>
> On Thu, Jan 2, 2014 at 5:01 PM, Archit Thakur
> <archit279tha...@gmail.com> wrote:
>
>> Hi,
>>
>> I have about 5 GB of data distributed across some 597 sequence files. My
>> application does a flatMap on the union of the RDDs created from the
>> individual files. The flatMap statement throws java.lang.StackOverflowError
>> with the default stack size. I increased the stack size to 1g (both system
>> and JVM). Now it has started printing "Initial job has not accepted any
>> resources; check your cluster UI to ensure that workers are registered and
>> have sufficient memory" in a continuous loop and is not moving forward.
>> Any ideas or suggestions would help. Archit.
>>
>> -Thx.
>>
>
>
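
As for the StackOverflowError itself: unioning several hundred RDDs one
pair at a time builds a very deep lineage, which can overflow the stack no
matter how large it is. A minimal sketch of a single flat union over all
the files instead, assuming sequence files of String pairs; the paths,
master, and flatMap body are placeholders, not taken from the thread:

    import org.apache.spark.SparkContext
    import org.apache.spark.SparkContext._

    val sc = new SparkContext("spark://master:7077", "UnionSketch")

    // Hypothetical paths standing in for the 597 sequence files.
    val paths = (0 until 597).map(i => "hdfs:///data/part-%05d".format(i))

    // One flat union keeps the lineage shallow, unlike a 597-deep
    // chain of rdd.union(rdd) calls.
    val rdds = paths.map(p => sc.sequenceFile[String, String](p))
    val all = sc.union(rdds)

    // Hypothetical flatMap standing in for the application's own function.
    val words = all.flatMap { case (_, v) => v.split(" ") }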
