Hi,

I have run the job in cluster mode as well, and it still does not finish.
After some time the container does nothing, but it still shows as running.

In my code, every record is inserted into both Solr and Cassandra. When I
ran it with only the Solr part, the job completed successfully. I have not
yet tested the Cassandra part on its own; I will check and update.

Has anyone faced this issue before?

I added sparkSession.stop() after the foreachPartition call ends.


My code overview:

  create SparkSession
  read parquet file (20 partitions, roughly 90k records)
  foreachPartition:
      for every record, do some computation
      insert into Cassandra (I am using an INSERT statement)
      index into Solr
  stop the SparkSession
  exit the code
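
In Scala terms, a minimal sketch of that structure (the paths, hostnames,
keyspace/table, and Solr collection names are placeholders, and the client
setup assumes the DataStax Java driver 3.x and SolrJ 6.x):

  import com.datastax.driver.core.Cluster
  import org.apache.solr.client.solrj.impl.HttpSolrClient
  import org.apache.solr.common.SolrInputDocument
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder().appName("parquet-to-solr-cassandra").getOrCreate()
  val df = spark.read.parquet("s3://my-bucket/input")  // ~90k records, 20 partitions

  df.foreachPartition { rows =>
    // One client per partition, not per record.
    val cluster = Cluster.builder().addContactPoint("cassandra-host").build()
    val session = cluster.connect("my_keyspace")
    val solr = new HttpSolrClient.Builder("http://solr-host:8983/solr/my_collection").build()

    rows.foreach { row =>
      // ... per-record computation (placeholder fields) ...
      val id    = row.getString(0)
      val value = row.getString(1)
      // Plain INSERT statement, as described above (a prepared statement would be safer).
      session.execute(s"INSERT INTO my_table (id, value) VALUES ('$id', '$value')")
      val doc = new SolrInputDocument()
      doc.addField("id", id)
      doc.addField("value", value)
      solr.add(doc)
    }
    solr.commit()

    // Close the clients: the Cassandra driver starts non-daemon threads,
    // and leaving them running can keep the executor JVM alive after the
    // partition's work is done.
    solr.close()
    session.close()
    cluster.close()
  }

  spark.stop()

One detail worth noting in the sketch: closing the Cluster/Session at the
end of each partition matters, because the DataStax driver's non-daemon
threads can otherwise keep a JVM alive after the work is finished.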




Thanks,
selvam R

On Thu, Dec 1, 2016 at 7:03 AM, Daniel van der Ende
<daniel.vandere...@gmail.com> wrote:

> Hi,
>
> I've seen this a few times too. Usually it indicates that your driver
> doesn't have enough resources to process the result. Sometimes increasing
> driver memory is enough (increasing the YARN memory overhead can also help).
> Is there any specific reason for you to run in client mode rather than
> cluster mode? Having run into this a number of times (and wanting to spare
> the resources of our submitting machines), we have now switched to using
> YARN cluster mode by default. This seems to resolve the problem.
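>
> For example, with Spark 2.x on YARN that looks something like this (the
> memory values are illustrative, not a recommendation):
>
>   spark-submit \
>     --master yarn \
>     --deploy-mode cluster \
>     --driver-memory 4g \
>     --conf spark.yarn.driver.memoryOverhead=1024 \
>     your-job.jar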
>
> Hope this helps,
>
> Daniel
>
> On 29 Nov 2016 11:20 p.m., "Selvam Raman" <sel...@gmail.com> wrote:
>
>> Hi,
>>
>> I have submitted a Spark job in YARN client mode. The executors and cores
>> were dynamically allocated. The job has 20 partitions, so 5 containers
>> with 4 cores each were allocated. It processed almost all of the records,
>> but the job never exits, and in the application master container I see
>> the messages below.
>>
>>  INFO yarn.YarnAllocator: Canceling requests for 0 executor containers
>>  WARN yarn.YarnAllocator: Expected to find pending requests, but found none.
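>>
>> For reference, "dynamically allocated" here means the usual settings,
>> along these lines (the values are illustrative; EMR enables dynamic
>> allocation by default):
>>
>>   spark.dynamicAllocation.enabled=true
>>   spark.shuffle.service.enabled=true
>>   spark.dynamicAllocation.maxExecutors=5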
>>
>>
>>
>> The same job finished successfully when I ran it with only 1,000 records.
>>
>> Can anyone help me sort out this issue?
>>
>> Spark version: 2.0 (AWS EMR).
>>
>> --
>> Selvam Raman
>> "லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"
>>
>


-- 
Selvam Raman
"லஞ்சம் தவிர்த்து நெஞ்சம் நிமிர்த்து"
