Hi,
I have run the job in cluster mode as well. The job does not end: after
some time the container does nothing, but it still shows as running.
In my code, every record is inserted into both Solr and Cassandra.
When I ran it for Solr only, the job completed successfully. Still, I did not
test
Can you add sc.stop() at the end of the code and try?
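A minimal sketch of that suggestion, assuming a plain Scala Spark application (the object and job names are illustrative, not from the thread):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object IndexJob {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("solr-cassandra-indexer")
    val sc = new SparkContext(conf)
    try {
      // ... process partitions, write each record to Solr and Cassandra ...
    } finally {
      // Stop the context explicitly so the YARN application master
      // reports completion instead of staying in RUNNING state.
      sc.stop()
    }
  }
}
```

Putting sc.stop() in a finally block ensures the context is shut down even if the job body throws.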
On 1 Dec 2016 18:03, "Daniel van der Ende" wrote:
Hi,
I've seen this a few times too. Usually it indicates that your driver
doesn't have enough resources to process the result. Sometimes increasing
driver memory is enough (raising the yarn memory overhead can also help). Is
there any specific reason for you to run in client mode and not in cluster mode?
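As an illustration of those two suggestions, a spark-submit invocation might look like the following (the jar name and the memory values are placeholders, not taken from the thread; `spark.yarn.driver.memoryOverhead` applies in cluster mode):

```shell
# Run the driver on the cluster with more driver memory and
# extra off-heap overhead for the YARN container.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 4g \
  --conf spark.yarn.driver.memoryOverhead=1024 \
  your-job.jar
```

In client mode the driver runs on the submitting machine, so `--driver-memory` must be set at launch and the overhead setting for the application master is `spark.yarn.am.memoryOverhead` instead.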
Hi,
I have submitted a Spark job in yarn client mode. The executors and cores were
dynamically allocated. The job has 20 partitions, so 5 containers, each
with 4 cores, were allocated. It processed almost all the records, but it
never exits the job, and in the application master container I am