Which Spark release are you using?

Can you pastebin the stack trace w.r.t. ExecutorLostFailure?

Thanks

On Mon, Jun 8, 2015 at 8:52 PM, Sourav Mazumder <sourav.mazumde...@gmail.com> wrote:

> Hi,
>
> I am trying to run a SQL query from a JDBC driver using Spark's Thrift Server.
>
> I'm doing a join between a Hive table of around 100 GB and another Hive
> table of about 10 KB, with a filter on a particular column.
>
> The query runs for more than 45 minutes and then fails with
> ExecutorLostFailure. It appears to be memory related: when I increase the
> memory, the failure still happens, just after a longer time.
>
> I'm using 20 GB of executor memory, 2 GB of Spark driver memory, 2 executor
> instances, and 2 executor cores.
>
> I'm running the job on YARN with the master set to 'yarn-client'.
>
> Any idea whether I'm missing any other configuration?
>
> Regards,
> Sourav
>
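
For reference, a rough sketch of how the settings described above are normally
passed when starting the Thrift Server on YARN (the script path assumes a stock
Spark layout; adjust paths and values for your install):

    # start the JDBC/ODBC Thrift Server with the settings from the mail above
    ./sbin/start-thriftserver.sh \
      --master yarn-client \
      --driver-memory 2g \
      --executor-memory 20g \
      --num-executors 2 \
      --executor-cores 2

With only 2 executors of 2 cores each, a scan over ~100 GB has very little
parallelism; several smaller executors are often a better starting point than a
couple of very large ones.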
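
Also, since one side of the join is only ~10 KB, it is worth checking that
Spark broadcasts the small table instead of shuffling the 100 GB one. A minimal
illustration as run from beeline against the Thrift Server (the threshold value
and the table/column names are made up; for Hive tables the broadcast decision
also depends on size statistics being present in the metastore):

    -- tables below the threshold (in bytes) are broadcast to every executor
    SET spark.sql.autoBroadcastJoinThreshold=10485760;
    SELECT b.*, s.description
    FROM big_table b
    JOIN small_table s ON b.key = s.key
    WHERE b.some_col = 'some_value';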
