Did this happen immediately after you started the cluster, or after
running some queries?

Is this in local mode or cluster mode?
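One quick way to compare the two installs is to dump the effective configuration on each version and diff the output. A minimal sketch, assuming a spark-shell session where `sc` and `sqlContext` are already defined (as they are by default in Spark 1.x):

```scala
// Run in spark-shell on both 1.4.1 and 1.5.0, then diff the two dumps.
// sqlContext.getAllConfs returns the effective Spark SQL settings;
// sc.getConf.getAll returns the core Spark settings.
sqlContext.getAllConfs.toSeq.sorted
  .foreach { case (k, v) => println(s"$k=$v") }

sc.getConf.getAll.sorted
  .foreach { case (k, v) => println(s"$k=$v") }
```

If a default differs between the two versions, setting it back explicitly (for example via `--conf` on spark-shell/spark-submit) would help isolate whether that change is what triggers the OOM.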

On Fri, Sep 11, 2015 at 3:00 AM, Jagat Singh <jagatsi...@gmail.com> wrote:
> Hi,
>
> We have queries which were running fine on 1.4.1 system.
>
> We are testing upgrade and even simple query like
>
> val t1 = sqlContext.sql("select count(*) from table")
>
> t1.show
>
> This works perfectly fine on 1.4.1 but throws an OOM error in 1.5.0.
>
> Are there any changes in the default memory settings from 1.4.1 to 1.5.0?
>
> Thanks,
>
>
>

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
