A very simple Spark SQL COUNT operation succeeds in spark-shell under Spark
1.3.1 but fails with a series of out-of-memory errors under 1.4.0.

This gist <https://gist.github.com/ssimeonov/a49b75dc086c3ac6f3c4>
contains the code and the full output from the 1.3.1 and 1.4.0 runs,
including the command line used to start spark-shell.
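
For reference, the failing operation is essentially of this shape (a minimal
sketch with placeholder table and path names; the exact code and data are in
the gist):

    // Inside spark-shell, which provides sc and sqlContext automatically.
    // "t" and the JSON path are placeholders, not the names from the gist.
    val df = sqlContext.jsonFile("/path/to/small-data.json") // same call works in 1.3.1 and 1.4.0
    df.registerTempTable("t")
    sqlContext.sql("SELECT COUNT(*) FROM t").collect()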

Should the 1.4.0 spark-shell be started with different options to avoid this
problem?
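
For concreteness, the kind of options I have in mind are the standard
spark-submit memory flags, e.g. (the sizes below are arbitrary examples, not
values I have verified to help):

    # Give the driver and executors larger heaps before retrying the query.
    ./bin/spark-shell --driver-memory 4g --executor-memory 4g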

Thanks,
Sim



