I have a 3-node EC2 cluster, with 18G assigned to spark.executor.memory on each
node. In my Spark batch job I build two RDDs from different branches of the job,
both with exactly the same format. When I perform a union on them, I get an
"executor disassociated" error and the whole Spark job fails and quits. Memory
shouldn't be the problem (as far as I can tell from the UI). It's worth
mentioning that one RDD is significantly bigger than the other (much bigger).
Does anyone have any idea why this happens?
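
For reference, here is a minimal sketch of the kind of job I mean (the paths,
element type, and output action are placeholders, not my actual code):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.rdd.RDD

    object UnionJob {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("union-repro"))

        // Two RDDs with the same element type, built from different sources
        // (paths are placeholders).
        val small: RDD[String] = sc.textFile("hdfs:///data/source-a")
        val large: RDD[String] = sc.textFile("hdfs:///data/source-b") // much bigger

        // union() does not shuffle; the result just carries the partitions of
        // both parents, so its partition count is the sum of the two inputs.
        val combined: RDD[String] = small.union(large)

        combined.saveAsTextFile("hdfs:///data/union-output")
        sc.stop()
      }
    }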
Thanks
Edwin



