--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Running-Wordcount-on-large-file-stucks-and-throws-OOM-exception-tp12747p12809.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
finished successfully.

But is this the desired behaviour of Spark, that the available driver memory limits the size of the result set? Or is my explanation wrong?
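For what it's worth, that behaviour is expected for driver-side actions: `collect()` (which WordCount examples often end with) copies the entire result set into the driver JVM, so the driver heap, configured via `spark.driver.memory`, does bound how large a collected result can be. Writing the result out with `saveAsTextFile()` instead keeps the data on the executors and avoids that limit. A minimal sketch of the relevant knobs, assuming the standard `spark-submit` CLI and a hypothetical `wordcount.py` job (config fragment, not a full recipe):

```shell
# Raise the driver heap so a larger collect()-ed result fits in driver memory.
# 4g is an illustrative value, not a recommendation.
spark-submit --driver-memory 4g wordcount.py hdfs:///input/big.txt

# Equivalent setting in conf/spark-defaults.conf:
#   spark.driver.memory  4g
```

If the result is too large to ever fit on the driver, prefer `counts.saveAsTextFile(...)` over `counts.collect()` so nothing has to be materialized in one JVM.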