Hello mingyu,
That is a reasonable way of doing this. Spark Streaming does not
natively support sticky partitions because Spark launches tasks based
on data locality. If a task has no locality preference (for example,
reduce tasks can run anywhere), its location is assigned effectively at
random. So the cogroup or join with the cached RDD is what introduces a
locality preference and keeps the partitions in place.
I found a workaround.
I can make my auxiliary data an RDD, partition it, and cache it.
Later, I can cogroup it with other RDDs, and Spark will try to keep the
cached RDD partitions where they are rather than shuffle them.
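A minimal sketch of that workaround in Scala (the names auxData and
incoming, the sample data, and the partition count are illustrative,
not from the original job):

  import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}
  import org.apache.spark.storage.StorageLevel

  val sc = new SparkContext(new SparkConf().setAppName("sticky-partitions"))
  val partitioner = new HashPartitioner(8)

  // Auxiliary data: partition it explicitly and cache it, so the
  // partitions stay resident on the executors that computed them.
  val auxData = sc.parallelize(Seq((1, "a"), (2, "b"), (3, "c")))
    .partitionBy(partitioner)
    .persist(StorageLevel.MEMORY_ONLY)
  auxData.count() // materialize the cache before the first cogroup

  // Cogroup later RDDs against the cached RDD with the same partitioner;
  // Spark then prefers to schedule the cogroup tasks on the nodes that
  // already hold the cached partitions, so they are not shuffled.
  val incoming = sc.parallelize(Seq((1, 10L), (2, 20L)))
  incoming.partitionBy(partitioner).cogroup(auxData).collect().foreach(println)

In Spark Streaming the same cogroup would typically run inside
dstream.transform { rdd => ... }, once per batch.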
Also, setting spark.locality.wait=100 did not work for me.
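For reference, a sketch of how that setting would be applied (on the
SparkConf, before the context is created; the app name is illustrative):

  import org.apache.spark.{SparkConf, SparkContext}

  // "100" is read as 100 milliseconds; lowering it only makes the
  // scheduler give up on locality sooner, it does not pin partitions.
  val conf = new SparkConf()
    .setAppName("locality-wait-example")
    .set("spark.locality.wait", "100")
  val sc = new SparkContext(conf)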
I posted a question on Stack Overflow and haven't gotten an answer yet.
http://stackoverflow.com/questions/28079037/how-to-make-spark-partition-sticky-i-e-stay-with-node
Is there a way to make a partition stay with a node in Spark Streaming? I
need this since I have to load a large amount