Hello everyone, Is there a way to specify rack awareness in Spark? For example, if I want to use aggregateByKey, is there a way to have Spark aggregate within the same rack first, and then aggregate between racks? I'm interested in this because I'm trying to figure out whether there is a way to deal with a slow inter-rack network. I have searched through the mailing list and StackOverflow, but all of the results are about rack awareness in HDFS rather than Spark. Thanks a lot!
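To make the question concrete, here is a plain-Python sketch (not a Spark API, and the rack labels are made-up data) of the two-level aggregation I have in mind: sum within each rack first, then merge the per-rack partials, so that only one partial value per key per rack has to cross the inter-rack link.

```python
from collections import defaultdict

# Hypothetical records: (rack_id, key, value).
records = [
    ("rack1", "a", 1), ("rack1", "a", 2), ("rack1", "b", 5),
    ("rack2", "a", 4), ("rack2", "b", 1), ("rack2", "b", 2),
]

# Phase 1: aggregate within each rack (intra-rack traffic only).
per_rack = defaultdict(lambda: defaultdict(int))
for rack, key, value in records:
    per_rack[rack][key] += value

# Phase 2: merge the per-rack partials (one record per key per rack
# crosses the inter-rack link).
totals = defaultdict(int)
for partials in per_rack.values():
    for key, partial in partials.items():
        totals[key] += partial

print(dict(totals))  # {'a': 7, 'b': 8}
```

Note that aggregateByKey already does something similar at the partition level (it combines values within each partition before shuffling partial results), but as far as I can tell it has no notion of which rack a partition lives on.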
Ruiyang