I have not used this myself; I only watched a presentation of it at Spark Summit 2013.

https://github.com/radlab/sparrow
https://spark-summit.org/talk/ousterhout-next-generation-spark-scheduling-with-sparrow/

This is pure conjecture on my part, but given your high scheduling latency and the size of your cluster, Sparrow seems like one approach worth looking at.

--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Improving-Spark-multithreaded-performance-tp8359p8411.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
