One detail: even after forcing a repartition (via repartition()), Spark still keeps
some tasks on one node; if I increase the load on the system (by raising
spark.streaming.receiver.maxRate), then even when all workers are used, the one
hosting the receiver gets roughly twice as many tasks as the other workers.
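
For reference, here is a minimal sketch of the kind of job I mean (the socket
source, host/port, batch interval, rate and partition count are placeholders,
not my actual setup):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object RepartitionSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("streaming-repartition-sketch")
          // Cap the per-receiver ingest rate (records/sec); raising this
          // is the "increasing the load" step described above.
          .set("spark.streaming.receiver.maxRate", "10000")

        val ssc = new StreamingContext(conf, Seconds(2))
        val lines = ssc.socketTextStream("localhost", 9999)

        // Spread the received blocks across the cluster before the heavy
        // work; without this, tasks tend to stay local to the executor
        // that runs the receiver.
        val redistributed = lines.repartition(8)

        redistributed.map(_.length).count().print()

        ssc.start()
        ssc.awaitTermination()
      }
    }

Even with the repartition() call in place, the skew described above remains.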

Total delay keeps growing in this scenario, even though some workers are not
running at 100% load :-/

What is Spark's policy (or criteria) for distributing tasks across workers? Is
there any documentation on it? Anything will help, thanks :-)






