In a standalone cluster, is there a way to specify that a stage should run on a
faster worker?

The stage reads an HDFS file and then does some filter operations.  Its tasks
also get assigned to the slower worker, but that worker is slow to launch them
because it is still running tasks from other stages.
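For concreteness, the stage looks roughly like this (the input path and the
filter predicate are placeholders, not the actual job):

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("filter-stage"))

    val lines = sc.textFile("hdfs:///path/to/input")   // read the HDFS file
    val kept  = lines.filter(_.contains("ERROR"))      // some filter operations
    kept.count()                                       // action that triggers the stage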

So I think it may be better to assign the stage to a specific worker. Any suggestions?

And would running the cluster on YARN help?



