Hi all,

Since the data I want to process is not on HDFS, I am trying to use sc.makeRDD()
with preferred locations, so that all items of a partition are located on one
node and the task for that partition can be launched on that node.
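
For reference, this is roughly how I build the RDD (a minimal sketch; the host
names, file paths, and the processLocalFile helper are placeholders, not my
actual code):

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("local-data"))

    // Placeholder processing: count the lines of a file that lives on the
    // local disk of the preferred node.
    def processLocalFile(path: String): Int =
      scala.io.Source.fromFile(path).getLines().size

    // One entry per partition: (item, preferred hosts for that partition).
    // makeRDD creates a new partition for each element and records the host
    // names as location preferences for the scheduler.
    val partsWithLocations: Seq[(String, Seq[String])] = Seq(
      ("/data/part-0", Seq("nodeA")),
      ("/data/part-1", Seq("nodeB"))
    )

    val rdd = sc.makeRDD(partsWithLocations)

    // Ideally each task runs on its preferred host and reads the local file there.
    val results = rdd.map(processLocalFile).collect()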

Now comes the problem: sometimes tasks are assigned to the executors that have
already registered before the remaining executors are launched. As a result, a
partition whose data lives on node A may be processed by an executor on node B,
which is not what I expect.

How can I ensure that tasks are assigned only after all executors have been
launched? If anyone has any ideas, please let me know. Thanks so much!



Best Regards,
Qian
