Hi
I have a Spark 1.0.0 cluster with multiple worker nodes that launch a number of 
external tasks through getRuntime().exec. Currently I have no control over how 
many nodes are used for a given job: at times the scheduler distributes the 
executors evenly among all nodes, and at other times it uses only one node. (The 
difficulty with the latter is that the deployed tasks run out of memory, at 
which point the kernel intervenes and kills them.) I've tried setting 
spark.cores.max to the number of available cores, spark.deploy.spreadOut to 
true, spark.scheduler.mode to FAIR, etc., to no avail. Is there an undocumented 
parameter or a priming procedure that controls this?
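
For reference, this is roughly how I am setting these properties on the driver 
side; the master URL, application name, and core count below are placeholders 
rather than my actual values:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setMaster("spark://master-host:7077")  // standalone master URL (placeholder)
      .setAppName("external-task-runner")     // illustrative application name
      .set("spark.cores.max", "16")           // example: total cores available in the cluster
      .set("spark.deploy.spreadOut", "true")  // attempt to spread executors across workers
      .set("spark.scheduler.mode", "FAIR")
    val sc = new SparkContext(conf)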
Cheers,


Nastooh Avessta
ENGINEER.SOFTWARE ENGINEERING
nave...@cisco.com
Phone: +1 604 647 1527

Cisco Systems Limited
595 Burrard Street, Suite 2123 Three Bentall Centre, PO Box 49121
Vancouver, British Columbia V7X 1J1, CA
Cisco.com <http://www.cisco.com/>




