I think it depends on your job. In my experience running jobs over terabytes
of data, Spark hit "connection lost" executor failures when I used a few big
JVMs with large heaps, but it ran smoothly with more executors, each with a
smaller heap. I was running Spark on YARN.
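
For your 2x10-core / 160GB nodes, a sketch might look like this (illustrative
numbers, not something I have benchmarked on your hardware): leave a few cores
and some memory for the OS and YARN daemons, and split the rest across several
mid-sized executors, e.g.

    # Illustrative spark-submit for 4 nodes, each 2x10 cores / 160GB RAM.
    # 3 executors per node x 4 nodes = 12 executors. 5 cores per executor
    # leaves 5 cores per node for the OS and the YARN NodeManager, and
    # 3 x 40g heaps leave headroom under 160GB for off-heap use and YARN
    # memory overhead.
    spark-submit \
      --master yarn \
      --num-executors 12 \
      --executor-cores 5 \
      --executor-memory 40g \
      your-app.jar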

Thanks.

Zhan Zhang


On Aug 21, 2014, at 3:42 PM, soroka21 <sorok...@gmail.com> wrote:

> Hi,
> I have relatively big worker nodes. What would be the best worker
> configuration for them? Should I use all the memory for one JVM and utilize
> all cores when running my jobs?
> Each node has 2x10 cores CPU and 160GB of RAM. Cluster has 4 nodes connected
> with 10G network.
> 
> 
> 
> --
> View this message in context: 
> http://apache-spark-user-list.1001560.n3.nabble.com/Configuration-for-big-worker-nodes-tp12614.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
> 



---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
