Hi Jestin,
In most setups I've seen, the master and a slave run on the same node. I think that's because the master does much less work than the slaves do, and resources are expensive, so it makes sense to use them.
BTW, in my own setup I also run the master alongside a slave: I have 5 nodes, and 3 of them run both the master and a slave process.
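For reference, co-locating a master and a worker on one node in standalone mode can be sketched as below (the hostname spark-node-1 is a placeholder; start-slave.sh is the worker launch script name in older Spark releases):

```shell
# Run on the node that should host both daemons.
# Launch the standalone master (listens on port 7077 by default):
$SPARK_HOME/sbin/start-master.sh

# Launch a worker/slave on the same node, pointing it at the local master
# (replace spark-node-1 with this node's actual hostname):
$SPARK_HOME/sbin/start-slave.sh spark://spark-node-1:7077
```

Both daemons are lightweight to start; whether the master steals noticeable resources from the co-located worker depends on cluster size and job load.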
Hi Justine,
As I understand it, you are using Spark in standalone mode, meaning that you start the master and slave/worker processes yourself.
You can specify the number of workers for each node in the
$SPARK_HOME/conf/spark-env.sh file, as below:
# Options for the daemons used in the standalone deploy mode
export
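As an illustration, a spark-env.sh fragment along these lines sets two worker instances per node (the specific values here are assumptions for the example, not recommendations):

```shell
# Options for the daemons used in the standalone deploy mode
export SPARK_WORKER_INSTANCES=2   # number of worker processes to start on each node
export SPARK_WORKER_CORES=4       # cores each worker is allowed to use
export SPARK_WORKER_MEMORY=8g     # memory each worker is allowed to use
```

With multiple workers per node, make sure SPARK_WORKER_CORES times SPARK_WORKER_INSTANCES does not exceed the node's physical cores, and likewise for memory.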
Hi, I'm doing performance testing. I currently have 1 master node and 4
worker nodes, and I am submitting in client mode from a 6th cluster node.
I know we can have a master and a worker on the same node. In terms
of performance and practicality, is it possible/advisable to have another