Re: Spark Standalone Cluster: Having a master and worker on the same node

2016-07-28 Thread Chanh Le
Hi Jestin,
I've seen that most setups run the master and a worker alongside each other
on the same node, because the master doesn't do as much work as the workers
do, and resources are expensive, so we should make use of them.
BTW, in my setup I also run master and worker together: I have 5 nodes, and
3 of them run a master and a worker side by side.
Hope it helps!


Regards,
Chanh

On Thu, Jul 28, 2016 at 12:19 AM, Jestin Ma wrote:

> Hi, I'm doing performance testing and currently have 1 master node and 4
> worker nodes and am submitting in client mode from a 6th cluster node.
>
> I know we can have a master and worker on the same node. Speaking in terms
> of performance and practicality, is it possible/suggested to have another
> worker running on either the 6th node or the master node?
>
> Thank you!
>
>


Re: Spark Standalone Cluster: Having a master and worker on the same node

2016-07-27 Thread Mich Talebzadeh
Hi Jestin,

As I understand it, you are using Spark in standalone mode, meaning that you
start the master and slave/worker processes yourself.

You can specify the number of workers for each node in the
$SPARK_HOME/conf/spark-env.sh file, as below:

# Options for the daemons used in the standalone deploy mode
export SPARK_WORKER_INSTANCES=3  ## sets the number of worker processes per node
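
For reference, a slightly fuller spark-env.sh sketch might look like the
below (the cores and memory values are illustrative only, not
recommendations):

# Options for the daemons used in the standalone deploy mode
export SPARK_WORKER_INSTANCES=3  ## worker processes per node
export SPARK_WORKER_CORES=4      ## cores each worker may use
export SPARK_WORKER_MEMORY=8g    ## memory each worker may give to executors

Note that with more than one worker instance per node you should set
SPARK_WORKER_CORES explicitly, otherwise each worker will try to use all
the cores on the machine.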

And you list the hosts on which to start slaves in the conf/slaves file.
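
For example, a conf/slaves file could look like the below (the hostnames
are illustrative):

# Start a worker (or SPARK_WORKER_INSTANCES workers) on each host listed here
worker1.example.com
worker2.example.com
worker3.example.com
worker4.example.com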

When you run start-master.sh and start-slaves.sh, you will see the worker
processes start.
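
Both scripts live under sbin in a standard installation:

$SPARK_HOME/sbin/start-master.sh
$SPARK_HOME/sbin/start-slaves.sh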

Now, if you have localhost in the slaves file, you will start worker
processes on your master node, so to speak. There is nothing wrong with
that as long as your master node has enough resources for the Spark app.
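
Alternatively, you can start an extra worker by hand on the master (or any
other) node by pointing it at the master URL, something like the below (the
host and port are illustrative; 7077 is the default master port):

$SPARK_HOME/sbin/start-slave.sh spark://master-host:7077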

Once started, you will see something like the below using the jps command:

21697 Worker
18242 Master
21496 Worker
21597 Worker

Where is your edge node (the node from which you are submitting your Spark app)?


HTH




Dr Mich Talebzadeh

LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

http://talebzadehmich.wordpress.com

Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 27 July 2016 at 18:19, Jestin Ma  wrote:

> Hi, I'm doing performance testing and currently have 1 master node and 4
> worker nodes and am submitting in client mode from a 6th cluster node.
>
> I know we can have a master and worker on the same node. Speaking in terms
> of performance and practicality, is it possible/suggested to have another
> worker running on either the 6th node or the master node?
>
> Thank you!
>
>


Spark Standalone Cluster: Having a master and worker on the same node

2016-07-27 Thread Jestin Ma
Hi, I'm doing performance testing and currently have 1 master node and 4
worker nodes and am submitting in client mode from a 6th cluster node.

I know we can have a master and worker on the same node. Speaking in terms
of performance and practicality, is it possible/suggested to have another
worker running on either the 6th node or the master node?

Thank you!