Re: Spark-on-YARN architecture

2015-03-10 Thread Sean Owen
I suppose you just provision enough resources to run both on that
node... but it really shouldn't matter. The RM and your AM aren't
communicating heavily.
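
If the worry is just fitting the client-mode AM next to the RM, you can
also keep the AM small. A minimal sketch, assuming the spark.yarn.am.*
properties of the Spark 1.3 line and a NodeManager running on the RM host
(the application name is made up):

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch only: shrink the yarn-client-mode AM so it co-locates easily
    // with the RM. In yarn-cluster mode the driver settings size the AM
    // instead.
    val conf = new SparkConf()
      .setAppName("small-am-sketch")
      .setMaster("yarn-client")
      .set("spark.yarn.am.memory", "512m")
      .set("spark.yarn.am.cores", "1")

    val sc = new SparkContext(conf)
    // ... run the job as usual ...
    sc.stop()

As far as I know, Spark itself doesn't let you pin the AM to a particular
host; where the AM container lands is still up to the YARN scheduler.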

On Tue, Mar 10, 2015 at 10:23 AM, Harika Matha  wrote:
> Thanks for the quick reply.
>
> I am running the application in YARN client mode, and I want to run the AM
> on the same node as the RM, in order to use the node that would otherwise
> run the AM for computation.
>
> How can I get the AM to run on the same node as the RM?




Re: Spark-on-YARN architecture

2015-03-10 Thread Harika Matha
Thanks for the quick reply.

I am running the application in YARN client mode, and I want to run the AM
on the same node as the RM, in order to use the node that would otherwise
run the AM for computation.

How can I get the AM to run on the same node as the RM?


On Tue, Mar 10, 2015 at 3:49 PM, Sean Owen  wrote:

> In YARN cluster mode, there is no Spark master, since YARN is your
> resource manager. Yes, you could force your AM somehow to run on the
> same node as the RM, but why -- what do you think is faster about that?


Re: Spark-on-YARN architecture

2015-03-10 Thread Sean Owen
In YARN cluster mode, there is no Spark master, since YARN is your
resource manager. Yes, you could force your AM somehow to run on the
same node as the RM, but why -- what do you think is faster about that?
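
To make the deploy modes concrete, here is a minimal sketch, assuming the
Spark 1.x "yarn-client" / "yarn-cluster" master URLs and a HADOOP_CONF_DIR
pointing at your cluster; in both modes YARN's ResourceManager does the
scheduling, and the per-application Spark pieces are just the AM, the
driver and the executors:

    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch only. In yarn-client mode the driver runs in this JVM and the
    // AM on the cluster merely negotiates executor containers; in
    // yarn-cluster mode the driver itself runs inside the AM container.
    // There is no Spark standalone master process in either case.
    val conf = new SparkConf()
      .setAppName("deploy-mode-sketch")
      .setMaster("yarn-client")   // or "yarn-cluster", usually set via spark-submit --master

    val sc = new SparkContext(conf)
    println(sc.parallelize(1 to 1000).count())
    sc.stop()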

On Tue, Mar 10, 2015 at 10:06 AM, Harika  wrote:
> Hi all,
>
> I have a Spark cluster set up on YARN with 4 nodes (1 master and 3 slaves).
> When I run an application, YARN chooses, at random, one of the slaves to
> run the Application Master. This means that my final computation is carried
> out on only two slaves, which reduces the performance of the cluster.
>
> 1. Is this the correct configuration? What is the architecture of Spark on
> YARN?
> 2. Is there a way to run the Spark master, the YARN Application Master and
> the ResourceManager on a single node, so that I can use the three other
> nodes for computation?
>
> Thanks
> Harika




Spark-on-YARN architecture

2015-03-10 Thread Harika
Hi all,

I have a Spark cluster set up on YARN with 4 nodes (1 master and 3 slaves).
When I run an application, YARN chooses, at random, one of the slaves to run
the Application Master. This means that my final computation is carried out
on only two slaves, which reduces the performance of the cluster.

1. Is this the correct configuration? What is the architecture of Spark on
YARN?
2. Is there a way to run the Spark master, the YARN Application Master and
the ResourceManager on a single node, so that I can use the three other
nodes for computation?

Thanks
Harika





