I feel he wanted to ask about workers. In that case, please launch workers
on Nodes 3, 4, 5 (and/or Nodes 8, 9, 10, etc.).
You need to go to each worker node and start the worker daemon with the
master's URL:Port (typically 7077) as a parameter, so the workers can talk
to the master.
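A minimal sketch of that step, assuming a standalone Spark master on a hypothetical host node7 and SPARK_HOME set on each worker node (substitute your own host name and port):

```shell
# Run on each worker node (Nodes 3, 4, 5).
# spark://node7:7077 is a placeholder for your master's URL:Port.

# Start the worker daemon in the background via the bundled script:
$SPARK_HOME/sbin/start-slave.sh spark://node7:7077

# Or, equivalently, run the worker class directly in the foreground:
# $SPARK_HOME/bin/spark-class org.apache.spark.deploy.worker.Worker spark://node7:7077
```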

You should be able to see 1 master and N workers in the UI, which typically
runs at the master URL on port 8080.

Once you do that, follow Akhil's instructions above to get a sqlContext,
set the master property properly, and run your app.
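As a hedged sketch of that last step, you can point the app at the standalone master via spark-submit (the host node7, the class name, and the jar path below are placeholders, not values from this thread):

```shell
# Submit the driver program against the standalone master (not YARN).
# Replace node7, com.example.MyApp, and the jar path with your own values.
$SPARK_HOME/bin/spark-submit \
  --master spark://node7:7077 \
  --class com.example.MyApp \
  /path/to/my-app.jar
```

Alternatively, set the same master URL on the SparkConf inside the driver program before creating the SparkContext, as Akhil mentions.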
HTH

On Mon, Jun 15, 2015 at 7:02 PM, Akhil Das <ak...@sigmoidanalytics.com>
wrote:

> I'm assuming by spark-client you mean the spark driver program. In that
> case you can pick any machine (say Node 7), create your driver program in
> it and use spark-submit to submit it to the cluster or if you create the
> SparkContext within your driver program (specifying all the properties)
> then you may simply run it with sbt run.
>
> Thanks
> Best Regards
>
> On Sun, Jun 14, 2015 at 6:17 AM, MrAsanjar . <afsan...@gmail.com> wrote:
>
>> I have following hadoop & spark cluster nodes configuration:
>> Nodes 1 & 2 are the resourceManager and nameNode respectively
>> Nodes 3, 4, and 5 each includes nodeManager & dataNode
>> Node 7 is Spark-master configured to run yarn-client or yarn-master modes
>> I have tested it and it works fine.
>> Are there any instructions on how to set up a Spark client in cluster mode?
>> I am not sure if I am doing it right.
>> Thanks in advance
>>
>
>


-- 
Best Regards,
Ayan Guha
