spark yarn client mode

2016-01-19 Thread Sanjeev Verma
Hi

Do I need to install Spark on every node of the YARN cluster if I want to
submit a job in yarn-client mode?
Is there any way to spawn Spark executors on cluster nodes where Spark is
not installed?

Thanks
Sanjeev


Re: spark yarn client mode

2016-01-19 Thread 刘虓
Hi,
No, you don't need to.
However, when submitting a job, certain resources are uploaded to HDFS,
which can be a performance issue. The log below shows what gets uploaded;
a sketch of how to avoid the repeated upload follows the log excerpt:

15/12/29 11:10:06 INFO Client: Uploading resource
file:/data/spark/spark152/lib/spark-assembly-1.5.2-hadoop2.6.0.jar -> hdfs

15/12/29 11:10:08 INFO Client: Uploading resource
file:/data/spark/spark152/python/lib/pyspark.zip -> hdfs

15/12/29 11:10:08 INFO Client: Uploading resource
file:/data/spark/spark152/python/lib/py4j-0.8.2.1-src.zip -> hdfs

15/12/29 11:10:08 INFO Client: Uploading resource
file:/data/tmp/spark-86791975-2cef-4663-aacd-5da95e58cd91/__spark_conf__6261788210225867171.zip
-> hdfs
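
In Spark 1.x, one way to avoid re-uploading the assembly jar on every
submission is to stage it on HDFS once and point spark.yarn.jar at it. A
minimal sketch, assuming the same 1.5.2 layout as in the log above (the
HDFS path is illustrative):

# stage the assembly jar on HDFS once
hdfs dfs -mkdir -p /user/spark/share/lib
hdfs dfs -put /data/spark/spark152/lib/spark-assembly-1.5.2-hadoop2.6.0.jar /user/spark/share/lib/

# conf/spark-defaults.conf: point the client at the staged jar so it is not
# uploaded again on each submission
spark.yarn.jar hdfs:///user/spark/share/lib/spark-assembly-1.5.2-hadoop2.6.0.jar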

2016-01-19 19:43 GMT+08:00 Sanjeev Verma :

> Hi
>
> Do I need to install Spark on every node of the YARN cluster if I want to
> submit a job in yarn-client mode?
> Is there any way to spawn Spark executors on cluster nodes where Spark is
> not installed?
>
> Thanks
> Sanjeev
>


Re: strange behavior in spark yarn-client mode

2016-01-14 Thread Marcelo Vanzin
On Thu, Jan 14, 2016 at 10:17 AM, Sanjeev Verma
 wrote:
> Now it spawns a single executor with 1060 MB. I am not able to understand
> why this time the executor gets roughly 1 GB (plus overhead) instead of the
> 2 GB I specified.

Where are you looking for the memory size for the container?

-- 
Marcelo




Re: strange behavior in spark yarn-client mode

2016-01-14 Thread Marcelo Vanzin
Please reply to the list.

The web UI does not show the total size of the executor's heap. It
shows the amount of memory available for caching data, which is, give
or take, 60% of the heap by default.
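
As a rough sanity check, assuming the defaults of the static memory manager
in Spark 1.5.x (the fractions are the defaults; the usable heap the JVM
reports for -Xmx2g is approximate):

storage memory shown on the Executors tab
  ~= usable heap * spark.storage.memoryFraction * spark.storage.safetyFraction
  ~= 1.9 GB * 0.6 * 0.9
  ~= 1060 MB

which lines up with the number reported for a 2 GB executor.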

On Thu, Jan 14, 2016 at 11:03 AM, Sanjeev Verma
 wrote:
> I am looking at the web UI of the Spark application master (the Executors tab).
>
> On Fri, Jan 15, 2016 at 12:08 AM, Marcelo Vanzin 
> wrote:
>>
>> On Thu, Jan 14, 2016 at 10:17 AM, Sanjeev Verma
>>  wrote:
>> > Now it spawns a single executor with 1060 MB. I am not able to
>> > understand why this time the executor gets roughly 1 GB (plus overhead)
>> > instead of the 2 GB I specified.
>>
>> Where are you looking for the memory size for the container?
>>
>> --
>> Marcelo
>
>



-- 
Marcelo




strange behavior in spark yarn-client mode

2016-01-14 Thread Sanjeev Verma
I am seeing strange behaviour while running Spark in yarn-client mode. I am
observing this on a single-node YARN cluster. In spark-defaults I have
configured the executor memory as 2g and started the spark shell as follows:

bin/spark-shell --master yarn-client

This triggers 2 executors on the node, each showing 1060 MB of memory. I was
able to figure out that if you don't specify --num-executors it will spawn 2
executors on the node by default.
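
For reference, the settings in play are roughly the following (an
illustrative sketch of the defaults, not copied from the actual cluster):

# conf/spark-defaults.conf -- executor heap as configured above
spark.executor.memory      2g

# not set explicitly; --num-executors maps to this property, and in YARN
# mode it defaults to 2, which is why two executors come up when it is omitted
spark.executor.instances   2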


Now when I run it again with

bin/spark-shell --master yarn-client --num-executors 1

Now it spawns a single executor with 1060 MB. I am not able to understand
why this time the executor gets roughly 1 GB (plus overhead) instead of the
2 GB I specified.

Why am I seeing this strange behavior?


Re: Spark (yarn-client mode) Hangs in final stages of Collect or Reduce

2015-02-09 Thread nitin
Have you checked the corresponding executor logs as well? I think the
information you have provided here is not enough to actually understand your
issue.
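
If YARN log aggregation is enabled, the executor logs can be pulled after
the application finishes with something like (the application id is a
placeholder):

yarn logs -applicationId <application_id>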



