Re: [spark standalone mode] force spark to launch driver in a specific worker in cluster mode

2019-07-25 Thread Shamshad Ansari
spark.driver.host (default: local hostname) - Hostname or IP address for the driver.
This is used for communicating with the executors and the standalone Master.
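
For illustration, a submit command that sets this property might look roughly
like the following (the hostname, class and jar are placeholders; note that
spark.driver.host only controls the address the driver advertises, it does not
by itself choose which worker the standalone Master places the driver on):

  spark-submit \
    --master spark://master1:7077 \
    --deploy-mode cluster \
    --conf spark.driver.host=worker2.example.com \
    --class com.example.MyApp \
    my-app.jar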




On Fri, Jul 26, 2019 at 12:43 AM Latha Appanna wrote:

> Hello,
>
> I'm looking for ways to configure spark-master to launch the *driver* on a
> specific spark-worker in *cluster* deploy mode. Say I have master1,
> worker1 and worker2. I want spark-master to always launch the driver on
> worker2 in cluster deploy mode with spark standalone. Please let me know
> what spark configurations need to be set to achieve this.
>
>
> Thanks & Regards,
> Latha
>
>


[spark standalone mode] force spark to launch driver in a specific worker in cluster mode

2019-07-25 Thread Latha Appanna
Hello,

I'm looking for ways to configure spark-master to launch the *driver* on a
specific spark-worker in *cluster* deploy mode. Say I have master1,
worker1 and worker2. I want spark-master to always launch the driver on
worker2 in cluster deploy mode with spark standalone. Please let me know
what spark configurations need to be set to achieve this.


Thanks & Regards,
Latha


Re: Core allocation is scattered

2019-07-25 Thread Srikanth Sriram
Hello,

Below is my understanding.

These are the default configuration parameters that the spark job falls back
to if they are not set to the required values when the job is submitted:

# - SPARK_EXECUTOR_INSTANCES, Number of workers to start (Default: 2)
# - SPARK_EXECUTOR_CORES, Number of cores for the workers (Default: 1).
# - SPARK_EXECUTOR_MEMORY, Memory per Worker (e.g. 1000M, 2G) (Default: 1G)

SPARK_EXECUTOR_INSTANCES -> the number of executor instances to be started;
that is, the maximum number of executors a job can ask for / take from the
cluster resource manager.

SPARK_EXECUTOR_CORES -> the number of cores in each executor; the spark
TaskScheduler will ask for this many cores to be allocated/reserved on each
executor machine.

SPARK_EXECUTOR_MEMORY -> the maximum amount of RAM/memory required by each
executor.

All of these are requested by the TaskScheduler from the cluster manager
(spark standalone, YARN, Mesos, or Kubernetes, which is supported starting
from Spark 2.3) before the job execution actually starts.
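
For example, on YARN these roughly correspond to the following spark-submit
flags (the class name, jar and values are placeholders):

  spark-submit \
    --master yarn \
    --num-executors 4 \
    --executor-cores 4 \
    --executor-memory 2G \
    --class com.example.MyApp \
    my-app.jar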

Also, please note that the initial number of executor instances depends on
"--num-executors", but when there is more data to process and
"spark.dynamicAllocation.enabled" is set to true, Spark will dynamically add
more executors, starting from "spark.dynamicAllocation.initialExecutors".

Note: "spark.dynamicAllocation.initialExecutors" should always be configured
greater than "--num-executors".
spark.dynamicAllocation.initialExecutors (default: spark.dynamicAllocation.minExecutors)
  Initial number of executors to run if dynamic allocation is enabled.
  If `--num-executors` (or `spark.executor.instances`) is set and larger than
  this value, it will be used as the initial number of executors.

spark.executor.memory (default: 1g)
  Amount of memory to use per executor process, in the same format as JVM
  memory strings with a size unit suffix ("k", "m", "g" or "t") (e.g. 512m, 2g).

spark.executor.cores (default: 1 in YARN mode; all the available cores on the
  worker in standalone and Mesos coarse-grained modes)
  The number of cores to use on each executor. In standalone and Mesos
  coarse-grained modes, see this description for more detail.
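
A minimal sketch of a dynamic-allocation submit (the values and the
application jar are only placeholders; the external shuffle service has to be
enabled for dynamic allocation to work):

  spark-submit \
    --master spark://master1:7077 \
    --conf spark.dynamicAllocation.enabled=true \
    --conf spark.dynamicAllocation.initialExecutors=4 \
    --conf spark.dynamicAllocation.minExecutors=2 \
    --conf spark.dynamicAllocation.maxExecutors=10 \
    --conf spark.shuffle.service.enabled=true \
    --class com.example.MyApp \
    my-app.jar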

On Thu, Jul 25, 2019 at 5:54 PM Amit Sharma wrote:

> I have a cluster with 26 nodes, each having 16 cores. I am running a spark
> job with 20 cores, but I did not understand why my application gets 1-2 cores
> on a couple of machines. Why does it not just run on two nodes, like
> node1=16 cores and node2=4 cores? Instead, cores are allocated like
> node1=2, node2=1 ... node14=1. Is there any conf property I need to
> change? I know with dynamic allocation we can use the below, but without
> dynamic allocation is there any?
> --conf "spark.dynamicAllocation.maxExecutors=2"
>
>
> Thanks
> Amit
>


-- 
Regards,
Srikanth Sriram


Re: Core allocation is scattered

2019-07-25 Thread 15313776907
This may be due to your YARN constraints; you can look at the configuration
parameters of your YARN cluster.
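
For example (just a sketch; the exact values depend on your cluster), the
yarn-site.xml properties that bound per-node and per-container vcores are
along these lines:

  yarn.nodemanager.resource.cpu-vcores=16       (vcores each NodeManager offers)
  yarn.scheduler.maximum-allocation-vcores=16   (largest container the scheduler will grant)
  yarn.scheduler.minimum-allocation-vcores=1    (smallest container the scheduler will grant)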


On 7/25/2019 20:23, Amit Sharma wrote:
I have a cluster with 26 nodes, each having 16 cores. I am running a spark job
with 20 cores, but I did not understand why my application gets 1-2 cores on a
couple of machines. Why does it not just run on two nodes, like node1=16 cores
and node2=4 cores? Instead, cores are allocated like node1=2, node2=1 ... node14=1.
Is there any conf property I need to change? I know with dynamic allocation we
can use the below, but without dynamic allocation is there any?
--conf "spark.dynamicAllocation.maxExecutors=2"





Thanks
Amit

Core allocation is scattered

2019-07-25 Thread Amit Sharma
I have a cluster with 26 nodes, each having 16 cores. I am running a spark
job with 20 cores, but I did not understand why my application gets 1-2 cores
on a couple of machines. Why does it not just run on two nodes, like
node1=16 cores and node2=4 cores? Instead, cores are allocated like
node1=2, node2=1 ... node14=1. Is there any conf property I need to
change? I know with dynamic allocation we can use the below, but without
dynamic allocation is there any?
--conf "spark.dynamicAllocation.maxExecutors=2"


Thanks
Amit


spark config about spark.yarn.appMasterEnv

2019-07-25 Thread zenglong chen
Hi all,
   Can spark set a worker node environment variable the way a linux shell
does, e.g.:
--conf spark.yarn.appMasterEnv.PYTHONPATH=./feature-server:$PYTHONPATH ?
It does not work the way a linux shell would.
I just want to append a path to PYTHONPATH on the worker nodes rather than
overwrite it.
Thanks for any answer!
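
For reference, a minimal sketch of the kind of submit I mean (the
spark.executorEnv.PYTHONPATH setting for the executor side and the paths are
only assumptions/placeholders):

  spark-submit \
    --master yarn \
    --deploy-mode cluster \
    --conf spark.yarn.appMasterEnv.PYTHONPATH=./feature-server \
    --conf spark.executorEnv.PYTHONPATH=./feature-server \
    my_app.py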


Can pyspark use --archives to upload a self-defined module rather than --py-files?

2019-07-25 Thread zenglong chen
Hi all,
 '--py-files' uploads zip files, and spark's python will import modules like
'./xxx.zip/module_a/a.py'.
So can I use '--archives a.zip#a' to upload a zip file so that the spark
worker's python will import it like './a/module/xx'?
What is the difference between '--py-files' and '--archives a.zip#a' for
uploading a self-defined module?
Thanks for answer!
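
A minimal sketch of the two submit styles on YARN, assuming a local deps.zip
that contains module_a/ (all file names here are placeholders):

  # --py-files: the zip itself is added to PYTHONPATH, so
  # `import module_a` works on the driver and executors.
  spark-submit --py-files deps.zip my_app.py

  # --archives: the archive is unpacked into the container working
  # directory under the alias "deps"; it is not added to PYTHONPATH
  # automatically, so the application has to put ./deps on sys.path
  # (or set it via an executor environment variable) itself.
  spark-submit --archives deps.zip#deps my_app.py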