You can start spark-shell with these properties:

--conf spark.dynamicAllocation.enabled=true
--conf spark.dynamicAllocation.initialExecutors=2
--conf spark.dynamicAllocation.minExecutors=2
--conf spark.dynamicAllocation.maxExecutors=5
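
For example, a full invocation might look like this (a sketch reusing the
YARN client-mode setup, queue name, and memory sizes from your original
command; adjust them for your cluster):

./spark/bin/spark-shell --master yarn --deploy-mode client \
  --driver-memory 10g --executor-memory 15g --executor-cores 4 \
  --queue myqueue \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.initialExecutors=2 \
  --conf spark.dynamicAllocation.minExecutors=2 \
  --conf spark.dynamicAllocation.maxExecutors=5 \
  --conf spark.shuffle.service.enabled=true

Note that you drop --num-executors so it doesn't conflict with dynamic
allocation, and on YARN dynamic allocation also requires the external
shuffle service (spark.shuffle.service.enabled=true, with the spark_shuffle
aux service configured on the NodeManagers). Idle executors will then be
released back to the cluster instead of being held until the shell exits.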

On Fri, May 31, 2019 at 5:30 AM Qian He <hq.ja...@gmail.com> wrote:

> Sometimes it's convenient to start a spark-shell on the cluster, like
> ./spark/bin/spark-shell --master yarn --deploy-mode client --num-executors
> 100 --executor-memory 15g --executor-cores 4 --driver-memory 10g --queue
> myqueue
> However, with a command like this, the allocated resources stay occupied
> until the console exits.
>
> Just wondering if it is possible to start a spark-shell with
> dynamicAllocation enabled? If it is, how do I specify the configs? Can
> anyone give a quick example? Thanks!
>
>


-- 
Thanks
Deepak
www.bigdatabig.com
www.keosha.net
