*From:* Kevin Peng <kpe...@gmail.com>
*Sent:* Wednesday, March 15, 2017 1:35 PM
*To:* mohini kalamkar
*Cc:* user@spark.apache.org
*Subject:* Re: Setting Optimal Number of Spark Executor Instances

Mohini,

We set that parameter before we went and played with the number of
executors and that didn't seem to help at all.

Thanks,

KP
On Tue, Mar 14, 2017 at 3:37 PM, mohini kalamkar
wrote:
> Hi,
>
> try using this parameter --conf spark.sql.shuffle.partitions=1000
>
>> [...] number of executors, I was under the
>> impression that the default dynamic allocation would pick the optimal
>> number of executors for us and that this situation wouldn't happen. Is
>> there something I am missing?
>> --
>> View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/Setting-Optimal-Number-of-Spark-Executor-Instances-tp28493.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
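
For reference, the setting suggested above can also be applied in code
rather than on the spark-submit command line. A minimal sketch in Scala
(spark-shell style; the app name is illustrative and not from the thread):

    // Sketch: applying spark.sql.shuffle.partitions programmatically.
    // The thread passes the same value via
    // --conf spark.sql.shuffle.partitions=1000 at submit time.
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("shuffle-tuning-example") // illustrative name only
      .config("spark.sql.shuffle.partitions", "1000") // default is 200
      .getOrCreate()

    // The SQL shuffle setting is also mutable at runtime, so it can be
    // adjusted between jobs within one session:
    spark.conf.set("spark.sql.shuffle.partitions", "1000")

spark.sql.shuffle.partitions controls how many partitions Spark SQL uses
when shuffling data for joins and aggregations; its default of 200 is
often too low for large inputs.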
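
On the dynamic-allocation point raised in the original question: dynamic
allocation only scales the *number* of executors between configured
bounds in response to the task backlog; it does not tune shuffle
parallelism, which is why spark.sql.shuffle.partitions has to be set
separately. A hedged sketch of the relevant properties (the bound values
are illustrative, not from the thread; Spark 2.x also requires the
external shuffle service for this to work):

    // Sketch: the main dynamic-allocation knobs (illustrative values).
    // Dynamic allocation varies the executor count between min and max
    // based on pending tasks; it does not pick shuffle partition counts.
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("dynamic-allocation-example") // illustrative name only
      .config("spark.dynamicAllocation.enabled", "true")
      .config("spark.dynamicAllocation.minExecutors", "2")
      .config("spark.dynamicAllocation.maxExecutors", "50")
      .config("spark.shuffle.service.enabled", "true") // required in Spark 2.x
      .getOrCreate()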