Re: Setting Optimal Number of Spark Executor Instances

2017-03-15 Thread Rohit Karlupia
> *From:* Kevin Peng <kpe...@gmail.com>
> *Sent:* Wednesday, March 15, 2017 1:35 PM
> *To:* mohini kalamkar
> *Cc:* user@spark.apache.org
> *Subject:* Re: Setting Optimal Number of Spark Executor Instances
>
> Mohini,
>
> We set that parameter before we went and played with the number of executors and that didn't seem to help at all.

Re: Setting Optimal Number of Spark Executor Instances

2017-03-15 Thread Yong Zhang
[...] number of executors, I was under the impression that the default dynamic allocation would pick the optimal number of executors for us and that this situation wouldn't happen. Is there something I am missing?

Re: Setting Optimal Number of Spark Executor Instances

2017-03-15 Thread Kevin Peng
Mohini,

We set that parameter before we went and played with the number of executors and that didn't seem to help at all.

Thanks,
KP

On Tue, Mar 14, 2017 at 3:37 PM, mohini kalamkar wrote:
> Hi,
>
> try using this parameter: --conf spark.sql.shuffle.partitions=1000
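The `spark.sql.shuffle.partitions` setting suggested above can be passed on the command line together with an explicit executor configuration. A minimal spark-submit sketch; the executor counts, sizes, and the `my-app.py` script name are illustrative placeholders, not values from the thread:

```shell
# Raise the shuffle partition count from its default of 200, as suggested
# in the thread, and pin the executor count/size explicitly.
spark-submit \
  --conf spark.sql.shuffle.partitions=1000 \
  --num-executors 20 \
  --executor-cores 4 \
  --executor-memory 8g \
  my-app.py
```

The same setting can also be changed at runtime with `spark.conf.set("spark.sql.shuffle.partitions", "1000")` on an existing session.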

Re: Setting Optimal Number of Spark Executor Instances

2017-03-14 Thread mohini kalamkar
Hi,

try using this parameter: --conf spark.sql.shuffle.partitions=1000

Setting Optimal Number of Spark Executor Instances

2017-03-14 Thread kpeng1
[...] number of executors, I was under the impression that the default dynamic allocation would pick the optimal number of executors for us and that this situation wouldn't happen. Is there something I am missing?

--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Setting-Optimal-Number-of-Spark-Executor-Instances-tp28493.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
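When dynamic allocation is not doing the right thing and executors are sized by hand, a commonly cited rule of thumb is: leave one core per node for the OS and Hadoop daemons, cap each executor at around 5 cores, and reserve one executor slot for the YARN application master. This is a community heuristic, not anything stated in the thread or a Spark API; a plain-Python sketch with illustrative cluster numbers:

```python
def size_executors(nodes: int, cores_per_node: int, cores_per_executor: int = 5) -> int:
    """Rule-of-thumb executor count for a YARN cluster.

    Leaves 1 core per node for OS/daemons and reserves one executor
    slot for the YARN application master. A starting point only;
    real tuning depends on the workload and data skew.
    """
    usable_cores = nodes * (cores_per_node - 1)    # 1 core/node for OS & daemons
    total_executors = usable_cores // cores_per_executor
    return max(total_executors - 1, 1)             # 1 executor slot for the AM

# Example: 10 nodes with 16 cores each -> (10 * 15) // 5 - 1 = 29 executors
print(size_executors(10, 16))
```

The resulting number would then be passed as `--num-executors` (with dynamic allocation disabled), since `--num-executors` and `spark.dynamicAllocation.enabled=true` work against each other.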