[ https://issues.apache.org/jira/browse/SPARK-823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sean Owen updated SPARK-823:
----------------------------
    Component/s:     (was: Documentation)
                     (was: PySpark)
                     (was: Spark Core)
                     Scheduler

> spark.default.parallelism's default is inconsistent across scheduler backends
> -----------------------------------------------------------------------------
>
>                 Key: SPARK-823
>                 URL: https://issues.apache.org/jira/browse/SPARK-823
>             Project: Spark
>          Issue Type: Bug
>          Components: Scheduler
>    Affects Versions: 0.8.0, 0.7.3, 0.9.1
>            Reporter: Josh Rosen
>            Priority: Minor
>
> The [0.7.3 configuration guide|http://spark-project.org/docs/latest/configuration.html] says that {{spark.default.parallelism}}'s default is 8, but the default is actually max(totalCoreCount, 2) for the standalone scheduler backend, 8 for the Mesos scheduler, and {{threads}} for the local scheduler:
> https://github.com/mesos/spark/blob/v0.7.3/core/src/main/scala/spark/scheduler/cluster/StandaloneSchedulerBackend.scala#L157
> https://github.com/mesos/spark/blob/v0.7.3/core/src/main/scala/spark/scheduler/mesos/MesosSchedulerBackend.scala#L317
> https://github.com/mesos/spark/blob/v0.7.3/core/src/main/scala/spark/scheduler/local/LocalScheduler.scala#L150
> Should this be clarified in the documentation? Should the Mesos scheduler backend's default be revised?

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
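The per-backend behavior the report describes can be summarized in a small sketch. This is a hypothetical illustration of the three defaults, not Spark's actual Scala code; the function name, the `backend` string values, and the parameter names are all assumptions made for this example:

```python
# Hypothetical sketch of the spark.default.parallelism fallbacks described
# in the report above. Not Spark's actual code; names are illustrative.

def default_parallelism(backend, total_core_count=0, threads=1):
    """Return the default parallelism the given scheduler backend would use."""
    if backend == "standalone":
        # StandaloneSchedulerBackend: total cores in the cluster, but at least 2
        return max(total_core_count, 2)
    elif backend == "mesos":
        # MesosSchedulerBackend: hard-coded to 8
        return 8
    elif backend == "local":
        # LocalScheduler: the number of local worker threads
        return threads
    raise ValueError(f"unknown backend: {backend}")

print(default_parallelism("standalone", total_core_count=16))  # 16
print(default_parallelism("standalone", total_core_count=1))   # 2
print(default_parallelism("mesos"))                            # 8
print(default_parallelism("local", threads=4))                 # 4
```

The sketch makes the inconsistency concrete: the same unset setting yields a cluster-sized value on standalone, a fixed constant on Mesos, and a thread count locally.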