[ https://issues.apache.org/jira/browse/SPARK-6321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Josh Rosen resolved SPARK-6321.
-------------------------------
    Resolution: Won't Fix

I'm going to resolve this specific issue as "Won't Fix." While we do plan to do some form of dynamic selection of parallelism, I think it is more likely to be driven by the size and characteristics of the input data. If you search JIRA, you'll find newer tickets describing some 1.6.0-targeted proposals for this.

> Adapt the number of partitions used by the Exchange rule to the cluster specifications
> --------------------------------------------------------------------------------------
>
>                 Key: SPARK-6321
>                 URL: https://issues.apache.org/jira/browse/SPARK-6321
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 1.3.1
>            Reporter: Óscar Puertas
>
> Currently, the Exchange rule uses a fixed default of 200 partitions. In my opinion, it would be nice if we could set that default to something related to the cluster specifications instead of a magic number.
> My proposal is to use the default.parallelism value in the Spark configuration.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
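For context on the settings discussed above: the fixed default of 200 is the `spark.sql.shuffle.partitions` property, and the reporter's proposed alternative source is `spark.default.parallelism`. A minimal sketch of overriding them per cluster in `spark-defaults.conf` — the property names are real Spark settings, but the values are illustrative assumptions, not from this ticket:

```
# spark-defaults.conf -- illustrative values, not from this ticket

# The fixed default this ticket questions: the number of partitions
# Spark SQL's Exchange operator uses for post-shuffle data.
spark.sql.shuffle.partitions   200

# The reporter's proposed source for that value: the cluster-level
# default parallelism (normally derived from the total core count).
spark.default.parallelism      64
```

The SQL setting can also be changed at runtime, e.g. `sqlContext.setConf("spark.sql.shuffle.partitions", sc.defaultParallelism.toString)` in a Spark 1.3 shell, which approximates the behavior the ticket proposes.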