The DataFrames shuffle parallelism is currently controlled through the
configuration option spark.sql.shuffle.partitions. The default value is 200.
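
For example, to lower it for a whole application (a minimal sketch,
assuming Spark 1.x with a SQLContext in scope):

    // Reduce the number of partitions used for shuffles (joins, aggregations)
    sqlContext.setConf("spark.sql.shuffle.partitions", "50")

The same setting can also be passed at submit time with
--conf spark.sql.shuffle.partitions=50.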

I have raised an improvement JIRA to make it possible to specify the number
of partitions directly: https://issues.apache.org/jira/browse/SPARK-9872
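
In the meantime, one possible workaround (my own sketch, not part of the
JIRA, assuming a DataFrame named df) is to repartition the result
explicitly:

    // Redistribute the aggregated result into a chosen number of partitions
    val result = df.groupBy("key").count().repartition(10)

Note that repartition itself adds a shuffle, so this only helps when the
downstream stages benefit from the smaller partition count.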


