Please see the link below for the available tuning options:
https://spark.apache.org/docs/1.3.1/sql-programming-guide.html#performance-tuning
For example, reducing spark.sql.shuffle.partitions from the default of 200
down to 10 can improve performance significantly.
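A minimal sketch of doing this in Scala (assuming an existing SparkContext
named sc and a registered table named records, both of which are illustrative
and not from this thread):

  import org.apache.spark.sql.SQLContext

  val sqlContext = new SQLContext(sc)

  // The default of 200 shuffle partitions is overkill for small data;
  // fewer partitions mean less per-task scheduling overhead.
  sqlContext.setConf("spark.sql.shuffle.partitions", "10")

  // Any query involving a shuffle (e.g. a GROUP BY) now uses 10 partitions.
  sqlContext.sql("SELECT key, count(*) FROM records GROUP BY key").show()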
Hi,
Thank you for your reply. It is surely going to help.
Regards,
Abhishek Dubey
From: Cheng, Hao [mailto:hao.ch...@intel.com]
Sent: Monday, March 02, 2015 6:52 PM
To: Abhishek Dubey; user@spark.apache.org
Subject: RE: Performance tuning in Spark SQL.
This is actually quite an open question. From my understanding, there are
probably several ways to tune it, such as:
* SQL configurations, like:

  Configuration Key                        Default Value
  -----------------                        -------------
  spark.sql.autoBroadcastJoinThreshold     10 * 1024 * 1024
  spark.sql.defaultSizeInBytes             10 * 1024 * 1024 + 1
  spark.sql.
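For instance, a sketch of raising the broadcast-join threshold via the same
sqlContext as above (the 20 MB value here is illustrative, not a
recommendation from this thread):

  // Tables smaller than this many bytes are broadcast to every executor
  // during a join, which avoids shuffling the larger side.
  sqlContext.setConf("spark.sql.autoBroadcastJoinThreshold",
    (20 * 1024 * 1024).toString)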
You have sent four questions that are very general in nature. They might be
better answered if you googled those topics: there is a wealth of material
available.
2015-03-02 2:01 GMT-08:00 dubey_a :
> What are the ways to tune query performance in Spark SQL?
>