Hi Herman,
We are very happy to receive your mail. Indeed, we can revert to the
old behaviour of Spark SQL (the performance and the DAG are the same in
both versions).
Many thanks and have a nice weekend,
Tien-Dung
PS: In order to revert, the setting value should be "true".
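For reference, here is a minimal sketch of how one might toggle that option from a SQLContext in Spark 1.6 (app name is invented; whether "true" or "false" brings back the 1.5.x plan is exactly what this exchange is about, so use whichever value works in your build):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Minimal sketch: set the planning option before running the aggregation query.
// The reply quoted below suggests "false" to revert to the 1.5.x plan; in our
// test the old plan only came back with "true".
val sc = new SparkContext(new SparkConf().setAppName("single-distinct-agg"))
val sqlContext = new SQLContext(sc)

sqlContext.setConf("spark.sql.specializeSingleDistinctAggPlanning", "true")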
On Fri, Feb 12, 2016
Hi Tien-Dung,
1.6 plans a single distinct aggregate like multiple distinct aggregates;
this inherently adds some overhead, but it is more stable in the case of
high cardinalities. You can revert to the old behavior by setting the
spark.sql.specializeSingleDistinctAggPlanning option to false. See also:
ht
Hi folks,
I have compared the performance of Spark SQL version 1.6.0 and version
1.5.2. In a simple case, Spark 1.6.0 is quite a bit faster than Spark 1.5.2.
However, for a more complex query, in our case an aggregation query with
grouping sets, Spark SQL version 1.6.0 is very much slower than Spark 1.5.2.
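To illustrate the shape of the query, here is a sketch of the kind of aggregation we mean (table and column names are invented; the GROUPING SETS syntax follows the Hive dialect available through HiveContext in 1.6):

import org.apache.spark.sql.hive.HiveContext

// Hypothetical query combining a distinct aggregate with GROUPING SETS;
// the real query is similar in shape but over our own tables.
val hiveContext = new HiveContext(sc)
val result = hiveContext.sql("""
  SELECT country, city, COUNT(DISTINCT user_id) AS distinct_users
  FROM events
  GROUP BY country, city
  GROUPING SETS ((country), (country, city))
""")
result.show()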