Repository: spark
Updated Branches:
  refs/heads/master d74dee133 -> d29d1e879


[SPARK-22159][SQL] Make config names consistently end with "enabled".

## What changes were proposed in this pull request?
Rename `spark.sql.execution.arrow.enable` and
`spark.sql.codegen.aggregate.map.twolevel.enable` to end with "enabled".

## How was this patch tested?
N/A
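As a usage sketch (not part of this patch), the renamed keys can be set like any other SQL config; `spark` here is assumed to be an existing SparkSession, e.g. in spark-shell:

```scala
// Hypothetical sketch: set the renamed flags on an existing SparkSession.
// Assumes `spark` is a live SparkSession (e.g. provided by spark-shell).
spark.conf.set("spark.sql.codegen.aggregate.map.twolevel.enabled", "true")
spark.conf.set("spark.sql.execution.arrow.enabled", "true")
```

Note that both configs are marked `.internal()` in SQLConf, so they are not part of the documented public configuration surface.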

Author: Reynold Xin <r...@databricks.com>

Closes #19384 from rxin/SPARK-22159.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/d29d1e87
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/d29d1e87
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/d29d1e87

Branch: refs/heads/master
Commit: d29d1e87995e02cb57ba3026c945c3cd66bb06e2
Parents: d74dee1
Author: Reynold Xin <r...@databricks.com>
Authored: Thu Sep 28 15:59:05 2017 -0700
Committer: gatorsmile <gatorsm...@gmail.com>
Committed: Thu Sep 28 15:59:05 2017 -0700

----------------------------------------------------------------------
 .../src/main/scala/org/apache/spark/sql/internal/SQLConf.scala   | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/d29d1e87/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
----------------------------------------------------------------------
diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
index d00c672..358cf62 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
@@ -668,7 +668,7 @@ object SQLConf {
       .createWithDefault(40)
 
   val ENABLE_TWOLEVEL_AGG_MAP =
-    buildConf("spark.sql.codegen.aggregate.map.twolevel.enable")
+    buildConf("spark.sql.codegen.aggregate.map.twolevel.enabled")
       .internal()
       .doc("Enable two-level aggregate hash map. When enabled, records will first be " +
         "inserted/looked-up at a 1st-level, small, fast map, and then fallback to a " +
@@ -908,7 +908,7 @@ object SQLConf {
     .createWithDefault(false)
 
   val ARROW_EXECUTION_ENABLE =
-    buildConf("spark.sql.execution.arrow.enable")
+    buildConf("spark.sql.execution.arrow.enabled")
       .internal()
       .doc("Make use of Apache Arrow for columnar data transfers. Currently available " +
         "for use with pyspark.sql.DataFrame.toPandas with the following data types: " +

