[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-11 Thread GitBox
URL: https://github.com/apache/spark/pull/25059#discussion_r302704915
 
 

 ##
 File path: sql/core/src/test/scala/org/apache/spark/sql/execution/SQLWindowFunctionSuite.scala
 ##
 @@ -19,6 +19,7 @@ package org.apache.spark.sql.execution
 
 import org.apache.spark.TestUtils.assertSpilled
 import org.apache.spark.sql.{AnalysisException, QueryTest, Row}
+import org.apache.spark.sql.internal.SQLConf
 
 Review comment:
   ```
   -import org.apache.spark.sql.internal.SQLConf
   +import org.apache.spark.sql.internal.SQLConf.{WINDOW_EXEC_BUFFER_IN_MEMORY_THRESHOLD, WINDOW_EXEC_BUFFER_SPILL_THRESHOLD}
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-11 Thread GitBox
URL: https://github.com/apache/spark/pull/25059#discussion_r302704994
 
 

 ##
 File path: sql/core/src/test/scala/org/apache/spark/sql/execution/SQLWindowFunctionSuite.scala
 ##
 @@ -477,8 +478,8 @@ class SQLWindowFunctionSuite extends QueryTest with SharedSQLContext {
 |WINDOW w1 AS (ORDER BY x ROWS BETWEEN UNBOUNDED PRECEDiNG AND CURRENT RoW)
   """.stripMargin)
 
-withSQLConf("spark.sql.windowExec.buffer.in.memory.threshold" -> "1",
-  "spark.sql.windowExec.buffer.spill.threshold" -> "2") {
+withSQLConf(SQLConf.WINDOW_EXEC_BUFFER_IN_MEMORY_THRESHOLD.key -> "1",
+  SQLConf.WINDOW_EXEC_BUFFER_SPILL_THRESHOLD.key -> "2") {
 
 Review comment:
   ```
   -withSQLConf(SQLConf.WINDOW_EXEC_BUFFER_IN_MEMORY_THRESHOLD.key -> "1",
   -  SQLConf.WINDOW_EXEC_BUFFER_SPILL_THRESHOLD.key -> "2") {
   +withSQLConf(WINDOW_EXEC_BUFFER_IN_MEMORY_THRESHOLD.key -> "1",
   +  WINDOW_EXEC_BUFFER_SPILL_THRESHOLD.key -> "2") {
   ```
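For context, the `withSQLConf` pattern this diff relies on temporarily overrides configuration values for the duration of a test body and restores them afterwards. A minimal stand-in sketch in plain Scala — `ConfStore` and `withConf` are hypothetical names for illustration, not Spark APIs:

```scala
// Hedged sketch: a stand-in for the withSQLConf test helper, backed by a
// plain mutable map instead of the real SQLConf. Names are hypothetical.
object ConfStore {
  val settings = scala.collection.mutable.Map[String, String]()

  // Set the given key -> value pairs, run the body, then restore each key
  // to its previous value (or remove it if it was unset before).
  def withConf[T](pairs: (String, String)*)(body: => T): T = {
    val previous = pairs.map { case (k, _) => k -> settings.get(k) }
    pairs.foreach { case (k, v) => settings(k) = v }
    try body
    finally previous.foreach {
      case (k, Some(old)) => settings(k) = old
      case (k, None)      => settings.remove(k)
    }
  }
}
```

Referencing an entry's `.key` inside such a block (rather than a raw string like `"spark.sql.windowExec.buffer.spill.threshold"`) means a typo in the config name fails at compile time instead of silently setting an unknown key.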





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-11 Thread GitBox
URL: https://github.com/apache/spark/pull/25059#discussion_r302704450
 
 

 ##
 File path: sql/core/src/test/scala/org/apache/spark/sql/RuntimeConfigSuite.scala
 ##
 @@ -60,8 +61,8 @@ class RuntimeConfigSuite extends SparkFunSuite {
 val conf = newConf()
 
 // SQL configs
-assert(!conf.isModifiable("spark.sql.sources.schemaStringLengthThreshold"))
-assert(conf.isModifiable("spark.sql.streaming.checkpointLocation"))
+assert(!conf.isModifiable(StaticSQLConf.SCHEMA_STRING_LENGTH_THRESHOLD.key))
+assert(conf.isModifiable(SQLConf.CHECKPOINT_LOCATION.key))
 
 Review comment:
   ```
   -assert(!conf.isModifiable(StaticSQLConf.SCHEMA_STRING_LENGTH_THRESHOLD.key))
   -assert(conf.isModifiable(SQLConf.CHECKPOINT_LOCATION.key))
   +assert(!conf.isModifiable(SCHEMA_STRING_LENGTH_THRESHOLD.key))
   +assert(conf.isModifiable(CHECKPOINT_LOCATION.key))
   ```
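The assertions above hinge on the split between static configs (`StaticSQLConf`, fixed for the lifetime of a session) and runtime SQL configs, which `RuntimeConfig.isModifiable` reports. A minimal hypothetical sketch of that check — illustrative names, not Spark's implementation, though the two key strings come from the test itself:

```scala
// Hypothetical sketch: static entries cannot be changed at runtime,
// so isModifiable reduces to "not registered as static".
object ModifiabilityExample {
  final case class Entry(key: String, isStatic: Boolean)

  class RuntimeConf(entries: Seq[Entry]) {
    private val staticKeys = entries.filter(_.isStatic).map(_.key).toSet
    def isModifiable(key: String): Boolean = !staticKeys.contains(key)
  }

  // Illustrative registrations mirroring the two entries in the test.
  val SCHEMA_STRING_LENGTH_THRESHOLD =
    Entry("spark.sql.sources.schemaStringLengthThreshold", isStatic = true)
  val CHECKPOINT_LOCATION =
    Entry("spark.sql.streaming.checkpointLocation", isStatic = false)

  val conf = new RuntimeConf(Seq(SCHEMA_STRING_LENGTH_THRESHOLD, CHECKPOINT_LOCATION))
}
```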





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-11 Thread GitBox
URL: https://github.com/apache/spark/pull/25059#discussion_r302704340
 
 

 ##
 File path: sql/core/src/test/scala/org/apache/spark/sql/RuntimeConfigSuite.scala
 ##
 @@ -19,6 +19,7 @@ package org.apache.spark.sql
 
 import org.apache.spark.SparkFunSuite
 import org.apache.spark.internal.config
+import org.apache.spark.sql.internal.{SQLConf, StaticSQLConf}
 
 Review comment:
   ```
   -import org.apache.spark.sql.internal.{SQLConf, StaticSQLConf}
   +import org.apache.spark.sql.internal.SQLConf.CHECKPOINT_LOCATION
   +import org.apache.spark.sql.internal.StaticSQLConf.SCHEMA_STRING_LENGTH_THRESHOLD
   ```





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-11 Thread GitBox
URL: https://github.com/apache/spark/pull/25059#discussion_r302702361
 
 

 ##
 File path: sql/hive/src/test/scala/org/apache/spark/sql/hive/test/TestHive.scala
 ##
 @@ -534,7 +534,8 @@ private[hive] class TestHiveSparkSession(
   }
 
   // Clean out the Hive warehouse between each suite
-  val warehouseDir = new File(new URI(sparkContext.conf.get("spark.sql.warehouse.dir")).getPath)
+  val warehouseDir = new File(new URI(
+sparkContext.conf.get(WAREHOUSE_PATH.key)).getPath)
 
 Review comment:
   IIRC, I suggested a one-liner (https://github.com/apache/spark/pull/25059#discussion_r302307831).
   Why do you keep two lines?





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
URL: https://github.com/apache/spark/pull/25059#discussion_r302329225
 
 

 ##
 File path: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
 ##
 @@ -1030,6 +1037,13 @@ object SQLConf {
   .booleanConf
   .createWithDefault(true)
 
+  val ENABLE_VECTORIZED_HASH_MAP =
 
 Review comment:
   Since this is `.internal()`, this will not be exposed.
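For readers following along: the `buildConf(...).internal().booleanConf.createWithDefault(...)` chain registers a `ConfigEntry` once, and `.internal()` only hides the entry from user-facing documentation — the key still works at runtime. A simplified, hypothetical sketch of the builder idea (not Spark's actual implementation, and the key string is illustrative):

```scala
// Hypothetical, simplified version of the SQLConf builder pattern.
object ConfExample {
  final case class ConfigEntry[T](key: String, defaultValue: T, isInternal: Boolean)

  final class ConfigBuilder(key: String, isInternal: Boolean = false) {
    // Marks the entry as internal: registered and usable, but not documented.
    def internal(): ConfigBuilder = new ConfigBuilder(key, isInternal = true)
    def booleanConf: ConfigBuilder = this // value typing elided in this sketch
    def createWithDefault(v: Boolean): ConfigEntry[Boolean] =
      ConfigEntry(key, v, isInternal)
  }

  def buildConf(key: String): ConfigBuilder = new ConfigBuilder(key)

  // Call sites then reference ENABLE_VECTORIZED_HASH_MAP.key, never a raw string.
  val ENABLE_VECTORIZED_HASH_MAP =
    buildConf("spark.sql.example.vectorizedHashMap.enabled") // illustrative key
      .internal()
      .booleanConf
      .createWithDefault(true)
}
```

Declaring the entry once and importing it everywhere is exactly what the PR's title describes: hardcoded config strings become single-definition `ConfigEntry` values.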





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
URL: https://github.com/apache/spark/pull/25059#discussion_r302329089
 
 

 ##
 File path: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
 ##
 @@ -234,6 +234,13 @@ object SQLConf {
   .booleanConf
   .createWithDefault(true)
 
+  val IN_MEMORY_TABLE_SCAN_STATISTICS_ENABLED =
+buildConf("spark.sql.inMemoryTableScanStatistics.enable")
 
 Review comment:
   There are more instances of `.enable`. We may change them together.





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
URL: https://github.com/apache/spark/pull/25059#discussion_r302329029
 
 

 ##
 File path: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
 ##
 @@ -234,6 +234,13 @@ object SQLConf {
   .booleanConf
   .createWithDefault(true)
 
+  val IN_MEMORY_TABLE_SCAN_STATISTICS_ENABLED =
+buildConf("spark.sql.inMemoryTableScanStatistics.enable")
 
 Review comment:
   I also made the same mistake. :)
   We cannot rename it because this configuration name is already in use.





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
URL: https://github.com/apache/spark/pull/25059#discussion_r302307714
 
 

 ##
 File path: sql/hive/src/test/scala/org/apache/spark/sql/hive/test/TestHive.scala
 ##
 @@ -57,9 +57,9 @@ object TestHive
   new SparkConf()
 .set("spark.sql.test", "")
 .set(SQLConf.CODEGEN_FALLBACK.key, "false")
-.set("spark.sql.hive.metastore.barrierPrefixes",
+.set(HiveUtils.HIVE_METASTORE_BARRIER_PREFIXES.key,
   "org.apache.spark.sql.hive.execution.PairSerDe")
-.set("spark.sql.warehouse.dir", TestHiveContext.makeWarehouseDir().toURI.getPath)
+.set(StaticSQLConf.WAREHOUSE_PATH.key, TestHiveContext.makeWarehouseDir().toURI.getPath)
 
 Review comment:
   ```
   -.set(StaticSQLConf.WAREHOUSE_PATH.key, TestHiveContext.makeWarehouseDir().toURI.getPath)
   +.set(WAREHOUSE_PATH.key, TestHiveContext.makeWarehouseDir().toURI.getPath)
   ```





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
URL: https://github.com/apache/spark/pull/25059#discussion_r302307831
 
 

 ##
 File path: sql/hive/src/test/scala/org/apache/spark/sql/hive/test/TestHive.scala
 ##
 @@ -534,7 +534,8 @@ private[hive] class TestHiveSparkSession(
   }
 
   // Clean out the Hive warehouse between each suite
-  val warehouseDir = new File(new URI(sparkContext.conf.get("spark.sql.warehouse.dir")).getPath)
+  val warehouseDir = new File(new URI(
+sparkContext.conf.get(StaticSQLConf.WAREHOUSE_PATH.key)).getPath)
 
 Review comment:
   ```
   -  val warehouseDir = new File(new URI(
   -sparkContext.conf.get(StaticSQLConf.WAREHOUSE_PATH.key)).getPath)
   +  val warehouseDir = new File(new URI(sparkContext.conf.get(WAREHOUSE_PATH.key)).getPath)
   ```





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
URL: https://github.com/apache/spark/pull/25059#discussion_r302307645
 
 

 ##
 File path: sql/hive/src/test/scala/org/apache/spark/sql/hive/test/TestHive.scala
 ##
 @@ -44,7 +44,7 @@ import org.apache.spark.sql.execution.{QueryExecution, SQLExecution}
 import org.apache.spark.sql.execution.command.CacheTableCommand
 import org.apache.spark.sql.hive._
 import org.apache.spark.sql.hive.client.HiveClient
-import org.apache.spark.sql.internal.{SessionState, SharedState, SQLConf, WithTestConf}
+import org.apache.spark.sql.internal._
 
 Review comment:
   ? What we need is `WAREHOUSE_PATH`.
   
   ```
   -import org.apache.spark.sql.internal._
   -import org.apache.spark.sql.internal.StaticSQLConf.CATALOG_IMPLEMENTATION
   +import org.apache.spark.sql.internal.{SessionState, SharedState, SQLConf, WithTestConf}
   +import org.apache.spark.sql.internal.StaticSQLConf.{CATALOG_IMPLEMENTATION, WAREHOUSE_PATH}
   ```





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
URL: https://github.com/apache/spark/pull/25059#discussion_r302307132
 
 

 ##
 File path: sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveSparkSubmitSuite.scala
 ##
 @@ -33,6 +33,7 @@ import org.apache.spark.sql.catalyst.catalog._
 import org.apache.spark.sql.execution.command.DDLUtils
 import org.apache.spark.sql.expressions.Window
 import org.apache.spark.sql.hive.test.{HiveTestUtils, TestHiveContext}
+import org.apache.spark.sql.internal.{SQLConf, StaticSQLConf}
 
 Review comment:
   Please import like the following and update the rest of this file.
   ```
   -import org.apache.spark.sql.internal.{SQLConf, StaticSQLConf}
   +import org.apache.spark.sql.internal.SQLConf.SHUFFLE_PARTITIONS
   +import org.apache.spark.sql.internal.StaticSQLConf.WAREHOUSE_PATH
   ```





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
URL: https://github.com/apache/spark/pull/25059#discussion_r302306529
 
 

 ##
 File path: sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveExternalCatalogVersionsSuite.scala
 ##
 @@ -203,11 +206,11 @@ class HiveExternalCatalogVersionsSuite extends SparkSubmitTestUtils {
   "--class", PROCESS_TABLES.getClass.getName.stripSuffix("$"),
   "--name", "HiveExternalCatalog backward compatibility test",
   "--master", "local[2]",
-  "--conf", "spark.ui.enabled=false",
-  "--conf", "spark.master.rest.enabled=false",
-  "--conf", "spark.sql.hive.metastore.version=1.2.1",
-  "--conf", "spark.sql.hive.metastore.jars=maven",
-  "--conf", s"spark.sql.warehouse.dir=${wareHousePath.getCanonicalPath}",
+  "--conf", s"${UI.UI_ENABLED.key}=false",
+  "--conf", s"${config.MASTER_REST_SERVER_ENABLED}=false",
+  "--conf", s"${HiveUtils.HIVE_METASTORE_VERSION.key}=1.2.1",
+  "--conf", s"${HiveUtils.HIVE_METASTORE_JARS.key}=maven",
+  "--conf", s"${StaticSQLConf.WAREHOUSE_PATH.key}=${wareHousePath.getCanonicalPath}",
 
 Review comment:
   ```
   -  "--conf", s"${UI.UI_ENABLED.key}=false",
   -  "--conf", s"${config.MASTER_REST_SERVER_ENABLED}=false",
   +  "--conf", s"${UI_ENABLED.key}=false",
   +  "--conf", s"${MASTER_REST_SERVER_ENABLED.key}=false",
  "--conf", s"${HiveUtils.HIVE_METASTORE_VERSION.key}=1.2.1",
  "--conf", s"${HiveUtils.HIVE_METASTORE_JARS.key}=maven",
   -  "--conf", s"${StaticSQLConf.WAREHOUSE_PATH.key}=${wareHousePath.getCanonicalPath}",
   +  "--conf", s"${WAREHOUSE_PATH.key}=${wareHousePath.getCanonicalPath}",
   ```





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
URL: https://github.com/apache/spark/pull/25059#discussion_r302306447
 
 

 ##
 File path: sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveExternalCatalogVersionsSuite.scala
 ##
 @@ -184,11 +187,11 @@ class HiveExternalCatalogVersionsSuite extends SparkSubmitTestUtils {
   val args = Seq(
 "--name", "prepare testing tables",
 "--master", "local[2]",
-"--conf", "spark.ui.enabled=false",
-"--conf", "spark.master.rest.enabled=false",
-"--conf", "spark.sql.hive.metastore.version=1.2.1",
-"--conf", "spark.sql.hive.metastore.jars=maven",
-"--conf", s"spark.sql.warehouse.dir=${wareHousePath.getCanonicalPath}",
+"--conf", s"${UI.UI_ENABLED.key}=false",
+"--conf", s"${config.MASTER_REST_SERVER_ENABLED}=false",
+"--conf", s"${HiveUtils.HIVE_METASTORE_VERSION.key}=1.2.1",
+"--conf", s"${HiveUtils.HIVE_METASTORE_JARS.key}=maven",
+"--conf", s"${StaticSQLConf.WAREHOUSE_PATH.key}=${wareHousePath.getCanonicalPath}",
 
 Review comment:
   ```
   -"--conf", s"${StaticSQLConf.WAREHOUSE_PATH.key}=${wareHousePath.getCanonicalPath}",
   +"--conf", s"${WAREHOUSE_PATH.key}=${wareHousePath.getCanonicalPath}",
   ```





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
URL: https://github.com/apache/spark/pull/25059#discussion_r302306288
 
 

 ##
 File path: sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveExternalCatalogVersionsSuite.scala
 ##
 @@ -27,9 +27,12 @@ import scala.util.control.NonFatal
 import org.apache.hadoop.conf.Configuration
 
 import org.apache.spark.{SecurityManager, SparkConf, TestUtils}
+import org.apache.spark.internal.config
+import org.apache.spark.internal.config.UI
 
 Review comment:
   ```
   -import org.apache.spark.internal.config
   -import org.apache.spark.internal.config.UI
   +import org.apache.spark.internal.config.MASTER_REST_SERVER_ENABLED
   +import org.apache.spark.internal.config.UI.UI_ENABLED
   ```





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
URL: https://github.com/apache/spark/pull/25059#discussion_r302306328
 
 

 ##
 File path: sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveExternalCatalogVersionsSuite.scala
 ##
 @@ -27,9 +27,12 @@ import scala.util.control.NonFatal
 import org.apache.hadoop.conf.Configuration
 
 import org.apache.spark.{SecurityManager, SparkConf, TestUtils}
+import org.apache.spark.internal.config
+import org.apache.spark.internal.config.UI
 import org.apache.spark.sql.{QueryTest, Row, SparkSession}
 import org.apache.spark.sql.catalyst.TableIdentifier
 import org.apache.spark.sql.catalyst.catalog.CatalogTableType
+import org.apache.spark.sql.internal.StaticSQLConf
 
 Review comment:
   ```
   -import org.apache.spark.sql.internal.StaticSQLConf
   +import org.apache.spark.sql.internal.StaticSQLConf.WAREHOUSE_PATH
   ```





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
URL: https://github.com/apache/spark/pull/25059#discussion_r302306401
 
 

 ##
 File path: sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveExternalCatalogVersionsSuite.scala
 ##
 @@ -184,11 +187,11 @@ class HiveExternalCatalogVersionsSuite extends SparkSubmitTestUtils {
   val args = Seq(
 "--name", "prepare testing tables",
 "--master", "local[2]",
-"--conf", "spark.ui.enabled=false",
-"--conf", "spark.master.rest.enabled=false",
-"--conf", "spark.sql.hive.metastore.version=1.2.1",
-"--conf", "spark.sql.hive.metastore.jars=maven",
-"--conf", s"spark.sql.warehouse.dir=${wareHousePath.getCanonicalPath}",
+"--conf", s"${UI.UI_ENABLED.key}=false",
+"--conf", s"${config.MASTER_REST_SERVER_ENABLED}=false",
 
 Review comment:
   ```
   -"--conf", s"${UI.UI_ENABLED.key}=false",
   -"--conf", s"${config.MASTER_REST_SERVER_ENABLED}=false",
   +"--conf", s"${UI_ENABLED.key}=false",
   +"--conf", s"${MASTER_REST_SERVER_ENABLED.key}=false",
   ```





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
URL: https://github.com/apache/spark/pull/25059#discussion_r302305891
 
 

 ##
 File path: sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveExternalCatalogVersionsSuite.scala
 ##
 @@ -184,11 +187,11 @@ class HiveExternalCatalogVersionsSuite extends SparkSubmitTestUtils {
   val args = Seq(
 "--name", "prepare testing tables",
 "--master", "local[2]",
-"--conf", "spark.ui.enabled=false",
-"--conf", "spark.master.rest.enabled=false",
-"--conf", "spark.sql.hive.metastore.version=1.2.1",
-"--conf", "spark.sql.hive.metastore.jars=maven",
-"--conf", s"spark.sql.warehouse.dir=${wareHousePath.getCanonicalPath}",
+"--conf", s"${UI.UI_ENABLED.key}=false",
+"--conf", s"${config.MASTER_REST_SERVER_ENABLED}=false",
 
 Review comment:
   Should this use `.key`?





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
URL: https://github.com/apache/spark/pull/25059#discussion_r302305340
 
 

 ##
 File path: sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/HiveThriftServer2Suites.scala
 ##
 @@ -659,7 +660,7 @@ class SingleSessionSuite extends HiveThriftJdbcTest {
   override def mode: ServerMode.Value = ServerMode.binary
 
   override protected def extraConf: Seq[String] =
-"--conf spark.sql.hive.thriftServer.singleSession=true" :: Nil
+s"--conf ${StaticSQLConf.HIVE_THRIFT_SERVER_SINGLESESSION.key}=true" :: Nil
 
 Review comment:
   ```
   -s"--conf ${StaticSQLConf.HIVE_THRIFT_SERVER_SINGLESESSION.key}=true" :: Nil
   +s"--conf ${HIVE_THRIFT_SERVER_SINGLESESSION.key}=true" :: Nil
   ```





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
URL: https://github.com/apache/spark/pull/25059#discussion_r302305265
 
 

 ##
 File path: sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/HiveThriftServer2Suites.scala
 ##
 @@ -44,6 +44,7 @@ import org.apache.spark.{SparkException, SparkFunSuite}
 import org.apache.spark.internal.Logging
 import org.apache.spark.sql.hive.HiveUtils
 import org.apache.spark.sql.hive.test.HiveTestUtils
+import org.apache.spark.sql.internal.StaticSQLConf
 
 Review comment:
   ```
   -import org.apache.spark.sql.internal.StaticSQLConf
   +import org.apache.spark.sql.internal.StaticSQLConf.HIVE_THRIFT_SERVER_SINGLESESSION
   ```





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
URL: https://github.com/apache/spark/pull/25059#discussion_r302304884
 
 

 ##
 File path: sql/core/src/test/scala/org/apache/spark/sql/streaming/continuous/ContinuousSuite.scala
 ##
 @@ -25,6 +25,7 @@ import org.apache.spark.sql.execution.streaming._
 import org.apache.spark.sql.execution.streaming.continuous._
 import org.apache.spark.sql.execution.streaming.sources.ContinuousMemoryStream
 import org.apache.spark.sql.functions._
+import org.apache.spark.sql.internal.SQLConf
import org.apache.spark.sql.internal.SQLConf.CONTINUOUS_STREAMING_EPOCH_BACKLOG_QUEUE_SIZE
 
 Review comment:
   ```
   -import org.apache.spark.sql.internal.SQLConf
   -import org.apache.spark.sql.internal.SQLConf.CONTINUOUS_STREAMING_EPOCH_BACKLOG_QUEUE_SIZE
   +import org.apache.spark.sql.internal.SQLConf.{CONTINUOUS_STREAMING_EPOCH_BACKLOG_QUEUE_SIZE, MIN_BATCHES_TO_RETAIN}
   ```





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
URL: https://github.com/apache/spark/pull/25059#discussion_r302304550
 
 

 ##
 File path: sql/core/src/test/scala/org/apache/spark/sql/streaming/continuous/ContinuousAggregationSuite.scala
 ##
 @@ -20,6 +20,7 @@ package org.apache.spark.sql.streaming.continuous
 import org.apache.spark.sql.AnalysisException
 import org.apache.spark.sql.execution.streaming.sources.ContinuousMemoryStream
 import org.apache.spark.sql.functions._
+import org.apache.spark.sql.internal.SQLConf
 
 Review comment:
   ditto
   ```
   -import org.apache.spark.sql.internal.StaticSQLConf
   +import org.apache.spark.sql.internal.StaticSQLConf.STREAMING_QUERY_LISTENERS
   ```





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] 
Use ConfigEntry for hardcoded configs in SQL
URL: https://github.com/apache/spark/pull/25059#discussion_r302304287
 
 

 ##
 File path: 
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQueryListenersConfSuite.scala
 ##
 @@ -21,6 +21,7 @@ import org.scalatest.BeforeAndAfter
 
 import org.apache.spark.SparkConf
 import org.apache.spark.sql.execution.streaming._
+import org.apache.spark.sql.internal.StaticSQLConf
 
 Review comment:
   ditto.
   ```
   -import org.apache.spark.sql.internal.StaticSQLConf
   +import org.apache.spark.sql.internal.StaticSQLConf.STREAMING_QUERY_LISTENERS
   ```





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] 
Use ConfigEntry for hardcoded configs in SQL
URL: https://github.com/apache/spark/pull/25059#discussion_r302304058
 
 

 ##
 File path: 
sql/core/src/test/scala/org/apache/spark/sql/sources/v2/DataSourceV2SQLSuite.scala
 ##
 @@ -25,6 +25,7 @@ import org.apache.spark.sql.{AnalysisException, QueryTest}
 import org.apache.spark.sql.catalog.v2.Identifier
 import org.apache.spark.sql.catalyst.analysis.{NoSuchTableException, 
TableAlreadyExistsException}
 import org.apache.spark.sql.execution.datasources.v2.orc.OrcDataSourceV2
+import org.apache.spark.sql.internal.SQLConf
 
 Review comment:
   Please update like the following and update the rest of this file.
   ```
   -import org.apache.spark.sql.internal.SQLConf
   +import org.apache.spark.sql.internal.SQLConf.DEFAULT_V2_CATALOG
   ```





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] 
Use ConfigEntry for hardcoded configs in SQL
URL: https://github.com/apache/spark/pull/25059#discussion_r302303628
 
 

 ##
 File path: 
sql/core/src/test/scala/org/apache/spark/sql/sources/CreateTableAsSelectSuite.scala
 ##
 @@ -26,6 +26,7 @@ import org.apache.spark.sql.AnalysisException
 import org.apache.spark.sql.catalyst.TableIdentifier
 import org.apache.spark.sql.catalyst.catalog.BucketSpec
 import org.apache.spark.sql.catalyst.parser.ParseException
+import org.apache.spark.sql.internal.SQLConf
 
 Review comment:
   ```
   -import org.apache.spark.sql.internal.SQLConf
   +import org.apache.spark.sql.internal.SQLConf.BUCKETING_MAX_BUCKETS
   ```





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] 
Use ConfigEntry for hardcoded configs in SQL
URL: https://github.com/apache/spark/pull/25059#discussion_r302303676
 
 

 ##
 File path: 
sql/core/src/test/scala/org/apache/spark/sql/sources/CreateTableAsSelectSuite.scala
 ##
 @@ -252,7 +253,7 @@ class CreateTableAsSelectSuite
 
 val maxNrBuckets: Int = 20
 val catalog = spark.sessionState.catalog
-withSQLConf("spark.sql.sources.bucketing.maxBuckets" -> 
maxNrBuckets.toString) {
+withSQLConf(SQLConf.BUCKETING_MAX_BUCKETS.key -> maxNrBuckets.toString) {
 
 Review comment:
   ```
   -withSQLConf(SQLConf.BUCKETING_MAX_BUCKETS.key -> maxNrBuckets.toString) {
   +withSQLConf(BUCKETING_MAX_BUCKETS.key -> maxNrBuckets.toString) {
   ```





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] 
Use ConfigEntry for hardcoded configs in SQL
URL: https://github.com/apache/spark/pull/25059#discussion_r302303246
 
 

 ##
 File path: 
sql/core/src/test/scala/org/apache/spark/sql/execution/ui/SQLAppStatusListenerSuite.scala
 ##
 @@ -37,6 +37,7 @@ import 
org.apache.spark.sql.catalyst.plans.logical.LocalRelation
 import org.apache.spark.sql.catalyst.util.quietly
 import org.apache.spark.sql.execution.{LeafExecNode, QueryExecution, 
SparkPlanInfo, SQLExecution}
 import org.apache.spark.sql.execution.metric.{SQLMetric, SQLMetrics}
+import org.apache.spark.sql.internal.StaticSQLConf
 
 Review comment:
   Please remove this since we already have `UI_RETAINED_EXECUTIONS`.





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] 
Use ConfigEntry for hardcoded configs in SQL
URL: https://github.com/apache/spark/pull/25059#discussion_r302303385
 
 

 ##
 File path: 
sql/core/src/test/scala/org/apache/spark/sql/execution/ui/SQLAppStatusListenerSuite.scala
 ##
 @@ -647,7 +648,7 @@ class SQLAppStatusListenerMemoryLeakSuite extends 
SparkFunSuite {
   .setMaster("local")
   .setAppName("test")
   .set(config.TASK_MAX_FAILURES, 1) // Don't retry the tasks to run this 
test quickly
-  .set("spark.sql.ui.retainedExecutions", "50") // Set it to 50 to run 
this test quickly
+  .set(StaticSQLConf.UI_RETAINED_EXECUTIONS.key, "50") // Set it to 50 to 
run this test quickly
 
 Review comment:
   ```
   -  .set(StaticSQLConf.UI_RETAINED_EXECUTIONS.key, "50") // Set it to 50 to run this test quickly
   +  .set(UI_RETAINED_EXECUTIONS.key, "50") // Set it to 50 to run this test quickly
   ```








[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] 
Use ConfigEntry for hardcoded configs in SQL
URL: https://github.com/apache/spark/pull/25059#discussion_r302303010
 
 

 ##
 File path: 
sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/state/StateStoreCoordinatorSuite.scala
 ##
 @@ -124,7 +125,7 @@ class StateStoreCoordinatorSuite extends SparkFunSuite 
with SharedSparkContext {
   import spark.implicits._
   coordRef = spark.streams.stateStoreCoordinator
   implicit val sqlContext = spark.sqlContext
-  spark.conf.set("spark.sql.shuffle.partitions", "1")
+  spark.conf.set(SQLConf.SHUFFLE_PARTITIONS.key, "1")
 
 Review comment:
   ```
   -  spark.conf.set(SQLConf.SHUFFLE_PARTITIONS.key, "1")
   +  spark.conf.set(SHUFFLE_PARTITIONS.key, "1")
   ```





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] 
Use ConfigEntry for hardcoded configs in SQL
URL: https://github.com/apache/spark/pull/25059#discussion_r302302967
 
 

 ##
 File path: 
sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/state/StateStoreCoordinatorSuite.scala
 ##
 @@ -27,6 +27,7 @@ import org.apache.spark.scheduler.ExecutorCacheTaskLocation
 import org.apache.spark.sql.SparkSession
 import org.apache.spark.sql.execution.streaming.{MemoryStream, 
StreamingQueryWrapper}
 import org.apache.spark.sql.functions.count
+import org.apache.spark.sql.internal.SQLConf
 
 Review comment:
   ```
   -import org.apache.spark.sql.internal.SQLConf
   +import org.apache.spark.sql.internal.SQLConf.SHUFFLE_PARTITIONS
   ```





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] 
Use ConfigEntry for hardcoded configs in SQL
URL: https://github.com/apache/spark/pull/25059#discussion_r302302208
 
 

 ##
 File path: 
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLViewSuite.scala
 ##
 @@ -665,17 +666,17 @@ abstract class SQLViewSuite extends QueryTest with 
SQLTestUtils {
 sql(s"CREATE VIEW view${idx + 1} AS SELECT * FROM view$idx")
   }
 
-  withSQLConf("spark.sql.view.maxNestedViewDepth" -> "10") {
+  withSQLConf(SQLConf.MAX_NESTED_VIEW_DEPTH.key -> "10") {
 val e = intercept[AnalysisException] {
   sql("SELECT * FROM view10")
 }.getMessage
 assert(e.contains("The depth of view `default`.`view0` exceeds the 
maximum view " +
   "resolution depth (10). Analysis is aborted to avoid errors. 
Increase the value " +
-  "of spark.sql.view.maxNestedViewDepth to work around this."))
+  s"of ${SQLConf.MAX_NESTED_VIEW_DEPTH.key} to work around this."))
   }
 
   val e = intercept[IllegalArgumentException] {
-withSQLConf("spark.sql.view.maxNestedViewDepth" -> "0") {}
+withSQLConf(SQLConf.MAX_NESTED_VIEW_DEPTH.key -> "0") {}
 
 Review comment:
   ditto.





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] 
Use ConfigEntry for hardcoded configs in SQL
URL: https://github.com/apache/spark/pull/25059#discussion_r302302186
 
 

 ##
 File path: 
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLViewSuite.scala
 ##
 @@ -665,17 +666,17 @@ abstract class SQLViewSuite extends QueryTest with 
SQLTestUtils {
 sql(s"CREATE VIEW view${idx + 1} AS SELECT * FROM view$idx")
   }
 
-  withSQLConf("spark.sql.view.maxNestedViewDepth" -> "10") {
+  withSQLConf(SQLConf.MAX_NESTED_VIEW_DEPTH.key -> "10") {
 val e = intercept[AnalysisException] {
   sql("SELECT * FROM view10")
 }.getMessage
 assert(e.contains("The depth of view `default`.`view0` exceeds the 
maximum view " +
   "resolution depth (10). Analysis is aborted to avoid errors. 
Increase the value " +
-  "of spark.sql.view.maxNestedViewDepth to work around this."))
+  s"of ${SQLConf.MAX_NESTED_VIEW_DEPTH.key} to work around this."))
 
 Review comment:
   ditto.





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] 
Use ConfigEntry for hardcoded configs in SQL
URL: https://github.com/apache/spark/pull/25059#discussion_r302302093
 
 

 ##
 File path: 
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLViewSuite.scala
 ##
 @@ -20,6 +20,7 @@ package org.apache.spark.sql.execution
 import org.apache.spark.sql._
 import org.apache.spark.sql.catalyst.TableIdentifier
 import org.apache.spark.sql.catalyst.analysis.NoSuchTableException
+import org.apache.spark.sql.internal.SQLConf
 
 Review comment:
   ```
   -import org.apache.spark.sql.internal.SQLConf
   +import org.apache.spark.sql.internal.SQLConf.MAX_NESTED_VIEW_DEPTH
   ```





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] 
Use ConfigEntry for hardcoded configs in SQL
URL: https://github.com/apache/spark/pull/25059#discussion_r302302154
 
 

 ##
 File path: 
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLViewSuite.scala
 ##
 @@ -665,17 +666,17 @@ abstract class SQLViewSuite extends QueryTest with 
SQLTestUtils {
 sql(s"CREATE VIEW view${idx + 1} AS SELECT * FROM view$idx")
   }
 
-  withSQLConf("spark.sql.view.maxNestedViewDepth" -> "10") {
+  withSQLConf(SQLConf.MAX_NESTED_VIEW_DEPTH.key -> "10") {
 
 Review comment:
   ```
   -  withSQLConf(SQLConf.MAX_NESTED_VIEW_DEPTH.key -> "10") {
   +  withSQLConf(MAX_NESTED_VIEW_DEPTH.key -> "10") {
   ```





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] 
Use ConfigEntry for hardcoded configs in SQL
URL: https://github.com/apache/spark/pull/25059#discussion_r302301828
 
 

 ##
 File path: 
sql/core/src/test/scala/org/apache/spark/sql/SparkSessionExtensionSuite.scala
 ##
 @@ -26,6 +26,7 @@ import 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
 import org.apache.spark.sql.catalyst.rules.Rule
 import org.apache.spark.sql.execution._
 import org.apache.spark.sql.execution.vectorized.OnHeapColumnVector
+import org.apache.spark.sql.internal.StaticSQLConf
 
 Review comment:
   Can we change like the following if we need only one configuration import?
   ```
   - import org.apache.spark.sql.internal.StaticSQLConf
   + import org.apache.spark.sql.internal.StaticSQLConf.SPARK_SESSION_EXTENSIONS
   ```
   
   Then, the other will be shorter.
   ```
   - .config(StaticSQLConf.SPARK_SESSION_EXTENSIONS.key, classOf[MyExtensions].getCanonicalName)
   + .config(SPARK_SESSION_EXTENSIONS.key, classOf[MyExtensions].getCanonicalName)
   ```
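
As a general sketch of the pattern behind this suggestion (a rough illustration, not the exact Spark test code; `MyExtensions` is the test's extension class from the quoted diff, and the Spark session setup here is assumed):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.internal.StaticSQLConf.SPARK_SESSION_EXTENSIONS

// With the ConfigEntry member imported directly, the builder call
// references the typed entry's key instead of repeating the object prefix
// or a hardcoded "spark.sql.extensions" string.
val spark = SparkSession.builder()
  .master("local[1]")
  .config(SPARK_SESSION_EXTENSIONS.key, classOf[MyExtensions].getCanonicalName)
  .getOrCreate()
```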





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] 
Use ConfigEntry for hardcoded configs in SQL
URL: https://github.com/apache/spark/pull/25059#discussion_r302300020
 
 

 ##
 File path: 
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/HashAggregateExec.scala
 ##
 @@ -559,29 +560,24 @@ case class HashAggregateExec(
   private def enableTwoLevelHashMap(ctx: CodegenContext): Unit = {
 if (!checkIfFastHashMapSupported(ctx)) {
   if (modes.forall(mode => mode == Partial || mode == PartialMerge) && 
!Utils.isTesting) {
-logInfo("spark.sql.codegen.aggregate.map.twolevel.enabled is set to true, but"
+logInfo(s"${SQLConf.ENABLE_TWOLEVEL_AGG_MAP.key} is set to true, but"
   + " current version of codegened fast hashmap does not support this aggregate.")
   }
 } else {
   isFastHashMapEnabled = true
 
   // This is for testing/benchmarking only.
   // We enforce to first level to be a vectorized hashmap, instead of the 
default row-based one.
-  isVectorizedHashMapEnabled = sqlContext.getConf(
-"spark.sql.codegen.aggregate.map.vectorized.enable", "false") == "true"
+  isVectorizedHashMapEnabled = sqlContext.conf.enableVectorizedHashMap
 }
   }
 
   private def doProduceWithKeys(ctx: CodegenContext): String = {
 val initAgg = ctx.addMutableState(CodeGenerator.JAVA_BOOLEAN, "initAgg")
 if (sqlContext.conf.enableTwoLevelAggMap) {
   enableTwoLevelHashMap(ctx)
-} else {
-  sqlContext.getConf("spark.sql.codegen.aggregate.map.vectorized.enable", 
null) match {
-case "true" =>
-  logWarning("Two level hashmap is disabled but vectorized hashmap is 
enabled.")
-case _ =>
-  }
+} else if (sqlContext.conf.enableVectorizedHashMap) {
+  logWarning("Two level hashmap is disabled but vectorized hashmap is 
enabled.")
 
 Review comment:
   Yep. This is a correct update.
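
For context, the ConfigEntry pattern this review approves replaces raw `getConf(String, String)` lookups and string comparisons with a typed entry. A rough sketch of what such an entry and its accessor look like inside `SQLConf` (Spark-internal builder API; the doc string is invented for illustration):

```scala
// Sketch of a ConfigEntry definition in object SQLConf:
val ENABLE_VECTORIZED_HASH_MAP =
  buildConf("spark.sql.codegen.aggregate.map.vectorized.enable")
    .internal()
    .doc("Enable the vectorized fast hash map for partial aggregation. " +
      "For testing/benchmarking only.")
    .booleanConf
    .createWithDefault(false)

// A typed accessor then replaces string parsing at call sites:
def enableVectorizedHashMap: Boolean = getConf(ENABLE_VECTORIZED_HASH_MAP)
```

The `else if (sqlContext.conf.enableVectorizedHashMap)` branch in the diff reads this boolean directly, which is why the `match` on a raw string value could be dropped.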





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] 
Use ConfigEntry for hardcoded configs in SQL
URL: https://github.com/apache/spark/pull/25059#discussion_r302299080
 
 

 ##
 File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
 ##
 @@ -1030,6 +1037,13 @@ object SQLConf {
   .booleanConf
   .createWithDefault(true)
 
+  val ENABLE_VECTORIZED_HASH_MAP =
+buildConf("spark.sql.codegen.aggregate.map.vectorized.enable")
 
 Review comment:
   `.enable")` -> `.enabled")`.





[GitHub] [spark] dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] Use ConfigEntry for hardcoded configs in SQL

2019-07-10 Thread GitBox
dongjoon-hyun commented on a change in pull request #25059: [SPARK-28257][SQL] 
Use ConfigEntry for hardcoded configs in SQL
URL: https://github.com/apache/spark/pull/25059#discussion_r302299013
 
 

 ##
 File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
 ##
 @@ -234,6 +234,13 @@ object SQLConf {
   .booleanConf
   .createWithDefault(true)
 
+  val IN_MEMORY_TABLE_SCAN_STATISTICS_ENABLED =
+buildConf("spark.sql.inMemoryTableScanStatistics.enable")
 
 Review comment:
   `.enable")` -> `.enabled")`.




