dtenedor commented on code in PR #36745:
URL: https://github.com/apache/spark/pull/36745#discussion_r891452022


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
##########
@@ -2881,6 +2881,15 @@ object SQLConf {
       .booleanConf
       .createWithDefault(true)
 
+  val DEFAULT_COLUMN_ALLOWED_PROVIDERS =
+    buildConf("spark.sql.defaultColumn.allowedProviders")
+      .internal()
+      .doc("List of table providers wherein SQL commands are permitted to 
assign DEFAULT column " +
+        "values. Comma-separated list, whitespace ignored, case-insensitive.")
+      .version("3.4.0")
+      .stringConf
+      .createWithDefault("csv,json,orc,parquet")

Review Comment:
   @cloud-fan @gengliangwang this is a good point. My thinking was that the default value of this conf lists the four data sources for which we actually support scanning DEFAULT column values. Its primary purpose is to serve as a mechanism for banning default values with the other data sources; users are not expected to change it.
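   
   For clarity, here is a minimal sketch of the parsing semantics the doc string describes (the helper name `isDefaultColumnAllowed` is hypothetical, not the actual PR code):
   
   ```scala
   // Hypothetical helper illustrating the conf's documented semantics:
   // comma-separated list, whitespace ignored, case-insensitive.
   def isDefaultColumnAllowed(provider: String, allowedProviders: String): Boolean = {
     allowedProviders
       .split(",")
       .map(_.trim.toLowerCase)        // ignore whitespace, normalize case
       .contains(provider.toLowerCase)
   }
   
   // isDefaultColumnAllowed("Parquet", "csv, json, orc, parquet")  // true
   // isDefaultColumnAllowed("avro", "csv,json,orc,parquet")        // false
   ```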
   
   As Gengliang mentions, it could also serve as an escape hatch if we discover a bug in one of these data sources later, and it's useful for testing. Maybe we can extend the description of this conf to mention that in the normal/expected case, users don't have to change anything.
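   
   To illustrate that escape hatch (a hypothetical example, assuming we later found a bug in the ORC support): dropping "orc" from the default list would ban DEFAULT column values for ORC tables only.
   
   ```scala
   // Hypothetical escape hatch: remove "orc" from the default
   // "csv,json,orc,parquet" so DEFAULT values are rejected for ORC tables.
   spark.conf.set("spark.sql.defaultColumn.allowedProviders", "csv,json,parquet")
   ```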



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

