Github user gengliangwang commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22134#discussion_r210968896
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala ---
    @@ -609,7 +609,13 @@ object DataSource extends Logging {
     
       /** Given a provider name, look up the data source class definition. */
       def lookupDataSource(provider: String, conf: SQLConf): Class[_] = {
    -    val provider1 = backwardCompatibilityMap.getOrElse(provider, provider) match {
    +    val customBackwardCompatibilityMap =
    +      conf.getAllConfs
    +        .filter(_._1.startsWith("spark.sql.datasource.map"))
    +        .map{ case (k, v) => (k.replaceFirst("^spark.sql.datasource.map.", ""), v) }
    +    val compatibilityMap = backwardCompatibilityMap ++ customBackwardCompatibilityMap
    --- End diff ---
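    To make the change above concrete, here is a minimal standalone sketch of the merge logic in the diff, assuming a plain `Map[String, String]` in place of `SQLConf`; the object name `CompatibilityMapSketch` and the default entry are illustrative, not the exact contents of `backwardCompatibilityMap`:

    ```scala
    // Hypothetical sketch of the prefix-strip-and-merge logic from the diff.
    object CompatibilityMapSketch {
      private val prefix = "spark.sql.datasource.map."

      // Illustrative default entry; the real backwardCompatibilityMap in
      // DataSource.scala defines the actual set of mappings.
      val backwardCompatibilityMap: Map[String, String] = Map(
        "com.databricks.spark.avro" -> "org.apache.spark.sql.avro.AvroFileFormat"
      )

      def mergedMap(allConfs: Map[String, String]): Map[String, String] = {
        val custom = allConfs
          .filter { case (k, _) => k.startsWith(prefix) }
          .map { case (k, v) => (k.stripPrefix(prefix), v) }
        // User-supplied entries override the built-in defaults, but nothing
        // here can ever remove a default entry from the map.
        backwardCompatibilityMap ++ custom
      }
    }
    ```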
    
    I have the same concern as @tgravescs. It seems tricky to unset the default mapping.
    
    For example, if by default we map `com.databricks.spark.avro` to the internal Avro implementation, then to unset that mapping we would have to set `spark.sql.datasource.map.com.databricks.spark.avro -> com.databricks.spark.avro`.
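    To see why, continuing the hypothetical sketch above: `++` can override an entry but never remove one, so the only way to neutralize a default is an identity mapping.

    ```scala
    // Continuing the hypothetical CompatibilityMapSketch from above.
    val confs = Map(
      "spark.sql.datasource.map.com.databricks.spark.avro" ->
        "com.databricks.spark.avro"
    )
    val merged = CompatibilityMapSketch.mergedMap(confs)
    // The identity entry shadows the default, so lookup yields the original
    // provider name again, which is the awkward workaround described above.
    assert(merged("com.databricks.spark.avro") == "com.databricks.spark.avro")
    ```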
    
    Currently we only have to deal with Avro and CSV, so I think it is OK to have a single, straightforward configuration like the one proposed in https://github.com/apache/spark/pull/22133.


