Github user tgravescs commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22134#discussion_r210910385
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala ---
    @@ -609,7 +609,13 @@ object DataSource extends Logging {
     
       /** Given a provider name, look up the data source class definition. */
       def lookupDataSource(provider: String, conf: SQLConf): Class[_] = {
    -    val provider1 = backwardCompatibilityMap.getOrElse(provider, provider) match {
    +    val customBackwardCompatibilityMap =
    +      conf.getAllConfs
    +        .filter(_._1.startsWith("spark.sql.datasource.map"))
    +        .map{ case (k, v) => (k.replaceFirst("^spark.sql.datasource.map.", ""), v) }
    +    val compatibilityMap = backwardCompatibilityMap ++ customBackwardCompatibilityMap
    --- End diff --
    
    So at this point we are leaving com.databricks.spark.avro -> internal avro as the default, and users have to set it back to com.databricks.spark.avro, or do they set it to empty? If it is set to empty, I think it will return an empty value below, which will cause an issue.
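
    For reference, this is my understanding of how a user would set the proposed config to get the external source back, based on the key prefix in the diff above (the SparkSession variable name and the value shown are just assumptions, not something this PR defines):

        // Hedged sketch, assuming an active SparkSession named `spark`.
        // The key prefix spark.sql.datasource.map. is taken from the diff above;
        // whether com.databricks.spark.avro actually resolves at runtime depends
        // on the user's classpath.
        spark.conf.set(
          "spark.sql.datasource.map.com.databricks.spark.avro",
          "com.databricks.spark.avro")

        // With that override in place, this read would no longer be rewritten
        // to the internal Avro source:
        spark.read.format("com.databricks.spark.avro").load("/path/to/data")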
    
    We should have a test case for the empty-value case, and perhaps add a check for it below to handle it; something along the lines of the sketch below.
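
    Only a rough sketch of what I mean; the test name, the exact expected behavior (fall back to the built-in mapping vs. fail with a clear error), and the exception type are assumptions to be settled in this PR:

        import org.apache.spark.sql.AnalysisException
        import org.apache.spark.sql.execution.datasources.DataSource
        import org.apache.spark.sql.internal.SQLConf

        test("lookupDataSource with an empty spark.sql.datasource.map value") {
          val conf = new SQLConf
          // Explicitly map the external Avro name to an empty string.
          conf.setConfString("spark.sql.datasource.map.com.databricks.spark.avro", "")
          // Assumed expectation: the lookup should not choke on an empty class name
          // but should fail with a clear error (or fall back to the default mapping).
          val e = intercept[AnalysisException] {
            DataSource.lookupDataSource("com.databricks.spark.avro", conf)
          }
          assert(e.getMessage.contains("avro"))
        }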
    
    What about documentation? Is there a JIRA for documenting all the avro stuff? If we do leave this as the default, we definitely want a release note about the change in behavior.


---
