[ https://issues.apache.org/jira/browse/SPARK-24924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16568199#comment-16568199 ]

Thomas Graves commented on SPARK-24924:
---------------------------------------

Hmm, so we are adding this for ease of upgrading, I guess (so users don't have 
to change their code), but at the same time we aren't adding the 
spark.read.avro syntax, so either it breaks in that case or they get a 
different implementation by default?

This doesn't make sense to me.  Personally, I don't like having other add-on 
package names in our code at all, and here we are mapping what the user 
thought they would get to our internal implementation, which could very well be 
different.  I would rather just plainly error out saying these conflict: either 
update your code or change your external package to use a different name.  One 
might also argue this breaks API compatibility, since the .avro method went 
away, but it's a third-party library, so you can probably get away with that.
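
For concreteness, here is a minimal sketch of the two usage patterns at issue; 
the session setup and path are hypothetical, not from the issue:

{code:scala}
// Minimal sketch (hypothetical path and session) of the two ways users call
// the Databricks Avro package today and how the mapping treats them.
import org.apache.spark.sql.SparkSession

object AvroMappingSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("avro-mapping-sketch")
      .master("local[*]")
      .getOrCreate()

    // 1) Format-string form: SPARK-24924 remaps "com.databricks.spark.avro"
    //    to the built-in Avro source, so this keeps working but silently
    //    runs against Spark's own implementation.
    val viaFormat = spark.read
      .format("com.databricks.spark.avro")
      .load("/tmp/events.avro")   // hypothetical path

    // 2) Implicit-method form provided by the external package:
    //      import com.databricks.spark.avro._
    //      val viaImplicit = spark.read.avro("/tmp/events.avro")
    //    This is NOT covered by the mapping; it still needs the third-party
    //    jar on the classpath, which is the inconsistency noted above.

    viaFormat.printSchema()
    spark.stop()
  }
}
{code}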

> Add mapping for built-in Avro data source
> -----------------------------------------
>
>                 Key: SPARK-24924
>                 URL: https://issues.apache.org/jira/browse/SPARK-24924
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>    Affects Versions: 2.4.0
>            Reporter: Dongjoon Hyun
>            Assignee: Dongjoon Hyun
>            Priority: Minor
>             Fix For: 2.4.0
>
>
> This issue aims to do the following.
>  # Like the `com.databricks.spark.csv` mapping, we should map 
> `com.databricks.spark.avro` to the built-in Avro data source.
>  # Remove the incorrect error message, `Please find an Avro package at ...`.
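
For reference, a rough sketch of the kind of provider-alias mapping the 
description above refers to; the object name, entries, class names, and 
resolution helper are illustrative assumptions, and Spark's actual lookup 
code may differ:

{code:scala}
// Illustrative alias table in the spirit of the existing
// com.databricks.spark.csv mapping; entries and class names are
// assumptions for this sketch, not Spark's actual table.
object ProviderAliasSketch {
  val backwardCompatibilityMap: Map[String, String] = Map(
    "com.databricks.spark.csv"  -> "org.apache.spark.sql.execution.datasources.csv.CSVFileFormat",
    "com.databricks.spark.avro" -> "org.apache.spark.sql.avro.AvroFileFormat"
  )

  // The user-supplied format name is rewritten before the provider class
  // is loaded, so format("com.databricks.spark.avro") resolves to the
  // built-in implementation, while spark.read.avro(...) is untouched.
  def resolveProvider(name: String): String =
    backwardCompatibilityMap.getOrElse(name, name)
}
{code}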


