[ https://issues.apache.org/jira/browse/PHOENIX-3792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15971077#comment-15971077 ]

Josh Mahonin commented on PHOENIX-3792:
---------------------------------------

Overall looks good.

{quote}
+    <artifactId>spark-avro_2.11</artifactId>
{quote}

We should replace 2.11 above with `${scala.binary.version}` as per the other 
dependencies. It seems spark-avro_2.10 exists as well.
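
i.e. something along the lines of (assuming {{scala.binary.version}} is already defined, as it is for the other Spark dependencies):

{quote}
+    <artifactId>spark-avro_${scala.binary.version}</artifactId>
{quote}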


It might be worth supporting {{SKIP_NORMALIZE_IDENTIFIER}} as a parameter that 
can be passed in dynamically on a {{df.save()}} call. Adding another default 
argument to the {{saveToPhoenix()}} method would work, although it might make 
more sense to add a new method that accepts a parameter map. The parameter 
handling lives here:
https://github.com/apache/phoenix/blob/master/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DefaultSource.scala#L44-L47
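
For illustration, a caller could then pass the flag through the DataSource options like the sketch below. This is only a sketch; the option key {{skipNormalizingIdentifier}} is an assumed name, not the committed API:

{code:scala}
import org.apache.spark.sql.{DataFrame, SaveMode}

def writeToPhoenix(df: DataFrame): Unit = {
  df.write
    .format("org.apache.phoenix.spark")
    .mode(SaveMode.Overwrite)
    .option("table", "OUTPUT_TABLE")              // target Phoenix table
    .option("zkUrl", "localhost:2181")            // ZooKeeper quorum
    .option("skipNormalizingIdentifier", "true")  // assumed key name for the new flag
    .save()
}
{code}

The DefaultSource would then read the extra key from the parameters map alongside {{table}} and {{zkUrl}} and forward it to the save path.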




> Provide way to skip normalization of column names in phoenix-spark integration
> ------------------------------------------------------------------------------
>
>                 Key: PHOENIX-3792
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3792
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: Ankit Singhal
>            Assignee: Ankit Singhal
>             Fix For: 4.11.0
>
>         Attachments: PHOENIX-3792.patch
>
>
> If the user is reading an Avro file and writing to a Phoenix table with 
> case-sensitive column names, then we should provide the user with an option 
> to skip the normalization, since there appears to be no way to escape double 
> quotes around column names in an Avro schema.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
