[
https://issues.apache.org/jira/browse/PHOENIX-2088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14611320#comment-14611320
]
Josh Mahonin commented on PHOENIX-2088:
---------------------------------------
Just a note that this will affect the phoenix-spark module here:
https://github.com/apache/phoenix/blob/master/phoenix-spark/src/main/scala/org/apache/phoenix/spark/ConfigurationUtil.scala#L55-L64
I believe the motivation behind that code was to recreate the ColumnInfo
objects from a serializable type, and there was a convenient utility method
there to provide that. Does the mapreduce integration no longer need that
capability? I suspect whatever works for mapreduce can be made to work with
spark, but note that the 'PhoenixRecordWritable' class uses the ColumnInfo
objects to derive the column types for writing:
https://github.com/apache/phoenix/blob/master/phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRecordWritable.scala#L33-L74
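The pattern being described is rebuilding typed column metadata from a plain-String form, since values stored in a Hadoop Configuration must be serializable strings. A minimal sketch of that round trip (the class and method names here are hypothetical, not Phoenix's actual API):

```java
import java.sql.Types;

// Hedged sketch of the round-trip Josh describes; SimpleColumnInfo,
// encode(), and decode() are illustrative names, not Phoenix's API.
public class ColumnInfoRoundTrip {

    static final class SimpleColumnInfo {
        final String name;
        final int sqlType; // a java.sql.Types constant

        SimpleColumnInfo(String name, int sqlType) {
            this.name = name;
            this.sqlType = sqlType;
        }

        // Encode to a String that can be stored in a Hadoop Configuration.
        String encode() {
            return name + ":" + sqlType;
        }

        // Recreate the typed metadata from its String form, so the reader
        // side can derive column types without a live connection.
        static SimpleColumnInfo decode(String encoded) {
            int idx = encoded.lastIndexOf(':');
            return new SimpleColumnInfo(encoded.substring(0, idx),
                    Integer.parseInt(encoded.substring(idx + 1)));
        }
    }

    public static void main(String[] args) {
        SimpleColumnInfo original = new SimpleColumnInfo("CREATED", Types.TIMESTAMP);
        SimpleColumnInfo restored = SimpleColumnInfo.decode(original.encode());
        System.out.println(restored.name + " -> " + restored.sqlType);
        // prints CREATED -> 93 (Types.TIMESTAMP)
    }
}
```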
> Prevent splitting and recombining select expressions for MR integration
> -----------------------------------------------------------------------
>
> Key: PHOENIX-2088
> URL: https://issues.apache.org/jira/browse/PHOENIX-2088
> Project: Phoenix
> Issue Type: Bug
> Reporter: James Taylor
> Assignee: Thomas D'Silva
> Attachments: PHOENIX-2088-wip.patch
>
>
> We currently pass the select expressions to the MR integration as a
> delimiter-separated string, split that string on the delimiter, and then
> recombine the pieces using a comma separator. This is problematic because the
> delimiter character may appear within a select expression, which breaks the
> split. Instead, we should use a comma as the delimiter and skip the splitting
> and recombining entirely, since the whole string can then be used as-is to
> form the select expressions.
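The failure mode described above can be sketched as follows (the ":" delimiter and the expression strings are hypothetical, chosen only to illustrate the split/recombine bug):

```java
// Hedged sketch of the bug: a delimiter character appearing inside a
// SELECT expression corrupts the split-and-recombine step.
public class SelectExprDelimiter {

    // Current (problematic) approach: split the configured expression
    // string on a delimiter, then recombine the pieces with commas.
    static String splitAndRecombine(String exprs, String delimiter) {
        return String.join(",", exprs.split(delimiter));
    }

    public static void main(String[] args) {
        // The delimiter character occurs inside the TO_CHAR format string,
        // so the expression is broken apart at the wrong place.
        String exprs = "ID:TO_CHAR(CREATED, 'HH:mm')";
        System.out.println(splitAndRecombine(exprs, ":"));
        // prints ID,TO_CHAR(CREATED, 'HH,mm')  -- the expression is corrupted

        // Proposed fix: make comma the delimiter and pass the string through
        // unchanged, so no split/recombine step is needed at all.
        String fixed = "ID,TO_CHAR(CREATED, 'HH:mm')";
        System.out.println(fixed); // used as-is to form the SELECT list
    }
}
```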
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)