[
https://issues.apache.org/jira/browse/PHOENIX-2784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15267146#comment-15267146
]
James Taylor commented on PHOENIX-2784:
---------------------------------------
[~ndimiduk] - in JDBC the Timestamp type is derived from the Date type. Hence
it's fine to allow a Date to be used where a Timestamp is expected; you'll just
have millisecond precision. We encourage Phoenix users to use Date instead of
Timestamp because it performs much better, and 99% of the time you don't need
nanosecond precision (which is what Timestamp gives you above and beyond what
you get from Date).
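The type relationship described above is visible directly in the JDBC classes: java.sql.Timestamp extends java.util.Date, so a Timestamp can stand in wherever a Date is expected, while the extra nanosecond component lives only on the Timestamp. A minimal sketch (the epoch value is arbitrary):

```java
import java.sql.Timestamp;
import java.util.Date;

public class DatePrecisionDemo {
    public static void main(String[] args) {
        // A Timestamp is-a java.util.Date, so the upcast is legal.
        Timestamp ts = new Timestamp(1461945600000L); // arbitrary epoch millis
        ts.setNanos(123456789);                       // nanosecond precision
        Date asDate = ts;                             // fine: Timestamp extends Date

        // Through the Date view you only see millisecond precision;
        // the sub-millisecond part of the nanos is not recoverable.
        System.out.println(asDate.getTime()); // 1461945600123
        System.out.println(ts.getNanos());    // 123456789
    }
}
```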
> phoenix-spark: Allow coercion of DATE fields to TIMESTAMP when loading
> DataFrames
> ---------------------------------------------------------------------------------
>
> Key: PHOENIX-2784
> URL: https://issues.apache.org/jira/browse/PHOENIX-2784
> Project: Phoenix
> Issue Type: Improvement
> Affects Versions: 4.7.0
> Reporter: Josh Mahonin
> Assignee: Josh Mahonin
> Priority: Minor
> Attachments: PHOENIX-2784.patch
>
>
> The Phoenix DATE type is internally represented as 8 bytes, which can
> store a full 'yyyy-MM-dd hh:mm:ss' time component. However, Spark SQL follows
> the SQL Date spec and keeps only the 'yyyy-MM-dd' portion as a 4 byte type.
> When loading Phoenix DATE columns using the Spark DataFrame API, the
> 'hh:mm:ss' component is lost.
> This patch allows setting a new 'dateAsTimestamp' option when loading a
> DataFrame, which will coerce the underlying Date object to a Timestamp so
> that the full time component is loaded.
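> A hedged sketch of how the option from this patch might be set when loading a
> DataFrame through the Spark 1.x Java API; the table name and ZooKeeper URL are
> hypothetical placeholders:
>
> {code}
> // Assumed placeholders: "MY_TABLE" and "localhost:2181".
> DataFrame df = sqlContext.read()
>     .format("org.apache.phoenix.spark")
>     .option("table", "MY_TABLE")
>     .option("zkUrl", "localhost:2181")
>     .option("dateAsTimestamp", "true") // coerce DATE columns to Timestamp
>     .load();
> {code}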
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)