[ 
https://issues.apache.org/jira/browse/CALCITE-796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14624038#comment-14624038
 ] 

Julian Hyde commented on CALCITE-796:
-------------------------------------

Agreed, we should do this.

How many total bits do we need for a Timestamp? I reckon 96 bits. 64 for the 
millis since epoch, and 32 for nanos (20 would be sufficient, since we need a 
value between 0 and 999,999). So, storing in a Java long is not an option.
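
For reference, a minimal sketch (assuming only java.sql.Timestamp's standard accessors, nothing Avatica-specific) of the two components that would have to travel on the wire:

{code:java}
import java.sql.Timestamp;

public class TimestampBits {
  public static void main(String[] args) {
    Timestamp ts = new Timestamp(System.currentTimeMillis());
    ts.setNanos(123_456_789);                        // full fractional second, 0..999,999,999

    long millis = ts.getTime();                      // 64 bits: millis since epoch
    int subMillisNanos = ts.getNanos() % 1_000_000;  // 0..999,999: fits in 20 bits

    System.out.println(millis + " ms + " + subMillisNanos + " ns");
  }
}
{code}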

Using Timestamp.toString() will not work: it will format the value in the JVM's 
timezone, whereas SQL timestamp values are zoneless. For example, the value 
Timestamp(0) must be transmitted as "1970-01-01 00:00:00" (and extra digits 
after a decimal point if you like) in all locales.
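
A quick illustration (hypothetical snippet, not Avatica code) of why toString() is unsuitable as a wire format: the rendering moves with the JVM's default zone, while the protocol needs the same string everywhere.

{code:java}
import java.sql.Timestamp;
import java.util.TimeZone;

public class ZonelessDemo {
  public static void main(String[] args) {
    Timestamp epoch = new Timestamp(0);

    TimeZone.setDefault(TimeZone.getTimeZone("America/Los_Angeles"));
    System.out.println(epoch);   // 1969-12-31 16:00:00.0 -- zone-dependent

    TimeZone.setDefault(TimeZone.getTimeZone("UTC"));
    System.out.println(epoch);   // 1970-01-01 00:00:00.0 -- what the wire needs in every locale
  }
}
{code}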

JSON just has a "number" type which, if I understand the specification, is 
capable of transmitting integers larger than 2 ^ 63 losslessly. Maybe we could 
have a Jackson serializer that converts a Timestamp to a JSON 96-bit integer 
and a deserializer that reverses the process.
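
Something along these lines, perhaps. This is only a sketch: the class names and the choice of "total nanoseconds since the epoch, as a single JSON number" are assumptions of mine, not an agreed wire format.

{code:java}
import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.JsonDeserializer;
import com.fasterxml.jackson.databind.JsonSerializer;
import com.fasterxml.jackson.databind.SerializerProvider;
import java.io.IOException;
import java.math.BigInteger;
import java.sql.Timestamp;

/** Hypothetical codec: java.sql.Timestamp <-> JSON number of nanoseconds since the epoch. */
public class TimestampNanos {
  private static final BigInteger NANOS_PER_SECOND = BigInteger.valueOf(1_000_000_000L);

  public static class Serializer extends JsonSerializer<Timestamp> {
    @Override public void serialize(Timestamp value, JsonGenerator gen, SerializerProvider provider)
        throws IOException {
      // getTime() is millis since epoch; getNanos() is the fractional second (0..999,999,999).
      long epochSeconds = Math.floorDiv(value.getTime(), 1000L);
      BigInteger totalNanos = BigInteger.valueOf(epochSeconds)
          .multiply(NANOS_PER_SECOND)
          .add(BigInteger.valueOf(value.getNanos()));
      gen.writeNumber(totalNanos);   // JSON numbers are not limited to 64 bits
    }
  }

  public static class Deserializer extends JsonDeserializer<Timestamp> {
    @Override public Timestamp deserialize(JsonParser p, DeserializationContext ctxt)
        throws IOException {
      BigInteger[] qr = p.getBigIntegerValue().divideAndRemainder(NANOS_PER_SECOND);
      BigInteger seconds = qr[0];
      BigInteger nanos = qr[1];
      if (nanos.signum() < 0) {              // divideAndRemainder truncates toward zero;
        seconds = seconds.subtract(BigInteger.ONE);
        nanos = nanos.add(NANOS_PER_SECOND); // normalize so nanos is 0..999,999,999
      }
      Timestamp ts = new Timestamp(seconds.longValueExact() * 1000L);
      ts.setNanos(nanos.intValueExact());
      return ts;
    }
  }
}
{code}

The pair could then be registered on the shared ObjectMapper via a SimpleModule (addSerializer/addDeserializer).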

> Avatica remote service truncates java.sql.Timestamp to milliseconds
> ---------------------------------------------------------------------
>
>                 Key: CALCITE-796
>                 URL: https://issues.apache.org/jira/browse/CALCITE-796
>             Project: Calcite
>          Issue Type: Bug
>            Reporter: Lukas Lalinsky
>            Assignee: Julian Hyde
>
> TypedValue in Avatica reads/writes java.sql.Timestamp even though natively the 
> type supports nanosecond precision (and Phoenix does use the full precision). 
> The JSON serialization protocol should account for this.
> I'd suggest serializing java.sql.Timestamp with `toString()` and 
> deserializing with `valueOf()` if it's a string. Alternatively, it could be 
> stored as a decimal number or just the total number of nanoseconds.
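
For what it's worth, a round-trip sketch of the toString()/valueOf() suggestion above (hypothetical snippet, not Avatica code): it does preserve the nanoseconds, but only because both calls implicitly use the same default time zone, which is the concern raised in the comment above.

{code:java}
import java.sql.Timestamp;

public class RoundTrip {
  public static void main(String[] args) {
    Timestamp original = new Timestamp(System.currentTimeMillis());
    original.setNanos(123_456_789);              // nanosecond precision

    String wire = original.toString();           // e.g. "2015-07-13 10:30:00.123456789"
    Timestamp parsed = Timestamp.valueOf(wire);  // parses the nanos back

    System.out.println(parsed.equals(original)); // true -- within a single JVM / time zone
  }
}
{code}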



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
