Hey,

Consider the following query:
select cast((timestamp '2008-2-23 10:20:30.123') as time) from foo;

We drop the millisecond portion of the time component when casting a timestamp to time. The logic responsible for this is here <https://github.com/julianhyde/optiq/blob/retired/core/src/main/java/org/eigenbase/rex/RexBuilder.java#L485>, where we look at the scale of the type and truncate the millisecond portion. The only way I can get it not to truncate is to use the following query:
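To make the behavior concrete, here is a minimal sketch (not the actual RexBuilder code; the class and method names are hypothetical) of how truncating the fractional-second part of a time value to a given scale typically works:

```java
// Hypothetical sketch of scale-based truncation, assuming the time value
// is stored as milliseconds since midnight (so at most 3 fractional digits).
public class TimeTruncation {
    // Drop fractional-second digits beyond 'scale' (0..3).
    static long truncateToScale(long millisOfDay, int scale) {
        long divisor = (long) Math.pow(10, 3 - scale); // scale 0 -> 1000
        return millisOfDay - (millisOfDay % divisor);
    }

    public static void main(String[] args) {
        // 10:20:30.123 as milliseconds since midnight
        long t = ((10 * 60 + 20) * 60 + 30) * 1000L + 123;
        System.out.println(truncateToScale(t, 0)); // 37230000 (millis dropped)
        System.out.println(truncateToScale(t, 3)); // 37230123 (millis kept)
    }
}
```

With scale 0 the .123 is dropped, which matches what we see for a plain cast to time.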

select cast((timestamp '2008-2-23 10:20:30.123') as time(0, 3)) from foo;

In the above query, once I specify the precision and scale, we no longer truncate the milliseconds. I wanted to bring up two issues:

1. I am not sure if it's in the SQL standard, but having a precision associated with time/date types seems a bit odd. Oracle and Postgres throw parse errors when I try to specify a precision for time; they only allow users to specify a scale. Should we change the parser so that users cannot specify a precision, only a scale?

2. Does it make sense to do the following:
In the case where the user hasn't specified a precision, we propagate the value as is.
We perform truncation only when the user has specified a scale.

The above approach is the one taken by Postgres; Oracle, however, truncates the fractional-second component by default unless you specify a scale.
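The proposed (Postgres-like) semantics from item 2 can be sketched as follows; this is a hypothetical illustration (the class and method names are my own, not Calcite's), where a null scale stands for "user did not specify one":

```java
// Hypothetical sketch of the proposed cast semantics: truncate only when
// the user explicitly supplied a scale; otherwise pass the value through.
public class ProposedCastSemantics {
    // 'declaredScale' is null when the user wrote plain TIME with no scale.
    static long castTime(long millisOfDay, Integer declaredScale) {
        if (declaredScale == null) {
            return millisOfDay; // no declared scale: keep milliseconds
        }
        long divisor = (long) Math.pow(10, 3 - declaredScale);
        return millisOfDay - (millisOfDay % divisor);
    }

    public static void main(String[] args) {
        long t = 37230123L; // 10:20:30.123 as millis since midnight
        System.out.println(castTime(t, null)); // 37230123 (unchanged)
        System.out.println(castTime(t, 0));    // 37230000 (explicit scale 0)
    }
}
```

Oracle's behavior would instead correspond to applying a default scale when none is declared, rather than passing the value through.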

Please let me know if I should go ahead and file JIRAs for the above issues.

Thanks
Mehant


