Hi, I also noticed this issue. It has actually been mentioned several times already; there is an existing JIRA (SPARK-17914).
I am going to submit a PR to fix this in a few days.

Best,
Anton

On Jun 5, 2017 21:42, "verbamour" <verbam...@gmail.com> wrote:

> Greetings,
>
> I am using Hive compatibility in Spark 2.1.1, and it appears that CAST
> from string to TIMESTAMP improperly trims the sub-second value. In
> particular, leading zeros in the decimal portion appear to be dropped.
>
> Steps to reproduce:
>
> 1. From `spark-shell`, issue:
>    `spark.sql("SELECT CAST('2017-04-05 16:00:48.0297580' AS TIMESTAMP)").show(100, false)`
>
> 2. Note the erroneous result (i.e. ".0297580" becomes ".29758"):
>
> ```
> +----------------------------------------------+
> |CAST(2017-04-05 16:00:48.0297580 AS TIMESTAMP)|
> +----------------------------------------------+
> |2017-04-05 16:00:48.29758                     |
> +----------------------------------------------+
> ```
>
> I am not currently plugged into the JIRA system for Spark, so if this is
> truly a bug, please bring it to the attention of the appropriate
> authorities.
>
> Cheers,
> -tom
>
> --
> View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Incorrect-CAST-to-TIMESTAMP-in-Hive-compatibility-tp28744.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
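For illustration, here is a minimal sketch (not Spark's actual code; `buggyMicros` and `correctMicros` are hypothetical names) of how this class of bug can arise: if the fractional-second digits are parsed as a plain integer, the leading zero's place value is lost, so ".0297580" (29758 microseconds) is misread as 297580 microseconds, which displays as ".29758". Scaling by the digit count preserves the intended value:

```scala
// Hypothetical sketch of the leading-zero bug in fractional-second parsing.
// Input is the digit string after the decimal point, e.g. "0297580".

def buggyMicros(frac: String): Long = {
  // Parses "0297580" as the integer 297580 and treats it as microseconds
  // directly, discarding the leading zero's place value.
  frac.toLong
}

def correctMicros(frac: String): Long = {
  // Keep at most 6 digits (microsecond precision), right-pad with zeros
  // so the digit positions keep their place value: "0297580" -> "029758"
  // -> 29758 microseconds.
  frac.take(6).padTo(6, '0').toLong
}
```

With this sketch, `buggyMicros("0297580")` yields 297580 (rendered as ".29758" seconds), while `correctMicros("0297580")` yields 29758, matching the expected ".029758".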