Parquet seems to be a better fit for analytic loads; it has performance and
compression benefits for large analytic workloads.
A workaround could be to use a long datatype to store the epoch timestamp value.
If you already have existing Parquet files (Impala tables), you may need to
consider doing some migration.
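
Here is a minimal sketch of that workaround in Scala, assuming the
Dataset/DataFrame API; the Event/EventRow case classes and the
events.parquet path are made-up names just for illustration:

import java.sql.Timestamp
import org.apache.spark.sql.SparkSession

case class Event(id: Long, ts: Timestamp)
case class EventRow(id: Long, tsMillis: Long)  // timestamp kept as epoch millis

object EpochWorkaround {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")              // just for a quick local test
      .appName("epoch-workaround")
      .getOrCreate()
    import spark.implicits._

    val events = Seq(Event(1L, new Timestamp(System.currentTimeMillis())))

    // Write the timestamp as a plain long so the Parquet schema only
    // contains types that Impala and other readers already understand.
    events.map(e => EventRow(e.id, e.ts.getTime)).toDS()
      .write.mode("overwrite").parquet("events.parquet")

    // Rebuild the Timestamp from the stored millis when reading back.
    val restored = spark.read.parquet("events.parquet").as[EventRow]
      .map(r => Event(r.id, new Timestamp(r.tsMillis)))
    restored.show(false)

    spark.stop()
  }
}

Migrating existing files would then amount to reading the old tables and
rewriting them with the timestamp column converted to epoch longs as above.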