That is a pretty reasonable workaround. Also, please feel free to file a
JIRA when you find gaps in functionality like this that impact your
workloads:

https://issues.apache.org/jira/browse/SPARK/
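
For anyone hitting the same issue, a minimal sketch of that workaround
might look like the following. This assumes the Spark 1.1 SchemaRDD API;
the Event case class and the output path are hypothetical, just for
illustration:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Hypothetical record; eventTime stores the timestamp as epoch millis
// in a plain long, since Parquet TimestampType isn't supported yet.
case class Event(id: Int, eventTime: Long)

object TimestampWorkaround {
  def main(args: Array[String]) {
    val sc = new SparkContext(new SparkConf().setAppName("ts-workaround"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.createSchemaRDD  // implicit RDD -> SchemaRDD

    val events = sc.parallelize(Seq(
      Event(1, System.currentTimeMillis()),
      Event(2, java.sql.Timestamp.valueOf("2014-10-01 17:09:00").getTime)))

    // Parquet stores eventTime as a plain INT64 column.
    events.saveAsParquetFile("/tmp/events.parquet")
  }
}

On the read side the column comes back as a plain long, so converting it
back to a timestamp (e.g. new java.sql.Timestamp(eventTime)) has to happen
in application code.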

On Wed, Oct 1, 2014 at 5:09 PM, barge.nilesh <barge.nil...@gmail.com> wrote:

> Parquet format seems to be comparatively better for analytic loads; it
> has performance and compression benefits for large analytic workloads.
> A workaround could be to use a long datatype to store the epoch timestamp
> value. If you already have existing Parquet files (Impala tables), then
> you may need to consider doing some migration.
