>
>
>    1. In Spark 1.3.0, timestamp support was added, and Spark SQL uses
>    its own Parquet support to handle both the read and write paths when
>    dealing with Parquet tables declared in the Hive metastore, as long
>    as you’re not writing to a partitioned table. So yes, you can.
>
> Ah, I had missed the caveat about partitioned tables. Is this related
to the work being done on ParquetRelation2?

We will indeed be writing to a partitioned table: in that case, do both
the read path and the write path bypass Spark SQL's Parquet support? Is
there a JIRA/PR I can monitor to see when this would change?
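
For concreteness, here is a stripped-down sketch of what we do (table and
column names are made up, and I'm assuming Spark 1.3.0 with the default
spark.sql.hive.convertMetastoreParquet=true), in case it changes the answer:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    val sc = new SparkContext(new SparkConf().setAppName("parquet-paths"))
    val hc = new HiveContext(sc)

    // Reading a Parquet table declared in the Hive metastore: per the
    // answer above, this should go through Spark SQL's own Parquet support.
    val events = hc.sql("SELECT * FROM events WHERE dt = '2015-03-01'")
    events.show()

    // Writing into a *partitioned* metastore Parquet table: this is the
    // case we care about, which apparently doesn't use the native path yet.
    hc.sql("""INSERT OVERWRITE TABLE events PARTITION (dt = '2015-03-02')
              SELECT id, payload FROM staging_events""")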

Thanks
