>
> Yes, recently we improved ParquetRelation2 quite a bit. Spark SQL uses its
> own Parquet support to read partitioned Parquet tables declared in the Hive
> metastore; only writing to partitioned tables is not covered yet. These
> improvements will be included in Spark 1.3.0.
>
> Just created SPARK-5948 to track writing to partitioned Parquet tables.
>
Ok, this is still a little confusing.

Since in 1.2.0 I am able to write to a partitioned Hive table by registering my
SchemaRDD and calling INSERT INTO "the partitioned Hive table" SELECT FROM "the
registered table", what is the write path in this case? Full Hive, with a
Spark SQL <-> Hive bridge?
If that were the case, why wouldn't SKEWED BY be honored (see the other thread
I opened)?
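
For context, here is a minimal Scala sketch of what I am doing on 1.2.0 (all
table, column and app names are made up, and the SKEWED BY DDL is only there
to illustrate the other thread):

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.sql.hive.HiveContext

  // All names below are hypothetical, for illustration only.
  case class Event(id: Long, payload: String, dt: String)

  object PartitionedInsertSketch {
    def main(args: Array[String]): Unit = {
      val sc = new SparkContext(new SparkConf().setAppName("partitioned-insert-sketch"))
      val hiveContext = new HiveContext(sc)
      import hiveContext.createSchemaRDD // implicit RDD[Product] -> SchemaRDD (1.2 API)

      // Target Hive table: partitioned, and declared SKEWED BY.
      hiveContext.sql(
        """CREATE TABLE IF NOT EXISTS events_partitioned (id BIGINT, payload STRING)
          |PARTITIONED BY (dt STRING)
          |SKEWED BY (id) ON ((0)) STORED AS DIRECTORIES""".stripMargin)

      // Build a SchemaRDD and register it as a temporary table.
      val staging = sc.parallelize(Seq(Event(1L, "a", "2015-02-20")))
      staging.registerTempTable("events_staging")

      // Enable dynamic partition inserts, then run the INSERT ... SELECT in question.
      hiveContext.sql("SET hive.exec.dynamic.partition = true")
      hiveContext.sql("SET hive.exec.dynamic.partition.mode = nonstrict")
      hiveContext.sql(
        """INSERT INTO TABLE events_partitioned PARTITION (dt)
          |SELECT id, payload, dt FROM events_staging""".stripMargin)
    }
  }

The INSERT goes through HiveContext.sql, which is why I am asking whether the
actual write is performed by Hive's own code path or by Spark SQL.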

Thanks
