[ https://issues.apache.org/jira/browse/SPARK-14459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Cheng Lian resolved SPARK-14459.
--------------------------------
       Resolution: Fixed
    Fix Version/s: 2.0.0

Issue resolved by pull request 12239
[https://github.com/apache/spark/pull/12239]

> SQL partitioning must match existing tables, but is not checked.
> ----------------------------------------------------------------
>
>                 Key: SPARK-14459
>                 URL: https://issues.apache.org/jira/browse/SPARK-14459
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.0
>            Reporter: Ryan Blue
>            Assignee: Ryan Blue
>             Fix For: 2.0.0
>
>
> Writing into partitioned Hive tables has unexpected results because the 
> table's partitioning is not detected and applied during the analysis phase. 
> For example, if I have two tables, {{source}} and {{partitioned}}, with the 
> same column types:
> {code}
> CREATE TABLE source (id bigint, data string, part string);
> CREATE TABLE partitioned (id bigint, data string) PARTITIONED BY (part string);
> // copy from source to partitioned
> sqlContext.table("source").write.insertInto("partitioned")
> {code}
> Copying from {{source}} to {{partitioned}} succeeds, but results in 0 rows. 
> This works if I explicitly partition by adding 
> {{...write.partitionBy("part").insertInto(...)}}. This work-around isn't 
> obvious and is prone to error because the {{partitionBy}} must match the 
> table's partitioning, though it is not checked.
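>
> Spelled out against the example tables above, the work-around looks like 
> this (a sketch; the explicit {{partitionBy}} must repeat the target 
> table's {{PARTITIONED BY (part string)}} exactly):
> {code}
> // Work-around: explicitly repeat the table's partitioning.
> // Error-prone, because nothing checks that "part" actually matches
> // the partition columns of the target table.
> sqlContext.table("source")
>   .write
>   .partitionBy("part")
>   .insertInto("partitioned")
> {code}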
> I think that when relations are resolved, the write's partitioning should be 
> checked against the table's, and filled in automatically if it isn't set.
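>
> For illustration, a minimal self-contained sketch of the kind of check 
> being proposed (a hypothetical helper, not actual Spark analyzer code):
> {code}
> // Hypothetical sketch: reconcile the partitioning requested by a write
> // with the partitioning recorded for the target table.
> def resolvePartitioning(
>     tablePartCols: Seq[String],
>     writePartCols: Option[Seq[String]]): Seq[String] = writePartCols match {
>   // Not specified by the caller: fall back to the table's partitioning.
>   case None => tablePartCols
>   // Specified and matching: accept it.
>   case Some(cols) if cols == tablePartCols => cols
>   // Specified but different: fail analysis instead of writing 0 rows.
>   case Some(cols) => throw new IllegalArgumentException(
>     s"Partitioning $cols does not match table partitioning $tablePartCols")
> }
> {code}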


