[ https://issues.apache.org/jira/browse/SPARK-16178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun closed SPARK-16178.
---------------------------------
    Resolution: Won't Fix

> SQL - Hive writer should not require partition names to match table partitions
> ------------------------------------------------------------------------------
>
>                 Key: SPARK-16178
>                 URL: https://issues.apache.org/jira/browse/SPARK-16178
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>            Reporter: Ryan Blue
>
> SPARK-14459 added a check that the {{partition}} metadata on 
> {{InsertIntoTable}} must match the table's partition column names. But when 
> {{partitionBy}} is used to set up the partition columns, those columns may not 
> be named, or their names may not match the table's partition columns.
> For example:
> {code}
> // Tables:
> // CREATE TABLE src (id string, date int, hour int, timestamp bigint);
> // CREATE TABLE dest (id string, timestamp bigint, c1 string, c2 int)
> //   PARTITIONED BY (utc_dateint int, utc_hour int);
> spark.table("src").write.partitionBy("date", "hour").insertInto("dest")
> {code}
> The call to {{partitionBy}} correctly places the date and hour columns at the 
> end of the logical plan, but their names don't carry the "utc_" prefix, so the 
> write is rejected. Yet the analyzer would verify the types and insert an 
> {{Alias}}, so the query is actually valid.
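
One possible workaround sketch (not part of the original report; it assumes the tables from the example above and fills dest's c1 and c2 with nulls, which the original snippet does not populate either): drop {{partitionBy}} and alias the trailing columns to dest's partition names, so {{insertInto}} resolves everything by position and the name check never trips.
{code}
import org.apache.spark.sql.functions.{col, lit}

// Sketch only: table and column names follow the example above.
spark.table("src")
  .select(
    col("id"),
    col("timestamp"),
    lit(null).cast("string").as("c1"),   // dest's c1, not present in src
    lit(null).cast("int").as("c2"),      // dest's c2, not present in src
    col("date").as("utc_dateint"),       // renamed to dest's partition column
    col("hour").as("utc_hour"))          // renamed to dest's partition column
  .write
  .insertInto("dest")
{code}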


