[ 
https://issues.apache.org/jira/browse/SPARK-33844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angerszhu updated SPARK-33844:
------------------------------
    Description: 
 

Since hive-2.3, Hive sets COLUMN_NAME_DELIMITER to a special char when a column 
name contains ',', because the column list and the column types in the serde 
properties are split by COLUMN_NAME_DELIMITER.
 In spark-2.4.0 + hive-1.2.1, INSERT OVERWRITE DIRECTORY fails when the query 
result schema contains a column name with ',':
{code:java}
org.apache.hadoop.hive.serde2.SerDeException: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe: columns has 14 elements while columns.types has 11 elements!
    at org.apache.hadoop.hive.serde2.lazy.LazySerDeParameters.extractColumnInfo(LazySerDeParameters.java:146)
    at org.apache.hadoop.hive.serde2.lazy.LazySerDeParameters.<init>(LazySerDeParameters.java:85)
    at org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.initialize(LazySimpleSerDe.java:125)
    at org.apache.spark.sql.hive.execution.HiveOutputWriter.<init>(HiveFileFormat.scala:119)
    at org.apache.spark.sql.hive.execution.HiveFileFormat$$anon$1.newInstance(HiveFileFormat.scala:103)
    at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.newOutputWriter(FileFormatDataWriter.scala:120)
    at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.<init>(FileFormatDataWriter.scala:108)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:287)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:219)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:218)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$12.apply(Executor.scala:461)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:467)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
{code}
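The size mismatch comes from how LazySimpleSerDe reads its table properties: column names are joined into one {{columns}} property and split by COLUMN_NAME_DELIMITER (',' by default), so an unaliased expression such as {{if(1 = 1, "true", "false")}}, whose generated column name itself contains commas, is torn into several names. A minimal illustration of the effect (the property values below are hypothetical, only to show the split):
{code:scala}
// Hypothetical serde properties for:
//   SELECT 1 as a, 'c' as b, if(1 = 1, "true", "false")
// The unaliased IF expression gets a generated column name containing commas.
val columns = "a,b,(IF((1 = 1), 'true', 'false'))" // names joined with ','
val columnTypes = "int:string:string"              // types joined with ':'

val names = columns.split(",")     // 5 elements: the IF name is split apart
val types = columnTypes.split(":") // 3 elements
// names.length != types.length, hence
// "SerDeException: columns has 5 elements while columns.types has 3 elements!"
{code}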

 This problem has been fixed on the Hive side by 
[https://github.com/apache/hive/blob/6f4c35c9e904d226451c465effdc5bfd31d395a0/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java#L1044-L1075],
 but I think we can also handle it on the Spark side so that all Hive versions work well.
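A minimal sketch of what the Spark-side handling could look like, mirroring Hive's approach of switching the name-list delimiter when any column name contains ',' (the helper name {{columnNameDelimiter}} is hypothetical; {{SerDeUtils.COLUMN_COMMENTS_DELIMITER}} and {{SerDeUtils.COMMA}} are existing Hive serde2 constants):
{code:scala}
import org.apache.hadoop.hive.metastore.api.FieldSchema
import org.apache.hadoop.hive.serde2.SerDeUtils

// Hypothetical helper mirroring Hive's MetaStoreUtils.getColumnNameDelimiter:
// if any column name contains a comma, switch the name-list delimiter to the
// comment delimiter ('\0'), which cannot occur in a column name.
def columnNameDelimiter(columns: Seq[FieldSchema]): String = {
  if (columns.exists(_.getName.contains(","))) {
    String.valueOf(SerDeUtils.COLUMN_COMMENTS_DELIMITER)
  } else {
    String.valueOf(SerDeUtils.COMMA)
  }
}
{code}
The snippet in the previous description below shows where such a check would be applied when building the Hive table object.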

 

  was:
 


Since hive-2.3, Hive sets COLUMN_NAME_DELIMITER to a special char when a column 
name contains ',', because the column list and the column types in the serde 
properties are split by COLUMN_NAME_DELIMITER.
In spark-2.4.0 + hive-1.2.1, INSERT OVERWRITE DIRECTORY fails when the query 
result schema contains a column name with ',':
{code:java}
org.apache.hadoop.hive.serde2.SerDeException: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe: columns has 14 elements while columns.types has 11 elements!
    at org.apache.hadoop.hive.serde2.lazy.LazySerDeParameters.extractColumnInfo(LazySerDeParameters.java:146)
    at org.apache.hadoop.hive.serde2.lazy.LazySerDeParameters.<init>(LazySerDeParameters.java:85)
    at org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.initialize(LazySimpleSerDe.java:125)
    at org.apache.spark.sql.hive.execution.HiveOutputWriter.<init>(HiveFileFormat.scala:119)
    at org.apache.spark.sql.hive.execution.HiveFileFormat$$anon$1.newInstance(HiveFileFormat.scala:103)
    at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.newOutputWriter(FileFormatDataWriter.scala:120)
    at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.<init>(FileFormatDataWriter.scala:108)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:287)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:219)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:218)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$12.apply(Executor.scala:461)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:467)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
{code}
This problem has been fixed on the Hive side by 
https://github.com/apache/hive/blob/6f4c35c9e904d226451c465effdc5bfd31d395a0/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java#L1044-L1075,
but I think we can also handle it on the Spark side so that all Hive versions work well.


{code:scala}
val hiveColumns = table.schema.map(toHiveColumn)
// Note: In Hive the schema and partition columns must be disjoint sets
val (partCols, schema) = hiveColumns.partition { c =>
  table.partitionColumnNames.contains(c.getName)
}
hiveTable.setFields(schema.asJava)
hiveTable.setPartCols(partCols.asJava)
if (hiveColumns.exists(_.getName.contains(","))) {
  hiveTable.setSerdeParam(serdeConstants.COLUMN_NAME_DELIMITER,
    String.valueOf(SerDeUtils.COLUMN_COMMENTS_DELIMITER))
}
{code}
     test("insert overwrite directory with comma col name") {    withTempDir { 
dir =>      val path = dir.toURI.getPath
      val v1 =        s"""           | INSERT OVERWRITE DIRECTORY '${path}'     
      | STORED AS TEXTFILE           | SELECT 1 as a, 'c' as b, if(1 = 1, 
"true", "false")         """.stripMargin
      sql(v1).explain(true)
      sql(v1).show()
      //      checkAnswer(      //        
spark.read.json(dir.getCanonicalPath),      //        sql("SELECT 1 as a, 'c' 
as b"))    }  }


> InsertIntoDir fails when a query column name contains ',', making the column 
> type and column name lists differ in size
> -----------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-33844
>                 URL: https://issues.apache.org/jira/browse/SPARK-33844
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.4.0
>            Reporter: angerszhu
>            Priority: Major
>
>  
> Since hive-2.3, Hive sets COLUMN_NAME_DELIMITER to a special char when a 
> column name contains ',', because the column list and the column types in the 
> serde properties are split by COLUMN_NAME_DELIMITER.
>  In spark-2.4.0 + hive-1.2.1, INSERT OVERWRITE DIRECTORY fails when the query 
> result schema contains a column name with ',':
> {code:java}
> org.apache.hadoop.hive.serde2.SerDeException: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe: columns has 14 elements while columns.types has 11 elements!
>     at org.apache.hadoop.hive.serde2.lazy.LazySerDeParameters.extractColumnInfo(LazySerDeParameters.java:146)
>     at org.apache.hadoop.hive.serde2.lazy.LazySerDeParameters.<init>(LazySerDeParameters.java:85)
>     at org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.initialize(LazySimpleSerDe.java:125)
>     at org.apache.spark.sql.hive.execution.HiveOutputWriter.<init>(HiveFileFormat.scala:119)
>     at org.apache.spark.sql.hive.execution.HiveFileFormat$$anon$1.newInstance(HiveFileFormat.scala:103)
>     at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.newOutputWriter(FileFormatDataWriter.scala:120)
>     at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.<init>(FileFormatDataWriter.scala:108)
>     at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:287)
>     at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:219)
>     at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:218)
>     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
>     at org.apache.spark.scheduler.Task.run(Task.scala:121)
>     at org.apache.spark.executor.Executor$TaskRunner$$anonfun$12.apply(Executor.scala:461)
>     at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:467)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
> {code}
>  This problem has been fixed on the Hive side by 
> [https://github.com/apache/hive/blob/6f4c35c9e904d226451c465effdc5bfd31d395a0/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java#L1044-L1075],
>  but I think we can also handle it on the Spark side so that all Hive versions 
> work well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
