[ https://issues.apache.org/jira/browse/SPARK-18220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15703897#comment-15703897 ]

Jerryjung commented on SPARK-18220:
-----------------------------------

The same error occurred here as well; the CTAS below reproduces it:
spark-sql> CREATE  TABLE zz  as select * from d_c.dcoc_ircs_op_brch;
16/11/29 11:09:28 INFO SparkSqlParser: Parsing command: CREATE  TABLE zz  as 
select * from d_c.dcoc_ircs_op_brch
16/11/29 11:09:28 INFO HiveMetaStore: 0: get_database: d_c
16/11/29 11:09:28 INFO audit: ugi=hadoop        ip=unknown-ip-addr      
cmd=get_database: d_c
16/11/29 11:09:28 INFO HiveMetaStore: 0: get_table : db=d_c 
tbl=dcoc_ircs_op_brch
16/11/29 11:09:28 INFO audit: ugi=hadoop        ip=unknown-ip-addr      
cmd=get_table : db=d_c tbl=dcoc_ircs_op_brch
16/11/29 11:09:28 INFO HiveMetaStore: 0: get_table : db=d_c 
tbl=dcoc_ircs_op_brch
16/11/29 11:09:28 INFO audit: ugi=hadoop        ip=unknown-ip-addr      
cmd=get_table : db=d_c tbl=dcoc_ircs_op_brch
16/11/29 11:09:28 INFO CatalystSqlParser: Parsing command: varchar(6)
16/11/29 11:09:28 INFO CatalystSqlParser: Parsing command: varchar(50)
16/11/29 11:09:28 INFO CatalystSqlParser: Parsing command: varchar(4)
16/11/29 11:09:28 INFO CatalystSqlParser: Parsing command: varchar(50)
16/11/29 11:09:28 INFO CatalystSqlParser: Parsing command: timestamp
16/11/29 11:09:30 INFO HiveMetaStore: 0: get_table : db=default tbl=zz
16/11/29 11:09:30 INFO audit: ugi=hadoop        ip=unknown-ip-addr      
cmd=get_table : db=default tbl=zz
16/11/29 11:09:30 INFO HiveMetaStore: 0: get_database: default
16/11/29 11:09:30 INFO audit: ugi=hadoop        ip=unknown-ip-addr      
cmd=get_database: default
16/11/29 11:09:30 INFO HiveMetaStore: 0: get_database: default
16/11/29 11:09:30 INFO audit: ugi=hadoop        ip=unknown-ip-addr      
cmd=get_database: default
16/11/29 11:09:30 INFO HiveMetaStore: 0: get_table : db=default tbl=zz
16/11/29 11:09:30 INFO audit: ugi=hadoop        ip=unknown-ip-addr      
cmd=get_table : db=default tbl=zz
16/11/29 11:09:30 INFO HiveMetaStore: 0: get_database: default
16/11/29 11:09:30 INFO audit: ugi=hadoop        ip=unknown-ip-addr      
cmd=get_database: default
16/11/29 11:09:30 INFO HiveMetaStore: 0: create_table: Table(tableName:zz, 
dbName:default, owner:hadoop, createTime:1480385368, lastAccessTime:0, 
retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:ircs_op_brch_cd, 
type:string, comment:null), FieldSchema(name:ircs_op_brch_nm, type:string, 
comment:null), FieldSchema(name:cms_brch_cd, type:string, comment:null), 
FieldSchema(name:cms_brch_nm, type:string, comment:null), 
FieldSchema(name:etl_job_dtm, type:timestamp, comment:null)], 
location:hdfs://xxx/user/hive/warehouse/zz, 
inputFormat:org.apache.hadoop.mapred.TextInputFormat, 
outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, 
compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, 
serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, 
parameters:{serialization.format=1}), bucketCols:[], sortCols:[], 
parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], 
skewedColValueLocationMaps:{})), partitionKeys:[], 
parameters:{spark.sql.sources.schema.part.0={"type":"struct","fields":[{"name":"ircs_op_brch_cd","type":"string","nullable":true,"metadata":{}},{"name":"ircs_op_brch_nm","type":"string","nullable":true,"metadata":{}},{"name":"cms_brch_cd","type":"string","nullable":true,"metadata":{}},{"name":"cms_brch_nm","type":"string","nullable":true,"metadata":{}},{"name":"etl_job_dtm","type":"timestamp","nullable":true,"metadata":{}}]},
 spark.sql.sources.schema.numParts=1, spark.sql.sources.provider=hive}, 
viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, 
privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null, 
rolePrivileges:null))
...
16/11/29 11:09:30 INFO FileUtils: Creating directory if it doesn't exist: 
hdfs://xxx/user/hive/warehouse/zz
16/11/29 11:09:31 INFO HiveMetaStore: 0: get_table : db=default tbl=zz
16/11/29 11:09:31 INFO audit: ugi=hadoop        ip=unknown-ip-addr      
cmd=get_table : db=default tbl=zz
16/11/29 11:09:31 INFO CatalystSqlParser: Parsing command: string
16/11/29 11:09:31 INFO CatalystSqlParser: Parsing command: string
16/11/29 11:09:31 INFO CatalystSqlParser: Parsing command: string
16/11/29 11:09:31 INFO CatalystSqlParser: Parsing command: string
16/11/29 11:09:31 INFO CatalystSqlParser: Parsing command: timestamp
16/11/29 11:09:31 INFO MemoryStore: Block broadcast_0 stored as values in 
memory (estimated size 258.9 KB, free 398.7 MB)
16/11/29 11:09:31 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in 
memory (estimated size 23.1 KB, free 398.7 MB)
16/11/29 11:09:31 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 
80.80.11.97:39889 (size: 23.1 KB, free: 399.0 MB)
16/11/29 11:09:31 INFO SparkContext: Created broadcast 0 from processCmd at 
CliDriver.java:376
16/11/29 11:09:31 INFO FileUtils: Creating directory if it doesn't exist: 
hdfs://xxx/user/hive/warehouse/zz/.hive-staging_hive_2016-11-29_11-09-31_532_3872774957419759303-1
16/11/29 11:09:31 INFO deprecation: mapred.job.id is deprecated. Instead, use 
mapreduce.job.id
16/11/29 11:09:31 INFO deprecation: mapred.tip.id is deprecated. Instead, use 
mapreduce.task.id
16/11/29 11:09:31 INFO deprecation: mapred.task.id is deprecated. Instead, use 
mapreduce.task.attempt.id
16/11/29 11:09:31 INFO deprecation: mapred.task.is.map is deprecated. Instead, 
use mapreduce.task.ismap
16/11/29 11:09:31 INFO deprecation: mapred.task.partition is deprecated. 
Instead, use mapreduce.task.partition
16/11/29 11:09:31 INFO PerfLogger: <PERFLOG method=OrcGetSplits 
from=org.apache.hadoop.hive.ql.io.orc.ReaderImpl>
16/11/29 11:09:31 INFO deprecation: mapred.input.dir is deprecated. Instead, 
use mapreduce.input.fileinputformat.inputdir
16/11/29 11:09:31 INFO OrcInputFormat: FooterCacheHitRatio: 0/0
16/11/29 11:09:31 INFO PerfLogger: </PERFLOG method=OrcGetSplits 
start=1480385371818 end=1480385371865 duration=47 
from=org.apache.hadoop.hive.ql.io.orc.ReaderImpl>
16/11/29 11:09:31 INFO SparkContext: Starting job: processCmd at 
CliDriver.java:376
16/11/29 11:09:31 INFO DAGScheduler: Got job 0 (processCmd at 
CliDriver.java:376) with 2 output partitions
16/11/29 11:09:31 INFO DAGScheduler: Final stage: ResultStage 0 (processCmd at 
CliDriver.java:376)
16/11/29 11:09:31 INFO DAGScheduler: Parents of final stage: List()
16/11/29 11:09:31 INFO DAGScheduler: Missing parents: List()
16/11/29 11:09:31 INFO DAGScheduler: Submitting ResultStage 0 
(MapPartitionsRDD[3] at processCmd at CliDriver.java:376), which has no missing 
parents
16/11/29 11:09:32 INFO MemoryStore: Block broadcast_1 stored as values in 
memory (estimated size 75.8 KB, free 398.7 MB)
16/11/29 11:09:32 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in 
memory (estimated size 29.7 KB, free 398.6 MB)
16/11/29 11:09:32 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 
80.80.11.97:39889 (size: 29.7 KB, free: 398.9 MB)
16/11/29 11:09:32 INFO SparkContext: Created broadcast 1 from broadcast at 
DAGScheduler.scala:996
16/11/29 11:09:32 INFO DAGScheduler: Submitting 2 missing tasks from 
ResultStage 0 (MapPartitionsRDD[3] at processCmd at CliDriver.java:376)
16/11/29 11:09:32 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
16/11/29 11:09:32 INFO FairSchedulableBuilder: Added task set TaskSet_0.0 tasks 
to pool default
16/11/29 11:09:32 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 
localhost, executor driver, partition 0, ANY, 5987 bytes)
16/11/29 11:09:32 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, 
localhost, executor driver, partition 1, ANY, 5987 bytes)
16/11/29 11:09:32 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
16/11/29 11:09:32 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
16/11/29 11:09:32 INFO HadoopRDD: Input split: 
hdfs://xxx/edw/warehouse/dw/db=d_c/tb=dcoc_ircs_op_brch/part-00000:0+815
16/11/29 11:09:32 INFO HadoopRDD: Input split: 
hdfs://xxx/edw/warehouse/dw/db=d_c/tb=dcoc_ircs_op_brch/part-00001:0+749
16/11/29 11:09:32 INFO OrcRawRecordMerger: min key = null, max key = null
16/11/29 11:09:32 INFO OrcRawRecordMerger: min key = null, max key = null
16/11/29 11:09:32 INFO ReaderImpl: Reading ORC rows from 
hdfs://xxx/edw/warehouse/dw/db=d_c/tb=dcoc_ircs_op_brch/part-00001 with 
{include: [true, true, true, true, true, true], offset: 0, length: 
9223372036854775807}
16/11/29 11:09:32 INFO ReaderImpl: Reading ORC rows from 
hdfs://xxx/edw/warehouse/dw/db=d_c/tb=dcoc_ircs_op_brch/part-00000 with 
{include: [true, true, true, true, true, true], offset: 0, length: 
9223372036854775807}
16/11/29 11:09:32 INFO CodeGenerator: Code generated in 337.683679 ms
16/11/29 11:09:32 ERROR Executor: Exception in task 1.0 in stage 0.0 (TID 1)
java.lang.ClassCastException: 
org.apache.hadoop.hive.serde2.io.HiveVarcharWritable cannot be cast to 
org.apache.hadoop.io.Text
        at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableStringObjectInspector.getPrimitiveWritableObject(WritableStringObjectInspector.java:41)
        at 
org.apache.spark.sql.hive.HiveInspectors$$anonfun$unwrapperFor$23.apply(HiveInspectors.scala:529)
        at 
org.apache.spark.sql.hive.HadoopTableReader$$anonfun$14$$anonfun$apply$15.apply(TableReader.scala:419)
        at 
org.apache.spark.sql.hive.HadoopTableReader$$anonfun$14$$anonfun$apply$15.apply(TableReader.scala:419)
        at 
org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$2.apply(TableReader.scala:435)
        at 
org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$2.apply(TableReader.scala:426)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
        at scala.collection.Iterator$class.foreach(Iterator.scala:893)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
        at 
org.apache.spark.sql.hive.SparkHiveWriterContainer.writeToFile(hiveWriterContainers.scala:185)
        at 
org.apache.spark.sql.hive.execution.InsertIntoHiveTable$$anonfun$saveAsHiveFile$3.apply(InsertIntoHiveTable.scala:151)
        at 
org.apache.spark.sql.hive.execution.InsertIntoHiveTable$$anonfun$saveAsHiveFile$3.apply(InsertIntoHiveTable.scala:151)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:108)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
16/11/29 11:09:32 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.ClassCastException: 
org.apache.hadoop.hive.serde2.io.HiveVarcharWritable cannot be cast to 
org.apache.hadoop.io.Text
        at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableStringObjectInspector.getPrimitiveWritableObject(WritableStringObjectInspector.java:41)
        at 
org.apache.spark.sql.hive.HiveInspectors$$anonfun$unwrapperFor$23.apply(HiveInspectors.scala:529)
        at 
org.apache.spark.sql.hive.HadoopTableReader$$anonfun$14$$anonfun$apply$15.apply(TableReader.scala:419)
        at 
org.apache.spark.sql.hive.HadoopTableReader$$anonfun$14$$anonfun$apply$15.apply(TableReader.scala:419)
        at 
org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$2.apply(TableReader.scala:435)
        at 
org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$2.apply(TableReader.scala:426)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
        at scala.collection.Iterator$class.foreach(Iterator.scala:893)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
        at 
org.apache.spark.sql.hive.SparkHiveWriterContainer.writeToFile(hiveWriterContainers.scala:185)
        at 
org.apache.spark.sql.hive.execution.InsertIntoHiveTable$$anonfun$saveAsHiveFile$3.apply(InsertIntoHiveTable.scala:151)
        at 
org.apache.spark.sql.hive.execution.InsertIntoHiveTable$$anonfun$saveAsHiveFile$3.apply(InsertIntoHiveTable.scala:151)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:108)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
16/11/29 11:09:32 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, 
localhost, executor driver): java.lang.ClassCastException: 
org.apache.hadoop.hive.serde2.io.HiveVarcharWritable cannot be cast to 
org.apache.hadoop.io.Text
        at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableStringObjectInspector.getPrimitiveWritableObject(WritableStringObjectInspector.java:41)
        at 
org.apache.spark.sql.hive.HiveInspectors$$anonfun$unwrapperFor$23.apply(HiveInspectors.scala:529)
        at 
org.apache.spark.sql.hive.HadoopTableReader$$anonfun$14$$anonfun$apply$15.apply(TableReader.scala:419)
        at 
org.apache.spark.sql.hive.HadoopTableReader$$anonfun$14$$anonfun$apply$15.apply(TableReader.scala:419)
        at 
org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$2.apply(TableReader.scala:435)
        at 
org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$2.apply(TableReader.scala:426)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
        at scala.collection.Iterator$class.foreach(Iterator.scala:893)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
        at 
org.apache.spark.sql.hive.SparkHiveWriterContainer.writeToFile(hiveWriterContainers.scala:185)
        at 
org.apache.spark.sql.hive.execution.InsertIntoHiveTable$$anonfun$saveAsHiveFile$3.apply(InsertIntoHiveTable.scala:151)
        at 
org.apache.spark.sql.hive.execution.InsertIntoHiveTable$$anonfun$saveAsHiveFile$3.apply(InsertIntoHiveTable.scala:151)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:108)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
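
For what it's worth, every failing column in the source table is declared as varchar(n), and the cast that blows up is HiveVarcharWritable -> Text inside HiveInspectors, so the Hive SerDe read path appears to be handed a varchar writable where a string object inspector is expected. Below is a minimal sketch of how I drive this from spark-shell; the convertMetastoreOrc setting is only an assumption about a possible workaround (it switches metastore ORC tables to Spark's native ORC reader), not something verified to fix this.

{noformat}
// Minimal sketch (assumptions: same source table as in the log above, spark-shell
// with Hive support; whether convertMetastoreOrc avoids the varchar/Text cast in
// this build is an assumption, not a verified fix).
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("SPARK-18220 repro")
  .enableHiveSupport()
  .getOrCreate()

// This is the statement that fails with the ClassCastException shown above.
spark.sql("CREATE TABLE zz AS SELECT * FROM d_c.dcoc_ircs_op_brch")

// Possible workaround to try: read metastore ORC tables through Spark's native
// ORC data source instead of the Hive SerDe / ObjectInspector path.
spark.conf.set("spark.sql.hive.convertMetastoreOrc", "true")
spark.sql("CREATE TABLE zz_native AS SELECT * FROM d_c.dcoc_ircs_op_brch")
{noformat}

If the native path succeeds, that would point at the Hive object-inspector unwrapping of varchar columns rather than at the ORC files themselves.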

> ClassCastException occurs when using select query on ORC file
> -------------------------------------------------------------
>
>                 Key: SPARK-18220
>                 URL: https://issues.apache.org/jira/browse/SPARK-18220
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.1.0
>            Reporter: Jerryjung
>              Labels: orcfile, sql
>
> Error message is below.
> {noformat}
> ==========================================================
> 16/11/02 16:38:09 INFO ReaderImpl: Reading ORC rows from 
> hdfs://xxx/part-00022 with {include: [true], offset: 0, length: 
> 9223372036854775807}
> 16/11/02 16:38:09 INFO Executor: Finished task 17.0 in stage 22.0 (TID 42). 
> 1220 bytes result sent to driver
> 16/11/02 16:38:09 INFO TaskSetManager: Finished task 17.0 in stage 22.0 (TID 
> 42) in 116 ms on localhost (executor driver) (19/20)
> 16/11/02 16:38:09 ERROR Executor: Exception in task 10.0 in stage 22.0 (TID 
> 35)
> java.lang.ClassCastException: 
> org.apache.hadoop.hive.serde2.io.HiveVarcharWritable cannot be cast to 
> org.apache.hadoop.io.Text
>       at 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableStringObjectInspector.getPrimitiveWritableObject(WritableStringObjectInspector.java:41)
>       at 
> org.apache.spark.sql.hive.HiveInspectors$$anonfun$unwrapperFor$23.apply(HiveInspectors.scala:526)
>       at 
> org.apache.spark.sql.hive.HadoopTableReader$$anonfun$14$$anonfun$apply$15.apply(TableReader.scala:419)
>       at 
> org.apache.spark.sql.hive.HadoopTableReader$$anonfun$14$$anonfun$apply$15.apply(TableReader.scala:419)
>       at 
> org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$2.apply(TableReader.scala:435)
>       at 
> org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$2.apply(TableReader.scala:426)
>       at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>       at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>       at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:232)
>       at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:225)
>       at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:804)
>       at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:804)
>       at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>       at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
>       at org.apache.spark.scheduler.Task.run(Task.scala:99)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>       at java.lang.Thread.run(Thread.java:745)
> ORC dump info.
> ==========================================================
> File Version: 0.12 with HIVE_8732
> 16/11/02 16:39:21 INFO orc.ReaderImpl: Reading ORC rows from 
> hdfs://XXX/part-00000 with {include: null, offset: 0, length: 
> 9223372036854775807}
> 16/11/02 16:39:21 INFO orc.RecordReaderFactory: Schema is not specified on 
> read. Using file schema.
> Rows: 7
> Compression: ZLIB
> Compression size: 262144
> Type: 
> struct<a:varchar(2),b:varchar(50),c:varchar(6),d:varchar(50),e:varchar(4),f:varchar(50),g:timestamp>
> {noformat}


