[ https://issues.apache.org/jira/browse/SPARK-19809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16378018#comment-16378018 ]

Franck Tago commented on SPARK-19809:
-------------------------------------

I need a pointer on the following.

Env: Spark 2.2.1

1- I set the property spark.sql.hive.convertMetastoreOrc to true (a spark-shell sketch follows the schema below)

2- My Hive table has the following schema:

CREATE TABLE `ft_orc`(
 `int` int,
 `double` double,
 `big+int` bigint,
 `$tring` string,
 `(decimal)` decimal(15,8),
 `flo@t` float,
 `datetime` date,
 `timestamp` timestamp,
 `01` int)
 CLUSTERED BY (
 `int`)
 INTO 20 BUCKETS
 ROW FORMAT SERDE
 'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
 WITH SERDEPROPERTIES (
 'field.delim'=',',
 'serialization.format'=',')
 STORED AS INPUTFORMAT
 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
 OUTPUTFORMAT
 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' ;
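
For reference, here is a minimal spark-shell sketch of how I apply the property from step 1, plus a purely illustrative check of how Spark sees the table's storage format (the property can equally be passed with --conf at launch):

{code}
// Step 1: ask Spark to convert metastore ORC tables to its own ORC reader path.
spark.conf.set("spark.sql.hive.convertMetastoreOrc", "true")

// Illustrative sanity check only: confirm the table's serde and input/output
// formats match the ORC classes in the DDL above.
spark.sql("DESCRIBE FORMATTED default.ft_orc").show(100, false)
{code}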

 

I loaded the table with 1 row of data:

!image-2018-02-26-20-29-49-410.png!

I tried to run the following simple statement:

scala> var res = spark.sql("SELECT alias.`int` as a0, alias.`double` as a1,
alias.`big+int` as a2, alias.`$tring` as a3, CAST(alias.`(decimal)` AS DOUBLE)
as a4, CAST(alias.`flo@t` AS DOUBLE) as a5, CAST(alias.`datetime` AS TIMESTAMP)
as a6, alias.`timestamp` as a7, alias.`01` as a8 FROM default.ft_orc alias")
18/02/27 04:30:57 WARN HiveConf: HiveConf of name hive.conf.hidden.list does 
not exist
18/02/27 04:30:57 WARN HiveConf: HiveConf of name hive.conf.hidden.list does 
not exist
java.lang.IndexOutOfBoundsException
 at java.nio.Buffer.checkIndex(Buffer.java:540)
 at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:139)
 at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.extractMetaInfoFromFooter(ReaderImpl.java:374)
 at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.<init>(ReaderImpl.java:316)
 at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:187)
 at org.apache.spark.sql.hive.orc.OrcFileOperator$$anonfun$getFileReader$2.apply(OrcFileOperator.scala:68)
 at org.apache.spark.sql.hive.orc.OrcFileOperator$$anonfun$getFileReader$2.apply(OrcFileOperator.scala:67)
 at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
 at scala.collection.TraversableOnce$class.collectFirst(TraversableOnce.scala:145)
 at scala.collection.AbstractIterator.collectFirst(Iterator.scala:1336)
 at org.apache.spark.sql.hive.orc.OrcFileOperator$.getFileReader(OrcFileOperator.scala:69)
 at org.apache.spark.sql.hive.orc.OrcFileOperator$$anonfun$readSchema$1.apply(OrcFileOperator.scala:77)
 at org.apache.spark.sql.hive.orc.OrcFileOperator$$anonfun$readSchema$1.apply(OrcFileOperator.scala:77)
 at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
 at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
 at scala.collection.immutable.List.foreach(List.scala:381)
 at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
 at scala.collection.immutable.List.flatMap(List.scala:344)
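
Since this issue is about zero-size ORC files, and the table above is clustered into 20 buckets but holds a single row, I suspect (not yet verified) that some bucket files are empty and the reader fails while extracting the footer. Here is a minimal sketch of the check I intend to run from spark-shell; the warehouse path is an assumption, substitute the location reported by DESCRIBE FORMATTED:

{code}
import org.apache.hadoop.fs.{FileSystem, Path}

// Hypothetical table location; replace with the actual one for default.ft_orc.
val tableDir = new Path("/user/hive/warehouse/ft_orc")

val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)

// List any zero-length files under the table directory, which is the case
// this issue says the ORC reader cannot handle.
fs.listStatus(tableDir)
  .filter(s => s.isFile && s.getLen == 0)
  .foreach(s => println(s"zero-byte file: ${s.getPath}"))
{code}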

 

Any pointers?

Should I file a separate JIRA?

> NullPointerException on zero-size ORC file
> ------------------------------------------
>
>                 Key: SPARK-19809
>                 URL: https://issues.apache.org/jira/browse/SPARK-19809
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.6.3, 2.0.2, 2.1.1, 2.2.1
>            Reporter: Michał Dawid
>            Assignee: Dongjoon Hyun
>            Priority: Major
>             Fix For: 2.3.0
>
>         Attachments: image-2018-02-26-20-29-49-410.png
>
>
> When reading from a Hive ORC table, if there are some 0-byte files we get a
> NullPointerException:
> {code}java.lang.NullPointerException
>       at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$BISplitStrategy.getSplits(OrcInputFormat.java:560)
>       at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1010)
>       at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1048)
>       at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
>       at scala.Option.getOrElse(Option.scala:120)
>       at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
>       at 
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
>       at scala.Option.getOrElse(Option.scala:120)
>       at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
>       at 
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
>       at scala.Option.getOrElse(Option.scala:120)
>       at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
>       at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:66)
>       at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:66)
>       at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>       at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>       at scala.collection.immutable.List.foreach(List.scala:318)
>       at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>       at scala.collection.AbstractTraversable.map(Traversable.scala:105)
>       at org.apache.spark.rdd.UnionRDD.getPartitions(UnionRDD.scala:66)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
>       at scala.Option.getOrElse(Option.scala:120)
>       at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
>       at 
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
>       at scala.Option.getOrElse(Option.scala:120)
>       at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
>       at 
> org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:190)
>       at 
> org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:165)
>       at 
> org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:174)
>       at 
> org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
>       at 
> org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
>       at 
> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
>       at 
> org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2086)
>       at 
> org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$execute$1(DataFrame.scala:1498)
>       at 
> org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$collect(DataFrame.scala:1505)
>       at 
> org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1375)
>       at 
> org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1374)
>       at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2099)
>       at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1374)
>       at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1456)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:497)
>       at 
> org.apache.zeppelin.spark.ZeppelinContext.showDF(ZeppelinContext.java:209)
>       at 
> org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:129)
>       at 
> org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)
>       at 
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:341)
>       at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
>       at 
> org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
>       at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>       at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>       at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
