[ https://issues.apache.org/jira/browse/SPARK-19809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16027987#comment-16027987 ]

Hyukjin Kwon commented on SPARK-19809:
--------------------------------------

Yea, I agree that whether such a file is malformed should depend on the format 
specification/implementation. As far as I know, Parquet itself treats 0-byte 
files as malformed, because it has to read the footer and doing so throws an 
exception.
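
To make this concrete, here is a minimal sketch of reading the footer of a 
0-byte file directly with parquet-mr (the path is hypothetical, not the one 
from the report):

{code}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.parquet.hadoop.ParquetFileReader

// A 0-byte file cannot contain the 4-byte magic bytes or the footer
// length, so reading the footer fails before any data is decoded:
ParquetFileReader.readFooter(new Configuration(), new Path("/tmp/tmp.abc"))
// => java.lang.RuntimeException: file:/tmp/tmp.abc is not a Parquet file (too small)
{code}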

The former case appears to filter out the whole partitions in 
{{DataSourceScanExec}}. Parquet requires reading the footers, which throws an 
exception. For example, I manually updated the code path to not skip the 
partitions, so that the Parquet reader is actually called, and got the 
following:

{code}
java.lang.RuntimeException: file:/.../tmp.abc is not a Parquet file (too small)
        at 
org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:466)
        at 
org.apache.parquet.hadoop.ParquetFileReader.<init>(ParquetFileReader.java:568)
        at 
org.apache.parquet.hadoop.ParquetFileReader.open(ParquetFileReader.java:492)
        at 
org.apache.parquet.hadoop.ParquetRecordReader.initializeInternalReader(ParquetRecordReader.java:166)
        at 
org.apache.parquet.hadoop.ParquetRecordReader.initialize(ParquetRecordReader.java:147)
{code}
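
For comparison, without that manual change the zero-length file produces no 
splits at all, so a read with a user-specified schema silently returns an 
empty result instead of failing. A rough sketch (the path is hypothetical):

{code}
import java.nio.file.{Files, Paths}
import org.apache.spark.sql.types.{IntegerType, StructField, StructType}

// Create the 0-byte file for the repro.
Files.createFile(Paths.get("/tmp/tmp.abc"))

// An explicit schema skips footer-based schema inference; the zero-length
// file is filtered out when the scan's partitions are built, so this
// shows an empty result rather than throwing.
val schema = StructType(Seq(StructField("a", IntegerType)))
spark.read.schema(schema).parquet("/tmp/tmp.abc").show()
{code}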

If we don't specify the schema, it also throws an exception, as below:

{code}
spark.read.parquet(".../tmp.abc").show()
{code}

{code}
java.io.IOException: Could not read footer for file: 
FileStatus{path=file:/.../tmp.abc; isDirectory=false; length=0; replication=0; 
blocksize=0; modification_time=0; access_time=0; owner=; group=; 
permission=rw-rw-rw-; isSymlink=false}
        at 
org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anonfun$readParquetFootersInParallel$1.apply(ParquetFileFormat.scala:498)
        at 
org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anonfun$readParquetFootersInParallel$1.apply(ParquetFileFormat.scala:485)
        at 
scala.collection.parallel.AugmentedIterableIterator$class.flatmap2combiner(RemainsIterator.scala:132)
        at 
scala.collection.parallel.immutable.ParVector$ParVectorIterator.flatmap2combiner(ParVector.scala:62)
        at 
scala.collection.parallel.ParIterableLike$FlatMap.leaf(ParIterableLike.scala:1072)
{code}

Assuming it is treated as a malformed file for now (per the ORC JIRA you 
pointed out above), it sounds like we should be able to skip it on the client 
side, whether or not that is done through 
{{spark.sql.files.ignoreCorruptFiles}}.
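
For reference, a sketch of how the existing flag is used today (the directory 
path is hypothetical, assumed to contain valid Parquet files plus a 0-byte 
one):

{code}
// With the flag set, the unreadable footer is logged as a warning and the
// file is skipped; with it unset (the default), the whole read fails.
spark.conf.set("spark.sql.files.ignoreCorruptFiles", "true")
spark.read.parquet("/tmp/parquet_dir").show()
{code}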

For example, I found related JIRAs: 
https://issues.apache.org/jira/browse/AVRO-1530 and 
https://issues.apache.org/jira/browse/HIVE-11977. _If I read these correctly_, 
Avro decided not to change the behaviour, but Hive deals with it.

For this issue specifically, I also agree that it could be a subset of the 
issues you pointed out.

> NullPointerException on empty ORC file
> --------------------------------------
>
>                 Key: SPARK-19809
>                 URL: https://issues.apache.org/jira/browse/SPARK-19809
>             Project: Spark
>          Issue Type: Bug
>          Components: Input/Output
>    Affects Versions: 1.6.3, 2.0.2, 2.1.1
>            Reporter: MichaƂ Dawid
>
> When reading from a Hive ORC table, if there are some 0-byte files, we get a 
> NullPointerException:
> {code}java.lang.NullPointerException
>       at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$BISplitStrategy.getSplits(OrcInputFormat.java:560)
>       at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1010)
>       at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1048)
>       at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
>       at scala.Option.getOrElse(Option.scala:120)
>       at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
>       at 
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
>       at scala.Option.getOrElse(Option.scala:120)
>       at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
>       at 
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
>       at scala.Option.getOrElse(Option.scala:120)
>       at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
>       at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:66)
>       at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:66)
>       at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>       at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>       at scala.collection.immutable.List.foreach(List.scala:318)
>       at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>       at scala.collection.AbstractTraversable.map(Traversable.scala:105)
>       at org.apache.spark.rdd.UnionRDD.getPartitions(UnionRDD.scala:66)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
>       at scala.Option.getOrElse(Option.scala:120)
>       at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
>       at 
> org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:242)
>       at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:240)
>       at scala.Option.getOrElse(Option.scala:120)
>       at org.apache.spark.rdd.RDD.partitions(RDD.scala:240)
>       at 
> org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:190)
>       at 
> org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:165)
>       at 
> org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:174)
>       at 
> org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
>       at 
> org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
>       at 
> org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
>       at 
> org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2086)
>       at 
> org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$execute$1(DataFrame.scala:1498)
>       at 
> org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$collect(DataFrame.scala:1505)
>       at 
> org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1375)
>       at 
> org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1374)
>       at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2099)
>       at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1374)
>       at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1456)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:497)
>       at 
> org.apache.zeppelin.spark.ZeppelinContext.showDF(ZeppelinContext.java:209)
>       at 
> org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:129)
>       at 
> org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)
>       at 
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:341)
>       at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
>       at 
> org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
>       at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>       at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>       at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>       at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745){code}


