Greetings,

I am getting the following exception when joining a few Parquet files.
The SPARK-12089 description has details of the overflow condition,
which is marked as fixed in 1.6.1. I recall seeing another issue
related to CSV files producing the same exception.

Any pointers on how to debug this, or possible workarounds? Google
searches and JIRA comments point to either a record size over 2 GB
(less likely) or RDD sizes being too large.

I had upgraded to Spark 1.6.1 because of serialization errors from
Catalyst while reading Parquet files.

Related JIRA Issue => https://issues.apache.org/jira/browse/SPARK-12089

Related PR => https://github.com/apache/spark/pull/10142


java.lang.NegativeArraySizeException
        at org.apache.spark.sql.catalyst.expressions.codegen.BufferHolder.grow(BufferHolder.java:45)
        at org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter.write(UnsafeRowWriter.java:196)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:370)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:370)
        at org.apache.spark.shuffle.sort.UnsafeShuffleWriter.write(UnsafeShuffleWriter.java:164)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)


Thanks.

-- 
Regards
Ravindra