Hi Sean,

Thank you for the fast response.
 
Thanks & Regards, 
Meethu M


On Monday, 9 June 2014 6:04 PM, Sean Owen <so...@cloudera.com> wrote:
 


Have a search online / at the Spark JIRA. This was a known upstream
bug in Hadoop.

https://issues.apache.org/jira/browse/SPARK-1861
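
If you still need a stopgap until you can run a Hadoop version with the fix,
the usual suggestion is to avoid concurrent bzip2 decompression within a
single JVM, e.g. by capping each standalone worker at one core in
conf/spark-env.sh on every worker node (and restarting the cluster so the
workers pick it up):

export SPARK_WORKER_CORES=1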


On Mon, Jun 9, 2014 at 7:54 AM, MEETHU MATHEW <meethu2...@yahoo.co.in> wrote:
> Hi,
> I am getting an ArrayIndexOutOfBoundsException while reading bz2 files from
> HDFS. I came across the same issue in JIRA at
> https://issues.apache.org/jira/browse/SPARK-1861, but it appears to be
> resolved. I have tried the suggested workaround (SPARK_WORKER_CORES=1), but
> it still fails with the same error. What could be the reason I am hitting
> the same error again?
> I am using Spark 1.0.0 with Hadoop 1.2.1.
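>
> For reference, here is a minimal sketch of the kind of job that hits it
> (the HDFS path is a placeholder):
>
> from pyspark import SparkContext
>
> sc = SparkContext(appName="bz2-read-test")
> # textFile selects Hadoop's BZip2Codec from the .bz2 extension
> lines = sc.textFile("hdfs:///path/to/data.bz2")
> print(lines.count())  # any action that reads the file raises the exception
> sc.stop()
>
> The stack trace: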
> java.lang.ArrayIndexOutOfBoundsException: 900000
>     at org.apache.hadoop.io.compress.bzip2.CBZip2InputStream.getAndMoveToFrontDecode(CBZip2InputStream.java:897)
>     at org.apache.hadoop.io.compress.bzip2.CBZip2InputStream.initBlock(CBZip2InputStream.java:499)
>     at org.apache.hadoop.io.compress.bzip2.CBZip2InputStream.changeStateToProcessABlock(CBZip2InputStream.java:330)
>     at org.apache.hadoop.io.compress.bzip2.CBZip2InputStream.read(CBZip2InputStream.java:394)
>     at org.apache.hadoop.io.compress.BZip2Codec$BZip2CompressionInputStream.read(BZip2Codec.java:422)
>     at java.io.InputStream.read(InputStream.java:101)
>     at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:205)
>     at org.apache.hadoop.util.LineReader.readLine(LineReader.java:169)
>     at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:176)
>     at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:43)
>     at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:198)
>     at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:181)
>     at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)
>     at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
>     at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>     at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:350)
>     at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>     at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>     at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:303)
>     at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply$mcV$sp(PythonRDD.scala:200)
>     at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:175)
>     at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:175)
>     at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1160)
>     at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:174)
>
> Thanks & Regards,
> Meethu M
