I have written a Spark job that reads data from a SequenceFile and writes it
out in the Parquet format.
===========================================================================
JavaPairRDD<NullWritable, BytesWritable> distData =
    sc.sequenceFile(infile, NullWritable.class, BytesWritable.class);
ParquetThriftBytesOutputFormat.setThriftClass(job,
    collector.adapters.NetflowRecordIdl.class);
ParquetThriftBytesOutputFormat.setOutputPath(job, outPath);
ParquetThriftBytesOutputFormat.setWriteSupportClass(job,
    ThriftBytesWriteSupport.class);
ParquetThriftBytesOutputFormat.setTProtocolClass(job,
    org.apache.thrift.protocol.TCompactProtocol.class);
distData.saveAsNewAPIHadoopFile(outfile, NullWritable.class,
    BytesWritable.class, ParquetThriftBytesOutputFormat.class,
    job.getConfiguration());
===========================================================================
I am getting the following exception:
14/09/29 07:22:37 INFO DAGScheduler: Failed to run saveAsNewAPIHadoopFile at parNetflow.java:88
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0.0:0 failed 4 times, most recent failure: Exception failure in TID 7 on host hp-bld-1197: java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be cast to java.lang.Void
        parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:37)
        org.apache.spark.rdd.PairRDDFunctions$$anonfun$12.apply(PairRDDFunctions.scala:718)
        org.apache.spark.rdd.PairRDDFunctions$$anonfun$12.apply(PairRDDFunctions.scala:699)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
        org.apache.spark.scheduler.Task.run(Task.scala:51)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:183)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1049)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1033)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1031)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1031)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:635)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:635)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:635)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1234)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
        at akka.actor.ActorCell.invoke(ActorCell.scala:456)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
        at akka.dispatch.Mailbox.run(Mailbox.scala:219)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Any insights into why this is happening? Changing the second parameter
(NullWritable.class) to java.lang.Void in this code does not change the
exception, so the problem doesn't seem to relate to that parameter alone.
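In case it clarifies what I have been trying: since the ClassCastException
comes from the actual key objects in the RDD (NullWritable instances) rather
than from the class passed to saveAsNewAPIHadoopFile, one variant I was
considering is to remap the keys themselves to null so they reach the writer
as Void. This is an untested sketch, not something I have confirmed works:

===========================================================================
// Untested sketch: remap each key to null so that ParquetRecordWriter,
// which expects Void keys, never sees a NullWritable instance.
JavaPairRDD<Void, BytesWritable> voidKeyed = distData.mapToPair(
    new PairFunction<Tuple2<NullWritable, BytesWritable>, Void, BytesWritable>() {
        @Override
        public Tuple2<Void, BytesWritable> call(
                Tuple2<NullWritable, BytesWritable> t) {
            return new Tuple2<Void, BytesWritable>(null, t._2());
        }
    });
voidKeyed.saveAsNewAPIHadoopFile(outfile, Void.class, BytesWritable.class,
    ParquetThriftBytesOutputFormat.class, job.getConfiguration());
===========================================================================

Would remapping the keys like this be the expected approach, or is there
something else I am missing?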
Thanks
Preeti