Not sure if this will help, but try clearing your jar cache directories
(~/.ivy2 for sbt, ~/.m2 for Maven).
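
For example, assuming the default cache locations (this forces the next
build to re-download all dependencies):

    rm -rf ~/.ivy2/cache     # sbt/Ivy jar cache
    rm -rf ~/.m2/repository  # Maven local repository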

Thanks
Best Regards

On Wed, Apr 15, 2015 at 9:33 PM, Manoj Samel <manojsamelt...@gmail.com>
wrote:

> Env - Spark 1.3, Hadoop 2.3, Kerberos
>
>  xx.saveAsTextFile(path, codec) gives the following trace. The same call
> works with Spark 1.2 in the same environment.
>
> val codec = classOf[<some codec class>]
>
> val a = sc.textFile("/some_hdfs_file")
>
> a.saveAsTextFile("/some_other_hdfs_file", codec) fails with the following
> trace in Spark 1.3 but works in Spark 1.2 in the same environment:
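>
> For reference, a minimal self-contained version of the above, using
> GzipCodec purely as a hypothetical stand-in for the elided codec class:
>
> import org.apache.hadoop.io.compress.GzipCodec
>
> // sc is the SparkContext provided by spark-shell
> val codec = classOf[GzipCodec] // stand-in for <some codec class>
> val a = sc.textFile("/some_hdfs_file")
> a.saveAsTextFile("/some_other_hdfs_file", codec)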
>
> 15/04/14 18:06:15 INFO scheduler.TaskSetManager: Lost task 1.3 in stage 2.0 (TID 17) on executor XYZ: java.lang.SecurityException (JCE cannot authenticate the provider BC) [duplicate 7]
> 15/04/14 18:06:15 INFO cluster.YarnScheduler: Removed TaskSet 2.0, whose tasks have all completed, from pool
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 2.0 failed 4 times, most recent failure: Lost task 0.3 in stage 2.0 (TID 16, nodeXYZ): java.lang.SecurityException: JCE cannot authenticate the provider BC
> at javax.crypto.Cipher.getInstance(Cipher.java:642)
> at javax.crypto.Cipher.getInstance(Cipher.java:580)
> xxxx some codec calls xxxx
> at org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:136)
> at org.apache.spark.SparkHadoopWriter.open(SparkHadoopWriter.scala:91)
> at org.apache.spark.rdd.PairRDDFunctions$$anonfun$13.apply(PairRDDFunctions.scala:1068)
> at org.apache.spark.rdd.PairRDDFunctions$$anonfun$13.apply(PairRDDFunctions.scala:1059)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
> at org.apache.spark.scheduler.Task.run(Task.scala:64)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> Caused by: java.util.jar.JarException: file:/abc/filecache/11/spark-assembly-1.3.0-hadoop2.3.0.jar has unsigned entries - org/apache/spark/SparkHadoopWriter$.class
> at javax.crypto.JarVerifier.verifySingleJar(JarVerifier.java:462)
> at javax.crypto.JarVerifier.verifyJars(JarVerifier.java:322)
> at javax.crypto.JarVerifier.verify(JarVerifier.java:250)
> at javax.crypto.JceSecurity.verifyProviderJar(JceSecurity.java:161)
> at javax.crypto.JceSecurity.getVerificationResult(JceSecurity.java:187)
> at javax.crypto.Cipher.getInstance(Cipher.java:638)
> ... 16 more
>
> Driver stacktrace:
> at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1203)
> at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
> at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1191)
> at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1191)
> at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
> at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
> at scala.Option.foreach(Option.scala:236)
> at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:693)
> at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1393)
> at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
> at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
>
