Hi Eric and Michael:

  I ran into this problem with Spark 1.4.1 too.  The error stacks are:

  java.lang.NoClassDefFoundError: Could not initialize class org.apache.spark.sql.catalyst.expressions.codegen.GeneratePredicate$
        at org.apache.spark.sql.execution.SparkPlan.newPredicate(SparkPlan.scala:180)
        at org.apache.spark.sql.execution.Filter.conditionEvaluator$lzycompute(basicOperators.scala:55)
        at org.apache.spark.sql.execution.Filter.conditionEvaluator(basicOperators.scala:55)
        at org.apache.spark.sql.execution.Filter$$anonfun$2.apply(basicOperators.scala:58)
        at org.apache.spark.sql.execution.Filter$$anonfun$2.apply(basicOperators.scala:57)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:686)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:686)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:70)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

  The NoClassDefFoundError above is a consequence of the assertion failure below: the AssertionError is thrown while the GeneratePredicate$ singleton is being initialized (note the <clinit> frame), and once static initialization fails, every later use of the class reports "Could not initialize class".

  java.lang.AssertionError: assertion failed: List(package expressions, package expressions)
        at scala.reflect.internal.Symbols$Symbol.suchThat(Symbols.scala:1678)
        at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:44)
        at scala.reflect.internal.Mirrors$RootsBase.getModuleOrClass(Mirrors.scala:61)
        at scala.reflect.internal.Mirrors$RootsBase.staticModuleOrClass(Mirrors.scala:72)
        at scala.reflect.internal.Mirrors$RootsBase.staticModule(Mirrors.scala:161)
        at scala.reflect.internal.Mirrors$RootsBase.staticModule(Mirrors.scala:21)
        at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$typecreator1$1.apply(CodeGenerator.scala:46)
        at scala.reflect.api.TypeTags$WeakTypeTagImpl.tpe$lzycompute(TypeTags.scala:231)
        at scala.reflect.api.TypeTags$WeakTypeTagImpl.tpe(TypeTags.scala:231)
        at scala.reflect.api.TypeTags$class.typeOf(TypeTags.scala:335)
        at scala.reflect.api.Universe.typeOf(Universe.scala:59)
        at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.<init>(CodeGenerator.scala:46)
        at org.apache.spark.sql.catalyst.expressions.codegen.GeneratePredicate$.<init>(GeneratePredicate.scala:25)
        at org.apache.spark.sql.catalyst.expressions.codegen.GeneratePredicate$.<clinit>(GeneratePredicate.scala)
        at org.apache.spark.sql.execution.SparkPlan.newPredicate(SparkPlan.scala:180)
        at org.apache.spark.sql.execution.Filter.conditionEvaluator$lzycompute(basicOperators.scala:55)
        at org.apache.spark.sql.execution.Filter.conditionEvaluator(basicOperators.scala:55)
        at org.apache.spark.sql.execution.Filter$$anonfun$2.apply(basicOperators.scala:58)
        at org.apache.spark.sql.execution.Filter$$anonfun$2.apply(basicOperators.scala:57)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:686)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$17.apply(RDD.scala:686)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:70)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

       The exception is not thrown every time the SQL runs; the failure is
intermittent.  Do you have any idea about this?
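
       One detail that may be relevant: the assertion comes from Scala runtime
reflection (scala.reflect.internal), and runtime reflection in Scala 2.10 is
known not to be thread-safe, which would explain why the failure only shows up
intermittently under concurrent tasks.  A minimal, unverified sketch of a
possible workaround, assuming Spark 1.4.x where the spark.sql.codegen flag
still exists, is to turn code generation off so this reflective path is never
exercised ("CodegenWorkaround" is just a placeholder app name):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    val conf = new SparkConf()
      .setAppName("CodegenWorkaround")
      // Keep Catalyst from generating predicate code at runtime, so the
      // reflective GeneratePredicate$ initialization never runs.
      .set("spark.sql.codegen", "false")

    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

    // The same flag can also be flipped on an existing context:
    sqlContext.setConf("spark.sql.codegen", "false")

       If disabling codegen costs too much performance, a Spark build on Scala
2.11 (where runtime reflection is thread-safe) might be worth trying, though I
have not verified that it makes this particular assertion go away.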



