It seems to be a bug. I have opened
https://issues.apache.org/jira/browse/SPARK-2339 to track it.

Thank you for reporting it.
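In the meantime, a possible workaround (untested on my end, and assuming
A_TABLE and B_TABLE are already registered in your SQLContext) is to apply
the filter before the join, so the WHERE clause no longer references a
qualified attribute like A.status:

```scala
// Sketch of a workaround until SPARK-2339 is fixed.
// Assumes a SQLContext is in scope and A_TABLE/B_TABLE are registered.
// Filter A_TABLE first, so the condition uses an unqualified column name:
val filtered = sql("SELECT * FROM A_TABLE WHERE status = 1")

// Register the filtered result as a temporary table, then join against it:
filtered.registerAsTable("A_FILTERED")
sql("SELECT * FROM A_FILTERED A JOIN B_TABLE B").collect().foreach(println)
```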

Yin


On Tue, Jul 1, 2014 at 12:06 PM, Subacini B <subac...@gmail.com> wrote:

> Hi All,
>
> Running this join query
>  sql("SELECT * FROM  A_TABLE A JOIN  B_TABLE B WHERE
> A.status=1").collect().foreach(println)
>
> throws
>
> Exception in thread "main" org.apache.spark.SparkException: Job aborted
> due to stage failure: Task 1.0:3 failed 4 times, most recent failure:
> Exception failure in TID 12 on host X.X.X.X: 
> *org.apache.spark.sql.catalyst.errors.package$TreeNodeException:
> No function to evaluate expression. type: UnresolvedAttribute, tree:
> 'A.status*
>
> org.apache.spark.sql.catalyst.analysis.UnresolvedAttribute.eval(unresolved.scala:59)
>
> org.apache.spark.sql.catalyst.expressions.Equals.eval(predicates.scala:147)
>
> org.apache.spark.sql.catalyst.expressions.And.eval(predicates.scala:100)
>
> org.apache.spark.sql.execution.Filter$$anonfun$2$$anonfun$apply$1.apply(basicOperators.scala:52)
>
> org.apache.spark.sql.execution.Filter$$anonfun$2$$anonfun$apply$1.apply(basicOperators.scala:52)
>         scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:390)
>         scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>
> org.apache.spark.sql.execution.Aggregate$$anonfun$execute$1$$anonfun$1.apply(Aggregate.scala:137)
>
> org.apache.spark.sql.execution.Aggregate$$anonfun$execute$1$$anonfun$1.apply(Aggregate.scala:134)
>         org.apache.spark.rdd.RDD$$anonfun$12.apply(RDD.scala:559)
>         org.apache.spark.rdd.RDD$$anonfun$12.apply(RDD.scala:559)
>
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>         org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>         org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:158)
>
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
>         org.apache.spark.scheduler.Task.run(Task.scala:51)
>
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
>
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
>         java.lang.Thread.run(Thread.java:695)
> Driver stacktrace:
>
> Can someone help me?
>
> Thanks in advance.
>
>
