Hi All,

I am facing the following problem on Spark 1.2 RC1, where I get a TreeNode
exception (unresolved attributes):

https://issues.apache.org/jira/browse/SPARK-2063

To avoid this, I do something like the following:

val newSchemaRDD = sqlContext.applySchema(existingSchemaRDD,
existingSchemaRDD.schema)
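For reference, here is a minimal, self-contained sketch of the workaround against the Spark 1.2 SchemaRDD API. The case class, table name, and data are illustrative stand-ins, not from my actual job:

```scala
// Sketch of the applySchema workaround (Spark 1.2 SchemaRDD API).
// Row5, "t", and the sample data are illustrative, not from the real job.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{SQLContext, SchemaRDD}

object ApplySchemaWorkaroundSketch {
  case class Row5(col1: Int, col2: Int, col3: Int, col4: Int, col5: Int)

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("applySchema-workaround").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.createSchemaRDD

    val existingSchemaRDD: SchemaRDD =
      sc.parallelize(Seq(Row5(2, 3, 1, 1, 1)))

    // Re-apply the RDD's own schema; this works around the
    // unresolved-attribute TreeNodeException on complex queries.
    val newSchemaRDD =
      sqlContext.applySchema(existingSchemaRDD, existingSchemaRDD.schema)

    newSchemaRDD.registerTempTable("t")
    sqlContext.cacheTable("t")

    // With this path, the filter shows up above the PhysicalRDD
    // instead of being pushed into the in-memory columnar scan.
    val q = sqlContext.sql("SELECT col3 FROM t WHERE col1 = 2 AND col2 = 3")
    println(q.queryExecution.executedPlan)
    sc.stop()
  }
}
```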

It seems to work with the above code, but I see that projections and predicates
aren't pushed down into my
InMemoryTabularScan (spark.sql.inMemoryColumnarStorage.partitionPruning =
true), and I take a performance hit. When I inspect the logical plan while
debugging, I see something like:

...
Filter(col1#38=2,col2#39=3)
  PhysicalRDD[...]
    InMemoryTabularScan[col1,col2,col3,col4,col5(InMemoryRelation...)]

while I expect it to be:

...
PhysicalRDD[...]
  InMemoryTabularScan[col1#38=2,col2#39=3,col3,col4,col5(InMemoryRelation...)]


Predicates and projections do get pushed down when I don't create a new RDD by
applying the schema again and instead keep using the existing SchemaRDD (in the
case of simple queries), but then for complex queries I get the
TreeNodeException (unresolved attributes) I mentioned above.
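For contrast, this is a self-contained sketch (same illustrative names as above, not my real job) of the direct path where pushdown does happen:

```scala
// Sketch of the direct path (Spark 1.2 SchemaRDD API): using the existing
// SchemaRDD without the applySchema round-trip. Names are illustrative.
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{SQLContext, SchemaRDD}

object DirectPushdownSketch {
  case class Row5(col1: Int, col2: Int, col3: Int, col4: Int, col5: Int)

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("direct-pushdown").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc)
    import sqlContext.createSchemaRDD

    val existingSchemaRDD: SchemaRDD =
      sc.parallelize(Seq(Row5(2, 3, 1, 1, 1)))

    // No applySchema round-trip: register and cache the SchemaRDD as-is.
    existingSchemaRDD.registerTempTable("t")
    sqlContext.cacheTable("t")

    // Here the physical plan shows the predicates inside the in-memory
    // columnar scan rather than in a separate Filter above the PhysicalRDD
    // (but complex queries on this path hit the TreeNodeException).
    val q = sqlContext.sql("SELECT col3 FROM t WHERE col1 = 2 AND col2 = 3")
    println(q.queryExecution.executedPlan)
    sc.stop()
  }
}
```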

Let me know if you need any more info around my problem.

Thanks in advance,
-Nitin



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/PhysicalRDD-problem-tp20589.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
