Thanks Cheng.
For the time being, as a workaround, I had applied the schema
to Queryresult1 and then registered the result as a temp table. Although
that works, I was not sure of the performance impact, as it might block
some optimisations in some scenarios.
This flow (on Spark 1.1) works:
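A minimal sketch of the workaround described above (applySchema followed by registerTempTable), assuming Spark 1.1's HiveContext API; the RDD, field, and table names here are illustrative, not taken from the original mail:

```scala
import org.apache.spark.sql._
import org.apache.spark.sql.hive.HiveContext

// Assumes an existing SparkContext `sc` (e.g. from spark-shell).
val hiveContext = new HiveContext(sc)

// First query; its result is a SchemaRDD (an RDD[Row]).
val queryResult1 = hiveContext.sql("SELECT f1, f2 FROM source_table")

// Explicitly apply a schema with the desired (lower-case) field names...
val schema = StructType(Seq(
  StructField("f1", StringType, nullable = true),
  StructField("f2", StringType, nullable = true)))
val withSchema = hiveContext.applySchema(queryResult1, schema)

// ...and register the result as a temp table so later SQL can refer to it.
withSchema.registerTempTable("query_result_1")
val queryResult2 = hiveContext.sql("SELECT f1 FROM query_result_1")
```

Since both queries stay inside the same SQLContext, nothing is materialized until an action is called on queryResult2.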
This workaround looks good to me. This way, all queries are still
executed lazily within a single DAG, and Spark SQL is able to
optimize the query plan as a whole.
On 9/29/14 11:26 AM, twinkle sachdeva wrote:
Hi Twinkle,
The failure is caused by case sensitivity. The temp table actually
stores the original un-analyzed logical plan, thus field names remain
capitalized (F1, F2, etc.). I believe this issue has already been fixed by
PR #2382 (https://github.com/apache/spark/pull/2382). As a workaround,
you
Hi,
I am using HiveContext to fire SQL queries inside Spark. I have
created a SchemaRDD (let's call it cachedSchema) inside my code.
If I fire a SQL query (Query 1) on top of it, then it works.
But if I refer to Query 1's result inside another SQL query, that fails. Note that
I have already