Things would be better once SPARK-6966 is merged into 1.4.0, when you can either
1. use the --jars parameter for spark-shell, spark-sql, beeline, etc., or
2. use sc.addJar to add the driver after starting spark-shell.
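For reference, the two options above might look roughly like this (the jar path below is made up for illustration):

```scala
// Option 1: pass the driver jar on the command line when launching the shell,
// e.g. (hypothetical path):
//   spark-shell --jars /path/to/my-jdbc-driver.jar

// Option 2: add the jar from inside an already-running spark-shell.
// `sc` is the SparkContext the shell creates for you.
sc.addJar("/path/to/my-jdbc-driver.jar")
```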
Good Luck,
Anand Mohan
We have our Analytics App built on Spark 1.1 Core, Parquet, Avro and Spray.
We are using the Kryo serializer for the Avro objects read from Parquet, with
our own custom Kryo registrator (along the lines of ADAM).
Filed https://issues.apache.org/jira/browse/SPARK-3891
Thanks,
Anand Mohan
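For context, a custom Kryo registrator of the kind mentioned above is a small class that registers your record classes with Kryo. A minimal sketch, assuming a hypothetical record type standing in for the real Avro-generated class:

```scala
import com.esotericsoftware.kryo.Kryo
import org.apache.spark.serializer.KryoRegistrator

// Hypothetical stand-in for an Avro-generated record class.
case class MyAvroRecord(id: Long, payload: String)

class MyKryoRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo): Unit = {
    // Register each record class so Kryo serializes it directly
    // instead of falling back to slower default serialization.
    kryo.register(classOf[MyAvroRecord])
  }
}
```

The registrator is then wired in via the Spark config, e.g. `spark.kryo.registrator=MyKryoRegistrator` together with `spark.serializer=org.apache.spark.serializer.KryoSerializer`.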
On Thu, Oct 9, 2014 at 7:13 PM, Michael Armbrust mich...@databricks.com
wrote:
Please file a JIRA: https://issues.apache.org/jira/browse/SPARK/
The projection pushdown doesn't seem to work, and Spark ends up reading the
whole Parquet dataset for each query, which slows things down a lot.
Please see the attached screenshot.
Hive itself doesn't seem to have any issues with the projection pushdown,
so this is weird. Is this due to a configuration problem?
Thanks in advance,
Anand Mohan
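For anyone hitting the same symptom, it can be checked with a query that selects only a subset of columns: with working projection (column) pushdown, the Parquet input size shown in the Spark UI should be far smaller than the full dataset. A sketch against the Spark 1.1-era API, with a hypothetical path and column name:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

val sc = new SparkContext(new SparkConf().setAppName("pushdown-check"))
val sqlContext = new SQLContext(sc)

// Read a Parquet file and register it as a table (hypothetical path).
val events = sqlContext.parquetFile("/data/events.parquet")
events.registerTempTable("events")

// Only `name` is selected; if column pruning works, the input bytes
// reported in the UI should reflect that one column, not the whole file.
sqlContext.sql("SELECT name FROM events LIMIT 10").collect().foreach(println)
```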