Re: spark1.3.1 using mysql error!

2015-04-25 Thread Anand Mohan
and beeline. Things would be better once SPARK-6966 is merged into 1.4.0. Until then you can (1) use the --jars parameter for spark-shell, spark-sql, etc., or (2) use sc.addJar to add the driver after starting spark-shell. Good luck, Anand Mohan
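The two workarounds above can be sketched as follows; this is a minimal illustration, and the jar path and driver version are hypothetical examples, not taken from the thread.

```scala
// Option 1: pass the JDBC driver jar when launching the shell:
//   spark-shell --jars /path/to/mysql-connector-java-5.1.35.jar

// Option 2: add the jar after startup, from within spark-shell
// (sc is the SparkContext the shell creates for you):
sc.addJar("/path/to/mysql-connector-java-5.1.35.jar")
```

Either way the driver classes become visible to the executors; with --jars they are also on the driver's classpath from the start, which matters for JDBC DriverManager lookups.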

OutOfMemory error in Spark Core

2015-01-15 Thread Anand Mohan
We have our Analytics App built on Spark 1.1 Core, Parquet, Avro and Spray. We are using Kryo serializer for the Avro objects read from Parquet and we are using our custom Kryo registrator (along the lines of ADAM
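A custom Kryo registrator along the lines described above might look like this; the class and record names are hypothetical, but the pattern follows Spark's standard `spark.kryo.registrator` mechanism as of 1.x.

```scala
import com.esotericsoftware.kryo.Kryo
import org.apache.spark.serializer.KryoRegistrator

class AvroKryoRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo) {
    // Register each Avro-generated class so Kryo writes compact
    // integer ids instead of full class names on the wire.
    kryo.register(classOf[MyAvroRecord]) // hypothetical Avro class
  }
}

// Enabled via SparkConf before the SparkContext is created, e.g.:
// conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
// conf.set("spark.kryo.registrator", "com.example.AvroKryoRegistrator")
```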

Re: Spark SQL Percentile UDAF

2014-10-09 Thread Anand Mohan Tumuluri
Filed https://issues.apache.org/jira/browse/SPARK-3891 Thanks, Anand Mohan. On Thu, Oct 9, 2014 at 7:13 PM, Michael Armbrust mich...@databricks.com wrote: Please file a JIRA: https://issues.apache.org/jira/browse/SPARK/
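For context, Hive's percentile UDAF can be reached from Spark SQL through a HiveContext; the sketch below assumes a registered table `metrics` with a numeric `latency` column (both hypothetical) on a Spark 1.1-era API.

```scala
import org.apache.spark.sql.hive.HiveContext

val hiveCtx = new HiveContext(sc) // sc: existing SparkContext
// percentile is Hive's built-in UDAF; 0.95 asks for the 95th percentile.
val p95 = hiveCtx.sql("SELECT percentile(latency, 0.95) FROM metrics")
p95.collect().foreach(println)
```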

Spark SQL HiveContext Projection Pushdown

2014-10-08 Thread Anand Mohan
to work and it ends up reading the whole Parquet data for each query (which slows things down a lot). Please see attached the screenshot of this. Hive itself doesn't seem to have any issues with projection pushdown, so this is weird. Is this due to a configuration problem? Thanks in advance, Anand Mohan
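One way to check whether column pruning is happening is to inspect the physical plan for a single-column query; the table and column names below are hypothetical, and this assumes the Parquet-backed Hive table is already registered in the metastore.

```scala
import org.apache.spark.sql.hive.HiveContext

val hiveCtx = new HiveContext(sc)
// If projection pushdown works, only user_id should appear in the
// Parquet scan's output columns, not the full schema.
val result = hiveCtx.sql("SELECT user_id FROM events")
println(result.queryExecution.executedPlan)
```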