I am running Hive queries using HiveContext from my Spark code. No matter
which query I run and how much data it touches, it always generates 31
partitions. Does anybody know the reason? Is there a predefined/configurable
setting for it? I essentially need more partitions.

I am using this code snippet to execute the Hive query:

var pairedRDD = hqlContext.sql(hql).rdd.map(...)
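For context, here is a fuller sketch of what I am doing, including an explicit repartition as the workaround I am considering (the table name, query, and partition count are placeholders, not my real job):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object PartitionTest {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("PartitionTest"))
    val hqlContext = new HiveContext(sc)

    val hql = "SELECT * FROM my_table"  // placeholder query
    val rdd = hqlContext.sql(hql).rdd

    // This is where I observe the fixed partition count.
    println(s"partitions: ${rdd.partitions.length}")

    // Possible workaround: force a higher partition count explicitly
    // before the downstream map, at the cost of a shuffle.
    var pairedRDD = rdd.repartition(200).map(row => (row.get(0), row))
  }
}
```

Repartitioning after the fact works, but I would prefer a setting that makes the query itself produce more partitions.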

Thanks,
Nitin



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-SQL-Hive-query-through-HiveContext-always-creating-31-partitions-tp26671.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
