Hi, I am using Spark 1.5.2. I have a DataFrame that is partitioned by country, giving around 150 partitions. When I run a Spark SQL query with country = 'UK', it still reads all partitions and is not able to prune the others, so every query runs for a similar time regardless of which country I pass. Is this the expected behavior?
Is there a way to fix this in 1.5.2 via some configuration parameter, or is it fixed in later versions? Thanks.
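For reference, here is a minimal sketch of how one might check whether pruning kicks in. The paths and the `df` name are assumptions; the idea is to write the data with `partitionBy` so the partition column is encoded in the directory layout, then inspect the physical plan of a filtered read:

```scala
// Hypothetical example: write a DataFrame partitioned by country,
// then check the physical plan of a filtered read.
// Assumes a running sqlContext (e.g. in spark-shell) and example paths.
import sqlContext.implicits._

val df = sqlContext.read.parquet("/data/events")

// Lay out the data as .../country=UK/, .../country=FR/, etc.
df.write.partitionBy("country").parquet("/data/events_by_country")

val pruned = sqlContext.read.parquet("/data/events_by_country")
  .filter($"country" === "UK")

// If pruning works, the plan should reference only the country=UK directory.
pruned.explain(true)
```

For Hive-backed tables, Spark 1.5 also added the `spark.sql.hive.metastorePartitionPruning` setting (off by default in that release), which may be worth enabling: `sqlContext.setConf("spark.sql.hive.metastorePartitionPruning", "true")`.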