[ https://issues.apache.org/jira/browse/SPARK-14172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16189481#comment-16189481 ]
Saktheesh Balaraj commented on SPARK-14172:
-------------------------------------------

A similar problem is observed when joining two Hive tables on their partition columns.

*Example*

Table A has 1000 partitions (a date partition with an hour sub-partition) and Table B has 2 partitions. When the two tables are joined on the partition columns, the query does a full table scan of Table A, i.e. it reads all 1000 partitions instead of taking the 2 partition keys from Table B and scanning only the matching partitions of Table A:

{code}
select *
from tableA a, tableB b
where a.trans_date = b.trans_date
  and a.trans_hour = b.trans_hour;
{code}

(Here trans_date is the partition column and trans_hour is the sub-partition column on both tables.)

*Workaround:* select the 2 partitions from Table B first, then do the lookup on Table A (a concrete sketch follows the quoted issue below):

step1: select trans_date and trans_hour from tableB
step2: select * from tableA where trans_date=<substitute from step1> and trans_hour=<substitute from step1>

> Hive table partition predicate not passed down correctly
> --------------------------------------------------------
>
>                 Key: SPARK-14172
>                 URL: https://issues.apache.org/jira/browse/SPARK-14172
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.6.1
>            Reporter: Yingji Zhang
>            Priority: Critical
>
> When a Hive SQL query contains nondeterministic expressions, the Spark plan will not
> push the partition predicate down to the HiveTableScan. For example:
> {code}
> -- consider the following query, which uses a random function to sample rows
> SELECT *
> FROM table_a
> WHERE partition_col = 'some_value'
> AND rand() < 0.01;
> {code}
> The Spark plan does not push the partition predicate down to the HiveTableScan,
> which ends up scanning the data of all partitions in the table.
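As a concrete illustration of the workaround described in the comment above, here are the two steps written out as HiveQL. This is only a sketch: the DISTINCT and the literal values in step 2 are hypothetical placeholders standing in for whatever step 1 actually returns, and are not part of the original comment.

{code}
-- Step 1: list the partition keys that actually exist in tableB
-- (DISTINCT keeps the result down to the 2 partitions mentioned above).
SELECT DISTINCT trans_date, trans_hour
FROM tableB;

-- Step 2: substitute the values returned by step 1 as literals, so the
-- partition predicate is static and partition pruning can take effect.
-- ('2016-03-25' and '01' are hypothetical example values, not from the report.)
SELECT *
FROM tableA
WHERE trans_date = '2016-03-25'
  AND trans_hour = '01';
{code}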
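Separately, for the sampling query in the quoted description, one rewrite that is sometimes suggested is to isolate the deterministic partition predicate in a subquery and keep the nondeterministic predicate outside. This is a hedged sketch, not from the original report, and whether the optimizer keeps the two filters separate rather than merging them back together depends on the Spark version:

{code}
-- Sketch (assumption, not from the report): apply the static partition
-- filter in an inner query so it can reach the HiveTableScan, and apply
-- the nondeterministic rand() sample on the outside.
SELECT *
FROM (
  SELECT *
  FROM table_a
  WHERE partition_col = 'some_value'   -- deterministic, prunable
) pruned
WHERE rand() < 0.01;                   -- nondeterministic, applied after pruning
{code}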