[jira] [Updated] (SPARK-22680) SparkSQL scan all partitions when the specified partitions are not exists in parquet formatted table
[ https://issues.apache.org/jira/browse/SPARK-22680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaochen Ouyang updated SPARK-22680:
------------------------------------
    Summary: SparkSQL scan all partitions when the specified partitions are not exists in parquet formatted table  (was: SparkSQL scan all partitions when specified partition is not exists in parquet formatted table)

> SparkSQL scan all partitions when the specified partitions are not exists in
> parquet formatted table
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-22680
>                 URL: https://issues.apache.org/jira/browse/SPARK-22680
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.2, 2.2.0
>        Environment: Spark 2.0.2, Spark 2.2.0
>           Reporter: Xiaochen Ouyang
>
> 1. spark-sql --master local[2]
> 2. create external table test (id int, name string) partitioned by (country
>    string, province string, day string, hour int) stored as parquet location
>    '/warehouse/test';
> 3. produce data into table test
> 4. select count(1) from test where country = '185' and province = '021' and
>    day = '2017-11-12' and hour = 10; if the four filter conditions do not
>    exist in HDFS or in the metastore [MySQL], this SQL will scan all
>    partitions in table test

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
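The reproduction steps above, written out as the SQL the report appears to intend (table name, partition columns, and path are taken from the report; "location" is corrected from the original "localtion"; step 3's data-loading details are not given in the report):

```sql
-- Step 1 (shell, not SQL): spark-sql --master local[2]

-- Step 2: partitioned external table stored as parquet
CREATE EXTERNAL TABLE test (id INT, name STRING)
PARTITIONED BY (country STRING, province STRING, day STRING, hour INT)
STORED AS PARQUET
LOCATION '/warehouse/test';

-- Step 3: load data into the table (method unspecified in the report)

-- Step 4: filter on a partition that exists neither in HDFS nor in the
-- metastore; per the report, this query scans every partition instead
-- of returning quickly with zero matching partitions
SELECT count(1)
FROM test
WHERE country = '185'
  AND province = '021'
  AND day = '2017-11-12'
  AND hour = 10;
```

On affected versions, inspecting the physical plan (e.g. prefixing the query with `EXPLAIN`) should show whether the partition filters were pushed down or whether all partitions are listed for the scan.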
[jira] [Updated] (SPARK-22680) SparkSQL scan all partitions when the specified partitions are not exists in parquet formatted table
[ https://issues.apache.org/jira/browse/SPARK-22680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon updated SPARK-22680:
---------------------------------
    Labels: bulk-closed  (was: )

> SparkSQL scan all partitions when the specified partitions are not exists in
> parquet formatted table
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-22680
>                 URL: https://issues.apache.org/jira/browse/SPARK-22680
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.2, 2.2.0
>        Environment: Spark 2.0.2, Spark 2.2.0
>           Reporter: Xiaochen Ouyang
>           Priority: Major
>             Labels: bulk-closed
>
> 1. spark-sql --master local[2]
> 2. create external table test (id int, name string) partitioned by (country
>    string, province string, day string, hour int) stored as parquet location
>    '/warehouse/test';
> 3. produce data into table test
> 4. select count(1) from test where country = '185' and province = '021' and
>    day = '2017-11-12' and hour = 10; if the four filter conditions do not
>    exist in HDFS or in the metastore [MySQL], this SQL will scan all
>    partitions in table test

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)