[jira] [Commented] (SPARK-31648) Filtering is supported only on partition keys of type string Issue

2020-05-06 Thread Rajesh Tadi (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100789#comment-17100789
 ] 

Rajesh Tadi commented on SPARK-31648:
-

[~angerszhuuu] I have tried creating the table using both Spark SQL and the 
DataFrame API in Scala. I see the same issue either way.
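
For reference, the two creation paths look roughly like the sketch below 
(column list, storage format, and the {{sourceDf}} DataFrame are illustrative, 
not the exact code I used):

{code:scala}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().enableHiveSupport().getOrCreate()

// Path 1: Spark SQL DDL (illustrative columns and format)
spark.sql("""
  CREATE TABLE testdb.partbuck_test (id BIGINT, name STRING)
  PARTITIONED BY (country_cd STRING)
  STORED AS PARQUET
""")

// Path 2: DataFrame API in Scala; sourceDf is a hypothetical DataFrame
// with the same columns, written to a second table for illustration.
sourceDf.write
  .partitionBy("country_cd")
  .format("parquet")
  .saveAsTable("testdb.partbuck_test_df")
{code}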

> Filtering is supported only on partition keys of type string Issue
> --
>
> Key: SPARK-31648
> URL: https://issues.apache.org/jira/browse/SPARK-31648
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.4.4
>Reporter: Rajesh Tadi
>Priority: Major
> Attachments: Spark Bug.txt
>
>
> When I submit a SQL query with a partition filter, I see the error below. I 
> tried setting the Spark configuration 
> spark.sql.hive.manageFilesourcePartitions to false, but I still see the same 
> issue.
> java.lang.RuntimeException: Caught Hive MetaException attempting to get 
> partition metadata by filter from Hive.
> java.lang.reflect.InvocationTargetException: 
> org.apache.hadoop.hive.metastore.api.MetaException: Filtering is supported 
> only on partition keys of type string
> org.apache.hadoop.hive.metastore.api.MetaException: Filtering is supported 
> only on partition keys of type string
>  
>  
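
The workaround named in the quoted error text is a session-level Spark 
setting; applied at session build time it would look roughly like this (a 
sketch of the suggested workaround only — the error message itself warns it 
degrades performance):

{code:scala}
import org.apache.spark.sql.SparkSession

// Workaround suggested by the error message: stop Spark from managing
// file-source partitions in the metastore (at a partition-pruning cost).
val spark = SparkSession.builder()
  .enableHiveSupport()
  .config("spark.sql.hive.manageFilesourcePartitions", "false")
  .getOrCreate()
{code}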



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Comment Edited] (SPARK-31648) Filtering is supported only on partition keys of type string Issue

2020-05-06 Thread Rajesh Tadi (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-31648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17100629#comment-17100629
 ] 

Rajesh Tadi edited comment on SPARK-31648 at 5/6/20, 9:52 AM:
--

[~angerszhuuu] Below is the SQL I have used.

 

select * from testdb.partbuck_test where country_cd='India';

 

My table structure is similar to the one below.

Schema:

 
||col_name||data_type||comment||
|ID|bigint|null|
|NAME|string|null|
|COUNTRY_CD|string|null|
|# Partition Information| | |
|# col_name|data_type|comment|
|COUNTRY_CD|string|null|
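
Putting the query and schema together, the failure reproduces in a 
spark-shell session along these lines (a sketch; the exception text is as 
reported above, not re-verified):

{code:scala}
// Sketch: partition filter on the string-typed partition key country_cd.
val df = spark.sql(
  "SELECT * FROM testdb.partbuck_test WHERE country_cd = 'India'")
df.show()
// Reported failure during partition pruning:
//   java.lang.RuntimeException: Caught Hive MetaException attempting to
//   get partition metadata by filter from Hive.
{code}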

 

 


was (Author: rajesh tadi):
[~angerszhuuu] Below is the SQL I have used.

 

select * from testdb.partbuck_test where country_cd='India';

 

My table structure will look similar as below.

Schema:

||col_name||data_type||comment||
|ID|bigint|null|
|NAME|string|null|
|...|...|null|
|...|...|null|
|...|...|null|
|COUNTRY_CD|string|null|
|# Partition Information| | |
|# col_name|data_type|comment|
|COUNTRY_CD|string|null|

 










[jira] [Updated] (SPARK-31648) Filtering is supported only on partition keys of type string Issue

2020-05-06 Thread Rajesh Tadi (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Tadi updated SPARK-31648:

Description: 
When I submit a SQL query with a partition filter, I see the error below. I 
tried setting the Spark configuration 
spark.sql.hive.manageFilesourcePartitions to false, but I still see the same 
issue.

java.lang.RuntimeException: Caught Hive MetaException attempting to get 
partition metadata by filter from Hive.

java.lang.reflect.InvocationTargetException: 
org.apache.hadoop.hive.metastore.api.MetaException: Filtering is supported only 
on partition keys of type string

org.apache.hadoop.hive.metastore.api.MetaException: Filtering is supported only 
on partition keys of type string

 

 

  was:
When I submit a SQL with partition filter I see the below error. I tried 
setting Spark Configuration spark.sql.hive.manageFilesourcePartitions to false 
but I still see the same issue.

java.lang.RuntimeException: Caught Hive MetaException attempting to get 
partition metadata by filter from Hive.

java.lang.reflect.InvocationTargetException: 
org.apache.hadoop.hive.metastore.api.MetaException: Filtering is supported only 
on partition keys of type string

org.apache.hadoop.hive.metastore.api.MetaException: Filtering is supported only 
on partition keys of type string








[jira] [Updated] (SPARK-31648) Filtering is supported only on partition keys of type string Issue

2020-05-06 Thread Rajesh Tadi (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Tadi updated SPARK-31648:

Description: 
When I submit a SQL with partition filter I see the below error. I tried 
setting Spark Configuration spark.sql.hive.manageFilesourcePartitions to false 
but I still see the same issue.

java.lang.RuntimeException: Caught Hive MetaException attempting to get 
partition metadata by filter from Hive.

java.lang.reflect.InvocationTargetException: 
org.apache.hadoop.hive.metastore.api.MetaException: Filtering is supported only 
on partition keys of type string

org.apache.hadoop.hive.metastore.api.MetaException: Filtering is supported only 
on partition keys of type string

  was:
When I submit a SQL with partition filter I see the below error. I tried 
setting Spark Configuration spark.sql.hive.manageFilesourcePartitions to false 
but I still see the same issue.

java.lang.RuntimeException: Caught Hive MetaException attempting to get 
partition metadata by filter from Hive.

java.lang.reflect.InvocationTargetException: 
org.apache.hadoop.hive.metastore.api.MetaException: Filtering is supported only 
on partition keys of type string

org.apache.hadoop.hive.metastore.api.MetaException: Filtering is supported only 
on partition keys of type string

 

[^Spark Bug.txt]








[jira] [Updated] (SPARK-31648) Filtering is supported only on partition keys of type string Issue

2020-05-06 Thread Rajesh Tadi (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Tadi updated SPARK-31648:

Description: 
When I submit a SQL with partition filter I see the below error. I tried 
setting Spark Configuration spark.sql.hive.manageFilesourcePartitions to false 
but I still see the same issue.

java.lang.RuntimeException: Caught Hive MetaException attempting to get 
partition metadata by filter from Hive.

java.lang.reflect.InvocationTargetException: 
org.apache.hadoop.hive.metastore.api.MetaException: Filtering is supported only 
on partition keys of type string

org.apache.hadoop.hive.metastore.api.MetaException: Filtering is supported only 
on partition keys of type string

 

[^Spark Bug.txt]

  was:
When I submit a SQL with partition filter I see the below error. I tried 
setting Spark Configuration spark.sql.hive.manageFilesourcePartitions to false 
but I still see the same issue.

java.lang.RuntimeException: Caught Hive MetaException attempting to get 
partition metadata by filter from Hive.

java.lang.reflect.InvocationTargetException: 
org.apache.hadoop.hive.metastore.api.MetaException: Filtering is supported only 
on partition keys of type string

org.apache.hadoop.hive.metastore.api.MetaException: Filtering is supported only 
on partition keys of type string

Summary: Filtering is supported only on partition keys of type string 
Issue  (was: Filtering is supported only on partition keys of type string)







[jira] [Updated] (SPARK-31648) Filtering is supported only on partition keys of type string

2020-05-06 Thread Rajesh Tadi (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Tadi updated SPARK-31648:

Description: 
When I submit a SQL with partition filter I see the below error. I tried 
setting Spark Configuration spark.sql.hive.manageFilesourcePartitions to false 
but I still see the same issue.

java.lang.RuntimeException: Caught Hive MetaException attempting to get 
partition metadata by filter from Hive.

java.lang.reflect.InvocationTargetException: 
org.apache.hadoop.hive.metastore.api.MetaException: Filtering is supported only 
on partition keys of type string

org.apache.hadoop.hive.metastore.api.MetaException: Filtering is supported only 
on partition keys of type string

  was:
When I submit a SQL with partition filter I see the below error. I tried 
setting Spark Configuration spark.sql.hive.manageFilesourcePartitions to false 
but no luck.

java.lang.RuntimeException: Caught Hive MetaException attempting to get 
partition metadata by filter from Hive.

java.lang.reflect.InvocationTargetException: 
org.apache.hadoop.hive.metastore.api.MetaException: Filtering is supported only 
on partition keys of type string

org.apache.hadoop.hive.metastore.api.MetaException: Filtering is supported only 
on partition keys of type string








[jira] [Updated] (SPARK-31648) Filtering is supported only on partition keys of type string

2020-05-06 Thread Rajesh Tadi (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Tadi updated SPARK-31648:

Attachment: Spark Bug.txt







[jira] [Updated] (SPARK-31648) Filtering is supported only on partition keys of type string

2020-05-06 Thread Rajesh Tadi (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Tadi updated SPARK-31648:

  Docs Text: 
java.lang.RuntimeException: Caught Hive MetaException attempting to get 
partition metadata by filter from Hive. You can set the Spark configuration 
setting spark.sql.hive.manageFilesourcePartitions to false to work around this 
problem, however this will result in degraded performance. Please report a bug: 
https://issues.apache.org/jira/browse/SPARK
at 
org.apache.spark.sql.hive.client.Shim_v0_13.getPartitionsByFilter(HiveShim.scala:775)
at 
org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getPartitionsByFilter$1.apply(HiveClientImpl.scala:679)
at 
org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getPartitionsByFilter$1.apply(HiveClientImpl.scala:677)
at 
org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:275)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:213)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:212)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:258)
at 
org.apache.spark.sql.hive.client.HiveClientImpl.getPartitionsByFilter(HiveClientImpl.scala:677)
at 
org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$listPartitionsByFilter$1.apply(HiveExternalCatalog.scala:1221)
at 
org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$listPartitionsByFilter$1.apply(HiveExternalCatalog.scala:1214)
at 
org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
at 
org.apache.spark.sql.hive.HiveExternalCatalog.listPartitionsByFilter(HiveExternalCatalog.scala:1214)
at 
org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.listPartitionsByFilter(ExternalCatalogWithListener.scala:254)
at 
org.apache.spark.sql.catalyst.catalog.SessionCatalog.listPartitionsByFilter(SessionCatalog.scala:962)
at 
org.apache.spark.sql.execution.datasources.CatalogFileIndex.filterPartitions(CatalogFileIndex.scala:73)
at 
org.apache.spark.sql.execution.datasources.PruneFileSourcePartitions$$anonfun$apply$1.applyOrElse(PruneFileSourcePartitions.scala:63)
at 
org.apache.spark.sql.execution.datasources.PruneFileSourcePartitions$$anonfun$apply$1.applyOrElse(PruneFileSourcePartitions.scala:27)
at 
org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:259)
at 
org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:259)
at 
org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:258)
at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDown(LogicalPlan.scala:29)
at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.transformDown(AnalysisHelper.scala:149)
at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
at 
org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:264)
at 
org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:264)
at 
org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:329)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:327)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:264)
at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDown(LogicalPlan.scala:29)
at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.transformDown(AnalysisHelper.scala:149)
at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
at 
org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:264)
at 
org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:264)
at 
org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:329)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:327)
at 
org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:264)
at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDown(LogicalPlan.scala:29)
at 

[jira] [Updated] (SPARK-31648) Filtering is supported only on partition keys of type string

2020-05-06 Thread Rajesh Tadi (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-31648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Tadi updated SPARK-31648:

Docs Text:   (was: java.lang.RuntimeException: Caught Hive MetaException 
attempting to get partition metadata by filter from Hive. You can set the Spark 
configuration setting spark.sql.hive.manageFilesourcePartitions to false to 
work around this problem, however this will result in degraded performance. 
Please report a bug: https://issues.apache.org/jira/browse/SPARK
at 
org.apache.spark.sql.hive.client.Shim_v0_13.getPartitionsByFilter(HiveShim.scala:775)
...)

[jira] [Created] (SPARK-31648) Filtering is supported only on partition keys of type string

2020-05-05 Thread Rajesh Tadi (Jira)
Rajesh Tadi created SPARK-31648:
---

 Summary: Filtering is supported only on partition keys of type 
string
 Key: SPARK-31648
 URL: https://issues.apache.org/jira/browse/SPARK-31648
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 2.4.4
 Environment: java.lang.RuntimeException: Caught Hive MetaException 
attempting to get partition metadata by filter from Hive. You can set the Spark 
configuration setting spark.sql.hive.manageFilesourcePartitions to false to 
work around this problem, however this will result in degraded performance. 
Please report a bug: https://issues.apache.org/jira/browse/SPARK
 at 
org.apache.spark.sql.hive.client.Shim_v0_13.getPartitionsByFilter(HiveShim.scala:775)
 at 
org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getPartitionsByFilter$1.apply(HiveClientImpl.scala:679)
 at 
org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$getPartitionsByFilter$1.apply(HiveClientImpl.scala:677)
 at 
org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:275)
 at 
org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:213)
 at 
org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:212)
 at 
org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:258)
 at 
org.apache.spark.sql.hive.client.HiveClientImpl.getPartitionsByFilter(HiveClientImpl.scala:677)
 at 
org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$listPartitionsByFilter$1.apply(HiveExternalCatalog.scala:1221)
 at 
org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$listPartitionsByFilter$1.apply(HiveExternalCatalog.scala:1214)
 at 
org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
 at 
org.apache.spark.sql.hive.HiveExternalCatalog.listPartitionsByFilter(HiveExternalCatalog.scala:1214)
 at 
org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.listPartitionsByFilter(ExternalCatalogWithListener.scala:254)
 at 
org.apache.spark.sql.catalyst.catalog.SessionCatalog.listPartitionsByFilter(SessionCatalog.scala:962)
 at 
org.apache.spark.sql.execution.datasources.CatalogFileIndex.filterPartitions(CatalogFileIndex.scala:73)
 at 
org.apache.spark.sql.execution.datasources.PruneFileSourcePartitions$$anonfun$apply$1.applyOrElse(PruneFileSourcePartitions.scala:63)
 at 
org.apache.spark.sql.execution.datasources.PruneFileSourcePartitions$$anonfun$apply$1.applyOrElse(PruneFileSourcePartitions.scala:27)
 at 
org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:259)
 at 
org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:259)
 at 
org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
 at 
org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:258)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDown(LogicalPlan.scala:29)
 at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.transformDown(AnalysisHelper.scala:149)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
 at 
org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:264)
 at 
org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:264)
 at 
org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:329)
 at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
 at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:327)
 at 
org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:264)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDown(LogicalPlan.scala:29)
 at 
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.transformDown(AnalysisHelper.scala:149)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
 at 
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDown(LogicalPlan.scala:29)
 at 
org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:264)
 at 
org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:264)
 at 
org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:329)
 at 
org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
 at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:327)
 at 
org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:264)
 at