[ https://issues.apache.org/jira/browse/SPARK-20954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wenchen Fan resolved SPARK-20954.
---------------------------------
       Resolution: Fixed
    Fix Version/s: 2.2.0

Issue resolved by pull request 18245
[https://github.com/apache/spark/pull/18245]
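
For anyone on an affected 2.2.0 snapshot build, a possible client-side
workaround (a hedged sketch, not the upstream fix; it assumes a spark-shell
session where `spark` is the SparkSession, and reuses the `garros.hivefloat`
table from the report below) is to drop the spurious header row from the
DESCRIBE result before displaying it:

    import org.apache.spark.sql.functions.col

    // Workaround sketch for affected builds: DESCRIBE returns a DataFrame
    // whose first row repeats the header as "# col_name"; filter out any
    // row whose col_name starts with "#" before showing it.
    val described = spark.sql("DESCRIBE garros.hivefloat")
    described.filter(!col("col_name").startsWith("#")).show()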

> DESCRIBE showing 1 extra row of "| # col_name  | data_type  | comment  |"
> -------------------------------------------------------------------------
>
>                 Key: SPARK-20954
>                 URL: https://issues.apache.org/jira/browse/SPARK-20954
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.2.0
>            Reporter: Garros Chan
>             Fix For: 2.2.0
>
>
> I am trying to run DESCRIBE on a table, but one extra row is auto-added to 
> the result: "| # col_name  | data_type  | comment  |". The table has only 
> one column, yet DESCRIBE returns two rows, while SELECT and SELECT COUNT(*) 
> confirm the table itself has only one row.
> I searched online for a long time but could not find any useful information.
> Is this a bug?
> hdp106m2:/usr/hdp/2.5.0.2-3/spark2 # ./bin/beeline
> Beeline version 1.2.1.spark2 by Apache Hive
> [INFO] Unable to bind key for unsupported operation: backward-delete-word
> [INFO] Unable to bind key for unsupported operation: down-history
> [INFO] Unable to bind key for unsupported operation: up-history
> beeline> !connect jdbc:hive2://localhost:10016
> Connecting to jdbc:hive2://localhost:10016
> Enter username for jdbc:hive2://localhost:10016: hive
> Enter password for jdbc:hive2://localhost:10016: ****
> 17/06/01 14:13:04 INFO Utils: Supplied authorities: localhost:10016
> 17/06/01 14:13:04 INFO Utils: Resolved authority: localhost:10016
> 17/06/01 14:13:04 INFO HiveConnection: Will try to open client transport with 
> JDBC Uri: jdbc:hive2://localhost:10016
> Connected to: Spark SQL (version 2.2.1-SNAPSHOT)
> Driver: Hive JDBC (version 1.2.1.spark2)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> 0: jdbc:hive2://localhost:10016> describe garros.hivefloat;
> +-------------+------------+----------+--+
> |  col_name   | data_type  | comment  |
> +-------------+------------+----------+--+
> | # col_name  | data_type  | comment  |
> | c1          | float      | NULL     |
> +-------------+------------+----------+--+
> 2 rows selected (0.396 seconds)
> 0: jdbc:hive2://localhost:10016> select * from garros.hivefloat;
> +---------------------+--+
> |         c1          |
> +---------------------+--+
> | 123.99800109863281  |
> +---------------------+--+
> 1 row selected (0.319 seconds)
> 0: jdbc:hive2://localhost:10016> select count(*) from garros.hivefloat;
> +-----------+--+
> | count(1)  |
> +-----------+--+
> | 1         |
> +-----------+--+
> 1 row selected (0.783 seconds)
> 0: jdbc:hive2://localhost:10016> describe formatted garros.hiveint;
> +-------------------------------+-------------------------------------------------------------+----------+--+
> |           col_name            |                          data_type                          | comment  |
> +-------------------------------+-------------------------------------------------------------+----------+--+
> | # col_name                    | data_type                                                   | comment  |
> | c1                            | int                                                         | NULL     |
> |                               |                                                             |          |
> | # Detailed Table Information  |                                                             |          |
> | Database                      | garros                                                      |          |
> | Table                         | hiveint                                                     |          |
> | Owner                         | root                                                        |          |
> | Created                       | Thu Feb 09 17:40:36 EST 2017                                |          |
> | Last Access                   | Wed Dec 31 19:00:00 EST 1969                                |          |
> | Type                          | MANAGED                                                     |          |
> | Provider                      | hive                                                        |          |
> | Properties                    | [serialization.format=1]                                    |          |
> | Location                      | hdfs://HDP106/apps/hive/warehouse/garros.db/hiveint         |          |
> | Serde Library                 | org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe          |          |
> | InputFormat                   | org.apache.hadoop.mapred.TextInputFormat                    |          |
> | OutputFormat                  | org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat  |          |
> | Partition Provider            | Catalog                                                     |          |
> +-------------------------------+-------------------------------------------------------------+----------+--+
> 17 rows selected (0.304 seconds)
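
On a build that contains the fix (Spark 2.2.0 or later), the same DESCRIBE
should return only the real column rows. A quick check from spark-shell
(again a hedged sketch, reusing the reporter's table):

    // Expect a single row for column c1; the "# col_name" header row
    // should no longer appear once the fix is in place.
    spark.sql("DESCRIBE garros.hivefloat").show()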



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
