[ https://issues.apache.org/jira/browse/SPARK-26051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16708233#comment-16708233 ]

Dilip Biswal commented on SPARK-26051:
--------------------------------------

[~xiejuntao1...@163.com] Hello, I took a quick look at this. `22222d` is parsed 
as a DOUBLE_LITERAL; that's the reason it's not allowed as a column name. Can 
you check other systems? I checked Hive and DB2, and both of them reject 
numeric literals as column names.
{quote}

db2 => create table t1(22222d int) 
DB21034E  The command was processed as an SQL statement because it was not a 
valid Command Line Processor command.  During SQL processing it returned:
SQL0103N  The numeric literal "22222d" is not valid.  SQLSTATE=42604
{quote}
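
For context, a minimal sketch of the lexer behavior described above (these queries are illustrative, not from the original report): in Spark SQL a trailing `d`/`D` marks a double literal, so `22222d` is tokenized as the number 22222.0 and never reaches the identifier rule.
{code:sql}
-- `22222d` lexes as a DOUBLE_LITERAL, not an identifier:
SELECT 22222d;                 -- 22222.0 (double)

-- so the same token is rejected where a column name is expected:
CREATE TABLE t1(22222d INT);   -- parse error: no viable alternative
{code}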

> Can't create table with column name '22222d'
> --------------------------------------------
>
>                 Key: SPARK-26051
>                 URL: https://issues.apache.org/jira/browse/SPARK-26051
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.3.1
>            Reporter: Xie Juntao
>            Priority: Minor
>
> I can't create a table whose column name is '22222d' when I use spark-sql. 
> It seems to be a SQL parser bug, because creating a table with the column 
> name '22222m' works.
> {code:java}
> spark-sql> create table t1(22222d int);
> Error in query:
> no viable alternative at input 'create table t1(22222d'(line 1, pos 16)
> == SQL ==
> create table t1(22222d int)
> ----------------^^^
> spark-sql> create table t1(22222m int);
> 18/11/14 09:13:53 INFO HiveMetaStore: 0: get_database: global_temp
> 18/11/14 09:13:53 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_database: 
> global_temp
> 18/11/14 09:13:53 WARN ObjectStore: Failed to get database global_temp, 
> returning NoSuchObjectException
> 18/11/14 09:13:55 INFO HiveMetaStore: 0: get_database: default
> 18/11/14 09:13:55 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_database: 
> default
> 18/11/14 09:13:55 INFO HiveMetaStore: 0: get_database: default
> 18/11/14 09:13:55 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_database: 
> default
> 18/11/14 09:13:55 INFO HiveMetaStore: 0: get_table : db=default tbl=t1
> 18/11/14 09:13:55 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_table : 
> db=default tbl=t1
> 18/11/14 09:13:55 INFO HiveMetaStore: 0: get_database: default
> 18/11/14 09:13:55 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_database: 
> default
> 18/11/14 09:13:55 INFO HiveMetaStore: 0: create_table: Table(tableName:t1, 
> dbName:default, owner:root, createTime:1542158033, lastAccessTime:0, 
> retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:22222m, type:int, 
> comment:null)], 
> location:file:/opt/UQuery/spark_/spark-2.3.1-bin-hadoop2.7/spark-warehouse/t1,
>  inputFormat:org.apache.hadoop.mapred.TextInputFormat, 
> outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, 
> compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, 
> serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, 
> parameters:{serialization.format=1}), bucketCols:[], sortCols:[], 
> parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], 
> skewedColValueLocationMaps:{})), partitionKeys:[], 
> parameters:{spark.sql.sources.schema.part.0={"type":"struct","fields":[{"name":"22222m","type":"integer","nullable":true,"metadata":{}}]},
>  spark.sql.sources.schema.numParts=1, spark.sql.create.version=2.3.1}, 
> viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, 
> privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null, 
> rolePrivileges:null))
> 18/11/14 09:13:55 INFO audit: ugi=root ip=unknown-ip-addr cmd=create_table: 
> Table(tableName:t1, dbName:default, owner:root, createTime:1542158033, 
> lastAccessTime:0, retention:0, 
> sd:StorageDescriptor(cols:[FieldSchema(name:22222m, type:int, comment:null)], 
> location:file:/opt/UQuery/spark_/spark-2.3.1-bin-hadoop2.7/spark-warehouse/t1,
>  inputFormat:org.apache.hadoop.mapred.TextInputFormat, 
> outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, 
> compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, 
> serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, 
> parameters:{serialization.format=1}), bucketCols:[], sortCols:[], 
> parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], 
> skewedColValueLocationMaps:{})), partitionKeys:[], 
> parameters:{spark.sql.sources.schema.part.0={"type":"struct","fields":[{"name":"22222m","type":"integer","nullable":true,"metadata":{}}]},
>  spark.sql.sources.schema.numParts=1, spark.sql.create.version=2.3.1}, 
> viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, 
> privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null, 
> rolePrivileges:null))
> 18/11/14 09:13:55 WARN HiveMetaStore: Location: 
> file:/opt/UQuery/spark_/spark-2.3.1-bin-hadoop2.7/spark-warehouse/t1 
> specified for non-external table:t1
> 18/11/14 09:13:55 INFO FileUtils: Creating directory if it doesn't exist: 
> file:/opt/UQuery/spark_/spark-2.3.1-bin-hadoop2.7/spark-warehouse/t1
> Time taken: 2.15 seconds
> 18/11/14 09:13:56 INFO SparkSQLCLIDriver: Time taken: 2.15 seconds{code}
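
As a workaround (an editor's note, not part of the original report): Spark SQL accepts backquoted identifiers, which bypass the literal-token rules, so quoting the name should let the CREATE TABLE parse.
{code:sql}
-- backquoting keeps the lexer from reading the name as a double literal
CREATE TABLE t1(`22222d` INT);
DESCRIBE t1;   -- expected: column 22222d of type int
{code}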


