Xie Juntao created SPARK-26051:
----------------------------------

             Summary: Can't create table with column name '22222d'
                 Key: SPARK-26051
                 URL: https://issues.apache.org/jira/browse/SPARK-26051
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 2.3.1
            Reporter: Xie Juntao


I can't create a table whose column name is '22222d' when I use spark-sql. 
It looks like a SQL parser bug, because creating a table with the column 
name '22222m' works fine.
{code:java}
spark-sql> create table t1(22222d int);
Error in query:
no viable alternative at input 'create table t1(22222d'(line 1, pos 16)

== SQL ==
create table t1(22222d int)
----------------^^^

spark-sql> create table t1(22222m int);
18/11/14 09:13:53 INFO HiveMetaStore: 0: get_database: global_temp
18/11/14 09:13:53 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_database: 
global_temp
18/11/14 09:13:53 WARN ObjectStore: Failed to get database global_temp, 
returning NoSuchObjectException
18/11/14 09:13:55 INFO HiveMetaStore: 0: get_database: default
18/11/14 09:13:55 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_database: 
default
18/11/14 09:13:55 INFO HiveMetaStore: 0: get_database: default
18/11/14 09:13:55 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_database: 
default
18/11/14 09:13:55 INFO HiveMetaStore: 0: get_table : db=default tbl=t1
18/11/14 09:13:55 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_table : 
db=default tbl=t1
18/11/14 09:13:55 INFO HiveMetaStore: 0: get_database: default
18/11/14 09:13:55 INFO audit: ugi=root ip=unknown-ip-addr cmd=get_database: 
default
18/11/14 09:13:55 INFO HiveMetaStore: 0: create_table: Table(tableName:t1, 
dbName:default, owner:root, createTime:1542158033, lastAccessTime:0, 
retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:22222m, type:int, 
comment:null)], 
location:file:/opt/UQuery/spark_/spark-2.3.1-bin-hadoop2.7/spark-warehouse/t1, 
inputFormat:org.apache.hadoop.mapred.TextInputFormat, 
outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, 
compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, 
serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, 
parameters:{serialization.format=1}), bucketCols:[], sortCols:[], 
parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], 
skewedColValueLocationMaps:{})), partitionKeys:[], 
parameters:{spark.sql.sources.schema.part.0={"type":"struct","fields":[{"name":"22222m","type":"integer","nullable":true,"metadata":{}}]},
 spark.sql.sources.schema.numParts=1, spark.sql.create.version=2.3.1}, 
viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, 
privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null, 
rolePrivileges:null))
18/11/14 09:13:55 INFO audit: ugi=root ip=unknown-ip-addr cmd=create_table: 
Table(tableName:t1, dbName:default, owner:root, createTime:1542158033, 
lastAccessTime:0, retention:0, 
sd:StorageDescriptor(cols:[FieldSchema(name:22222m, type:int, comment:null)], 
location:file:/opt/UQuery/spark_/spark-2.3.1-bin-hadoop2.7/spark-warehouse/t1, 
inputFormat:org.apache.hadoop.mapred.TextInputFormat, 
outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, 
compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, 
serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, 
parameters:{serialization.format=1}), bucketCols:[], sortCols:[], 
parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], 
skewedColValueLocationMaps:{})), partitionKeys:[], 
parameters:{spark.sql.sources.schema.part.0={"type":"struct","fields":[{"name":"22222m","type":"integer","nullable":true,"metadata":{}}]},
 spark.sql.sources.schema.numParts=1, spark.sql.create.version=2.3.1}, 
viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, 
privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null, 
rolePrivileges:null))
18/11/14 09:13:55 WARN HiveMetaStore: Location: 
file:/opt/UQuery/spark_/spark-2.3.1-bin-hadoop2.7/spark-warehouse/t1 specified 
for non-external table:t1
18/11/14 09:13:55 INFO FileUtils: Creating directory if it doesn't exist: 
file:/opt/UQuery/spark_/spark-2.3.1-bin-hadoop2.7/spark-warehouse/t1
Time taken: 2.15 seconds
18/11/14 09:13:56 INFO SparkSQLCLIDriver: Time taken: 2.15 seconds{code}
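A plausible cause (my assumption, not confirmed against Spark's actual grammar): the lexer recognizes digits followed by a trailing 'D' as a double literal before it ever tries the identifier rule, so '22222d' is consumed as a number, while '22222m' falls through to the identifier rule (Spark's unquoted identifiers may begin with digits). A minimal Python sketch of that rule-precedence behavior:

```python
import re

# Simplified token rules, tried in order -- mimicking a lexer where the
# double-literal rule has priority over the identifier rule.
# These patterns are an illustration only, not Spark's actual grammar.
TOKEN_RULES = [
    ("DOUBLE_LITERAL", re.compile(r"\d+[dD]\b")),      # e.g. 22222d -> a double
    ("IDENTIFIER",     re.compile(r"[A-Za-z0-9_]+")),  # digits allowed anywhere
]

def lex_one(text):
    """Return (token_type, lexeme) for the first token in `text`."""
    for name, pattern in TOKEN_RULES:
        m = pattern.match(text)
        if m:
            return name, m.group(0)
    raise ValueError("no token matches: " + text)

# '22222d' is swallowed by the double-literal rule, so it can never be a
# column name; '22222m' reaches the identifier rule and lexes fine.
print(lex_one("22222d"))  # ('DOUBLE_LITERAL', '22222d')
print(lex_one("22222m"))  # ('IDENTIFIER', '22222m')
```

As a workaround, quoting the column name with backticks ({{CREATE TABLE t1(`22222d` INT)}}) should sidestep the literal rule, since quoted identifiers are lexed as a separate token.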



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
