[jira] [Comment Edited] (SPARK-27060) DDL Commands are accepting Keywords like create, drop as tableName

2019-03-07 Thread Sachin Ramachandra Setty (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-27060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16786532#comment-16786532
 ] 

Sachin Ramachandra Setty edited comment on SPARK-27060 at 3/7/19 9:10 AM:
--

I verified in PostgreSQL as well. PostgreSQL also does not accept keywords as table names.

PostgreSQL Behaviour :
--PostgreSQL 9.6
--'\\' is a delimiter

CREATE TABLE create(
 user_id serial PRIMARY KEY,
 username VARCHAR (50) UNIQUE NOT NULL
);

Error(s), warning(s):

42601: syntax error at or near "create"
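For comparison, the SQL standard does allow a reserved word as an identifier when it is quoted. A minimal sketch of this, using Python's built-in sqlite3 module as a stand-in engine (an assumption for illustration; SQLite applies the same double-quote rule as PostgreSQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# An unquoted reserved word as a table name is rejected by the parser.
try:
    conn.execute("CREATE TABLE create(user_id INTEGER PRIMARY KEY)")
except sqlite3.OperationalError as e:
    print("Error:", e)  # near "create": syntax error

# Double-quoting the identifier (SQL standard) makes it legal.
conn.execute('CREATE TABLE "create"(user_id INTEGER PRIMARY KEY)')
conn.execute('INSERT INTO "create"(user_id) VALUES (1)')
print(conn.execute('SELECT user_id FROM "create"').fetchall())  # [(1,)]
```

(Spark SQL and Hive use backtick quoting rather than double quotes for the same purpose.)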


was (Author: sachin1729):
I verified in PostgreSQL as well and issue is not happening. PostgreSQL is also 
not accepting keywords as tableName.  

PostgreSQL Behaviour :
--PostgreSQL 9.6
--'\\' is a delimiter

CREATE TABLE create(
 user_id serial PRIMARY KEY,
 username VARCHAR (50) UNIQUE NOT NULL,
);

Error(s), warning(s):

42601: syntax error at or near "create"

> DDL Commands are accepting Keywords like create, drop as tableName
> --
>
> Key: SPARK-27060
> URL: https://issues.apache.org/jira/browse/SPARK-27060
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 2.3.2, 2.4.0
>Reporter: Sachin Ramachandra Setty
>Priority: Minor
>
> This appears to be a compatibility issue compared to other components such as 
> Hive and MySQL. 
> DDL commands succeed even though the table name is the same as a keyword. 
> Tested with column names as well; the issue exists there too. 
> By contrast, Hive Beeline throws a ParseException and does not accept keywords 
> as table names or column names, and MySQL accepts keywords only as column names.
> Spark-Behaviour :
> {code}
> Connected to: Spark SQL (version 2.3.2.0101)
> CLI_DBMS_APPID
> Beeline version 1.2.1.spark_2.3.2.0101 by Apache Hive
> 0: jdbc:hive2://10.18.3.XXX:23040/default> create table create(id int);
> +---------+--+
> | Result  |
> +---------+--+
> +---------+--+
> No rows selected (0.255 seconds)
> 0: jdbc:hive2://10.18.3.XXX:23040/default> create table drop(int int);
> +---------+--+
> | Result  |
> +---------+--+
> +---------+--+
> No rows selected (0.257 seconds)
> 0: jdbc:hive2://10.18.3.XXX:23040/default> drop table drop;
> +---------+--+
> | Result  |
> +---------+--+
> +---------+--+
> No rows selected (0.236 seconds)
> 0: jdbc:hive2://10.18.3.XXX:23040/default> drop table create;
> +---------+--+
> | Result  |
> +---------+--+
> +---------+--+
> No rows selected (0.168 seconds)
> 0: jdbc:hive2://10.18.3.XXX:23040/default> create table tab1(float float);
> +---------+--+
> | Result  |
> +---------+--+
> +---------+--+
> No rows selected (0.111 seconds)
> 0: jdbc:hive2://10.18.XXX:23040/default> create table double(double float);
> +---------+--+
> | Result  |
> +---------+--+
> +---------+--+
> No rows selected (0.093 seconds)
> {code}
> Hive-Behaviour :
> {code}
> Connected to: Apache Hive (version 3.1.0)
> Driver: Hive JDBC (version 3.1.0)
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> Beeline version 3.1.0 by Apache Hive
> 0: jdbc:hive2://10.18.XXX:21066/> create table create(id int);
> Error: Error while compiling statement: FAILED: ParseException line 1:13 
> cannot recognize input near 'create' '(' 'id' in table name 
> (state=42000,code=4)
> 0: jdbc:hive2://10.18.XXX:21066/> create table drop(id int);
> Error: Error while compiling statement: FAILED: ParseException line 1:13 
> cannot recognize input near 'drop' '(' 'id' in table name 
> (state=42000,code=4)
> 0: jdbc:hive2://10.18XXX:21066/> create table tab1(float float);
> Error: Error while compiling statement: FAILED: ParseException line 1:18 
> cannot recognize input near 'float' 'float' ')' in column name or constraint 
> (state=42000,code=4)
> 0: jdbc:hive2://10.18XXX:21066/> drop table create(id int);
> Error: Error while compiling statement: FAILED: ParseException line 1:11 
> cannot recognize input near 'create' '(' 'id' in table name 
> (state=42000,code=4)
> 0: jdbc:hive2://10.18.XXX:21066/> drop table drop(id int);
> Error: Error while compiling statement: FAILED: ParseException line 1:11 
> cannot recognize input near 'drop' '(' 'id' in table name 
> (state=42000,code=4)
> MySQL :
> CREATE TABLE CREATE(ID integer);
> Error: near "CREATE": syntax error
> CREATE TABLE DROP(ID integer);
> Error: near "DROP": syntax error
> CREATE TABLE TAB1(FLOAT FLOAT);
> Success
> {code}
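The table-name vs. column-name asymmetry described above can be reproduced locally. The sketch below uses Python's built-in sqlite3 module as a stand-in engine (an assumption for illustration; its reserved-word handling matches the error format in the last block): reserved statement keywords are rejected as table names, while a type keyword such as FLOAT is accepted as a column name.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Reserved statement keywords are rejected as table names.
for stmt in ("CREATE TABLE CREATE(ID integer)",
             "CREATE TABLE DROP(ID integer)"):
    try:
        conn.execute(stmt)
        print("Success")
    except sqlite3.OperationalError as e:
        print("Error:", e)

# A type keyword is not reserved, so it works as a column name.
conn.execute("CREATE TABLE TAB1(FLOAT FLOAT)")
print("Success")
```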



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-27060) DDL Commands are accepting Keywords like create, drop as tableName

2019-03-07 Thread Sachin Ramachandra Setty (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-27060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16786532#comment-16786532
 ] 

Sachin Ramachandra Setty commented on SPARK-27060:
--

I verified in PostgreSQL as well, and the issue does not occur there: PostgreSQL 
does not accept keywords as table names.

PostgreSQL Behaviour :
--PostgreSQL 9.6
--'\\' is a delimiter

CREATE TABLE create(
 user_id serial PRIMARY KEY,
 username VARCHAR (50) UNIQUE NOT NULL
);

Error(s), warning(s):

42601: syntax error at or near "create"




[jira] [Comment Edited] (SPARK-27060) DDL Commands are accepting Keywords like create, drop as tableName

2019-03-05 Thread Sachin Ramachandra Setty (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-27060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784575#comment-16784575
 ] 

Sachin Ramachandra Setty edited comment on SPARK-27060 at 3/5/19 3:40 PM:
--

I verified this issue with Spark 2.3.2 and Spark 2.4.0 versions


was (Author: sachin1729):
I verified this issue with 2.3.2 and 2.4.0 .




[jira] [Commented] (SPARK-27060) DDL Commands are accepting Keywords like create, drop as tableName

2019-03-05 Thread Sachin Ramachandra Setty (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-27060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784575#comment-16784575
 ] 

Sachin Ramachandra Setty commented on SPARK-27060:
--

I verified this issue with 2.3.2 and 2.4.0 .




[jira] [Commented] (SPARK-27060) DDL Commands are accepting Keywords like create, drop as tableName

2019-03-05 Thread Sachin Ramachandra Setty (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-27060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16784560#comment-16784560
 ] 

Sachin Ramachandra Setty commented on SPARK-27060:
--

cc [~srowen] 




[jira] [Created] (SPARK-27060) DDL Commands are accepting Keywords like create, drop as tableName

2019-03-05 Thread Sachin Ramachandra Setty (JIRA)
Sachin Ramachandra Setty created SPARK-27060:


 Summary: DDL Commands are accepting Keywords like create, drop as 
tableName
 Key: SPARK-27060
 URL: https://issues.apache.org/jira/browse/SPARK-27060
 Project: Spark
  Issue Type: Improvement
  Components: SQL
Affects Versions: 2.4.0, 2.3.2
Reporter: Sachin Ramachandra Setty
 Fix For: 2.4.0, 2.3.2





[jira] [Updated] (SPARK-25834) stream stream Outer join with update mode is not throwing exception

2018-10-25 Thread Sachin Ramachandra Setty (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-25834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sachin Ramachandra Setty updated SPARK-25834:
-
Description: 
Execute the program below; note that no AnalysisException is thrown.

import java.sql.Timestamp
import org.apache.spark.sql.functions.{col, expr}
import org.apache.spark.sql.streaming.Trigger

val lines_stream1 = spark.readStream.
 format("kafka").
 option("kafka.bootstrap.servers", "10.18.99.58:21005,10.18.99.55:21005").
 option("subscribe", "test11").
 option("includeTimestamp", true).
 load().
 selectExpr("CAST (value AS String)","CAST(timestamp AS 
TIMESTAMP)").as[(String,Timestamp)].
 select(col("value") as("data"),col("timestamp") as("recordTime")).
 select("data","recordTime").
 withWatermark("recordTime", "20 seconds ")

val lines_stream2 = spark.readStream.
 format("kafka").
 option("kafka.bootstrap.servers", "10.18.99.58:21005,10.18.99.55:21005").
 option("subscribe", "test22").
 option("includeTimestamp", value = true).
 load().
 selectExpr("CAST (value AS String)","CAST(timestamp AS 
TIMESTAMP)").as[(String,Timestamp)].
 select(col("value") as("data1"),col("timestamp") as("recordTime1")).
 select("data1","recordTime1").
 withWatermark("recordTime1", "20 seconds ")

val query = lines_stream1.join(lines_stream2, expr (
 """
 | data == data1 and
 | recordTime1 >= recordTime and
 | recordTime1 <= recordTime + interval 20 seconds
 """.stripMargin), "left").
 writeStream.
 option("truncate","false").
 outputMode("update").
 format("console").
 trigger(Trigger.ProcessingTime ("2 second")).
 start()

query.awaitTermination()

As per the document 
[https://spark.apache.org/docs/2.3.2/structured-streaming-programming-guide.html#stream-stream-joins]
 
 joins are only supported in append mode

*As of Spark 2.3, you can use joins only when the query is in Append output 
mode. Other output modes are not yet supported.*

Inner joins work as per the Spark documentation, but the check fails for outer 
joins.

  was:
Execute the below program and can see there is no AnalysisException thrown

import java.sql.Timestamp
 import org.apache.spark.sql.functions.\{col, expr}
 import org.apache.spark.sql.streaming.Trigger

val lines_stream1 = spark.readStream.
 format("kafka").
 option("kafka.bootstrap.servers", "10.18.99.58:21005,10.18.99.55:21005").
 option("subscribe", "test11").
 option("includeTimestamp", true).
 load().
 selectExpr("CAST (value AS String)","CAST(timestamp AS 
TIMESTAMP)").as[(String,Timestamp)].
 select(col("value") as("data"),col("timestamp") as("recordTime")).
 select("data","recordTime").
 withWatermark("recordTime", "20 seconds ")

val lines_stream2 = spark.readStream.
 format("kafka").
 option("kafka.bootstrap.servers", "10.18.99.58:21005,10.18.99.55:21005").
 option("subscribe", "test22").
 option("includeTimestamp", value = true).
 load().
 selectExpr("CAST (value AS String)","CAST(timestamp AS 
TIMESTAMP)").as[(String,Timestamp)].
 select(col("value") as("data1"),col("timestamp") as("recordTime1")).
 select("data1","recordTime1").
 withWatermark("recordTime1", "20 seconds ")

 
 val query = lines_stream1.join(lines_stream2, expr (
 """
 | data == data1 and
 | recordTime1 >= recordTime and
 | recordTime1 <= recordTime + interval 20 seconds
 """.stripMargin),"right").
 writeStream.
 option("truncate","false").
 outputMode("update").
 format("console").
 trigger(Trigger.ProcessingTime ("2 second")).
 start()
 
query.awaitTermination()

As per the document 
[https://spark.apache.org/docs/2.3.2/structured-streaming-programming-guide.html#stream-stream-joins]
 
 joins are only supported in append mode

*As of Spark 2.3, you can use joins only when the query is in Append output 
mode. Other output modes are not yet supported.*

Inner join is working as per spark documentation but it is failed for outer 
joins


> stream stream Outer join with update mode is not throwing exception
> ---
>
> Key: SPARK-25834
> URL: https://issues.apache.org/jira/browse/SPARK-25834
> Project: Spark
>  Issue Type: Bug
>  Components: Structured Streaming
>Affects Versions: 2.3.1, 2.3.2
>Reporter: Sachin Ramachandra Setty
>Priority: Minor
>

[jira] [Updated] (SPARK-25834) stream stream Outer join with update mode is not throwing exception

2018-10-25 Thread Sachin Ramachandra Setty (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-25834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sachin Ramachandra Setty updated SPARK-25834:
-
Description: 
Execute the below program and can see there is no AnalysisException thrown

import java.sql.Timestamp
 import org.apache.spark.sql.functions.\{col, expr}
 import org.apache.spark.sql.streaming.Trigger

val lines_stream1 = spark.readStream.
 format("kafka").
 option("kafka.bootstrap.servers", "10.18.99.58:21005,10.18.99.55:21005").
 option("subscribe", "test11").
 option("includeTimestamp", true).
 load().
 selectExpr("CAST (value AS String)","CAST(timestamp AS 
TIMESTAMP)").as[(String,Timestamp)].
 select(col("value") as("data"),col("timestamp") as("recordTime")).
 select("data","recordTime").
 withWatermark("recordTime", "20 seconds ")

val lines_stream2 = spark.readStream.
 format("kafka").
 option("kafka.bootstrap.servers", "10.18.99.58:21005,10.18.99.55:21005").
 option("subscribe", "test22").
 option("includeTimestamp", value = true).
 load().
 selectExpr("CAST (value AS String)","CAST(timestamp AS 
TIMESTAMP)").as[(String,Timestamp)].
 select(col("value") as("data1"),col("timestamp") as("recordTime1")).
 select("data1","recordTime1").
 withWatermark("recordTime1", "20 seconds ")

 
 val query = lines_stream1.join(lines_stream2, expr (
 """
 | data == data1 and
 | recordTime1 >= recordTime and
 | recordTime1 <= recordTime + interval 20 seconds
 """.stripMargin),"right").
 writeStream.
 option("truncate","false").
 outputMode("update").
 format("console").
 trigger(Trigger.ProcessingTime ("2 second")).
 start()
 
query.awaitTermination()

As per the document 
[https://spark.apache.org/docs/2.3.2/structured-streaming-programming-guide.html#stream-stream-joins]
 
 joins are only supported in append mode

*As of Spark 2.3, you can use joins only when the query is in Append output 
mode. Other output modes are not yet supported.*

Inner join is working as per spark documentation but it is failed for outer 
joins

  was:
Execute the below program and can see there is no AnalysisException thrown

import java.sql.Timestamp
 import org.apache.spark.sql.functions.\{col, expr}
 import org.apache.spark.sql.streaming.Trigger
 
 val lines_stream1 = spark.readStream.
 format("kafka").
 option("kafka.bootstrap.servers", "10.18.99.58:21005,10.18.99.55:21005").
 option("subscribe", "test11").
 option("includeTimestamp", true).
 load().
 selectExpr("CAST (value AS String)","CAST(timestamp AS 
TIMESTAMP)").as[(String,Timestamp)].
 select(col("value") as("data"),col("timestamp") as("recordTime")).
 select("data","recordTime").
 withWatermark("recordTime", "20 seconds ")

val lines_stream2 = spark.readStream.
 format("kafka").
 option("kafka.bootstrap.servers", "10.18.99.58:21005,10.18.99.55:21005").
 option("subscribe", "test22").
 option("includeTimestamp", value = true).
 load().
 selectExpr("CAST (value AS String)","CAST(timestamp AS 
TIMESTAMP)").as[(String,Timestamp)].
 select(col("value") as("data1"),col("timestamp") as("recordTime1")).
 select("data1","recordTime1").
 withWatermark("recordTime1", "20 seconds ")


 val query = lines_stream1.join(lines_stream2, expr(
 """
 | data == data1 and
 | recordTime1 >= recordTime and
 | recordTime1 <= recordTime + interval 20 seconds
 """.stripMargin), "left").
 writeStream.
 option("truncate","false").
 outputMode("update").
 format("console").
 trigger(Trigger.ProcessingTime("2 seconds")).
 start()
 start()

query.awaitTermination()

As per the documentation
https://spark.apache.org/docs/2.3.2/structured-streaming-programming-guide.html#stream-stream-joins

stream-stream joins are only supported in Append output mode:

*As of Spark 2.3, you can use joins only when the query is in Append output
mode. Other output modes are not yet supported.*

Inner joins behave as the documentation describes, but outer joins with
update mode do not throw the expected AnalysisException.


> stream stream Outer join with update mode is not throwing exception
> ---
>
> Key: SPARK-25834
> URL: https://issues.apache.org/jira/browse/SPARK-25834
> Project: Spark
>  Issue Type: Bug
>  Components: Structured Streaming
>Affects Versions: 2.3.1, 2.3.2
>Reporter: Sachin Ramachandra Setty
>Priority: Minor
>
> Execute the program below; no AnalysisException is thrown.
> import java.sql.Timestamp
>  import org.apache.spark.sql.functions.{col, expr}
>  import org.apache.spark.sql.streaming.Trigger
> val lines_stream1 = spark.readStream.
>  format("kafka").
>  option("kafka.bootstrap.servers", "10.18.99.58:21005,10.18.99.55:21005").
>  option("subscribe", "test11").
>  option("includeTimestamp", true).
>  load().
>  selectExpr("CAST(value AS String)","CAST(timestamp AS TIMESTAMP)").as[(String,Timestamp)].
>  select(col("value") as("data"),col("timestamp") as("recordTime")).
> 

[jira] [Updated] (SPARK-25834) stream stream Outer join with update mode is not throwing exception

2018-10-25 Thread Sachin Ramachandra Setty (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-25834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sachin Ramachandra Setty updated SPARK-25834:
-
Summary: stream stream Outer join with update mode is not throwing 
exception  (was: stream stream Outer join with update mode is throwing 
exception)

> stream stream Outer join with update mode is not throwing exception
> ---
>
> Key: SPARK-25834
> URL: https://issues.apache.org/jira/browse/SPARK-25834
> Project: Spark
>  Issue Type: Bug
>  Components: Structured Streaming
>Affects Versions: 2.3.1, 2.3.2
>Reporter: Sachin Ramachandra Setty
>Priority: Minor
>
> Execute the program below; no AnalysisException is thrown.
> import java.sql.Timestamp
>  import org.apache.spark.sql.functions.{col, expr}
>  import org.apache.spark.sql.streaming.Trigger
>  
>  val lines_stream1 = spark.readStream.
>  format("kafka").
>  option("kafka.bootstrap.servers", "10.18.99.58:21005,10.18.99.55:21005").
>  option("subscribe", "test11").
>  option("includeTimestamp", true).
>  load().
>  selectExpr("CAST(value AS String)","CAST(timestamp AS TIMESTAMP)").as[(String,Timestamp)].
>  select(col("value") as("data"),col("timestamp") as("recordTime")).
>  select("data","recordTime").
>  withWatermark("recordTime", "20 seconds")
> val lines_stream2 = spark.readStream.
>  format("kafka").
>  option("kafka.bootstrap.servers", "10.18.99.58:21005,10.18.99.55:21005").
>  option("subscribe", "test22").
>  option("includeTimestamp", value = true).
>  load().
>  selectExpr("CAST(value AS String)","CAST(timestamp AS TIMESTAMP)").as[(String,Timestamp)].
>  select(col("value") as("data1"),col("timestamp") as("recordTime1")).
>  select("data1","recordTime1").
>  withWatermark("recordTime1", "20 seconds")
>  val query = lines_stream1.join(lines_stream2, expr(
>  """
>  | data == data1 and
>  | recordTime1 >= recordTime and
>  | recordTime1 <= recordTime + interval 20 seconds
>  """.stripMargin), "left").
>  writeStream.
>  option("truncate","false").
>  outputMode("update").
>  format("console").
>  trigger(Trigger.ProcessingTime("2 seconds")).
>  start()
> query.awaitTermination()
> As per the documentation
> https://spark.apache.org/docs/2.3.2/structured-streaming-programming-guide.html#stream-stream-joins
> stream-stream joins are only supported in Append output mode:
> *As of Spark 2.3, you can use joins only when the query is in Append output
> mode. Other output modes are not yet supported.*
> Inner joins behave as the documentation describes, but outer joins with
> update mode do not throw the expected AnalysisException.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-25834) stream stream Outer join with update mode is throwing exception

2018-10-25 Thread Sachin Ramachandra Setty (JIRA)
Sachin Ramachandra Setty created SPARK-25834:


 Summary: stream stream Outer join with update mode is throwing 
exception
 Key: SPARK-25834
 URL: https://issues.apache.org/jira/browse/SPARK-25834
 Project: Spark
  Issue Type: Bug
  Components: Structured Streaming
Affects Versions: 2.3.2, 2.3.1
Reporter: Sachin Ramachandra Setty


Execute the program below; no AnalysisException is thrown.

import java.sql.Timestamp
 import org.apache.spark.sql.functions.{col, expr}
 import org.apache.spark.sql.streaming.Trigger
 
 val lines_stream1 = spark.readStream.
 format("kafka").
 option("kafka.bootstrap.servers", "10.18.99.58:21005,10.18.99.55:21005").
 option("subscribe", "test11").
 option("includeTimestamp", true).
 load().
 selectExpr("CAST(value AS String)","CAST(timestamp AS TIMESTAMP)").as[(String,Timestamp)].
 select(col("value") as("data"),col("timestamp") as("recordTime")).
 select("data","recordTime").
 withWatermark("recordTime", "20 seconds")

val lines_stream2 = spark.readStream.
 format("kafka").
 option("kafka.bootstrap.servers", "10.18.99.58:21005,10.18.99.55:21005").
 option("subscribe", "test22").
 option("includeTimestamp", value = true).
 load().
 selectExpr("CAST(value AS String)","CAST(timestamp AS TIMESTAMP)").as[(String,Timestamp)].
 select(col("value") as("data1"),col("timestamp") as("recordTime1")).
 select("data1","recordTime1").
 withWatermark("recordTime1", "20 seconds")


 val query = lines_stream1.join(lines_stream2, expr(
 """
 | data == data1 and
 | recordTime1 >= recordTime and
 | recordTime1 <= recordTime + interval 20 seconds
 """.stripMargin), "left").
 writeStream.
 option("truncate","false").
 outputMode("update").
 format("console").
 trigger(Trigger.ProcessingTime("2 seconds")).
 start()
 start()

query.awaitTermination()

As per the documentation
https://spark.apache.org/docs/2.3.2/structured-streaming-programming-guide.html#stream-stream-joins

stream-stream joins are only supported in Append output mode:

*As of Spark 2.3, you can use joins only when the query is in Append output
mode. Other output modes are not yet supported.*

Inner joins behave as the documentation describes, but outer joins with
update mode do not throw the expected AnalysisException.


