[jira] [Commented] (HIVE-8239) MSSQL upgrade schema scripts does not map Java long datatype columns correctly for transaction related tables

2014-09-24 Thread Deepesh Khandelwal (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14147112#comment-14147112 ]

Deepesh Khandelwal commented on HIVE-8239:
--

Thanks [~alangates] for the review and commit!

 MSSQL upgrade schema scripts does not map Java long datatype columns 
 correctly for transaction related tables
 -

                 Key: HIVE-8239
                 URL: https://issues.apache.org/jira/browse/HIVE-8239
             Project: Hive
          Issue Type: Bug
          Components: Database/Schema
    Affects Versions: 0.13.0
            Reporter: Deepesh Khandelwal
            Assignee: Deepesh Khandelwal
             Fix For: 0.14.0

         Attachments: HIVE-8239.1.patch


 In the transaction-related tables, columns that hold Java long values are mapped 
 to int, which results in failures such as:
 {noformat}
 2014-09-23 18:08:00,030 DEBUG txn.TxnHandler (TxnHandler.java:lock(1243)) - Going to execute update insert into HIVE_LOCKS  (hl_lock_ext_id, hl_lock_int_id, hl_txnid, hl_db, hl_table, hl_partition, hl_lock_state, hl_lock_type, hl_last_heartbeat, hl_user, hl_host) values (28, 1,0, 'default', null, null, 'w', 'r', 1411495679547, 'hadoopqa', 'onprem-sqoop1')
 2014-09-23 18:08:00,033 DEBUG txn.TxnHandler (TxnHandler.java:lock(406)) - Going to rollback
 2014-09-23 18:08:00,045 ERROR metastore.RetryingHMSHandler (RetryingHMSHandler.java:invoke(139)) - org.apache.thrift.TException: MetaException(message:Unable to update transaction database com.microsoft.sqlserver.jdbc.SQLServerException: Arithmetic overflow error converting expression to data type int.
     at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:197)
     at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onEOF(tdsparser.java:246)
     at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:83)
     at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1488)
     at com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:775)
     at com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:676)
     at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:4615)
     at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1400)
     at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:179)
     at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:154)
     at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeUpdate(SQLServerStatement.java:633)
     at com.jolbox.bonecp.StatementHandle.executeUpdate(StatementHandle.java:497)
     at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:1244)
     at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:403)
     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.lock(HiveMetaStore.java:5255)
     ...
 {noformat}
 In this query the HL_LAST_HEARTBEAT column, which is defined as int in 
 HIVE_LOCKS, is asked to store a long value (1411495679547), which triggers the 
 error. The column should be defined as bigint instead.
 NO PRECOMMIT TESTS
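
For illustration, here is a minimal sketch of the kind of change the MSSQL upgrade script needs, widening the column named in the log above. The ALTER statement form and the NOT NULL constraint are assumptions made for this sketch; the actual DDL in HIVE-8239.1.patch may differ.
{noformat}
-- Hypothetical sketch: widen HL_LAST_HEARTBEAT so it can hold a Java long
-- (epoch milliseconds such as 1411495679547) instead of a 32-bit int.
ALTER TABLE HIVE_LOCKS ALTER COLUMN HL_LAST_HEARTBEAT bigint NOT NULL;
{noformat}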



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8239) MSSQL upgrade schema scripts does not map Java long datatype columns correctly for transaction related tables

2014-09-23 Thread Alan Gates (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145624#comment-14145624 ]

Alan Gates commented on HIVE-8239:
--

+1



[jira] [Commented] (HIVE-8239) MSSQL upgrade schema scripts does not map Java long datatype columns correctly for transaction related tables

2014-09-23 Thread Alan Gates (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145636#comment-14145636 ]

Alan Gates commented on HIVE-8239:
--

One missing piece: we should make these same changes to hive-txn-schema-0.13 as 
well, for completeness. I can do that when I check in the patch.
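
To illustrate the point, a hedged sketch of what the widened column definitions in a transaction schema script could look like. The table and column names come from the log in the description, but the bigint types on the lock-id and transaction-id columns and the abbreviated column list are assumptions for this sketch, not the actual contents of hive-txn-schema-0.13:
{noformat}
-- Hypothetical excerpt: columns backed by Java long fields mapped to bigint.
CREATE TABLE HIVE_LOCKS (
    HL_LOCK_EXT_ID bigint NOT NULL,
    HL_LOCK_INT_ID bigint NOT NULL,
    HL_TXNID bigint,
    HL_LAST_HEARTBEAT bigint NOT NULL
    -- remaining columns (hl_db, hl_table, hl_partition, ...) omitted
);
{noformat}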



[jira] [Commented] (HIVE-8239) MSSQL upgrade schema scripts does not map Java long datatype columns correctly for transaction related tables

2014-09-23 Thread Deepesh Khandelwal (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145649#comment-14145649 ]

Deepesh Khandelwal commented on HIVE-8239:
--

I left that out because the composite hive-schema-0.14.0.mssql.sql already 
includes those tables.
