[jira] [Updated] (HIVE-25577) unix_timestamp() is ignoring the time zone value

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-25577:
--
Labels: pull-request-available  (was: )

> unix_timestamp() is ignoring the time zone value
> 
>
> Key: HIVE-25577
> URL: https://issues.apache.org/jira/browse/HIVE-25577
> Project: Hive
>  Issue Type: Bug
>Reporter: Ashish Sharma
>Assignee: Ashish Sharma
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> set hive.local.time.zone=Asia/Bangkok;
> Query - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('2000-01-07 00:00:00 
> GMT','yyyy-MM-dd HH:mm:ss z'));
> Result - 2000-01-07 00:00:00 ICT
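> A minimal sketch of the expected behaviour, assuming Asia/Bangkok is UTC+7 and
> that the zone parsed via 'z' (GMT) is honoured rather than ignored:
> {code}
> set hive.local.time.zone=Asia/Bangkok;
> -- 2000-01-07 00:00:00 GMT corresponds to epoch second 947203200,
> -- so rendering it back in the session zone should shift it by +7 hours:
> SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('2000-01-07 00:00:00 GMT','yyyy-MM-dd HH:mm:ss z'));
> -- expected: 2000-01-07 07:00:00 (session zone), not 2000-01-07 00:00:00
> {code}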



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25577) unix_timestamp() is ignoring the time zone value

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25577?focusedWorklogId=658098&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-658098
 ]

ASF GitHub Bot logged work on HIVE-25577:
-

Author: ASF GitHub Bot
Created on: 30/Sep/21 05:46
Start Date: 30/Sep/21 05:46
Worklog Time Spent: 10m 
  Work Description: ashish-kumar-sharma opened a new pull request #2686:
URL: https://github.com/apache/hive/pull/2686


   
   
   ### What changes were proposed in this pull request?
   
   
   
   ### Why are the changes needed?
   
   
   
   ### Does this PR introduce _any_ user-facing change?
   
   
   
   ### How was this patch tested?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 658098)
Remaining Estimate: 0h
Time Spent: 10m

> unix_timestamp() is ignoring the time zone value
> 
>
> Key: HIVE-25577
> URL: https://issues.apache.org/jira/browse/HIVE-25577
> Project: Hive
>  Issue Type: Bug
>Reporter: Ashish Sharma
>Assignee: Ashish Sharma
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> set hive.local.time.zone=Asia/Bangkok;
> Query - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('2000-01-07 00:00:00 
> GMT','yyyy-MM-dd HH:mm:ss z'));
> Result - 2000-01-07 00:00:00 ICT



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-25577) unix_timestamp() is ignoring the time zone value

2021-09-29 Thread Ashish Sharma (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Sharma updated HIVE-25577:
-
Issue Type: Bug  (was: Improvement)

> unix_timestamp() is ignoring the time zone value
> 
>
> Key: HIVE-25577
> URL: https://issues.apache.org/jira/browse/HIVE-25577
> Project: Hive
>  Issue Type: Bug
>Reporter: Ashish Sharma
>Assignee: Ashish Sharma
>Priority: Minor
>
> set hive.local.time.zone=Asia/Bangkok;
> Query - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('2000-01-07 00:00:00 
> GMT','yyyy-MM-dd HH:mm:ss z'));
> Result - 2000-01-07 00:00:00 ICT



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work started] (HIVE-25577) unix_timestamp() is ignoring the time zone value

2021-09-29 Thread Ashish Sharma (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-25577 started by Ashish Sharma.

> unix_timestamp() is ignoring the time zone value
> 
>
> Key: HIVE-25577
> URL: https://issues.apache.org/jira/browse/HIVE-25577
> Project: Hive
>  Issue Type: Bug
>Reporter: Ashish Sharma
>Assignee: Ashish Sharma
>Priority: Minor
>
> set hive.local.time.zone=Asia/Bangkok;
> Query - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('2000-01-07 00:00:00 
> GMT','yyyy-MM-dd HH:mm:ss z'));
> Result - 2000-01-07 00:00:00 ICT



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25448) Invalid partition columns when skew with distinct

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25448?focusedWorklogId=658091&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-658091
 ]

ASF GitHub Bot logged work on HIVE-25448:
-

Author: ASF GitHub Bot
Created on: 30/Sep/21 04:05
Start Date: 30/Sep/21 04:05
Worklog Time Spent: 10m 
  Work Description: dengzhhu653 removed a comment on pull request #2585:
URL: https://github.com/apache/hive/pull/2585#issuecomment-918013599


   @kasakrisz could you please take a look at the changes?
   Thanks,
   Zhihua Deng


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 658091)
Time Spent: 40m  (was: 0.5h)

> Invalid partition columns when skew with distinct
> -
>
> Key: HIVE-25448
> URL: https://issues.apache.org/jira/browse/HIVE-25448
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When hive.groupby.skewindata is enabled, we spray by the grouping key and the 
> distinct key in the first reduce sink operator if a distinct is present.
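> A hedged illustration of the kind of query this affects (table and column names
> are made up for the example):
> {code}
> set hive.groupby.skewindata=true;
> -- with skew handling on, the first reduce sink should spray rows by the
> -- grouping key together with the distinct key (key, val), not by key alone
> SELECT key, count(DISTINCT val) FROM src_skew GROUP BY key;
> {code}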



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25365) Insufficient privileges to show partitions when partition columns are authorized

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25365?focusedWorklogId=658090&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-658090
 ]

ASF GitHub Bot logged work on HIVE-25365:
-

Author: ASF GitHub Bot
Created on: 30/Sep/21 04:03
Start Date: 30/Sep/21 04:03
Worklog Time Spent: 10m 
  Work Description: dengzhhu653 removed a comment on pull request #2515:
URL: https://github.com/apache/hive/pull/2515#issuecomment-915771280


   Hi, @kgyrtkirk. Any other comments about this PR?  
   Thanks,
   Zhihua Deng


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 658090)
Time Spent: 1h 10m  (was: 1h)

> Insufficient privileges to show partitions when partition columns are 
> authorized
> 
>
> Key: HIVE-25365
> URL: https://issues.apache.org/jira/browse/HIVE-25365
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> When privileges on partition columns have been granted to users, showing 
> partitions still requires the select privilege on the table, even though those 
> users are able to query the partition columns.
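> A hedged sketch of the scenario (object names are made up; it assumes an
> authorizer that supports column-level privileges, e.g. Ranger policies):
> {code}
> -- the user holds SELECT only on the partition column part_col of table t
> SELECT DISTINCT part_col FROM t;   -- allowed: only the partition column is read
> SHOW PARTITIONS t;                 -- currently rejected: table-level SELECT is required
> {code}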



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25561) Killed task should not commit file.

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25561?focusedWorklogId=658073&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-658073
 ]

ASF GitHub Bot logged work on HIVE-25561:
-

Author: ASF GitHub Bot
Created on: 30/Sep/21 02:35
Start Date: 30/Sep/21 02:35
Worklog Time Spent: 10m 
  Work Description: zhengchenyu edited a comment on pull request #2674:
URL: https://github.com/apache/hive/pull/2674#issuecomment-928998346


   @abstractdog @zabetak Can you help me review it, or give me some suggestions?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 658073)
Time Spent: 40m  (was: 0.5h)

> Killed task should not commit file.
> ---
>
> Key: HIVE-25561
> URL: https://issues.apache.org/jira/browse/HIVE-25561
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 1.2.1, 2.3.8, 2.4.0
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> With the Tez engine in our cluster I found some duplicate lines, especially when Tez 
> speculation is enabled. In the partition dir, both 02_0 and 02_1 
> exist.
> It is a very low probability event. HIVE-10429 fixed some bugs around 
> interrupts, but some exceptions were still not caught.
> In our cluster the Task receives SIGTERM, then ClientFinalizer (a Hadoop class) is 
> called and the HDFS client is closed. An exception is then raised, but abort may not 
> be set to true.
> removeTempOrDuplicateFiles may then fail because of the inconsistency, and the 
> duplicate file is retained.
> (Note: the Driver first lists the dir, then the Task commits its file, then the Driver 
> removes duplicate files. That ordering is the inconsistency.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25544) Remove Dependency of hive-meta-common From hive-common

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25544?focusedWorklogId=658038&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-658038
 ]

ASF GitHub Bot logged work on HIVE-25544:
-

Author: ASF GitHub Bot
Created on: 30/Sep/21 00:48
Start Date: 30/Sep/21 00:48
Worklog Time Spent: 10m 
  Work Description: belugabehr closed pull request #2662:
URL: https://github.com/apache/hive/pull/2662


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 658038)
Time Spent: 2h 10m  (was: 2h)

> Remove Dependency of hive-meta-common From hive-common
> --
>
> Key: HIVE-25544
> URL: https://issues.apache.org/jira/browse/HIVE-25544
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> These two things should not be linked: it means that any HS2 client library 
> pulling in the hive-common library also has to pull in a ton of metastore code 
> as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25572) Exception while querying materialized view invalidation info

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25572?focusedWorklogId=658034&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-658034
 ]

ASF GitHub Bot logged work on HIVE-25572:
-

Author: ASF GitHub Bot
Created on: 30/Sep/21 00:47
Start Date: 30/Sep/21 00:47
Worklog Time Spent: 10m 
  Work Description: kasakrisz opened a new pull request #2682:
URL: https://github.com/apache/hive/pull/2682


   ### What changes were proposed in this pull request?
   Add the missing bracket when assembling the direct SQL query that gets the 
materialization invalidation info.
   
   ### Why are the changes needed?
   The query was syntactically incorrect. This blocked incremental materialized 
view rebuild whenever one or more MV source tables had uncommitted transactions 
at the time the MV was last rebuilt and its snapshot was taken; those 
transactions might be committed later and have an effect on the next MV rebuild.
   
   ### Does this PR introduce _any_ user-facing change?
   No.
   
   ### How was this patch tested?
   ```
   mvn test -Dtest=TestTxnHandler -pl ql
   ```
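   
   A hedged, simplified illustration of the unbalanced predicate this produced
   (table and column names below are made up; the real statement is assembled in
   TxnHandler): before the fix the parenthesis opened after AND was never closed,
   which is what Derby rejects with a syntax error.
   ```sql
   -- before the fix: the outer parenthesis is never closed, so parsing fails
   SELECT COUNT(*) FROM txn_components
    WHERE tbl = 'mv_src' AND (write_id > 10 OR write_id IN (11, 12);   -- syntax error
   -- after the fix: the extra ") " balances it
   SELECT COUNT(*) FROM txn_components
    WHERE tbl = 'mv_src' AND (write_id > 10 OR write_id IN (11, 12));
   ```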


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 658034)
Time Spent: 1.5h  (was: 1h 20m)

> Exception while querying materialized view invalidation info
> 
>
> Key: HIVE-25572
> URL: https://issues.apache.org/jira/browse/HIVE-25572
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> {code:java}
> 2021-09-29T00:33:02,971  WARN [main] txn.TxnHandler: Unable to retrieve 
> materialization invalidation information: completed transaction components.
> java.sql.SQLSyntaxErrorException: Syntax error: Encountered "" at line 
> 1, column 234.
>   at 
> org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedPreparedStatement.(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedPreparedStatement42.(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.jdbc.Driver42.newEmbedPreparedStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> com.zaxxer.hikari.pool.ProxyConnection.prepareStatement(ProxyConnection.java:311)
>  ~[HikariCP-2.6.1.jar:?]
>   at 
> com.zaxxer.hikari.pool.HikariProxyConnection.prepareStatement(HikariProxyConnection.java)
>  ~[HikariCP-2.6.1.jar:?]
>   at 
> org.apache.hadoop.hive.metastore.tools.SQLGenerator.prepareStmtWithParameters(SQLGenerator.java:169)
>  ~[classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.executeBoolean(TxnHandler.java:2598)
>  [classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.getMaterializationInvalidationInfo(TxnHandler.java:2575)
>  [classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testGetMaterializationInvalidationInfo(TestTxnHandler.java:1910)
>  [test-classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testGetMaterializationInvalidationInfo(TestTxnHandler.java:1875)
>  [test-classes/:?]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_112]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_112]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_112]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
>   at 
> 

[jira] [Work logged] (HIVE-25545) Add/Drop constraints events on table should create authorizable events in HS2

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25545?focusedWorklogId=657967&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657967
 ]

ASF GitHub Bot logged work on HIVE-25545:
-

Author: ASF GitHub Bot
Created on: 30/Sep/21 00:41
Start Date: 30/Sep/21 00:41
Worklog Time Spent: 10m 
  Work Description: saihemanth-cloudera commented on a change in pull 
request #2665:
URL: https://github.com/apache/hive/pull/2665#discussion_r718691636



##
File path: ql/src/test/queries/clientnegative/groupby_join_pushdown.q
##
@@ -22,45 +22,45 @@ FROM src f JOIN src g ON(f.key = g.key)
 GROUP BY f.key, g.key;
 
 EXPLAIN
-SELECT  f.ctinyint, g.ctinyint, SUM(f.cbigint)  
+SELECT  f.ctinyint, g.ctinyint, SUM(f.cbigint)

Review comment:
   Ack

##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/ddl/table/constraint/drop/AlterTableDropConstraintAnalyzer.java
##
@@ -47,11 +51,18 @@ protected void analyzeCommand(TableName tableName, 
Map partition
 String constraintName = unescapeIdentifier(command.getChild(0).getText());
 
 AlterTableDropConstraintDesc desc = new 
AlterTableDropConstraintDesc(tableName, null, constraintName);
-rootTasks.add(TaskFactory.get(new DDLWork(getInputs(), getOutputs(), 
desc)));
 
 Table table = getTable(tableName);
+WriteEntity.WriteType writeType = null;
 if (AcidUtils.isTransactionalTable(table)) {
   setAcidDdlDesc(desc);
+  writeType = WriteType.DDL_EXCLUSIVE;
+} else {
+  writeType = 
WriteEntity.determineAlterTableWriteType(AlterTableType.DROP_CONSTRAINT);
 }
+inputs.add(new ReadEntity(table));

Review comment:
   AlterTableDropConstraintDesc doesn't extend 
AbstractAlterTableWithConstraintsDesc as is the case with 
AlterTableAddConstraintDesc, So it cannot convert the descriptor object to add 
inputs and outputs. The reason why  AlterTableDropConstraintDesc cannot extend 
AbstractAlterTableWithConstraintsDesc is that the input command only issues the 
constraint name (ALTER TABLE foo DROP CONSTRAINT foo_constraint) but not the 
constraint object (which is the case with ADD constraint) and 
AbstractAlterTableWithConstraintsDesc takes constraint as an argument. 

##
File path: ql/src/test/results/clientnegative/groupby_join_pushdown.q.out
##
@@ -1358,249 +1358,15 @@ STAGE PLANS:
 
 PREHOOK: query: ALTER TABLE alltypesorc ADD CONSTRAINT pk_alltypesorc_1 
PRIMARY KEY (ctinyint) DISABLE RELY
 PREHOOK: type: ALTERTABLE_ADDCONSTRAINT
-POSTHOOK: query: ALTER TABLE alltypesorc ADD CONSTRAINT pk_alltypesorc_1 
PRIMARY KEY (ctinyint) DISABLE RELY
-POSTHOOK: type: ALTERTABLE_ADDCONSTRAINT
-PREHOOK: query: explain
-SELECT sum(f.cint), f.ctinyint
-FROM alltypesorc f JOIN alltypesorc g ON(f.ctinyint = g.ctinyint)
-GROUP BY f.ctinyint, g.ctinyint
-PREHOOK: type: QUERY
-PREHOOK: Input: default@alltypesorc
- A masked pattern was here 
-POSTHOOK: query: explain
-SELECT sum(f.cint), f.ctinyint
-FROM alltypesorc f JOIN alltypesorc g ON(f.ctinyint = g.ctinyint)
-GROUP BY f.ctinyint, g.ctinyint
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@alltypesorc
- A masked pattern was here 
-STAGE DEPENDENCIES:
-  Stage-1 is a root stage
-  Stage-0 depends on stages: Stage-1
-
-STAGE PLANS:
-  Stage: Stage-1
-Tez
- A masked pattern was here 
-  Edges:
-Reducer 2 <- Map 1 (SIMPLE_EDGE), Map 4 (SIMPLE_EDGE)
-Reducer 3 <- Reducer 2 (SIMPLE_EDGE)
- A masked pattern was here 
-  Vertices:
-Map 1 
-Map Operator Tree:
-TableScan
-  alias: f
-  Statistics: Num rows: 12288 Data size: 73392 Basic stats: 
COMPLETE Column stats: COMPLETE
-  Select Operator
-expressions: ctinyint (type: tinyint), cint (type: int)
-outputColumnNames: _col0, _col1
-Statistics: Num rows: 12288 Data size: 73392 Basic stats: 
COMPLETE Column stats: COMPLETE
-Reduce Output Operator
-  key expressions: _col0 (type: tinyint)
-  null sort order: z
-  sort order: +
-  Map-reduce partition columns: _col0 (type: tinyint)
-  Statistics: Num rows: 12288 Data size: 73392 Basic 
stats: COMPLETE Column stats: COMPLETE
-  value expressions: _col1 (type: int)
-Execution mode: vectorized, llap
-LLAP IO: all inputs
-Map 4 
-Map Operator Tree:
-TableScan
-  alias: g
-  Statistics: Num rows: 12288 Data size: 36696 Basic stats: 
COMPLETE Column stats: COMPLETE
-  Select Operator
-expressions: ctinyint (type: tinyint)
-outputColumnNames: 

[jira] [Work logged] (HIVE-25544) Remove Dependency of hive-meta-common From hive-common

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25544?focusedWorklogId=657920&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657920
 ]

ASF GitHub Bot logged work on HIVE-25544:
-

Author: ASF GitHub Bot
Created on: 30/Sep/21 00:37
Start Date: 30/Sep/21 00:37
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk commented on pull request #2662:
URL: https://github.com/apache/hive/pull/2662#issuecomment-930213072


   ```
   [2021-09-27T17:39:00.975Z] timeout reached before the port went into state 
"inuse"
   script returned exit code 1
   ```
   the above is fixed now - but we have some failing tests


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657920)
Time Spent: 2h  (was: 1h 50m)

> Remove Dependency of hive-meta-common From hive-common
> --
>
> Key: HIVE-25544
> URL: https://issues.apache.org/jira/browse/HIVE-25544
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> These two things should not be linked: it means that any HS2 client library 
> pulling in the hive-common library also has to pull in a ton of metastore code 
> as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25572) Exception while querying materialized view invalidation info

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25572?focusedWorklogId=657912&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657912
 ]

ASF GitHub Bot logged work on HIVE-25572:
-

Author: ASF GitHub Bot
Created on: 30/Sep/21 00:36
Start Date: 30/Sep/21 00:36
Worklog Time Spent: 10m 
  Work Description: pvary commented on a change in pull request #2682:
URL: https://github.com/apache/hive/pull/2682#discussion_r718372903



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2554,10 +2554,10 @@ public Materialization 
getMaterializationInvalidationInfo(
   queryCompletedCompactions.append(" AND (\"CC_HIGHEST_WRITE_ID\" > " + 
tblValidWriteIdList.getHighWatermark());
   queryUpdateDelete.append(tblValidWriteIdList.getInvalidWriteIds().length 
== 0 ? ") " :
   " OR \"CTC_WRITEID\" IN(" + StringUtils.join(",",
-  
Arrays.asList(ArrayUtils.toObject(tblValidWriteIdList.getInvalidWriteIds( + 
") ");
+  
Arrays.asList(ArrayUtils.toObject(tblValidWriteIdList.getInvalidWriteIds( + 
") ) ");

Review comment:
   What is the change here?
   Can you help me out please  

##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2554,10 +2554,10 @@ public Materialization 
getMaterializationInvalidationInfo(
   queryCompletedCompactions.append(" AND (\"CC_HIGHEST_WRITE_ID\" > " + 
tblValidWriteIdList.getHighWatermark());
   queryUpdateDelete.append(tblValidWriteIdList.getInvalidWriteIds().length 
== 0 ? ") " :
   " OR \"CTC_WRITEID\" IN(" + StringUtils.join(",",
-  
Arrays.asList(ArrayUtils.toObject(tblValidWriteIdList.getInvalidWriteIds( + 
") ");
+  
Arrays.asList(ArrayUtils.toObject(tblValidWriteIdList.getInvalidWriteIds( + 
") ) ");

Review comment:
   Thanks




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657912)
Time Spent: 1h 20m  (was: 1h 10m)

> Exception while querying materialized view invalidation info
> 
>
> Key: HIVE-25572
> URL: https://issues.apache.org/jira/browse/HIVE-25572
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> {code:java}
> 2021-09-29T00:33:02,971  WARN [main] txn.TxnHandler: Unable to retrieve 
> materialization invalidation information: completed transaction components.
> java.sql.SQLSyntaxErrorException: Syntax error: Encountered "" at line 
> 1, column 234.
>   at 
> org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedPreparedStatement.(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedPreparedStatement42.(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.jdbc.Driver42.newEmbedPreparedStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> com.zaxxer.hikari.pool.ProxyConnection.prepareStatement(ProxyConnection.java:311)
>  ~[HikariCP-2.6.1.jar:?]
>   at 
> com.zaxxer.hikari.pool.HikariProxyConnection.prepareStatement(HikariProxyConnection.java)
>  ~[HikariCP-2.6.1.jar:?]
>   at 
> org.apache.hadoop.hive.metastore.tools.SQLGenerator.prepareStmtWithParameters(SQLGenerator.java:169)
>  ~[classes/:?]
>   at 
> 

[jira] [Work logged] (HIVE-25550) Increase the RM_PROGRESS column max length to fit metrics stat

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25550?focusedWorklogId=657887&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657887
 ]

ASF GitHub Bot logged work on HIVE-25550:
-

Author: ASF GitHub Bot
Created on: 30/Sep/21 00:34
Start Date: 30/Sep/21 00:34
Worklog Time Spent: 10m 
  Work Description: aasha commented on a change in pull request #2668:
URL: https://github.com/apache/hive/pull/2668#discussion_r718173826



##
File path: standalone-metastore/metastore-server/src/main/resources/package.jdo
##
@@ -1556,7 +1556,7 @@
 
   
   
-
+

Review comment:
   Will this work for Oracle?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657887)
Time Spent: 40m  (was: 0.5h)

> Increase the RM_PROGRESS column max length to fit metrics stat
> --
>
> Key: HIVE-25550
> URL: https://issues.apache.org/jira/browse/HIVE-25550
> Project: Hive
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Presently it fails with the following trace:
> {noformat}
> [[Event Name: EVENT_ALLOC_WRITE_ID; Total Number: 213; Total Time: 85347.0; 
> Mean: 400.6901408450704; Median: 392.0; Standard Deviation: 
> 33.99178239314741; Variance: 1155.4412702630862; Kurtosis: 83.69411620601193; 
> Skewness: 83.69411620601193; 25th Percentile: 384.0; 50th Percentile: 392.0; 
> 75th Percentile: 408.0; 90th Percentile: 417.0; Top 5 EventIds(EventId=Time) 
> {1498476=791, 1498872=533, 1497805=508, 1498808=500, 1499027=492};]]}"}]}" in 
> column ""RM_PROGRESS"" that has maximum length of 4000. Please correct your 
> data!
> at 
> org.datanucleus.store.rdbms.mapping.datastore.CharRDBMSMapping.setString(CharRDBMSMapping.java:254)
>  ~[datanucleus-rdbms-4.1.19.jar:?]
> at 
> org.datanucleus.store.rdbms.mapping.java.SingleFieldMapping.setString(SingleFieldMapping.java:180)
>  ~{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25570) Hive should send full URL path for authorization for the command insert overwrite location

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25570?focusedWorklogId=657873&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657873
 ]

ASF GitHub Bot logged work on HIVE-25570:
-

Author: ASF GitHub Bot
Created on: 30/Sep/21 00:32
Start Date: 30/Sep/21 00:32
Worklog Time Spent: 10m 
  Work Description: saihemanth-cloudera opened a new pull request #2684:
URL: https://github.com/apache/hive/pull/2684


   …and 'insert overwrite location'
   
   
   
   ### What changes were proposed in this pull request?
   Made changes such that Hive now sends the full URL path for authorization for 
the command insert overwrite location.
   
   
   
   ### Why are the changes needed?
   Otherwise, Hive only sends the URI specified in the input command, which isn't 
consistent with other commands like CREATE EXTERNAL TABLE with a directory 
(where we send the full URL for authorization).
   
   
   
   ### Does this PR introduce _any_ user-facing change?
   No.
   
   
   
   ### How was this patch tested?
   Local machine, remote cluster
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657873)
Time Spent: 40m  (was: 0.5h)

> Hive should send full URL path for authorization for the command insert 
> overwrite location
> --
>
> Key: HIVE-25570
> URL: https://issues.apache.org/jira/browse/HIVE-25570
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization, HiveServer2
>Affects Versions: 4.0.0
>Reporter: Sai Hemanth Gantasala
>Assignee: Sai Hemanth Gantasala
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> For authorization, Hive currently sends the path given as input by the user for 
> the command, e.g.
> {code:java}
> insert overwrite directory 
> '/user/warehouse/tablespace/external/something/new/test_new_tb1' select * 
> from test_tb1;
> {code}
> Hive is sending the path as 
> '/user/warehouse/tablespace/external/something/new/test_new_tb1' 
> Instead, Hive should send a fully qualified path for authorization, e.g.: 
> 'hdfs://hostname:port_name/user/warehouse/tablespace/external/something/new/test_new_tb1'
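> A hedged illustration: the authorization request should carry the fully qualified
> location, as if the statement had been written with it (host and port below are
> placeholders from the example above):
> {code}
> insert overwrite directory
> 'hdfs://hostname:port_name/user/warehouse/tablespace/external/something/new/test_new_tb1'
> select * from test_tb1;
> {code}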



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25544) Remove Dependency of hive-meta-common From hive-common

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25544?focusedWorklogId=657833&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657833
 ]

ASF GitHub Bot logged work on HIVE-25544:
-

Author: ASF GitHub Bot
Created on: 30/Sep/21 00:29
Start Date: 30/Sep/21 00:29
Worklog Time Spent: 10m 
  Work Description: belugabehr commented on pull request #2662:
URL: https://github.com/apache/hive/pull/2662#issuecomment-930184269






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657833)
Time Spent: 1h 50m  (was: 1h 40m)

> Remove Dependency of hive-meta-common From hive-common
> --
>
> Key: HIVE-25544
> URL: https://issues.apache.org/jira/browse/HIVE-25544
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> These two things should not be linked: it means that any HS2 client library 
> pulling in the hive-common library also has to pull in a ton of metastore code 
> as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25572) Exception while querying materialized view invalidation info

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25572?focusedWorklogId=657829&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657829
 ]

ASF GitHub Bot logged work on HIVE-25572:
-

Author: ASF GitHub Bot
Created on: 30/Sep/21 00:28
Start Date: 30/Sep/21 00:28
Worklog Time Spent: 10m 
  Work Description: kasakrisz commented on a change in pull request #2682:
URL: https://github.com/apache/hive/pull/2682#discussion_r718389730



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2554,10 +2554,10 @@ public Materialization 
getMaterializationInvalidationInfo(
   queryCompletedCompactions.append(" AND (\"CC_HIGHEST_WRITE_ID\" > " + 
tblValidWriteIdList.getHighWatermark());
   queryUpdateDelete.append(tblValidWriteIdList.getInvalidWriteIds().length 
== 0 ? ") " :
   " OR \"CTC_WRITEID\" IN(" + StringUtils.join(",",
-  
Arrays.asList(ArrayUtils.toObject(tblValidWriteIdList.getInvalidWriteIds( + 
") ");
+  
Arrays.asList(ArrayUtils.toObject(tblValidWriteIdList.getInvalidWriteIds( + 
") ) ");

Review comment:
   One more closing parenthesis `)` was added:
   ```
   before: ") ");
   after: ") ) ");   
   ```
   
   

##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2554,10 +2554,10 @@ public Materialization 
getMaterializationInvalidationInfo(
   queryCompletedCompactions.append(" AND (\"CC_HIGHEST_WRITE_ID\" > " + 
tblValidWriteIdList.getHighWatermark());
   queryUpdateDelete.append(tblValidWriteIdList.getInvalidWriteIds().length 
== 0 ? ") " :
   " OR \"CTC_WRITEID\" IN(" + StringUtils.join(",",
-  
Arrays.asList(ArrayUtils.toObject(tblValidWriteIdList.getInvalidWriteIds( + 
") ");
+  
Arrays.asList(ArrayUtils.toObject(tblValidWriteIdList.getInvalidWriteIds( + 
") ) ");

Review comment:
   One more closing parenthesis `)` was added:
   ```
   before: ") ");
   after: ") ) ");   
   ```
   
   

##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2554,10 +2554,10 @@ public Materialization 
getMaterializationInvalidationInfo(
   queryCompletedCompactions.append(" AND (\"CC_HIGHEST_WRITE_ID\" > " + 
tblValidWriteIdList.getHighWatermark());
   queryUpdateDelete.append(tblValidWriteIdList.getInvalidWriteIds().length 
== 0 ? ") " :
   " OR \"CTC_WRITEID\" IN(" + StringUtils.join(",",
-  
Arrays.asList(ArrayUtils.toObject(tblValidWriteIdList.getInvalidWriteIds( + 
") ");
+  
Arrays.asList(ArrayUtils.toObject(tblValidWriteIdList.getInvalidWriteIds( + 
") ) ");

Review comment:
   One more closing parenthesis `)` was added:
   ```
   before: ") "
   after: ") ) "   
   ```
   
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657829)
Time Spent: 1h 10m  (was: 1h)

> Exception while querying materialized view invalidation info
> 
>
> Key: HIVE-25572
> URL: https://issues.apache.org/jira/browse/HIVE-25572
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {code:java}
> 2021-09-29T00:33:02,971  WARN [main] txn.TxnHandler: Unable to retrieve 
> materialization invalidation information: completed transaction components.
> java.sql.SQLSyntaxErrorException: Syntax error: Encountered "" at line 
> 1, column 234.
>   at 
> org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown 
> Source) 

[jira] [Work logged] (HIVE-25541) JsonSerDe: TBLPROPERTY treating nested json as String

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25541?focusedWorklogId=657818&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657818
 ]

ASF GitHub Bot logged work on HIVE-25541:
-

Author: ASF GitHub Bot
Created on: 30/Sep/21 00:27
Start Date: 30/Sep/21 00:27
Worklog Time Spent: 10m 
  Work Description: dengzhhu653 commented on a change in pull request #2664:
URL: https://github.com/apache/hive/pull/2664#discussion_r718082739



##
File path: serde/src/java/org/apache/hadoop/hive/serde2/json/HiveJsonReader.java
##
@@ -393,7 +402,16 @@ private Object visitLeafNode(final JsonNode leafNode,
 case DOUBLE:
   return Double.valueOf(leafNode.asDouble());
 case STRING:
-  return leafNode.asText();
+  if (leafNode.isValueNode()) {
+return leafNode.asText();
+  } else {
+if (isEnabled(Feature.STRINGIFY_COMPLEX_FIELDS)) {
+  return leafNode.toString();
+} else {
+  throw new SerDeException(
+  "Complex field found in JSON does not match table definition: " 
+ typeInfo.getTypeName());

Review comment:
   Sorry for this. I wonder: if the column is defined as varchar or 
char in the Hive schema but corresponds to a complex field in the JSON, should we 
do something for such cases?
   Thanks




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657818)
Time Spent: 2h  (was: 1h 50m)

> JsonSerDe: TBLPROPERTY treating nested json as String
> -
>
> Key: HIVE-25541
> URL: https://issues.apache.org/jira/browse/HIVE-25541
> Project: Hive
>  Issue Type: Bug
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The native JsonSerDe 'org.apache.hive.hcatalog.data.JsonSerDe' currently does not 
> support loading nested JSON into a string type directly. It requires declaring 
> the column as a complex type (struct, map, array) to unpack nested JSON data.
> Even though the data field is not a valid JSON string type, there is value in 
> treating it as a plain string instead of throwing an exception as we currently 
> do.
> {code:java}
> create table json_table(data string, messageid string, publish_time bigint, 
> attributes string);
> {"data":{"H":{"event":"track_active","platform":"Android"},"B":{"device_type":"Phone","uuid":"[36ffec24-f6a4-4f5d-aa39-72e5513d2cae,11883bee-a7aa-4010-8a66-6c3c63a73f16]"}},"messageId":"2475185636801962","publish_time":1622514629783,"attributes":{"region":"IN"}}"}}
> {code}
> This JIRA introduces an extra table property that allows stringifying complex 
> JSON values instead of forcing the user to define the complete nested 
> structure.
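> A hedged sketch of how that could look from SQL; the exact property name is
> defined by the pull request and the one used here is only a placeholder:
> {code}
> create table json_table(data string, messageid string, publish_time bigint, attributes string)
> row format serde 'org.apache.hive.hcatalog.data.JsonSerDe'
> tblproperties ('json.stringify.complex.fields'='true');  -- placeholder property name
> -- with the option enabled, the nested object under "data" is returned verbatim
> -- as a JSON string instead of raising a SerDe exception
> select data from json_table;
> {code}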



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25517) Follow up on HIVE-24951: External Table created with Uppercase name using CTAS does not produce result for select queries

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25517?focusedWorklogId=657786&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657786
 ]

ASF GitHub Bot logged work on HIVE-25517:
-

Author: ASF GitHub Bot
Created on: 30/Sep/21 00:24
Start Date: 30/Sep/21 00:24
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk commented on pull request #2638:
URL: https://github.com/apache/hive/pull/2638#issuecomment-930069988


   @nrg4878  I don't see a clear test run for this PR - where is it?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657786)
Time Spent: 2h 40m  (was: 2.5h)

> Follow up on HIVE-24951: External Table created with Uppercase name using 
> CTAS does not produce result for select queries
> -
>
> Key: HIVE-25517
> URL: https://issues.apache.org/jira/browse/HIVE-25517
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 4.0.0
>Reporter: Sourabh Goyal
>Assignee: Sourabh Goyal
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> In [PR|https://github.com/apache/hive/pull/2125] for HIVE-24951, the 
> recommendation was to use getDefaultTablePath() to set the location for an 
> external table. This Jira addresses that and makes getDefaultTablePath() more 
> generic.
>  
> cc - [~ngangam]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23756) Added more constraints to the package.jdo file

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23756?focusedWorklogId=657779&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657779
 ]

ASF GitHub Bot logged work on HIVE-23756:
-

Author: ASF GitHub Bot
Created on: 30/Sep/21 00:24
Start Date: 30/Sep/21 00:24
Worklog Time Spent: 10m 
  Work Description: github-actions[bot] closed pull request #2254:
URL: https://github.com/apache/hive/pull/2254


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657779)
Time Spent: 1h 50m  (was: 1h 40m)

> Added more constraints to the package.jdo file
> --
>
> Key: HIVE-23756
> URL: https://issues.apache.org/jira/browse/HIVE-23756
> Project: Hive
>  Issue Type: Bug
>Reporter: Ganesha Shreedhara
>Assignee: Steve Carlin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-23756.1.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Drop table command fails intermittently with the following exception.
> {code:java}
> Caused by: java.sql.BatchUpdateException: Cannot delete or update a parent 
> row: a foreign key constraint fails ("metastore"."COLUMNS_V2", CONSTRAINT 
> "COLUMNS_V2_FK1" FOREIGN KEY ("CD_ID") REFERENCES "CDS" ("CD_ID")) App > at 
> com.mysql.jdbc.PreparedStatement.executeBatchSerially(PreparedStatement.java:1815)at
>  com.mysql.jdbc.PreparedStatement.executeBatch(PreparedStatement.java:1277) 
> Appat 
> org.datanucleus.store.rdbms.ParamLoggingPreparedStatement.executeBatch(ParamLoggingPreparedStatement.java:372)
> at 
> org.datanucleus.store.rdbms.SQLController.processConnectionStatement(SQLController.java:628)
> at 
> org.datanucleus.store.rdbms.SQLController.getStatementForUpdate(SQLController.java:207)
> at 
> org.datanucleus.store.rdbms.SQLController.getStatementForUpdate(SQLController.java:179)
> at 
> org.datanucleus.store.rdbms.scostore.JoinMapStore.clearInternal(JoinMapStore.java:901)
> ... 36 more 
> Caused by: 
> com.mysql.jdbc.exceptions.jdbc4.MySQLIntegrityConstraintViolationException: 
> Cannot delete or update a parent row: a foreign key constraint fails 
> ("metastore"."COLUMNS_V2", CONSTRAINT "COLUMNS_V2_FK1" FOREIGN KEY ("CD_ID") 
> REFERENCES "CDS" ("CD_ID"))
> at sun.reflect.GeneratedConstructorAccessor121.newInstance(Unknown Source)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at com.mysql.jdbc.Util.handleNewInstance(Util.java:377)
> at com.mysql.jdbc.Util.getInstance(Util.java:360)
> at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:971)
> at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3887)
> at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3823){code}
> Although HIVE-19994 resolves this issue, the FK constraint name of the COLUMNS_V2 
> table specified in the package.jdo file is not the same as the FK constraint name 
> used while creating the COLUMNS_V2 table ([Ref|#L60]]). 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25324) Add option to disable PartitionManagementTask

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25324?focusedWorklogId=657785&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657785
 ]

ASF GitHub Bot logged work on HIVE-25324:
-

Author: ASF GitHub Bot
Created on: 30/Sep/21 00:24
Start Date: 30/Sep/21 00:24
Worklog Time Spent: 10m 
  Work Description: github-actions[bot] closed pull request #2470:
URL: https://github.com/apache/hive/pull/2470


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657785)
Time Spent: 1h  (was: 50m)

> Add option to disable PartitionManagementTask
> -
>
> Key: HIVE-25324
> URL: https://issues.apache.org/jira/browse/HIVE-25324
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Rajesh Balamohan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> When a large number of tables (e.g. 2000) and databases are present, 
> PartitionManagementTask scans all tables and partitions, causing pressure on 
> HMS.
> Currently there is no way to disable PartitionManagementTask. A roundabout 
> option is to provide a pattern via 
> "metastore.partition.management.database.pattern / 
> metastore.partition.management.table.pattern".
> It would be good to provide an option to disable it completely.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25574) Replace clob with varchar when storing creation metadata

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25574?focusedWorklogId=657747&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657747
 ]

ASF GitHub Bot logged work on HIVE-25574:
-

Author: ASF GitHub Bot
Created on: 30/Sep/21 00:21
Start Date: 30/Sep/21 00:21
Worklog Time Spent: 10m 
  Work Description: kasakrisz opened a new pull request #2683:
URL: https://github.com/apache/hive/pull/2683


   ### What changes were proposed in this pull request?
   
   
   
   ### Why are the changes needed?
   
   
   
   ### Does this PR introduce _any_ user-facing change?
   
   
   
   ### How was this patch tested?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657747)
Time Spent: 20m  (was: 10m)

> Replace clob with varchar when storing creation metadata
> 
>
> Key: HIVE-25574
> URL: https://issues.apache.org/jira/browse/HIVE-25574
> Project: Hive
>  Issue Type: Bug
>  Components: Standalone Metastore
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Follow up of HIVE-21940.
> {code}
>  table="MV_CREATION_METADATA" detachable="true">
> ...
>   
> 
>   
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25517) Follow up on HIVE-24951: External Table created with Uppercase name using CTAS does not produce result for select queries

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25517?focusedWorklogId=657732&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657732
 ]

ASF GitHub Bot logged work on HIVE-25517:
-

Author: ASF GitHub Bot
Created on: 30/Sep/21 00:20
Start Date: 30/Sep/21 00:20
Worklog Time Spent: 10m 
  Work Description: sourabh912 closed pull request #2638:
URL: https://github.com/apache/hive/pull/2638


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657732)
Time Spent: 2.5h  (was: 2h 20m)

> Follow up on HIVE-24951: External Table created with Uppercase name using 
> CTAS does not produce result for select queries
> -
>
> Key: HIVE-25517
> URL: https://issues.apache.org/jira/browse/HIVE-25517
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 4.0.0
>Reporter: Sourabh Goyal
>Assignee: Sourabh Goyal
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> In [PR|https://github.com/apache/hive/pull/2125] for HIVE-24951, the 
> recommendation was to use getDefaultTablePath() to set the location for an 
> external table. This Jira addresses that and makes getDefaultTablePath() more 
> generic.
>  
> cc - [~ngangam]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25343) Create or replace view should clean the old table properties

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25343?focusedWorklogId=657647&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657647
 ]

ASF GitHub Bot logged work on HIVE-25343:
-

Author: ASF GitHub Bot
Created on: 30/Sep/21 00:09
Start Date: 30/Sep/21 00:09
Worklog Time Spent: 10m 
  Work Description: github-actions[bot] closed pull request #2492:
URL: https://github.com/apache/hive/pull/2492


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657647)
Time Spent: 50m  (was: 40m)

> Create or replace view should clean the old table properties
> 
>
> Key: HIVE-25343
> URL: https://issues.apache.org/jira/browse/HIVE-25343
> Project: Hive
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 3.1.3, 3.2.0
>Reporter: Lantao Jin
>Assignee: Lantao Jin
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screen Shot 2021-07-19 at 15.36.29.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> In many cases, users use Spark and Hive together. When a user creates a view 
> via Spark, the table output columns are stored in table properties, such as 
>  !Screen Shot 2021-07-19 at 15.36.29.png|width=80%!
> After that, if the user runs the command "create or replace view" via Hive 
> to change the schema, the old table properties added by Spark are not cleaned 
> up by Hive. When users then read the view via Spark, the schema has not changed, 
> which is very confusing.
> How to reproduce:
> {code}
> spark-sql>create table lajin_table (a int, b int) stored as parquet;
> spark-sql>create view lajin_view as select * from lajin_table;
> spark-sql> desc lajin_view;
> a   int NULLNULL
> b   int NULLNULL
> hive>desc lajin_view;
> a   int 
> b   int
> hive>create or replace view lajin_view as select a, b, 3 as c from 
> lajin_table;
> hive>desc lajin_view;
> a   int 
> b   int 
> c   int
> spark-sql> desc lajin_view; -- not changed
> a   int NULLNULL
> b   int NULLNULL
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25570) Hive should send full URL path for authorization for the command insert overwrite location

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25570?focusedWorklogId=657704&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657704
 ]

ASF GitHub Bot logged work on HIVE-25570:
-

Author: ASF GitHub Bot
Created on: 30/Sep/21 00:17
Start Date: 30/Sep/21 00:17
Worklog Time Spent: 10m 
  Work Description: nrg4878 commented on pull request #2684:
URL: https://github.com/apache/hive/pull/2684#issuecomment-930533618


   Pending tests.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657704)
Time Spent: 0.5h  (was: 20m)

> Hive should send full URL path for authorization for the command insert 
> overwrite location
> --
>
> Key: HIVE-25570
> URL: https://issues.apache.org/jira/browse/HIVE-25570
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization, HiveServer2
>Affects Versions: 4.0.0
>Reporter: Sai Hemanth Gantasala
>Assignee: Sai Hemanth Gantasala
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> For authorization, Hive currently sends the path given as input by the user for 
> the command, e.g.
> {code:java}
> insert overwrite directory 
> '/user/warehouse/tablespace/external/something/new/test_new_tb1' select * 
> from test_tb1;
> {code}
> Hive is sending the path as 
> '/user/warehouse/tablespace/external/something/new/test_new_tb1' 
> Instead, Hive should send a fully qualified path for authorization, e.g.: 
> 'hdfs://hostname:port_name/user/warehouse/tablespace/external/something/new/test_new_tb1'



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25517) Follow up on HIVE-24951: External Table created with Uppercase name using CTAS does not produce result for select queries

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25517?focusedWorklogId=657686&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657686
 ]

ASF GitHub Bot logged work on HIVE-25517:
-

Author: ASF GitHub Bot
Created on: 30/Sep/21 00:15
Start Date: 30/Sep/21 00:15
Worklog Time Spent: 10m 
  Work Description: sourabh912 commented on pull request #2638:
URL: https://github.com/apache/hive/pull/2638#issuecomment-929643289


   Thank you @nrg4878 for the review. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657686)
Time Spent: 2h 20m  (was: 2h 10m)

> Follow up on HIVE-24951: External Table created with Uppercase name using 
> CTAS does not produce result for select queries
> -
>
> Key: HIVE-25517
> URL: https://issues.apache.org/jira/browse/HIVE-25517
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 4.0.0
>Reporter: Sourabh Goyal
>Assignee: Sourabh Goyal
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> In [PR|https://github.com/apache/hive/pull/2125] for HIVE-24951, the 
> recommendation was to use getDefaultTablePath() to set the location for an 
> external table. This Jira addresses that and makes getDefaultTablePath() more 
> generic.
>  
> cc - [~ngangam]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25335) Unreasonable setting reduce number, when join big size table(but small row count) and small size table

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25335?focusedWorklogId=657675=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657675
 ]

ASF GitHub Bot logged work on HIVE-25335:
-

Author: ASF GitHub Bot
Created on: 30/Sep/21 00:14
Start Date: 30/Sep/21 00:14
Worklog Time Spent: 10m 
  Work Description: zhengchenyu edited a comment on pull request #2490:
URL: https://github.com/apache/hive/pull/2490#issuecomment-927529430


   @jcamachor @zabetak  Can you help me review it, or give me some suggestion?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657675)
Time Spent: 50m  (was: 40m)

> Unreasonable setting reduce number, when join big size table(but small row 
> count) and small size table
> --
>
> Key: HIVE-25335
> URL: https://issues.apache.org/jira/browse/HIVE-25335
> Project: Hive
>  Issue Type: Improvement
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-25335.001.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> I found an application that is slow in our cluster because each reducer 
> processes a huge number of bytes, yet only two reducers are used. 
> While debugging I found the reason: in this SQL, one table is big in size 
> (about 30G) but has a small row count (about 3.5M), while the other table is 
> small in size (about 100M) but has a larger row count (about 3.6M). So 
> JoinStatsRule.process uses only the 100M figure to estimate the number of 
> reducers, but in fact about 30G of data has to be processed.
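> 
> An illustrative sketch (not Hive's actual JoinStatsRule code) of the effect 
> being described: if the reducer count is derived from a bytes-per-reducer 
> heuristic, sizing it from the small side alone badly under-provisions the 
> job, whereas sizing it from the larger of the two join sides does not.
> {code:java}
> public class ReducerEstimate {
>   // Rough bytes-per-reducer heuristic, similar in spirit to
>   // hive.exec.reducers.bytes.per.reducer (the value here is just an example).
>   static final long BYTES_PER_REDUCER = 256L * 1024 * 1024;
> 
>   static int estimateReducers(long inputBytes, int maxReducers) {
>     long reducers = (inputBytes + BYTES_PER_REDUCER - 1) / BYTES_PER_REDUCER;
>     return (int) Math.max(1, Math.min(maxReducers, reducers));
>   }
> 
>   public static void main(String[] args) {
>     long bigTableBytes = 30L * 1024 * 1024 * 1024; // ~30G, few rows
>     long smallTableBytes = 100L * 1024 * 1024;     // ~100M, many rows
>     int maxReducers = 1009;
> 
>     // Estimating from the small side only, as described in the report
>     System.out.println(estimateReducers(smallTableBytes, maxReducers)); // 1
>     // Estimating from the larger of the two join sides
>     System.out.println(
>         estimateReducers(Math.max(bigTableBytes, smallTableBytes), maxReducers)); // 120
>   }
> }
> {code}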



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25570) Hive should send full URL path for authorization for the command insert overwrite location

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25570?focusedWorklogId=657542=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657542
 ]

ASF GitHub Bot logged work on HIVE-25570:
-

Author: ASF GitHub Bot
Created on: 29/Sep/21 20:51
Start Date: 29/Sep/21 20:51
Worklog Time Spent: 10m 
  Work Description: nrg4878 commented on pull request #2684:
URL: https://github.com/apache/hive/pull/2684#issuecomment-930533618


   Pending tests.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657542)
Time Spent: 20m  (was: 10m)

> Hive should send full URL path for authorization for the command insert 
> overwrite location
> --
>
> Key: HIVE-25570
> URL: https://issues.apache.org/jira/browse/HIVE-25570
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization, HiveServer2
>Affects Versions: 4.0.0
>Reporter: Sai Hemanth Gantasala
>Assignee: Sai Hemanth Gantasala
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> For authorization, Hive is currently sending the path given as input from the 
> user for the command, for eg
> {code:java}
> insert overwrite directory 
> '/user/warehouse/tablespace/external/something/new/test_new_tb1' select * 
> from test_tb1;
> {code}
> Hive is sending the path as 
> '/user/warehouse/tablespace/external/something/new/test_new_tb1' 
> Instead, Hive should send a fully qualified path for authorization,  for e.g: 
> 'hdfs://hostname:port_name/user/warehouse/tablespace/external/something/new/test_new_tb1'



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25545) Add/Drop constraints events on table should create authorizable events in HS2

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25545?focusedWorklogId=657394=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657394
 ]

ASF GitHub Bot logged work on HIVE-25545:
-

Author: ASF GitHub Bot
Created on: 29/Sep/21 16:38
Start Date: 29/Sep/21 16:38
Worklog Time Spent: 10m 
  Work Description: saihemanth-cloudera commented on a change in pull 
request #2665:
URL: https://github.com/apache/hive/pull/2665#discussion_r718699524



##
File path: ql/src/test/results/clientnegative/groupby_join_pushdown.q.out
##
@@ -1358,249 +1358,15 @@ STAGE PLANS:
 
 PREHOOK: query: ALTER TABLE alltypesorc ADD CONSTRAINT pk_alltypesorc_1 
PRIMARY KEY (ctinyint) DISABLE RELY
 PREHOOK: type: ALTERTABLE_ADDCONSTRAINT
-POSTHOOK: query: ALTER TABLE alltypesorc ADD CONSTRAINT pk_alltypesorc_1 
PRIMARY KEY (ctinyint) DISABLE RELY
-POSTHOOK: type: ALTERTABLE_ADDCONSTRAINT
-PREHOOK: query: explain
-SELECT sum(f.cint), f.ctinyint
-FROM alltypesorc f JOIN alltypesorc g ON(f.ctinyint = g.ctinyint)
-GROUP BY f.ctinyint, g.ctinyint
-PREHOOK: type: QUERY
-PREHOOK: Input: default@alltypesorc
- A masked pattern was here 
-POSTHOOK: query: explain
-SELECT sum(f.cint), f.ctinyint
-FROM alltypesorc f JOIN alltypesorc g ON(f.ctinyint = g.ctinyint)
-GROUP BY f.ctinyint, g.ctinyint
-POSTHOOK: type: QUERY
-POSTHOOK: Input: default@alltypesorc
- A masked pattern was here 
-STAGE DEPENDENCIES:
-  Stage-1 is a root stage
-  Stage-0 depends on stages: Stage-1
-
-STAGE PLANS:
-  Stage: Stage-1
-Tez
- A masked pattern was here 
-  Edges:
-Reducer 2 <- Map 1 (SIMPLE_EDGE), Map 4 (SIMPLE_EDGE)
-Reducer 3 <- Reducer 2 (SIMPLE_EDGE)
- A masked pattern was here 
-  Vertices:
-Map 1 
-Map Operator Tree:
-TableScan
-  alias: f
-  Statistics: Num rows: 12288 Data size: 73392 Basic stats: 
COMPLETE Column stats: COMPLETE
-  Select Operator
-expressions: ctinyint (type: tinyint), cint (type: int)
-outputColumnNames: _col0, _col1
-Statistics: Num rows: 12288 Data size: 73392 Basic stats: 
COMPLETE Column stats: COMPLETE
-Reduce Output Operator
-  key expressions: _col0 (type: tinyint)
-  null sort order: z
-  sort order: +
-  Map-reduce partition columns: _col0 (type: tinyint)
-  Statistics: Num rows: 12288 Data size: 73392 Basic 
stats: COMPLETE Column stats: COMPLETE
-  value expressions: _col1 (type: int)
-Execution mode: vectorized, llap
-LLAP IO: all inputs
-Map 4 
-Map Operator Tree:
-TableScan
-  alias: g
-  Statistics: Num rows: 12288 Data size: 36696 Basic stats: 
COMPLETE Column stats: COMPLETE
-  Select Operator
-expressions: ctinyint (type: tinyint)
-outputColumnNames: _col0
-Statistics: Num rows: 12288 Data size: 36696 Basic stats: 
COMPLETE Column stats: COMPLETE
-Reduce Output Operator
-  key expressions: _col0 (type: tinyint)
-  null sort order: z
-  sort order: +
-  Map-reduce partition columns: _col0 (type: tinyint)
-  Statistics: Num rows: 12288 Data size: 36696 Basic 
stats: COMPLETE Column stats: COMPLETE
-Execution mode: vectorized, llap
-LLAP IO: all inputs
-Reducer 2 
-Execution mode: llap
-Reduce Operator Tree:
-  Merge Join Operator
-condition map:
- Inner Join 0 to 1
-keys:
-  0 _col0 (type: tinyint)
-  1 _col0 (type: tinyint)
-outputColumnNames: _col0, _col1
-Statistics: Num rows: 1161499 Data size: 9267080 Basic stats: 
COMPLETE Column stats: COMPLETE
-Group By Operator
-  aggregations: sum(_col1)
-  keys: _col0 (type: tinyint)
-  minReductionHashAggr: 0.99
-  mode: hash
-  outputColumnNames: _col0, _col1
-  Statistics: Num rows: 131 Data size: 1572 Basic stats: 
COMPLETE Column stats: COMPLETE
-  Reduce Output Operator
-key expressions: _col0 (type: tinyint)
-null sort order: z
-sort order: +
-Map-reduce partition columns: _col0 (type: tinyint)
-Statistics: Num rows: 131 Data size: 1572 Basic stats: 
COMPLETE Column stats: COMPLETE
-   

[jira] [Work logged] (HIVE-25545) Add/Drop constraints events on table should create authorizable events in HS2

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25545?focusedWorklogId=657392=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657392
 ]

ASF GitHub Bot logged work on HIVE-25545:
-

Author: ASF GitHub Bot
Created on: 29/Sep/21 16:37
Start Date: 29/Sep/21 16:37
Worklog Time Spent: 10m 
  Work Description: saihemanth-cloudera commented on a change in pull 
request #2665:
URL: https://github.com/apache/hive/pull/2665#discussion_r718698639



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/ddl/table/constraint/drop/AlterTableDropConstraintAnalyzer.java
##
@@ -47,11 +51,18 @@ protected void analyzeCommand(TableName tableName, 
Map partition
 String constraintName = unescapeIdentifier(command.getChild(0).getText());
 
 AlterTableDropConstraintDesc desc = new 
AlterTableDropConstraintDesc(tableName, null, constraintName);
-rootTasks.add(TaskFactory.get(new DDLWork(getInputs(), getOutputs(), 
desc)));
 
 Table table = getTable(tableName);
+WriteEntity.WriteType writeType = null;
 if (AcidUtils.isTransactionalTable(table)) {
   setAcidDdlDesc(desc);
+  writeType = WriteType.DDL_EXCLUSIVE;
+} else {
+  writeType = 
WriteEntity.determineAlterTableWriteType(AlterTableType.DROP_CONSTRAINT);
 }
+inputs.add(new ReadEntity(table));

Review comment:
   AlterTableDropConstraintDesc doesn't extend 
AbstractAlterTableWithConstraintsDesc the way AlterTableAddConstraintDesc does, 
so it cannot convert the descriptor object to add inputs and outputs. The 
reason AlterTableDropConstraintDesc cannot extend 
AbstractAlterTableWithConstraintsDesc is that the DROP command only supplies 
the constraint name (ALTER TABLE foo DROP CONSTRAINT foo_constraint), not the 
constraint object (as ADD CONSTRAINT does), and 
AbstractAlterTableWithConstraintsDesc takes a constraint as an argument. 




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657392)
Time Spent: 50m  (was: 40m)

> Add/Drop constraints events on table should create authorizable events in HS2
> -
>
> Key: HIVE-25545
> URL: https://issues.apache.org/jira/browse/HIVE-25545
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Sai Hemanth Gantasala
>Assignee: Sai Hemanth Gantasala
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Alter table foo_tbl ADD constraint c1_unique UNIQUE(id1) disable novalidate;
> Alter table foo_tbl DROP constraint c1_unique;
> The above statements are currently not being authorized in Ranger/Sentry. 
> These should be authorized by creating authorizable events in Hive.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25545) Add/Drop constraints events on table should create authorizable events in HS2

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25545?focusedWorklogId=657378=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657378
 ]

ASF GitHub Bot logged work on HIVE-25545:
-

Author: ASF GitHub Bot
Created on: 29/Sep/21 16:28
Start Date: 29/Sep/21 16:28
Worklog Time Spent: 10m 
  Work Description: saihemanth-cloudera commented on a change in pull 
request #2665:
URL: https://github.com/apache/hive/pull/2665#discussion_r718691636



##
File path: ql/src/test/queries/clientnegative/groupby_join_pushdown.q
##
@@ -22,45 +22,45 @@ FROM src f JOIN src g ON(f.key = g.key)
 GROUP BY f.key, g.key;
 
 EXPLAIN
-SELECT  f.ctinyint, g.ctinyint, SUM(f.cbigint)  
+SELECT  f.ctinyint, g.ctinyint, SUM(f.cbigint)

Review comment:
   Ack




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657378)
Time Spent: 40m  (was: 0.5h)

> Add/Drop constraints events on table should create authorizable events in HS2
> -
>
> Key: HIVE-25545
> URL: https://issues.apache.org/jira/browse/HIVE-25545
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Sai Hemanth Gantasala
>Assignee: Sai Hemanth Gantasala
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Alter table foo_tbl ADD constraint c1_unique UNIQUE(id1) disable novalidate;
> Alter table foo_tbl DROP constraint c1_unique;
> The above statements are currently not being authorized in Ranger/Sentry. 
> These should be authorized by creating authorizable events in Hive.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25570) Hive should send full URL path for authorization for the command insert overwrite location

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25570?focusedWorklogId=657365=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657365
 ]

ASF GitHub Bot logged work on HIVE-25570:
-

Author: ASF GitHub Bot
Created on: 29/Sep/21 16:18
Start Date: 29/Sep/21 16:18
Worklog Time Spent: 10m 
  Work Description: saihemanth-cloudera opened a new pull request #2684:
URL: https://github.com/apache/hive/pull/2684


   …and 'insert overwrite location'
   
   
   
   ### What changes were proposed in this pull request?
   Made changes such that hive will now be sending full url path for 
authorization for the command insert overwrite location
   
   
   
   ### Why are the changes needed?
   Otherwise, hive is only sending the URI specified in the input command which 
isn't consistent with other commands like Create external table with directory 
(where we send full URL for authorization).
   
   
   
   ### Does this PR introduce _any_ user-facing change?
   No.
   
   
   
   ### How was this patch tested?
   Local machine, remote cluster
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657365)
Remaining Estimate: 0h
Time Spent: 10m

> Hive should send full URL path for authorization for the command insert 
> overwrite location
> --
>
> Key: HIVE-25570
> URL: https://issues.apache.org/jira/browse/HIVE-25570
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization, HiveServer2
>Affects Versions: 4.0.0
>Reporter: Sai Hemanth Gantasala
>Assignee: Sai Hemanth Gantasala
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For authorization, Hive is currently sending the path given as input from the 
> user for the command, for eg
> {code:java}
> insert overwrite directory 
> '/user/warehouse/tablespace/external/something/new/test_new_tb1' select * 
> from test_tb1;
> {code}
> Hive is sending the path as 
> '/user/warehouse/tablespace/external/something/new/test_new_tb1' 
> Instead, Hive should send a fully qualified path for authorization,  for e.g: 
> 'hdfs://hostname:port_name/user/warehouse/tablespace/external/something/new/test_new_tb1'



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-25570) Hive should send full URL path for authorization for the command insert overwrite location

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-25570:
--
Labels: pull-request-available  (was: )

> Hive should send full URL path for authorization for the command insert 
> overwrite location
> --
>
> Key: HIVE-25570
> URL: https://issues.apache.org/jira/browse/HIVE-25570
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization, HiveServer2
>Affects Versions: 4.0.0
>Reporter: Sai Hemanth Gantasala
>Assignee: Sai Hemanth Gantasala
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For authorization, Hive is currently sending the path given as input from the 
> user for the command, for eg
> {code:java}
> insert overwrite directory 
> '/user/warehouse/tablespace/external/something/new/test_new_tb1' select * 
> from test_tb1;
> {code}
> Hive is sending the path as 
> '/user/warehouse/tablespace/external/something/new/test_new_tb1' 
> Instead, Hive should send a fully qualified path for authorization,  for e.g: 
> 'hdfs://hostname:port_name/user/warehouse/tablespace/external/something/new/test_new_tb1'



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-25578) Tests are failing because operators can't be closed

2021-09-29 Thread Karen Coppage (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17422176#comment-17422176
 ] 

Karen Coppage commented on HIVE-25578:
--

Okay I reverted... but still not sure how to move forward, as you mentioned 
that TestMiniLlapCliDriver#*[newline] passed for you locally

> Tests are failing because operators can't be closed
> ---
>
> Key: HIVE-25578
> URL: https://issues.apache.org/jira/browse/HIVE-25578
> Project: Hive
>  Issue Type: Bug
>Reporter: Karen Coppage
>Priority: Critical
>
> The following qtests are failing consistently 
> ([example|http://ci.hive.apache.org/blue/organizations/jenkins/hive-precommit/detail/PR-2667/6/tests/])
>  on the master branch:
>  * TestMiniLlapCliDriver 
> ([http://ci.hive.apache.org/job/hive-flaky-check/420/])
>  ** newline
>  ** groupby_bigdata
>  ** input20
>  ** input33
>  ** rcfile_bigdata
>  ** remote_script
>  * TestContribCliDriver 
> ([http://ci.hive.apache.org/job/hive-flaky-check/421/])
>  ** serde_typedbytes5
> The failure reason for all seems to be that operators can't be closed. Not 
> 100% sure that TestContribCliDriver#serde_typedbytes5 failure is related to 
> the others – the issue seems to be the same, the error message is a bit 
> different.
> I'm about to disable these as they are blocking all work.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25544) Remove Dependency of hive-meta-common From hive-common

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25544?focusedWorklogId=657262=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657262
 ]

ASF GitHub Bot logged work on HIVE-25544:
-

Author: ASF GitHub Bot
Created on: 29/Sep/21 14:11
Start Date: 29/Sep/21 14:11
Worklog Time Spent: 10m 
  Work Description: belugabehr commented on pull request #2662:
URL: https://github.com/apache/hive/pull/2662#issuecomment-930215521


   @kgyrtkirk Thanks for the feedback. 
   
   FYI.  I am not ignoring your previous comments about de-duplication.  I just 
am trying to get this to run first and then I can work backwards and optimize 
the solution. :)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657262)
Time Spent: 1h 40m  (was: 1.5h)

> Remove Dependency of hive-meta-common From hive-common
> --
>
> Key: HIVE-25544
> URL: https://issues.apache.org/jira/browse/HIVE-25544
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> These two things should not be linked and it means any HS2 client libraries 
> pulling in hive-common library also has to pull in a ton of metastore code as 
> well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25544) Remove Dependency of hive-meta-common From hive-common

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25544?focusedWorklogId=657257=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657257
 ]

ASF GitHub Bot logged work on HIVE-25544:
-

Author: ASF GitHub Bot
Created on: 29/Sep/21 14:09
Start Date: 29/Sep/21 14:09
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk commented on pull request #2662:
URL: https://github.com/apache/hive/pull/2662#issuecomment-930213072


   ```
   [2021-09-27T17:39:00.975Z] timeout reached before the port went into state 
"inuse"
   script returned exit code 1
   ```
   the above is fixed now - but we have some failing tests


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657257)
Time Spent: 1.5h  (was: 1h 20m)

> Remove Dependency of hive-meta-common From hive-common
> --
>
> Key: HIVE-25544
> URL: https://issues.apache.org/jira/browse/HIVE-25544
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> These two things should not be linked and it means any HS2 client libraries 
> pulling in hive-common library also has to pull in a ton of metastore code as 
> well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-25578) Tests are failing because operators can't be closed

2021-09-29 Thread Zoltan Haindrich (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17422147#comment-17422147
 ] 

Zoltan Haindrich commented on HIVE-25578:
-

please investigate why these tests are failing instead of just hunting for 
tests to disable...

...or is fixing/investigating things not your problem?

> Tests are failing because operators can't be closed
> ---
>
> Key: HIVE-25578
> URL: https://issues.apache.org/jira/browse/HIVE-25578
> Project: Hive
>  Issue Type: Bug
>Reporter: Karen Coppage
>Priority: Critical
>
> The following qtests are failing consistently 
> ([example|http://ci.hive.apache.org/blue/organizations/jenkins/hive-precommit/detail/PR-2667/6/tests/])
>  on the master branch:
>  * TestMiniLlapCliDriver 
> ([http://ci.hive.apache.org/job/hive-flaky-check/420/])
>  ** newline
>  ** groupby_bigdata
>  ** input20
>  ** input33
>  ** rcfile_bigdata
>  ** remote_script
>  * TestContribCliDriver 
> ([http://ci.hive.apache.org/job/hive-flaky-check/421/])
>  ** serde_typedbytes5
> The failure reason for all seems to be that operators can't be closed. Not 
> 100% sure that TestContribCliDriver#serde_typedbytes5 failure is related to 
> the others – the issue seems to be the same, the error message is a bit 
> different.
> I'm about to disable these as they are blocking all work.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-25578) Tests are failing because operators can't be closed

2021-09-29 Thread Zoltan Haindrich (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17422144#comment-17422144
 ] 

Zoltan Haindrich commented on HIVE-25578:
-

[~klcopp] please don't disable these; this should be fixed - you are about to 
disable 2 qtest suites

> Tests are failing because operators can't be closed
> ---
>
> Key: HIVE-25578
> URL: https://issues.apache.org/jira/browse/HIVE-25578
> Project: Hive
>  Issue Type: Bug
>Reporter: Karen Coppage
>Priority: Critical
>
> The following qtests are failing consistently 
> ([example|http://ci.hive.apache.org/blue/organizations/jenkins/hive-precommit/detail/PR-2667/6/tests/])
>  on the master branch:
>  * TestMiniLlapCliDriver 
> ([http://ci.hive.apache.org/job/hive-flaky-check/420/])
>  ** newline
>  ** groupby_bigdata
>  ** input20
>  ** input33
>  ** rcfile_bigdata
>  ** remote_script
>  * TestContribCliDriver 
> ([http://ci.hive.apache.org/job/hive-flaky-check/421/])
>  ** serde_typedbytes5
> The failure reason for all seems to be that operators can't be closed. Not 
> 100% sure that TestContribCliDriver#serde_typedbytes5 failure is related to 
> the others – the issue seems to be the same, the error message is a bit 
> different.
> I'm about to disable these as they are blocking all work.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25544) Remove Dependency of hive-meta-common From hive-common

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25544?focusedWorklogId=657229=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657229
 ]

ASF GitHub Bot logged work on HIVE-25544:
-

Author: ASF GitHub Bot
Created on: 29/Sep/21 13:38
Start Date: 29/Sep/21 13:38
Worklog Time Spent: 10m 
  Work Description: belugabehr closed pull request #2662:
URL: https://github.com/apache/hive/pull/2662


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657229)
Time Spent: 1h 10m  (was: 1h)

> Remove Dependency of hive-meta-common From hive-common
> --
>
> Key: HIVE-25544
> URL: https://issues.apache.org/jira/browse/HIVE-25544
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> These two things should not be linked and it means any HS2 client libraries 
> pulling in hive-common library also has to pull in a ton of metastore code as 
> well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25544) Remove Dependency of hive-meta-common From hive-common

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25544?focusedWorklogId=657230=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657230
 ]

ASF GitHub Bot logged work on HIVE-25544:
-

Author: ASF GitHub Bot
Created on: 29/Sep/21 13:38
Start Date: 29/Sep/21 13:38
Worklog Time Spent: 10m 
  Work Description: belugabehr commented on pull request #2662:
URL: https://github.com/apache/hive/pull/2662#issuecomment-930184269


   ```
   [2021-09-27T17:32:45.684Z] Digest: 
sha256:90c1b38152f7260014bc4d3d10209cebbd5e25346cea07b00efd8e36028e
   [2021-09-27T17:32:46.263Z] Status: Downloaded newer image for postgres:latest
   [2021-09-27T17:33:54.106Z] 
206c95085bbcf9b468aadcca8986b306fb2d1c16927ecd296cda321a1fb7584a
   [2021-09-27T17:33:54.106Z] waiting for postgres to be available...
   [2021-09-27T17:39:00.975Z] timeout reached before the port went into state 
"inuse"
   script returned exit code 1
   ```
   
   Hmmm


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657230)
Time Spent: 1h 20m  (was: 1h 10m)

> Remove Dependency of hive-meta-common From hive-common
> --
>
> Key: HIVE-25544
> URL: https://issues.apache.org/jira/browse/HIVE-25544
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> These two things should not be linked and it means any HS2 client libraries 
> pulling in hive-common library also has to pull in a ton of metastore code as 
> well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-25576) Raise exception instead of silent change for new DateTimeformatter

2021-09-29 Thread Ashish Sharma (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Sharma updated HIVE-25576:
-
Summary: Raise exception instead of silent change for new DateTimeformatter 
 (was: Raise exception instead of silent change for new DateFormatter)

> Raise exception instead of silent change for new DateTimeformatter
> --
>
> Key: HIVE-25576
> URL: https://issues.apache.org/jira/browse/HIVE-25576
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0
>Reporter: Ashish Sharma
>Assignee: Ashish Sharma
>Priority: Major
>
> *History*
> *Hive 1.2* - 
> VM time zone set to Asia/Bangkok
> *Query* - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
> UTC','-MM-dd HH:mm:ss z'));
> *Result* - 1800-01-01 07:00:00
> *Implementation details* - 
> SimpleDateFormat formatter = new SimpleDateFormat(pattern);
> Long unixtime = formatter.parse(textval).getTime() / 1000;
> Date date = new Date(unixtime * 1000L);
https://docs.oracle.com/javase/8/docs/api/java/util/Date.html - the official 
documentation notes that "Unfortunately, the API for these functions was not 
amenable to internationalization" and that the corresponding methods in Date 
are deprecated. Because of this, the old code produces wrong results.
> *Master branch* - 
> set hive.local.time.zone=Asia/Bangkok;
> *Query* - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
> UTC','-MM-dd HH:mm:ss z'));
> *Result* - 1800-01-01 06:42:04
> *Implementation details* - 
> DateTimeFormatter dtformatter = new DateTimeFormatterBuilder()
> .parseCaseInsensitive()
> .appendPattern(pattern)
> .toFormatter();
> ZonedDateTime zonedDateTime = 
> ZonedDateTime.parse(textval,dtformatter).withZoneSameInstant(ZoneId.of(timezone));
> Long dttime = zonedDateTime.toInstant().getEpochSecond();
> *Problem* - 
> *SimpleDateFormat* has now been replaced with *DateTimeFormatter*, which gives 
> the correct result but is not backward compatible. This causes problems when 
> migrating to the new version, because data written with Hive 1.x or 2.x is not 
> compatible with *DateTimeFormatter*.
> *Solution*
> Introduce a config "hive.legacy.timeParserPolicy" with the following values -
> EXCEPTION - compare the values from both *SimpleDateFormat* & 
> *DateTimeFormatter* and raise an exception if they don't match 
> LEGACY - use *SimpleDateFormat* 
> CORRECTED - use *DateTimeFormatter*
> This helps Hive users in the following ways - 
> 1. Migrate to the new version using *LEGACY*
> 2. Find values that are not compatible with the new version - *EXCEPTION*
> 3. Use the latest date APIs - *CORRECTED*
> Note: Apache Spark faces the same issue 
> https://issues.apache.org/jira/browse/SPARK-30668
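> 
> A minimal sketch, under the assumption that the policy would be applied at 
> parse time, of how the three modes could behave; the enum and method names 
> are purely illustrative and the pattern "yyyy-MM-dd HH:mm:ss z" is taken 
> from the example queries above.
> {code:java}
> import java.text.SimpleDateFormat;
> import java.time.ZoneId;
> import java.time.ZonedDateTime;
> import java.time.format.DateTimeFormatter;
> import java.time.format.DateTimeFormatterBuilder;
> import java.util.TimeZone;
> 
> public class TimeParserPolicyDemo {
>   enum Policy { LEGACY, CORRECTED, EXCEPTION }
> 
>   static long toEpochSecond(String text, String pattern, String zone,
>       Policy policy) throws Exception {
>     // Legacy behaviour: java.util.Date based parsing (Hive 1.x / 2.x)
>     SimpleDateFormat legacyFmt = new SimpleDateFormat(pattern);
>     legacyFmt.setTimeZone(TimeZone.getTimeZone(zone));
>     long legacy = legacyFmt.parse(text).getTime() / 1000;
> 
>     // Corrected behaviour: java.time based parsing (current master)
>     DateTimeFormatter fmt = new DateTimeFormatterBuilder()
>         .parseCaseInsensitive().appendPattern(pattern).toFormatter();
>     long corrected = ZonedDateTime.parse(text, fmt)
>         .withZoneSameInstant(ZoneId.of(zone)).toInstant().getEpochSecond();
> 
>     switch (policy) {
>       case LEGACY:    return legacy;
>       case CORRECTED: return corrected;
>       default: // EXCEPTION
>         if (legacy != corrected) {
>           throw new IllegalStateException("Legacy and corrected parsers "
>               + "disagree: " + legacy + " vs " + corrected);
>         }
>         return corrected;
>     }
>   }
> 
>   public static void main(String[] args) throws Exception {
>     // Value from the example above; with EXCEPTION the call fails whenever
>     // the two parsers disagree for the given input.
>     System.out.println(toEpochSecond("1800-01-01 00:00:00 UTC",
>         "yyyy-MM-dd HH:mm:ss z", "Asia/Bangkok", Policy.EXCEPTION));
>   }
> }
> {code}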



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-25577) unix_timestamp() is ignoring the time zone value

2021-09-29 Thread Ashish Sharma (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Sharma reassigned HIVE-25577:



> unix_timestamp() is ignoring the time zone value
> 
>
> Key: HIVE-25577
> URL: https://issues.apache.org/jira/browse/HIVE-25577
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ashish Sharma
>Assignee: Ashish Sharma
>Priority: Minor
>
> set hive.local.time.zone=Asia/Bangkok;
> Query - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('2000-01-07 00:00:00 
> GMT','-MM-dd HH:mm:ss z'));
> Result - 2000-01-07 00:00:00 ICT
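> 
> A minimal java.time sketch (not Hive's implementation) of the expected 
> behaviour: when the input literal carries an explicit zone, the parse should 
> honour it and only then convert to the session zone, so the round trip above 
> should show 07:00 ICT rather than 00:00 ICT. The pattern 
> "yyyy-MM-dd HH:mm:ss z" is assumed from the example query.
> {code:java}
> import java.time.ZoneId;
> import java.time.ZonedDateTime;
> import java.time.format.DateTimeFormatter;
> 
> public class ZoneAwareParse {
>   public static void main(String[] args) {
>     DateTimeFormatter fmt =
>         DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss z");
> 
>     // The literal carries an explicit zone (GMT), which the parse honours...
>     ZonedDateTime parsed = ZonedDateTime.parse("2000-01-07 00:00:00 GMT", fmt);
> 
>     // ...so rendering the same instant in the session zone (Asia/Bangkok)
>     // prints 2000-01-07 07:00:00 ICT, not 00:00:00 ICT.
>     System.out.println(
>         parsed.withZoneSameInstant(ZoneId.of("Asia/Bangkok")).format(fmt));
>   }
> }
> {code}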



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-25576) Raise exception instead of silent change for new DateFormatter

2021-09-29 Thread Ashish Sharma (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Sharma updated HIVE-25576:
-
Description: 
*History*

*Hive 1.2* - 

VM time zone set to Asia/Bangkok

*Query* - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

*Result* - 1800-01-01 07:00:00

*Implementation details* - 

SimpleDateFormat formatter = new SimpleDateFormat(pattern);
Long unixtime = formatter.parse(textval).getTime() / 1000;
Date date = new Date(unixtime * 1000L);

https://docs.oracle.com/javase/8/docs/api/java/util/Date.html . In official 
documentation they have mention that "Unfortunately, the API for these 
functions was not amenable to internationalization and The corresponding 
methods in Date are deprecated" . Due to that this is producing wrong result

*Master branch* - 

set hive.local.time.zone=Asia/Bangkok;

*Query* - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

*Result* - 1800-01-01 06:42:04

implementation details - 

DateTimeFormatter dtformatter = new DateTimeFormatterBuilder()
.parseCaseInsensitive()
.appendPattern(pattern)
.toFormatter();

ZonedDateTime zonedDateTime = 
ZonedDateTime.parse(textval,dtformatter).withZoneSameInstant(ZoneId.of(timezone));
Long dttime = zonedDateTime.toInstant().getEpochSecond();


*Problem*- 

Now *SimpleDateFormat* has been replaced with *DateTimeFormatter* which is 
giving the correct result but it is not backword compatible. Which is causing 
issue at time for migration to new version. Because the older data written is 
using Hive 1.x or 2.x is not compatible with *DateTimeFormatter*.

*Solution*

Introduce an config "hive.legacy.timeParserPolicy" with following values -
EXCEPTION - compare value of both *SimpleDateFormat* & *DateTimeFormatter* 
raise exception if doesn't match 
LEGACY - use *SimpleDateFormat* 
CORRECTED  - use *DateTimeFormatter*

This will help hive user in following manner - 
1. Migrate to new version using *LEGACY*
2. Find values which are not compatible with new version - *EXCEPTION*
3. Use latest date apis - *CORRECTED*

Note: apache spark also face the same issue 
https://issues.apache.org/jira/browse/SPARK-30668



  was:
*History*

*Hive 1.2* - 

VM time zone set to Asia/Bangkok

Query - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

Result - 1800-01-01 07:00:00

implementation details - 

SimpleDateFormat formatter = new SimpleDateFormat(pattern);
Long unixtime = formatter.parse(textval).getTime() / 1000;
Date date = new Date(unixtime * 1000L);

https://docs.oracle.com/javase/8/docs/api/java/util/Date.html . In official 
documentation they have mention that "Unfortunately, the API for these 
functions was not amenable to internationalization and The corresponding 
methods in Date are deprecated" . Due to that this is producing wrong result

*Master branch* - 

set hive.local.time.zone=Asia/Bangkok;

*Query *- SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

*Result *- 1800-01-01 06:42:04

implementation details - 

DateTimeFormatter dtformatter = new DateTimeFormatterBuilder()
.parseCaseInsensitive()
.appendPattern(pattern)
.toFormatter();

ZonedDateTime zonedDateTime = 
ZonedDateTime.parse(textval,dtformatter).withZoneSameInstant(ZoneId.of(timezone));
Long dttime = zonedDateTime.toInstant().getEpochSecond();


*Problem*- 

Now *SimpleDateFormat* has been replaced with *DateTimeFormatter* which is 
giving the correct result but it is not backword compatible. Which is causing 
issue at time for migration to new version. Because the older data written is 
using Hive 1.x or 2.x is not compatible with *DateTimeFormatter*.

*Solution*

Introduce an config "hive.legacy.timeParserPolicy" with following values -
EXCEPTION - compare value of both *SimpleDateFormat* & *DateTimeFormatter* 
raise exception if doesn't match 
LEGACY - use *SimpleDateFormat* 
CORRECTED  - use *DateTimeFormatter*

This will help hive user in following manner - 
1. Migrate to new version using *LEGACY*
2. Find values which are not compatible with new version - *EXCEPTION*
3. Use latest date apis - *CORRECTED*

Note: apache spark also face the same issue 
https://issues.apache.org/jira/browse/SPARK-30668




> Raise exception instead of silent change for new DateFormatter
> --
>
> Key: HIVE-25576
> URL: https://issues.apache.org/jira/browse/HIVE-25576
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0
>Reporter: Ashish Sharma
>Assignee: Ashish Sharma
>Priority: Major
>
> *History*
> *Hive 1.2* - 
> VM time zone set to Asia/Bangkok
> *Query* - SELECT 

[jira] [Updated] (HIVE-25576) Raise exception instead of silent change for new DateFormatter

2021-09-29 Thread Ashish Sharma (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Sharma updated HIVE-25576:
-
Description: 
*History*

*Hive 1.2* - 

VM time zone set to Asia/Bangkok

*Query* - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

*Result* - 1800-01-01 07:00:00

*Implementation details* - 

SimpleDateFormat formatter = new SimpleDateFormat(pattern);
Long unixtime = formatter.parse(textval).getTime() / 1000;
Date date = new Date(unixtime * 1000L);

https://docs.oracle.com/javase/8/docs/api/java/util/Date.html . In official 
documentation they have mention that "Unfortunately, the API for these 
functions was not amenable to internationalization and The corresponding 
methods in Date are deprecated" . Due to that this is producing wrong result

*Master branch* - 

set hive.local.time.zone=Asia/Bangkok;

*Query* - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

*Result* - 1800-01-01 06:42:04

*Implementation details* - 

DateTimeFormatter dtformatter = new DateTimeFormatterBuilder()
.parseCaseInsensitive()
.appendPattern(pattern)
.toFormatter();

ZonedDateTime zonedDateTime = 
ZonedDateTime.parse(textval,dtformatter).withZoneSameInstant(ZoneId.of(timezone));
Long dttime = zonedDateTime.toInstant().getEpochSecond();


*Problem*- 

Now *SimpleDateFormat* has been replaced with *DateTimeFormatter* which is 
giving the correct result but it is not backword compatible. Which is causing 
issue at time for migration to new version. Because the older data written is 
using Hive 1.x or 2.x is not compatible with *DateTimeFormatter*.

*Solution*

Introduce an config "hive.legacy.timeParserPolicy" with following values -
EXCEPTION - compare value of both *SimpleDateFormat* & *DateTimeFormatter* 
raise exception if doesn't match 
LEGACY - use *SimpleDateFormat* 
CORRECTED  - use *DateTimeFormatter*

This will help hive user in following manner - 
1. Migrate to new version using *LEGACY*
2. Find values which are not compatible with new version - *EXCEPTION*
3. Use latest date apis - *CORRECTED*

Note: apache spark also face the same issue 
https://issues.apache.org/jira/browse/SPARK-30668



  was:
*History*

*Hive 1.2* - 

VM time zone set to Asia/Bangkok

*Query* - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

*Result* - 1800-01-01 07:00:00

*Implementation details* - 

SimpleDateFormat formatter = new SimpleDateFormat(pattern);
Long unixtime = formatter.parse(textval).getTime() / 1000;
Date date = new Date(unixtime * 1000L);

https://docs.oracle.com/javase/8/docs/api/java/util/Date.html . In official 
documentation they have mention that "Unfortunately, the API for these 
functions was not amenable to internationalization and The corresponding 
methods in Date are deprecated" . Due to that this is producing wrong result

*Master branch* - 

set hive.local.time.zone=Asia/Bangkok;

*Query* - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

*Result* - 1800-01-01 06:42:04

implementation details - 

DateTimeFormatter dtformatter = new DateTimeFormatterBuilder()
.parseCaseInsensitive()
.appendPattern(pattern)
.toFormatter();

ZonedDateTime zonedDateTime = 
ZonedDateTime.parse(textval,dtformatter).withZoneSameInstant(ZoneId.of(timezone));
Long dttime = zonedDateTime.toInstant().getEpochSecond();


*Problem*- 

Now *SimpleDateFormat* has been replaced with *DateTimeFormatter* which is 
giving the correct result but it is not backword compatible. Which is causing 
issue at time for migration to new version. Because the older data written is 
using Hive 1.x or 2.x is not compatible with *DateTimeFormatter*.

*Solution*

Introduce an config "hive.legacy.timeParserPolicy" with following values -
EXCEPTION - compare value of both *SimpleDateFormat* & *DateTimeFormatter* 
raise exception if doesn't match 
LEGACY - use *SimpleDateFormat* 
CORRECTED  - use *DateTimeFormatter*

This will help hive user in following manner - 
1. Migrate to new version using *LEGACY*
2. Find values which are not compatible with new version - *EXCEPTION*
3. Use latest date apis - *CORRECTED*

Note: apache spark also face the same issue 
https://issues.apache.org/jira/browse/SPARK-30668




> Raise exception instead of silent change for new DateFormatter
> --
>
> Key: HIVE-25576
> URL: https://issues.apache.org/jira/browse/HIVE-25576
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0
>Reporter: Ashish Sharma
>Assignee: Ashish Sharma
>Priority: Major
>
> *History*
> *Hive 1.2* - 
> VM time zone set to Asia/Bangkok
> *Query* - SELECT 

[jira] [Updated] (HIVE-25576) Raise exception instead of silent change for new DateFormatter

2021-09-29 Thread Ashish Sharma (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Sharma updated HIVE-25576:
-
Description: 
*History*

*Hive 1.2* - 

VM time zone set to Asia/Bangkok

Query - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

Result - 1800-01-01 07:00:00

implementation details - 

SimpleDateFormat formatter = new SimpleDateFormat(pattern);
Long unixtime = formatter.parse(textval).getTime() / 1000;
Date date = new Date(unixtime * 1000L);

https://docs.oracle.com/javase/8/docs/api/java/util/Date.html . In official 
documentation they have mention that "Unfortunately, the API for these 
functions was not amenable to internationalization and The corresponding 
methods in Date are deprecated" . Due to that this is producing wrong result

*Master branch* - 

set hive.local.time.zone=Asia/Bangkok;

*Query *- SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

*Result *- 1800-01-01 06:42:04

implementation details - 

DateTimeFormatter dtformatter = new DateTimeFormatterBuilder()
.parseCaseInsensitive()
.appendPattern(pattern)
.toFormatter();

ZonedDateTime zonedDateTime = 
ZonedDateTime.parse(textval,dtformatter).withZoneSameInstant(ZoneId.of(timezone));
Long dttime = zonedDateTime.toInstant().getEpochSecond();


*Problem*- 

Now *SimpleDateFormat* has been replaced with *DateTimeFormatter* which is 
giving the correct result but it is not backword compatible. Which is causing 
issue at time for migration to new version. Because the older data written is 
using Hive 1.x or 2.x is not compatible with *DateTimeFormatter*.

*Solution*

Introduce an config "hive.legacy.timeParserPolicy" with following values -
EXCEPTION - compare value of both *SimpleDateFormat* & *DateTimeFormatter* 
raise exception if doesn't match 
LEGACY - use *SimpleDateFormat* 
CORRECTED  - use *DateTimeFormatter*

This will help hive user in following manner - 
1. Migrate to new version using *LEGACY*
2. Find values which are not compatible with new version - *EXCEPTION*
3. Use latest date apis - *CORRECTED*

Note: apache spark also face the same issue 
https://issues.apache.org/jira/browse/SPARK-30668



  was:
*History*

*Hive 1.2* - 

VM time zone set to Asia/Bangkok

Query - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

Result - 1800-01-01 07:00:00

implementation details - 

SimpleDateFormat formatter = new SimpleDateFormat(pattern);
Long unixtime = formatter.parse(textval).getTime() / 1000;
Date date = new Date(unixtime * 1000L);

https://docs.oracle.com/javase/8/docs/api/java/util/Date.html . In official 
documentation they have mention that "Unfortunately, the API for these 
functions was not amenable to internationalization and The corresponding 
methods in Date are deprecated" . Due to that this is producing wrong result

*Master branch* - 

set hive.local.time.zone=Asia/Bangkok;

*Query *- SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

*Result *- 1800-01-01 06:42:04

implementation details - 

DateTimeFormatter dtformatter = new DateTimeFormatterBuilder()
.parseCaseInsensitive()
.appendPattern(pattern)
.toFormatter();

ZonedDateTime zonedDateTime = 
ZonedDateTime.parse(textval,dtformatter).withZoneSameInstant(ZoneId.of(timezone));
Long dttime = zonedDateTime.toInstant().getEpochSecond();


*Problem*- 

Now *SimpleDateFormat* has been replaced with *DateTimeFormatter* which is 
giving the correct result but it is not backword compatible. Which is causing 
issue at time for migration to new version. Because the older data written is 
using Hive 1.x or 2.x is not compatible with *DateTimeFormatter*.

*Solution*

Introduce an config "hive.legacy.timeParserPolicy" with following values -
EXCEPTION - compare value of both *SimpleDateFormat* & *DateTimeFormatter* 
raise exception if doesn't match 
LEGACY - use *SimpleDateFormat* 
CORRECTED  - use *DateTimeFormatter*

This will help hive user in following manner - 
1. Migrate to new version using *LEGACY*
2. Find values which are not compatible with new version - *EXCEPTION*
3. Use latest date apis - *CORRECTED*




> Raise exception instead of silent change for new DateFormatter
> --
>
> Key: HIVE-25576
> URL: https://issues.apache.org/jira/browse/HIVE-25576
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0
>Reporter: Ashish Sharma
>Assignee: Ashish Sharma
>Priority: Major
>
> *History*
> *Hive 1.2* - 
> VM time zone set to Asia/Bangkok
> Query - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
> UTC','-MM-dd HH:mm:ss z'));
> Result - 1800-01-01 07:00:00
> implementation 

[jira] [Updated] (HIVE-25576) Raise exception instead of silent change for new DateFormatter

2021-09-29 Thread Ashish Sharma (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Sharma updated HIVE-25576:
-
Description: 
*History*

*Hive 1.2* - 

VM time zone set to Asia/Bangkok

Query - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

Result - 1800-01-01 07:00:00

implementation details - 

SimpleDateFormat formatter = new SimpleDateFormat(pattern);
Long unixtime = formatter.parse(textval).getTime() / 1000;
Date date = new Date(unixtime * 1000L);

https://docs.oracle.com/javase/8/docs/api/java/util/Date.html . In official 
documentation they have mention that "Unfortunately, the API for these 
functions was not amenable to internationalization and The corresponding 
methods in Date are deprecated" . Due to that this is producing wrong result

*Master branch* - 

set hive.local.time.zone=Asia/Bangkok;

*Query *- SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

*Result *- 1800-01-01 06:42:04

implementation details - 

DateTimeFormatter dtformatter = new DateTimeFormatterBuilder()
.parseCaseInsensitive()
.appendPattern(pattern)
.toFormatter();

ZonedDateTime zonedDateTime = 
ZonedDateTime.parse(textval,dtformatter).withZoneSameInstant(ZoneId.of(timezone));
Long dttime = zonedDateTime.toInstant().getEpochSecond();


*Problem*- 

Now *SimpleDateFormat* has been replaced with *DateTimeFormatter* which is 
giving the correct result but it is not backword compatible. Which is causing 
issue at time for migration to new version. Because the older data written is 
using Hive 1.x or 2.x is not compatible with *DateTimeFormatter*.

*Solution*

Introduce an config "hive.legacy.timeParserPolicy" with following values -
EXCEPTION - compare value of both *SimpleDateFormat* & *DateTimeFormatter* 
raise exception if doesn't match 
LEGACY - use *SimpleDateFormat* 
CORRECTED  - use *DateTimeFormatter*

This will help hive user in following manner - 
1. Migrate to new version using *LEGACY*
2. Find values which are not compatible with new version - *EXCEPTION*
3. Use latest date apis - *CORRECTED*



  was:
*History *

*Hive 1.2* - 

VM time zone set to Asia/Bangkok

Query - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

Result - 1800-01-01 07:00:00

implementation details - 

SimpleDateFormat formatter = new SimpleDateFormat(pattern);
Long unixtime = formatter.parse(textval).getTime() / 1000;
Date date = new Date(unixtime * 1000L);

https://docs.oracle.com/javase/8/docs/api/java/util/Date.html . In official 
documentation they have mention that "Unfortunately, the API for these 
functions was not amenable to internationalization and The corresponding 
methods in Date are deprecated" . Due to that this is producing wrong result

*Master branch* - 

set hive.local.time.zone=Asia/Bangkok;

*Query *- SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

*Result *- 1800-01-01 06:42:04

implementation details - 

DateTimeFormatter dtformatter = new DateTimeFormatterBuilder()
.parseCaseInsensitive()
.appendPattern(pattern)
.toFormatter();

ZonedDateTime zonedDateTime = 
ZonedDateTime.parse(textval,dtformatter).withZoneSameInstant(ZoneId.of(timezone));
Long dttime = zonedDateTime.toInstant().getEpochSecond();


*Problem*- 

Now *SimpleDateFormat* has been replaced with *DateTimeFormatter* which is 
giving the correct result but it is not backword compatible. Which is causing 
issue at time for migration to new version. Because the older data written is 
using Hive 1.x or 2.x is not compatible with *DateTimeFormatter*.

*Solution*

Introduce an config "hive.legacy.timeParserPolicy" with following values -
EXCEPTION - compare value of both *SimpleDateFormat* & *DateTimeFormatter* 
raise exception if doesn't match 
LEGACY - use *SimpleDateFormat* 
CORRECTED  - use *DateTimeFormatter*

This will help hive user in following manner - 
1. Migrate to new version using *LEGACY*
2. Find values which are not compatible with new version - *EXCEPTION*
3. Use latest date apis - *CORRECTED*




> Raise exception instead of silent change for new DateFormatter
> --
>
> Key: HIVE-25576
> URL: https://issues.apache.org/jira/browse/HIVE-25576
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0
>Reporter: Ashish Sharma
>Assignee: Ashish Sharma
>Priority: Major
>
> *History*
> *Hive 1.2* - 
> VM time zone set to Asia/Bangkok
> Query - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
> UTC','-MM-dd HH:mm:ss z'));
> Result - 1800-01-01 07:00:00
> implementation details - 
> SimpleDateFormat formatter = new SimpleDateFormat(pattern);
> Long unixtime = 

[jira] [Updated] (HIVE-25576) Raise exception instead of silent change for new DateFormatter

2021-09-29 Thread Ashish Sharma (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Sharma updated HIVE-25576:
-
Description: 
*History *

*Hive 1.2* - 

VM time zone set to Asia/Bangkok

Query - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

Result - 1800-01-01 07:00:00

implementation details - 

SimpleDateFormat formatter = new SimpleDateFormat(pattern);
Long unixtime = formatter.parse(textval).getTime() / 1000;
Date date = new Date(unixtime * 1000L);

https://docs.oracle.com/javase/8/docs/api/java/util/Date.html . In official 
documentation they have mention that "Unfortunately, the API for these 
functions was not amenable to internationalization and The corresponding 
methods in Date are deprecated" . Due to that this is producing wrong result

*Master branch* - 

set hive.local.time.zone=Asia/Bangkok;

*Query *- SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

*Result *- 1800-01-01 06:42:04

implementation details - 

DateTimeFormatter dtformatter = new DateTimeFormatterBuilder()
.parseCaseInsensitive()
.appendPattern(pattern)
.toFormatter();

ZonedDateTime zonedDateTime = 
ZonedDateTime.parse(textval,dtformatter).withZoneSameInstant(ZoneId.of(timezone));
Long dttime = zonedDateTime.toInstant().getEpochSecond();


*Problem*- 

Now *SimpleDateFormat* has been replaced with *DateTimeFormatter* which is 
giving the correct result but it is not backword compatible. Which is causing 
issue at time for migration to new version. Because the older data written is 
using Hive 1.x or 2.x is not compatible with *DateTimeFormatter*.

*Solution*

Introduce an config "hive.legacy.timeParserPolicy" with following values -
EXCEPTION - compare value of both *SimpleDateFormat* & *DateTimeFormatter* 
raise exception if doesn't match 
LEGACY - use *SimpleDateFormat* 
CORRECTED  - use *DateTimeFormatter*

This will help hive user in following manner - 
1. Migrate to new version using *LEGACY*
2. Find values which are not compatible with new version - *EXCEPTION *
3. Use latest date apis - "CORRECTED"



  was:
*History *

*Hive 1.2* - 

VM time zone set to Asia/Bangkok

Query - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

Result - 1800-01-01 07:00:00

implementation details - 

SimpleDateFormat formatter = new SimpleDateFormat(pattern);
Long unixtime = formatter.parse(textval).getTime() / 1000;
Date date = new Date(unixtime * 1000L);

https://docs.oracle.com/javase/8/docs/api/java/util/Date.html . In official 
documentation they have mention that "Unfortunately, the API for these 
functions was not amenable to internationalization and The corresponding 
methods in Date are deprecated" . Due to that this is producing wrong result

*Master branch* - 

set hive.local.time.zone=Asia/Bangkok;

*Query *- SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

*Result *- 1800-01-01 06:42:04

implementation details - 

DateTimeFormatter dtformatter = new DateTimeFormatterBuilder()
.parseCaseInsensitive()
.appendPattern(pattern)
.toFormatter();

ZonedDateTime zonedDateTime = 
ZonedDateTime.parse(textval,dtformatter).withZoneSameInstant(ZoneId.of(timezone));
Long dttime = zonedDateTime.toInstant().getEpochSecond();


*Problem* -

*SimpleDateFormat* has now been replaced with *DateTimeFormatter*, which gives the 
correct result but is not backward compatible. This causes problems when migrating 
to the new version, because older data written with Hive 1.x or 2.x is not 
compatible with *DateTimeFormatter*.

*Solution*

Introduce a config "hive.legacy.timeParserPolicy" with the following values -
EXCEPTION - compare the values from both *SimpleDateFormat* & *DateTimeFormatter* 
and raise an exception if they don't match 
LEGACY - use *SimpleDateFormat* 
CORRECTED - use *DateTimeFormatter*

This helps Hive users in the following ways - 
1. Migrate to the new version using *LEGACY*
2. Find values which are not compatible with the new version using *EXCEPTION*
3. Use the latest date APIs with *CORRECTED*




> Raise exception instead of silent change for new DateFormatter
> --
>
> Key: HIVE-25576
> URL: https://issues.apache.org/jira/browse/HIVE-25576
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0
>Reporter: Ashish Sharma
>Assignee: Ashish Sharma
>Priority: Major
>
> *History *
> *Hive 1.2* - 
> VM time zone set to Asia/Bangkok
> Query - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
> UTC','-MM-dd HH:mm:ss z'));
> Result - 1800-01-01 07:00:00
> implementation details - 
> SimpleDateFormat formatter = new SimpleDateFormat(pattern);
> Long unixtime = 

[jira] [Updated] (HIVE-25576) Raise exception instead of silent change for new DateFormatter

2021-09-29 Thread Ashish Sharma (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Sharma updated HIVE-25576:
-
Description: 
*History *

*Hive 1.2* - 

VM time zone set to Asia/Bangkok

Query - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

Result - 1800-01-01 07:00:00

implementation details - 

SimpleDateFormat formatter = new SimpleDateFormat(pattern);
Long unixtime = formatter.parse(textval).getTime() / 1000;
Date date = new Date(unixtime * 1000L);

https://docs.oracle.com/javase/8/docs/api/java/util/Date.html . The official 
documentation notes that "Unfortunately, the API for these functions was not 
amenable to internationalization" and that the corresponding methods in Date are 
deprecated. Because of this, the legacy implementation produces wrong results.

*Master branch* - 

set hive.local.time.zone=Asia/Bangkok;

*Query *- SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

*Result *- 1800-01-01 06:42:04

implementation details - 

DateTimeFormatter dtformatter = new DateTimeFormatterBuilder()
.parseCaseInsensitive()
.appendPattern(pattern)
.toFormatter();

ZonedDateTime zonedDateTime = 
ZonedDateTime.parse(textval,dtformatter).withZoneSameInstant(ZoneId.of(timezone));
Long dttime = zonedDateTime.toInstant().getEpochSecond();


*Problem* -

*SimpleDateFormat* has now been replaced with *DateTimeFormatter*, which gives the 
correct result but is not backward compatible. This causes problems when migrating 
to the new version, because older data written with Hive 1.x or 2.x is not 
compatible with *DateTimeFormatter*.

*Solution*

Introduce a config "hive.legacy.timeParserPolicy" with the following values -
EXCEPTION - compare the values from both *SimpleDateFormat* & *DateTimeFormatter* 
and raise an exception if they don't match 
LEGACY - use *SimpleDateFormat* 
CORRECTED - use *DateTimeFormatter*

This helps Hive users in the following ways - 
1. Migrate to the new version using *LEGACY*
2. Find values which are not compatible with the new version using *EXCEPTION*
3. Use the latest date APIs with *CORRECTED*



  was:
*History *

*Hive 1.2* - 

VM time zone set to Asia/Bangkok

Query - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

Result - 1800-01-01 07:00:00

implementation details - 

SimpleDateFormat formatter = new SimpleDateFormat(pattern);
Long unixtime = formatter.parse(textval).getTime() / 1000;
Date date = new Date(unixtime * 1000L);

https://docs.oracle.com/javase/8/docs/api/java/util/Date.html . The official 
documentation notes that "Unfortunately, the API for these functions was not 
amenable to internationalization" and that the corresponding methods in Date are 
deprecated. Because of this, the legacy implementation produces wrong results.

*Master branch* - 

set hive.local.time.zone=Asia/Bangkok;

*Query *- SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

*Result *- 1800-01-01 06:42:04

implementation details - 

DateTimeFormatter dtformatter = new DateTimeFormatterBuilder()
.parseCaseInsensitive()
.appendPattern(pattern)
.toFormatter();

ZonedDateTime zonedDateTime = 
ZonedDateTime.parse(textval,dtformatter).withZoneSameInstant(ZoneId.of(timezone));
Long dttime = zonedDateTime.toInstant().getEpochSecond();


*Problem* -

*SimpleDateFormat* has now been replaced with *DateTimeFormatter*, which gives the 
correct result but is not backward compatible. This causes problems when migrating 
to the new version, because older data written with Hive 1.x or 2.x is not 
compatible with *DateTimeFormatter*.

*Solution*

Introduce a config "hive.legacy.timeParserPolicy" with the following values -
EXCEPTION - compare the values from both *SimpleDateFormat* & *DateTimeFormatter* 
and raise an exception if they don't match 
LEGACY - use *SimpleDateFormat* 
CORRECTED - use *DateTimeFormatter*

This helps Hive users in the following ways - 
1. Migrate to the new version using *LEGACY*
2. Find values which are not compatible with the new version using *EXCEPTION*
3. Use the latest date APIs with *CORRECTED*




> Raise exception instead of silent change for new DateFormatter
> --
>
> Key: HIVE-25576
> URL: https://issues.apache.org/jira/browse/HIVE-25576
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0
>Reporter: Ashish Sharma
>Assignee: Ashish Sharma
>Priority: Major
>
> *History *
> *Hive 1.2* - 
> VM time zone set to Asia/Bangkok
> Query - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
> UTC','-MM-dd HH:mm:ss z'));
> Result - 1800-01-01 07:00:00
> implementation details - 
> SimpleDateFormat formatter = new SimpleDateFormat(pattern);
> Long unixtime = 

[jira] [Updated] (HIVE-25576) Raise exception instead of silent change for new DateFormatter

2021-09-29 Thread Ashish Sharma (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Sharma updated HIVE-25576:
-
Description: 
*History *

*Hive 1.2* - 

VM time zone set to Asia/Bangkok

Query - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

Result - 1800-01-01 07:00:00

implementation details - 

SimpleDateFormat formatter = new SimpleDateFormat(pattern);
Long unixtime = formatter.parse(textval).getTime() / 1000;
Date date = new Date(unixtime * 1000L);

https://docs.oracle.com/javase/8/docs/api/java/util/Date.html . The official 
documentation notes that "Unfortunately, the API for these functions was not 
amenable to internationalization" and that the corresponding methods in Date are 
deprecated. Because of this, the legacy implementation produces wrong results.

*Master branch* - 

set hive.local.time.zone=Asia/Bangkok;

*Query *- SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

*Result *- 1800-01-01 06:42:04

implementation details - 

DateTimeFormatter dtformatter = new DateTimeFormatterBuilder()
.parseCaseInsensitive()
.appendPattern(pattern)
.toFormatter();

ZonedDateTime zonedDateTime = 
ZonedDateTime.parse(textval,dtformatter).withZoneSameInstant(ZoneId.of(timezone));
Long dttime = zonedDateTime.toInstant().getEpochSecond();


*Problem* -

*SimpleDateFormat* has now been replaced with *DateTimeFormatter*, which gives the 
correct result but is not backward compatible. This causes problems when migrating 
to the new version, because older data written with Hive 1.x or 2.x is not 
compatible with *DateTimeFormatter*.

*Solution*

Introduce a config "hive.legacy.timeParserPolicy" with the following values -
EXCEPTION - compare the values from both *SimpleDateFormat* & *DateTimeFormatter* 
and raise an exception if they don't match 
LEGACY - use *SimpleDateFormat* 
CORRECTED - use *DateTimeFormatter*

This helps Hive users in the following ways - 
1. Migrate to the new version using *LEGACY*
2. Find values which are not compatible with the new version using *EXCEPTION*
3. Use the latest date APIs with *CORRECTED*



  was:
*Hive 1.2* - 

VM time zone set to Asia/Bangkok

Query - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

Result - 1800-01-01 07:00:00

*Master branch* - 

set hive.local.time.zone=Asia/Bangkok;

Query - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
UTC','-MM-dd HH:mm:ss z'));

Result - 1800-01-01 06:42:04





> Raise exception instead of silent change for new DateFormatter
> --
>
> Key: HIVE-25576
> URL: https://issues.apache.org/jira/browse/HIVE-25576
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0
>Reporter: Ashish Sharma
>Assignee: Ashish Sharma
>Priority: Major
>
> *History *
> *Hive 1.2* - 
> VM time zone set to Asia/Bangkok
> Query - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
> UTC','-MM-dd HH:mm:ss z'));
> Result - 1800-01-01 07:00:00
> implementation details - 
> SimpleDateFormat formatter = new SimpleDateFormat(pattern);
> Long unixtime = formatter.parse(textval).getTime() / 1000;
> Date date = new Date(unixtime * 1000L);
> https://docs.oracle.com/javase/8/docs/api/java/util/Date.html . In official 
> documentation they have mention that "Unfortunately, the API for these 
> functions was not amenable to internationalization and The corresponding 
> methods in Date are deprecated" . Due to that this is producing wrong result
> *Master branch* - 
> set hive.local.time.zone=Asia/Bangkok;
> *Query *- SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
> UTC','-MM-dd HH:mm:ss z'));
> *Result *- 1800-01-01 06:42:04
> implementation details - 
> DateTimeFormatter dtformatter = new DateTimeFormatterBuilder()
> .parseCaseInsensitive()
> .appendPattern(pattern)
> .toFormatter();
> ZonedDateTime zonedDateTime = 
> ZonedDateTime.parse(textval,dtformatter).withZoneSameInstant(ZoneId.of(timezone));
> Long dttime = zonedDateTime.toInstant().getEpochSecond();
> *Problem*- 
> Now *SimpleDateFormat *has been replaced with *DateTimeFormatter  *which is 
> giving the correct result but it is not backword compatible. Which is causing 
> issue at time for migration to new version. Because the older data written is 
> using Hive 1.x or 2.x is not compatible with *DateTimeFormatter*.
> *Solution*
> Introduce an config "hive.legacy.timeParserPolicy" with following values -
> EXCEPTION - compare value of both *SimpleDateFormat* & *DateTimeFormatter* 
> raise exception if doesn't match 
> LEGACY - use *SimpleDateFormat* 
> CORRECTED  - use *DateTimeFormatter*
> This will help hive user in following manner - 
> 1. Migrate to new version 

[jira] [Commented] (HIVE-25571) Fix Metastore script for Oracle Database

2021-09-29 Thread Naveen Gangam (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17422099#comment-17422099
 ] 

Naveen Gangam commented on HIVE-25571:
--

Nice catch [~ayushtkn]. Change looks good to me. +1

> Fix Metastore script for Oracle Database
> 
>
> Key: HIVE-25571
> URL: https://issues.apache.org/jira/browse/HIVE-25571
> Project: Hive
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>
> Error:1
> {noformat}
> 354/359      CREATE UNIQUE INDEX DBPRIVILEGEINDEX ON DC_PRIVS 
> (AUTHORIZER,NAME,PRINCIPAL_NAME,PRINCIPAL_TYPE,DC_PRIV,GRANTOR,GRANTOR_TYPE);
> Error: ORA-00955: name is already used by an existing object 
> (state=42000,code=955)
> Aborting command set because "force" is false and command failed: "CREATE 
> UNIQUE INDEX DBPRIVILEGEINDEX ON DC_PRIVS 
> (AUTHORIZER,NAME,PRINCIPAL_NAME,PRINCIPAL_TYPE,DC_PRIV,GRANTOR,GRANTOR_TYPE);"
> [ERROR] 2021-09-29 09:18:59.075 [main] MetastoreSchemaTool - Schema 
> initialization FAILED! Metastore state would be inconsistent!
> Schema initialization FAILED! Metastore state would be inconsistent!{noformat}
> Error:2
> {noformat}
> Error: ORA-00900: invalid SQL statement (state=42000,code=900)
> Aborting command set because "force" is false and command failed: "===
> -- HIVE-24396
> -- Create DataCo{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-25576) Raise exception instead of silent change for new DateFormatter

2021-09-29 Thread Ashish Sharma (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Sharma reassigned HIVE-25576:



> Raise exception instead of silent change for new DateFormatter
> --
>
> Key: HIVE-25576
> URL: https://issues.apache.org/jira/browse/HIVE-25576
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0
>Reporter: Ashish Sharma
>Assignee: Ashish Sharma
>Priority: Major
>
> *Hive 1.2* - 
> VM time zone set to Asia/Bangkok
> Query - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
> UTC','-MM-dd HH:mm:ss z'));
> Result - 1800-01-01 07:00:00
> *Master branch* - 
> set hive.local.time.zone=Asia/Bangkok;
> Query - SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('1800-01-01 00:00:00 
> UTC','-MM-dd HH:mm:ss z'));
> Result - 1800-01-01 06:42:04



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-25443) Arrow SerDe Cannot serialize/deserialize complex data types When there are more than 1024 values

2021-09-29 Thread Syed Shameerur Rahman (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17422080#comment-17422080
 ] 

Syed Shameerur Rahman commented on HIVE-25443:
--

[~kgyrtkirk] Could you please review the changes?
Thanks

> Arrow SerDe Cannot serialize/deserialize complex data types When there are 
> more than 1024 values
> 
>
> Key: HIVE-25443
> URL: https://issues.apache.org/jira/browse/HIVE-25443
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Affects Versions: 3.1.0, 3.0.0, 3.1.1, 3.1.2
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Complex data types like MAP and STRUCT cannot be serialized/deserialized using 
> Arrow SerDe when there are more than 1024 values. This happens because the 
> ColumnVector is always initialized with a size of 1024.
> Issue #1 : 
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/io/arrow/ArrowColumnarBatchSerDe.java#L213
> Issue #2 : 
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/io/arrow/ArrowColumnarBatchSerDe.java#L215
> Sample unit test to reproduce the case in TestArrowColumnarBatchSerDe :
> {code:java}
> @Test
> public void testListBooleanWithMoreThan1024Values() throws SerDeException {
>   String[][] schema = {
>       {"boolean_list", "array<boolean>"},
>   };
>
>   Object[][] rows = new Object[1025][1];
>   for (int i = 0; i < 1025; i++) {
>     rows[i][0] = new BooleanWritable(true);
>   }
>
>   initAndSerializeAndDeserialize(schema, toList(rows));
> }
>
> {code}
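
As a side note, a standalone illustration of the failure mode (not Hive's actual 
ColumnVector code): a child buffer allocated for 1024 entries overflows once a 
complex column carries more values than that, unless it is grown before writing.

{code:java}
public class GrowChildVectorSketch {
  public static void main(String[] args) {
    final int DEFAULT_SIZE = 1024;                    // the fixed initial capacity
    boolean[] child = new boolean[DEFAULT_SIZE];
    int childCount = 0;
    for (int i = 0; i < 1025; i++) {                  // 1025 values, as in the test above
      if (childCount == child.length) {
        // the "grow" step that is missing when the vector stays at its initial size
        child = java.util.Arrays.copyOf(child, child.length * 2);
      }
      child[childCount++] = true;
    }
    System.out.println("wrote " + childCount + " values into a buffer that started at " + DEFAULT_SIZE);
  }
}
{code}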



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-25575) Add support for JWT authentication

2021-09-29 Thread Shubham Chaurasia (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shubham Chaurasia updated HIVE-25575:
-
Description: 
It would be good to support JWT auth mechanism in hive. In order to implement 
it, we would need the following - 

On HS2 side -
1. Accept JWT in Authorization: Bearer header.
2. Fetch JWKS from a public endpoint to verify JWT signature, to start with we 
can fetch on HS2 start up.
3. Verify JWT Signature.

On JDBC Client side - 
1. Hive jdbc client should be able to accept jwt in JDBC url. (will add more 
details)
2. Client should also be able to pick up JWT from an env var if it's defined.
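
A minimal sketch of the HS2-side verification flow described above, assuming the 
Nimbus JOSE+JWT library and RSA-signed tokens; the class and method names are 
illustrative and not necessarily what the Hive implementation will use:

{code:java}
import com.nimbusds.jose.crypto.RSASSAVerifier;
import com.nimbusds.jose.jwk.JWKSet;
import com.nimbusds.jose.jwk.RSAKey;
import com.nimbusds.jwt.SignedJWT;
import java.net.URL;

public class JwtVerifierSketch {
  private final JWKSet jwks;

  JwtVerifierSketch(String jwksUrl) throws Exception {
    // Fetch the JWKS from the public endpoint once, e.g. at HS2 start up.
    this.jwks = JWKSet.load(new URL(jwksUrl));
  }

  // token is the value taken from the "Authorization: Bearer <token>" header.
  boolean verify(String token) throws Exception {
    SignedJWT jwt = SignedJWT.parse(token);
    String keyId = jwt.getHeader().getKeyID();
    RSAKey key = (RSAKey) jwks.getKeyByKeyId(keyId);
    return key != null && jwt.verify(new RSASSAVerifier(key));
  }
}
{code}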


  was:
It would be good to support JWT auth mechanism in hive. In order to implement 
it, we would need the following - 

On HS2 side -
1. Accept JWT in Authorization: Bearer header.
2. Fetch JWKS from a public endpoint to verify JWT signature, to start with we 
can fetch on HS2 start up.
3. Verify JWT Signature.

On JDBC Client side - 
1. Hive jdbc client should be able to accept jwt in JDBC url. (will add more 
details)
2. Client should also be able to pick up JWT from a env var if it's defined.



> Add support for JWT authentication
> --
>
> Key: HIVE-25575
> URL: https://issues.apache.org/jira/browse/HIVE-25575
> Project: Hive
>  Issue Type: New Feature
>  Components: HiveServer2, JDBC
>Affects Versions: 4.0.0
>Reporter: Shubham Chaurasia
>Assignee: Shubham Chaurasia
>Priority: Major
>
> It would be good to support JWT auth mechanism in hive. In order to implement 
> it, we would need the following - 
> On HS2 side -
> 1. Accept JWT in Authorization: Bearer header.
> 2. Fetch JWKS from a public endpoint to verify JWT signature, to start with 
> we can fetch on HS2 start up.
> 3. Verify JWT Signature.
> On JDBC Client side - 
> 1. Hive jdbc client should be able to accept jwt in JDBC url. (will add more 
> details)
> 2. Client should also be able to pick up JWT from an env var if it's defined.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-25575) Add support for JWT authentication

2021-09-29 Thread Shubham Chaurasia (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shubham Chaurasia reassigned HIVE-25575:



> Add support for JWT authentication
> --
>
> Key: HIVE-25575
> URL: https://issues.apache.org/jira/browse/HIVE-25575
> Project: Hive
>  Issue Type: New Feature
>  Components: HiveServer2, JDBC
>Affects Versions: 4.0.0
>Reporter: Shubham Chaurasia
>Assignee: Shubham Chaurasia
>Priority: Major
>
> It would be good to support JWT auth mechanism in hive. In order to implement 
> it, we would need the following - 
> On HS2 side -
> 1. Accept JWT in Authorization: Bearer header.
> 2. Fetch JWKS from a public endpoint to verify JWT signature, to start with 
> we can fetch on HS2 start up.
> 3. Verify JWT Signature.
> On JDBC Client side - 
> 1. Hive jdbc client should be able to accept jwt in JDBC url. (will add more 
> details)
> 2. Client should also be able to pick up JWT from a env var if it's defined.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25572) Exception while querying materialized view invalidation info

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25572?focusedWorklogId=657134=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657134
 ]

ASF GitHub Bot logged work on HIVE-25572:
-

Author: ASF GitHub Bot
Created on: 29/Sep/21 11:06
Start Date: 29/Sep/21 11:06
Worklog Time Spent: 10m 
  Work Description: pvary commented on a change in pull request #2682:
URL: https://github.com/apache/hive/pull/2682#discussion_r718399421



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2554,10 +2554,10 @@ public Materialization 
getMaterializationInvalidationInfo(
   queryCompletedCompactions.append(" AND (\"CC_HIGHEST_WRITE_ID\" > " + 
tblValidWriteIdList.getHighWatermark());
   queryUpdateDelete.append(tblValidWriteIdList.getInvalidWriteIds().length 
== 0 ? ") " :
   " OR \"CTC_WRITEID\" IN(" + StringUtils.join(",",
-  
Arrays.asList(ArrayUtils.toObject(tblValidWriteIdList.getInvalidWriteIds( + 
") ");
+  
Arrays.asList(ArrayUtils.toObject(tblValidWriteIdList.getInvalidWriteIds( + 
") ) ");

Review comment:
   Thanks




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657134)
Time Spent: 1h  (was: 50m)

> Exception while querying materialized view invalidation info
> 
>
> Key: HIVE-25572
> URL: https://issues.apache.org/jira/browse/HIVE-25572
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> {code:java}
> 2021-09-29T00:33:02,971  WARN [main] txn.TxnHandler: Unable to retrieve 
> materialization invalidation information: completed transaction components.
> java.sql.SQLSyntaxErrorException: Syntax error: Encountered "" at line 
> 1, column 234.
>   at 
> org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedPreparedStatement.(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedPreparedStatement42.(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.jdbc.Driver42.newEmbedPreparedStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> com.zaxxer.hikari.pool.ProxyConnection.prepareStatement(ProxyConnection.java:311)
>  ~[HikariCP-2.6.1.jar:?]
>   at 
> com.zaxxer.hikari.pool.HikariProxyConnection.prepareStatement(HikariProxyConnection.java)
>  ~[HikariCP-2.6.1.jar:?]
>   at 
> org.apache.hadoop.hive.metastore.tools.SQLGenerator.prepareStmtWithParameters(SQLGenerator.java:169)
>  ~[classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.executeBoolean(TxnHandler.java:2598)
>  [classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.getMaterializationInvalidationInfo(TxnHandler.java:2575)
>  [classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testGetMaterializationInvalidationInfo(TestTxnHandler.java:1910)
>  [test-classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testGetMaterializationInvalidationInfo(TestTxnHandler.java:1875)
>  [test-classes/:?]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_112]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_112]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  

[jira] [Work logged] (HIVE-25517) Follow up on HIVE-24951: External Table created with Uppercase name using CTAS does not produce result for select queries

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25517?focusedWorklogId=657129=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657129
 ]

ASF GitHub Bot logged work on HIVE-25517:
-

Author: ASF GitHub Bot
Created on: 29/Sep/21 10:59
Start Date: 29/Sep/21 10:59
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk commented on pull request #2638:
URL: https://github.com/apache/hive/pull/2638#issuecomment-930069988


   @nrg4878  I don't see a clear testrun for this PR - where is it?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657129)
Time Spent: 2h 10m  (was: 2h)

> Follow up on HIVE-24951: External Table created with Uppercase name using 
> CTAS does not produce result for select queries
> -
>
> Key: HIVE-25517
> URL: https://issues.apache.org/jira/browse/HIVE-25517
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 4.0.0
>Reporter: Sourabh Goyal
>Assignee: Sourabh Goyal
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> In [PR|https://github.com/apache/hive/pull/2125] for HIVE-24951, the 
> recommendation was to use getDefaultTablePath() to set the location for an 
> external table. This Jira addresses that and makes getDefaultTablePath() more 
> generic.
>  
> cc - [~ngangam]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25572) Exception while querying materialized view invalidation info

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25572?focusedWorklogId=657123=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657123
 ]

ASF GitHub Bot logged work on HIVE-25572:
-

Author: ASF GitHub Bot
Created on: 29/Sep/21 10:53
Start Date: 29/Sep/21 10:53
Worklog Time Spent: 10m 
  Work Description: kasakrisz commented on a change in pull request #2682:
URL: https://github.com/apache/hive/pull/2682#discussion_r718389730



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2554,10 +2554,10 @@ public Materialization 
getMaterializationInvalidationInfo(
   queryCompletedCompactions.append(" AND (\"CC_HIGHEST_WRITE_ID\" > " + 
tblValidWriteIdList.getHighWatermark());
   queryUpdateDelete.append(tblValidWriteIdList.getInvalidWriteIds().length 
== 0 ? ") " :
   " OR \"CTC_WRITEID\" IN(" + StringUtils.join(",",
-  
Arrays.asList(ArrayUtils.toObject(tblValidWriteIdList.getInvalidWriteIds( + 
") ");
+  
Arrays.asList(ArrayUtils.toObject(tblValidWriteIdList.getInvalidWriteIds( + 
") ) ");

Review comment:
   One more closing parenthesis `)` was added:
   ```
   before: ") "
   after: ") ) "   
   ```
   
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657123)
Time Spent: 50m  (was: 40m)

> Exception while querying materialized view invalidation info
> 
>
> Key: HIVE-25572
> URL: https://issues.apache.org/jira/browse/HIVE-25572
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code:java}
> 2021-09-29T00:33:02,971  WARN [main] txn.TxnHandler: Unable to retrieve 
> materialization invalidation information: completed transaction components.
> java.sql.SQLSyntaxErrorException: Syntax error: Encountered "" at line 
> 1, column 234.
>   at 
> org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedPreparedStatement.(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedPreparedStatement42.(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.jdbc.Driver42.newEmbedPreparedStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> com.zaxxer.hikari.pool.ProxyConnection.prepareStatement(ProxyConnection.java:311)
>  ~[HikariCP-2.6.1.jar:?]
>   at 
> com.zaxxer.hikari.pool.HikariProxyConnection.prepareStatement(HikariProxyConnection.java)
>  ~[HikariCP-2.6.1.jar:?]
>   at 
> org.apache.hadoop.hive.metastore.tools.SQLGenerator.prepareStmtWithParameters(SQLGenerator.java:169)
>  ~[classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.executeBoolean(TxnHandler.java:2598)
>  [classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.getMaterializationInvalidationInfo(TxnHandler.java:2575)
>  [classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testGetMaterializationInvalidationInfo(TestTxnHandler.java:1910)
>  [test-classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testGetMaterializationInvalidationInfo(TestTxnHandler.java:1875)
>  [test-classes/:?]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_112]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_112]
>

[jira] [Work logged] (HIVE-25572) Exception while querying materialized view invalidation info

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25572?focusedWorklogId=657121=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657121
 ]

ASF GitHub Bot logged work on HIVE-25572:
-

Author: ASF GitHub Bot
Created on: 29/Sep/21 10:53
Start Date: 29/Sep/21 10:53
Worklog Time Spent: 10m 
  Work Description: kasakrisz commented on a change in pull request #2682:
URL: https://github.com/apache/hive/pull/2682#discussion_r718389730



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2554,10 +2554,10 @@ public Materialization 
getMaterializationInvalidationInfo(
   queryCompletedCompactions.append(" AND (\"CC_HIGHEST_WRITE_ID\" > " + 
tblValidWriteIdList.getHighWatermark());
   queryUpdateDelete.append(tblValidWriteIdList.getInvalidWriteIds().length 
== 0 ? ") " :
   " OR \"CTC_WRITEID\" IN(" + StringUtils.join(",",
-  
Arrays.asList(ArrayUtils.toObject(tblValidWriteIdList.getInvalidWriteIds( + 
") ");
+  
Arrays.asList(ArrayUtils.toObject(tblValidWriteIdList.getInvalidWriteIds( + 
") ) ");

Review comment:
   One more closing parenthesis `)` was added:
   ```
   before: ") ");
   after: ") ) ");   
   ```
   
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657121)
Time Spent: 40m  (was: 0.5h)

> Exception while querying materialized view invalidation info
> 
>
> Key: HIVE-25572
> URL: https://issues.apache.org/jira/browse/HIVE-25572
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {code:java}
> 2021-09-29T00:33:02,971  WARN [main] txn.TxnHandler: Unable to retrieve 
> materialization invalidation information: completed transaction components.
> java.sql.SQLSyntaxErrorException: Syntax error: Encountered "" at line 
> 1, column 234.
>   at 
> org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedPreparedStatement.(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedPreparedStatement42.(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.jdbc.Driver42.newEmbedPreparedStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> com.zaxxer.hikari.pool.ProxyConnection.prepareStatement(ProxyConnection.java:311)
>  ~[HikariCP-2.6.1.jar:?]
>   at 
> com.zaxxer.hikari.pool.HikariProxyConnection.prepareStatement(HikariProxyConnection.java)
>  ~[HikariCP-2.6.1.jar:?]
>   at 
> org.apache.hadoop.hive.metastore.tools.SQLGenerator.prepareStmtWithParameters(SQLGenerator.java:169)
>  ~[classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.executeBoolean(TxnHandler.java:2598)
>  [classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.getMaterializationInvalidationInfo(TxnHandler.java:2575)
>  [classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testGetMaterializationInvalidationInfo(TestTxnHandler.java:1910)
>  [test-classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testGetMaterializationInvalidationInfo(TestTxnHandler.java:1875)
>  [test-classes/:?]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_112]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_112]

[jira] [Work logged] (HIVE-25572) Exception while querying materialized view invalidation info

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25572?focusedWorklogId=657119=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657119
 ]

ASF GitHub Bot logged work on HIVE-25572:
-

Author: ASF GitHub Bot
Created on: 29/Sep/21 10:52
Start Date: 29/Sep/21 10:52
Worklog Time Spent: 10m 
  Work Description: kasakrisz commented on a change in pull request #2682:
URL: https://github.com/apache/hive/pull/2682#discussion_r718389730



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2554,10 +2554,10 @@ public Materialization 
getMaterializationInvalidationInfo(
   queryCompletedCompactions.append(" AND (\"CC_HIGHEST_WRITE_ID\" > " + 
tblValidWriteIdList.getHighWatermark());
   queryUpdateDelete.append(tblValidWriteIdList.getInvalidWriteIds().length 
== 0 ? ") " :
   " OR \"CTC_WRITEID\" IN(" + StringUtils.join(",",
-  
Arrays.asList(ArrayUtils.toObject(tblValidWriteIdList.getInvalidWriteIds( + 
") ");
+  
Arrays.asList(ArrayUtils.toObject(tblValidWriteIdList.getInvalidWriteIds( + 
") ) ");

Review comment:
   One more closing parenthese `)` was added:
   ```
   before: ") ");
   after: ") ) ");   
   ```
   
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657119)
Time Spent: 0.5h  (was: 20m)

> Exception while querying materialized view invalidation info
> 
>
> Key: HIVE-25572
> URL: https://issues.apache.org/jira/browse/HIVE-25572
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {code:java}
> 2021-09-29T00:33:02,971  WARN [main] txn.TxnHandler: Unable to retrieve 
> materialization invalidation information: completed transaction components.
> java.sql.SQLSyntaxErrorException: Syntax error: Encountered "" at line 
> 1, column 234.
>   at 
> org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedPreparedStatement.(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedPreparedStatement42.(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.jdbc.Driver42.newEmbedPreparedStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> com.zaxxer.hikari.pool.ProxyConnection.prepareStatement(ProxyConnection.java:311)
>  ~[HikariCP-2.6.1.jar:?]
>   at 
> com.zaxxer.hikari.pool.HikariProxyConnection.prepareStatement(HikariProxyConnection.java)
>  ~[HikariCP-2.6.1.jar:?]
>   at 
> org.apache.hadoop.hive.metastore.tools.SQLGenerator.prepareStmtWithParameters(SQLGenerator.java:169)
>  ~[classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.executeBoolean(TxnHandler.java:2598)
>  [classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.getMaterializationInvalidationInfo(TxnHandler.java:2575)
>  [classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testGetMaterializationInvalidationInfo(TestTxnHandler.java:1910)
>  [test-classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testGetMaterializationInvalidationInfo(TestTxnHandler.java:1875)
>  [test-classes/:?]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_112]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_112]

[jira] [Updated] (HIVE-25569) Enable table definition over a single file

2021-09-29 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-25569:

Attachment: sfs - single file fs.pdf

> Enable table definition over a single file
> --
>
> Key: HIVE-25569
> URL: https://issues.apache.org/jira/browse/HIVE-25569
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>  Labels: pull-request-available
> Attachments: sfs - single file fs.pdf
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Suppose there is a directory where multiple files are present - and for a 3rd 
> party database system this is perfectly normal - because it treats a single 
> file as the contents of a table.
> Tables defined in the metastore follow a different principle - a table is 
> considered to live under a directory - and all files under that directory are 
> the contents of that table.
> To enable seamless migration/evaluation between Hive and other databases using 
> HMS as a metadata backend, the ability to define a table over a single file 
> would be useful.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25572) Exception while querying materialized view invalidation info

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25572?focusedWorklogId=657110=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657110
 ]

ASF GitHub Bot logged work on HIVE-25572:
-

Author: ASF GitHub Bot
Created on: 29/Sep/21 10:27
Start Date: 29/Sep/21 10:27
Worklog Time Spent: 10m 
  Work Description: pvary commented on a change in pull request #2682:
URL: https://github.com/apache/hive/pull/2682#discussion_r718372903



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
##
@@ -2554,10 +2554,10 @@ public Materialization 
getMaterializationInvalidationInfo(
   queryCompletedCompactions.append(" AND (\"CC_HIGHEST_WRITE_ID\" > " + 
tblValidWriteIdList.getHighWatermark());
   queryUpdateDelete.append(tblValidWriteIdList.getInvalidWriteIds().length 
== 0 ? ") " :
   " OR \"CTC_WRITEID\" IN(" + StringUtils.join(",",
-  
Arrays.asList(ArrayUtils.toObject(tblValidWriteIdList.getInvalidWriteIds( + 
") ");
+  
Arrays.asList(ArrayUtils.toObject(tblValidWriteIdList.getInvalidWriteIds( + 
") ) ");

Review comment:
   What is the change here?
   Can you help me out please  




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657110)
Time Spent: 20m  (was: 10m)

> Exception while querying materialized view invalidation info
> 
>
> Key: HIVE-25572
> URL: https://issues.apache.org/jira/browse/HIVE-25572
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code:java}
> 2021-09-29T00:33:02,971  WARN [main] txn.TxnHandler: Unable to retrieve 
> materialization invalidation information: completed transaction components.
> java.sql.SQLSyntaxErrorException: Syntax error: Encountered "" at line 
> 1, column 234.
>   at 
> org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedPreparedStatement.(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedPreparedStatement42.(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.jdbc.Driver42.newEmbedPreparedStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> com.zaxxer.hikari.pool.ProxyConnection.prepareStatement(ProxyConnection.java:311)
>  ~[HikariCP-2.6.1.jar:?]
>   at 
> com.zaxxer.hikari.pool.HikariProxyConnection.prepareStatement(HikariProxyConnection.java)
>  ~[HikariCP-2.6.1.jar:?]
>   at 
> org.apache.hadoop.hive.metastore.tools.SQLGenerator.prepareStmtWithParameters(SQLGenerator.java:169)
>  ~[classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.executeBoolean(TxnHandler.java:2598)
>  [classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.getMaterializationInvalidationInfo(TxnHandler.java:2575)
>  [classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testGetMaterializationInvalidationInfo(TestTxnHandler.java:1910)
>  [test-classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testGetMaterializationInvalidationInfo(TestTxnHandler.java:1875)
>  [test-classes/:?]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_112]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_112]
>   at 
> 

[jira] [Updated] (HIVE-25574) Replace clob with varchar when storing creation metadata

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-25574:
--
Labels: pull-request-available  (was: )

> Replace clob with varchar when storing creation metadata
> 
>
> Key: HIVE-25574
> URL: https://issues.apache.org/jira/browse/HIVE-25574
> Project: Hive
>  Issue Type: Bug
>  Components: Standalone Metastore
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Follow up of HIVE-21940.
> {code}
>  table="MV_CREATION_METADATA" detachable="true">
> ...
>   
> 
>   
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25574) Replace clob with varchar when storing creation metadata

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25574?focusedWorklogId=657093=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657093
 ]

ASF GitHub Bot logged work on HIVE-25574:
-

Author: ASF GitHub Bot
Created on: 29/Sep/21 09:50
Start Date: 29/Sep/21 09:50
Worklog Time Spent: 10m 
  Work Description: kasakrisz opened a new pull request #2683:
URL: https://github.com/apache/hive/pull/2683


   ### What changes were proposed in this pull request?
   
   
   
   ### Why are the changes needed?
   
   
   
   ### Does this PR introduce _any_ user-facing change?
   
   
   
   ### How was this patch tested?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657093)
Remaining Estimate: 0h
Time Spent: 10m

> Replace clob with varchar when storing creation metadata
> 
>
> Key: HIVE-25574
> URL: https://issues.apache.org/jira/browse/HIVE-25574
> Project: Hive
>  Issue Type: Bug
>  Components: Standalone Metastore
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Follow up of HIVE-21940.
> {code}
>  table="MV_CREATION_METADATA" detachable="true">
> ...
>   
> 
>   
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-25574) Replace clob with varchar when storing creation metadata

2021-09-29 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-25574:
--
Summary: Replace clob with varchar when storing creation metadata  (was: 
Replace clob with varchar in JDO)

> Replace clob with varchar when storing creation metadata
> 
>
> Key: HIVE-25574
> URL: https://issues.apache.org/jira/browse/HIVE-25574
> Project: Hive
>  Issue Type: Bug
>  Components: Standalone Metastore
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>
> Follow up of HIVE-21940.
> {code}
>  table="MV_CREATION_METADATA" detachable="true">
> ...
>   
> 
>   
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-25574) Replace clob with varchar in JDO

2021-09-29 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-25574:
--
Description: 
Follow up of HIVE-21940.
{code}

...
  

  
{code}

  was:Follow up of HIVE-21940.


> Replace clob with varchar in JDO
> 
>
> Key: HIVE-25574
> URL: https://issues.apache.org/jira/browse/HIVE-25574
> Project: Hive
>  Issue Type: Bug
>  Components: Standalone Metastore
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>
> Follow up of HIVE-21940.
> {code}
>  table="MV_CREATION_METADATA" detachable="true">
> ...
>   
> 
>   
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-25574) Replace clob with varchar in JDO

2021-09-29 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa reassigned HIVE-25574:
-


> Replace clob with varchar in JDO
> 
>
> Key: HIVE-25574
> URL: https://issues.apache.org/jira/browse/HIVE-25574
> Project: Hive
>  Issue Type: Bug
>  Components: Standalone Metastore
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>
> Follow up of HIVE-21940.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-25571) Fix Metastore script for Oracle Database

2021-09-29 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17422034#comment-17422034
 ] 

Ayush Saxena commented on HIVE-25571:
-

The script works after the changes in the PR.

 
{noformat}
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.hive.metastore.dbinstall.ITestOracle
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 79.623 s 
- in org.apache.hadoop.hive.metastore.dbinstall.ITestOracle
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0
{noformat}

cc [~ngangam] The PR changes the code in the linked jiras; can you help give it a 
check?

 

> Fix Metastore script for Oracle Database
> 
>
> Key: HIVE-25571
> URL: https://issues.apache.org/jira/browse/HIVE-25571
> Project: Hive
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>
> Error:1
> {noformat}
> 354/359      CREATE UNIQUE INDEX DBPRIVILEGEINDEX ON DC_PRIVS 
> (AUTHORIZER,NAME,PRINCIPAL_NAME,PRINCIPAL_TYPE,DC_PRIV,GRANTOR,GRANTOR_TYPE);
> Error: ORA-00955: name is already used by an existing object 
> (state=42000,code=955)
> Aborting command set because "force" is false and command failed: "CREATE 
> UNIQUE INDEX DBPRIVILEGEINDEX ON DC_PRIVS 
> (AUTHORIZER,NAME,PRINCIPAL_NAME,PRINCIPAL_TYPE,DC_PRIV,GRANTOR,GRANTOR_TYPE);"
> [ERROR] 2021-09-29 09:18:59.075 [main] MetastoreSchemaTool - Schema 
> initialization FAILED! Metastore state would be inconsistent!
> Schema initialization FAILED! Metastore state would be inconsistent!{noformat}
> Error:2
> {noformat}
> Error: ORA-00900: invalid SQL statement (state=42000,code=900)
> Aborting command set because "force" is false and command failed: "===
> -- HIVE-24396
> -- Create DataCo{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-24706) Spark SQL access hive on HBase table access exception

2021-09-29 Thread cadl (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-24706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17422021#comment-17422021
 ] 

cadl commented on HIVE-24706:
-

I hit this issue too. After tracking the stacktrace, there are two problems 
that cause it.
 # `org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat` does not implement 
`org.apache.hadoop.mapreduce.InputFormat` completely. As [~Lysak] said, 
`HiveHBaseTableInputFormat` doesn't override `getSplits(JobContext context)` 
and `createRecordReader`, and doesn't initialize the table.
 # Because it extends 
`org.apache.hadoop.hbase.mapreduce.TableInputFormatBase extends 
InputFormat`, `HiveHBaseTableInputFormat` can't be 
cast to `InputFormat` at [spark 
createNewHadoopRDD|https://github.com/apache/spark/blob/35848385ae6518b4b72c2f5c1e9ca5a83a190723/sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala#L373]
 

Is it safe to change `HiveHBaseTableInputFormat` from 
`InputFormat` to 
`InputFormat`?

I'd like to submit a pull request about it.
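
For reference, a compilable sketch (hypothetical class, not the real 
HiveHBaseTableInputFormat) of the dual inheritance described above, which is what 
makes both of Spark's API checks match at once:

{code:java}
// A class assignable to both the old (mapred) and new (mapreduce) InputFormat APIs.
public abstract class DualInputFormatSketch
    extends org.apache.hadoop.mapreduce.InputFormat<Object, Object>       // new API
    implements org.apache.hadoop.mapred.InputFormat<Object, Object> {     // old API
}

// Consequently both conditions checked in Spark's createNewHadoopRDD path are true:
//   org.apache.hadoop.mapred.InputFormat.class.isAssignableFrom(DualInputFormatSketch.class)
//   org.apache.hadoop.mapreduce.InputFormat.class.isAssignableFrom(DualInputFormatSketch.class)
{code}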

> Spark SQL access hive on HBase table access exception
> -
>
> Key: HIVE-24706
> URL: https://issues.apache.org/jira/browse/HIVE-24706
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Reporter: zhangzhanchang
>Priority: Major
> Attachments: image-2021-01-30-15-51-58-665.png
>
>
> HiveHBaseTableInputFormat relies on two versions of InputFormat: one is 
> org.apache.hadoop.mapred.InputFormat, the other is 
> org.apache.hadoop.mapreduce.InputFormat. This causes
> spark 3.0 (https://github.com/apache/spark/pull/31302) to evaluate both conditions as 
> true:
>  # classOf[oldInputClass[_, _]].isAssignableFrom(inputFormatClazz) is true
>  # classOf[newInputClass[_, _]].isAssignableFrom(inputFormatClazz) is true
> !image-2021-01-30-15-51-58-665.png|width=430,height=137!
> Should HiveHBaseTableInputFormat's dependency on InputFormat be changed to 
> org.apache.hadoop.mapreduce or org.apache.hadoop.mapred?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25335) Unreasonable setting reduce number, when join big size table(but small row count) and small size table

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25335?focusedWorklogId=657061=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657061
 ]

ASF GitHub Bot logged work on HIVE-25335:
-

Author: ASF GitHub Bot
Created on: 29/Sep/21 08:41
Start Date: 29/Sep/21 08:41
Worklog Time Spent: 10m 
  Work Description: zhengchenyu edited a comment on pull request #2490:
URL: https://github.com/apache/hive/pull/2490#issuecomment-927529430


   @jcamachor @zabetak  Can you help me review it, or give me some suggestion?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657061)
Time Spent: 40m  (was: 0.5h)

> Unreasonable setting reduce number, when join big size table(but small row 
> count) and small size table
> --
>
> Key: HIVE-25335
> URL: https://issues.apache.org/jira/browse/HIVE-25335
> Project: Hive
>  Issue Type: Improvement
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-25335.001.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I found an application that is slow in our cluster because the number of bytes 
> processed by each reducer is very large, yet there are only two reducers. 
> When I debugged it, I found the reason: in this SQL, one table is big in size 
> (about 30G) but has a small row count (about 3.5M), while the other table is 
> small in size (about 100M) but has a larger row count (about 3.6M). So 
> JoinStatsRule.process only uses the 100M to estimate the number of reducers, 
> but in fact we need to process 30G.
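
A hypothetical sketch (not JoinStatsRule's actual code) of the estimation pattern 
being described: the reducer count is derived from the bytes the join is expected 
to handle, so basing it on the small table's size starves the job of reducers.

{code:java}
public class ReducerEstimateSketch {
  // Classic size-based estimate: ceil(bytes / bytesPerReducer), clamped to maxReducers.
  static int estimateReducers(long estimatedBytes, long bytesPerReducer, int maxReducers) {
    int reducers = (int) Math.ceil((double) estimatedBytes / bytesPerReducer);
    return Math.max(1, Math.min(reducers, maxReducers));
  }

  public static void main(String[] args) {
    long bytesPerReducer = 64L << 20;  // assume 64 MB per reducer for illustration
    // Estimate driven by the 100 MB table: only 2 reducers (what the report observed).
    System.out.println(estimateReducers(100L << 20, bytesPerReducer, 1009));
    // Estimate driven by the 30 GB actually processed: 480 reducers.
    System.out.println(estimateReducers(30L << 30, bytesPerReducer, 1009));
  }
}
{code}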



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HIVE-25561) Killed task should not commit file.

2021-09-29 Thread zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17421994#comment-17421994
 ] 

zhengchenyu edited comment on HIVE-25561 at 9/29/21, 8:20 AM:
--

[~zabetak] When the bug is reproduced, the partition contains duplicate files: 02_0 
and 02_1. The two files are created by two different task attempts that 
belong to the same task (one is the normal task attempt, the other is the speculative 
task attempt), so we end up querying duplicated rows.

One file is a subset of the other: because the speculative task is 
killed, the file created by the killed task is a subset of the completed file.

I found that the file created by a killed task can still be committed under some 
conditions. Once some exception is not caught, abort may stay false.


was (Author: zhengchenyu):
[~zabetak] When the bug is reproduced, the partition contains duplicate files: 02_0 
and 02_1. The two files are created by two different task attempts that 
belong to the same task (one is the normal task attempt, the other is the 
speculative task attempt), so the query returns duplicated lines.

One file is a subset of the other: because the speculative task is 
killed, the file created by the killed task is a subset of the completed file.

I found that the file created by the killed task can be committed. If some exception 
is not caught, abort may remain false.

> Killed task should not commit file.
> ---
>
> Key: HIVE-25561
> URL: https://issues.apache.org/jira/browse/HIVE-25561
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 1.2.1, 2.3.8, 2.4.0
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> For the Tez engine in our cluster, I found some duplicated lines, especially when Tez 
> speculation is enabled. In the partition dir, both 02_0 and 02_1 
> exist.
> It is a very low probability event. HIVE-10429 fixed some bugs around 
> interrupts, but some exceptions are still not caught.
> In our cluster, the task receives SIGTERM, then ClientFinalizer (a Hadoop class) is 
> called and the HDFS client closes. An exception is then raised, but abort may not 
> be set to true.
> removeTempOrDuplicateFiles may then fail because of the inconsistency, and the duplicate 
> file is retained. 
> (Note: the Driver first lists the dir, then the Task commits the file, then the Driver removes 
> the duplicate file. It is an inconsistency case.)
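The failure mode can be sketched like this. It is a hypothetical illustration of the pattern only (placeholder names, not the actual Hive operator code): if the exception thrown after the HDFS client has been closed is swallowed without setting abort, the close path still commits the temp file.

{code:java}
import java.io.IOException;

// Hypothetical illustration of the pattern described above; not Hive code.
public class AbortFlagSketch {
  public static void main(String[] args) {
    boolean abort = false;
    try {
      write();  // fails once the SIGTERM handler has already closed the HDFS client
    } catch (IOException e) {
      System.err.println("write failed after shutdown: " + e.getMessage());
      // bug pattern: abort is never set to true here
    }
    // with abort == false the temp file is still committed, so both
    // 02_0 and 02_1 can end up in the partition directory
    System.out.println(abort ? "discard temp file" : "commit temp file");
  }

  private static void write() throws IOException {
    throw new IOException("Filesystem closed");
  }
}
{code}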



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HIVE-25561) Killed task should not commit file.

2021-09-29 Thread zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17421994#comment-17421994
 ] 

zhengchenyu edited comment on HIVE-25561 at 9/29/21, 8:18 AM:
--

[~zabetak] When the bug is reproduced, the partition contains duplicate files: 02_0 
and 02_1. The two files are created by two different task attempts that 
belong to the same task (one is the normal task attempt, the other is the 
speculative task attempt), so the query returns duplicated lines.

One file is a subset of the other: because the speculative task is 
killed, the file created by the killed task is a subset of the completed file.

I found that the file created by the killed task can be committed. If some exception 
is not caught, abort may remain false.


was (Author: zhengchenyu):
[~zabetak] When the bug is reproduced, the partition contains duplicate files: 02_0 
and 02_1. The two files are created by two different task attempts that 
belong to the same task. One is the normal task attempt, the other is the 
speculative task attempt. So the query returns duplicated lines.

One file is a subset of the other: because the speculative task is 
killed, the file created by the killed task is a subset of the completed file.

I found that the file created by the killed task can be committed. If some exception 
is not caught, abort may remain false.

> Killed task should not commit file.
> ---
>
> Key: HIVE-25561
> URL: https://issues.apache.org/jira/browse/HIVE-25561
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 1.2.1, 2.3.8, 2.4.0
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> For the Tez engine in our cluster, I found some duplicated lines, especially when Tez 
> speculation is enabled. In the partition dir, both 02_0 and 02_1 
> exist.
> It is a very low probability event. HIVE-10429 fixed some bugs around 
> interrupts, but some exceptions are still not caught.
> In our cluster, the task receives SIGTERM, then ClientFinalizer (a Hadoop class) is 
> called and the HDFS client closes. An exception is then raised, but abort may not 
> be set to true.
> removeTempOrDuplicateFiles may then fail because of the inconsistency, and the duplicate 
> file is retained. 
> (Note: the Driver first lists the dir, then the Task commits the file, then the Driver removes 
> the duplicate file. It is an inconsistency case.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HIVE-25561) Killed task should not commit file.

2021-09-29 Thread zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17421994#comment-17421994
 ] 

zhengchenyu edited comment on HIVE-25561 at 9/29/21, 8:18 AM:
--

[~zabetak] When the bug is reproduced, the partition contains duplicate files: 02_0 
and 02_1. The two files are created by two different task attempts that 
belong to the same task. One is the normal task attempt, the other is the 
speculative task attempt. So the query returns duplicated lines.

One file is a subset of the other: because the speculative task is 
killed, the file created by the killed task is a subset of the completed file.

I found that the file created by the killed task can be committed. If some exception 
is not caught, abort may remain false.


was (Author: zhengchenyu):
[~zabetak] When the bug is reproduced, the partition contains duplicate files: 02_0 
and 02_1. The two files are created by two different task attempts that 
belong to the same task. One is the normal task attempt, the other is the 
speculative task attempt. So the query returns duplicated lines.

One file is a subset of the other: because the speculative task is 
killed, the file created by the killed task is a subset of the completed file.

I found that the file created by the killed task can be committed. If some exception 
is not caught, ** abort may remain false.

> Killed task should not commit file.
> ---
>
> Key: HIVE-25561
> URL: https://issues.apache.org/jira/browse/HIVE-25561
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 1.2.1, 2.3.8, 2.4.0
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> For the Tez engine in our cluster, I found some duplicated lines, especially when Tez 
> speculation is enabled. In the partition dir, both 02_0 and 02_1 
> exist.
> It is a very low probability event. HIVE-10429 fixed some bugs around 
> interrupts, but some exceptions are still not caught.
> In our cluster, the task receives SIGTERM, then ClientFinalizer (a Hadoop class) is 
> called and the HDFS client closes. An exception is then raised, but abort may not 
> be set to true.
> removeTempOrDuplicateFiles may then fail because of the inconsistency, and the duplicate 
> file is retained. 
> (Note: the Driver first lists the dir, then the Task commits the file, then the Driver removes 
> the duplicate file. It is an inconsistency case.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HIVE-25561) Killed task should not commit file.

2021-09-29 Thread zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17421994#comment-17421994
 ] 

zhengchenyu edited comment on HIVE-25561 at 9/29/21, 8:17 AM:
--

[~zabetak] When the bug is reproduced, the partition contains duplicate files: 02_0 
and 02_1. The two files are created by two different task attempts that 
belong to the same task. One is the normal task attempt, the other is the 
speculative task attempt. So the query returns duplicated lines.

One file is a subset of the other: because the speculative task is 
killed, the file created by the killed task is a subset of the completed file.

I found that the file created by the killed task can be committed. If some exception 
is not caught, ** abort may remain false.


was (Author: zhengchenyu):
[~zabetak] When the bug is reproduced, the partition contains duplicate files: 02_0 
and 02_1. The two files are created by two different task attempts that 
belong to the same task. One is the normal task attempt, the other is the 
speculative task attempt. So the query returns duplicated lines.

One file is a subset of the other: because the speculative task is 
killed, the file created by the killed task is a subset of the completed file.

I found that the file created by the killed task can be committed.

> Killed task should not commit file.
> ---
>
> Key: HIVE-25561
> URL: https://issues.apache.org/jira/browse/HIVE-25561
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 1.2.1, 2.3.8, 2.4.0
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> For the Tez engine in our cluster, I found some duplicated lines, especially when Tez 
> speculation is enabled. In the partition dir, both 02_0 and 02_1 
> exist.
> It is a very low probability event. HIVE-10429 fixed some bugs around 
> interrupts, but some exceptions are still not caught.
> In our cluster, the task receives SIGTERM, then ClientFinalizer (a Hadoop class) is 
> called and the HDFS client closes. An exception is then raised, but abort may not 
> be set to true.
> removeTempOrDuplicateFiles may then fail because of the inconsistency, and the duplicate 
> file is retained. 
> (Note: the Driver first lists the dir, then the Task commits the file, then the Driver removes 
> the duplicate file. It is an inconsistency case.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HIVE-25561) Killed task should not commit file.

2021-09-29 Thread zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17421994#comment-17421994
 ] 

zhengchenyu edited comment on HIVE-25561 at 9/29/21, 8:15 AM:
--

[~zabetak] When the bug is reproduced, the partition contains duplicate files: 02_0 
and 02_1. The two files are created by two different task attempts that 
belong to the same task. One is the normal task attempt, the other is the 
speculative task attempt. So the query returns duplicated lines.

One file is a subset of the other: because the speculative task is 
killed, the file created by the killed task is a subset of the completed file.

I found that the file created by the killed task can be committed.


was (Author: zhengchenyu):
[~zabetak] When the bug is reproduced, the partition contains duplicate files: 02_0 
and 02_1. The two files are created by two different task attempts that 
belong to the same task. One is the normal task attempt, the other is the 
speculative task attempt. So the query returns duplicated lines.

> Killed task should not commit file.
> ---
>
> Key: HIVE-25561
> URL: https://issues.apache.org/jira/browse/HIVE-25561
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 1.2.1, 2.3.8, 2.4.0
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> For the Tez engine in our cluster, I found some duplicated lines, especially when Tez 
> speculation is enabled. In the partition dir, both 02_0 and 02_1 
> exist.
> It is a very low probability event. HIVE-10429 fixed some bugs around 
> interrupts, but some exceptions are still not caught.
> In our cluster, the task receives SIGTERM, then ClientFinalizer (a Hadoop class) is 
> called and the HDFS client closes. An exception is then raised, but abort may not 
> be set to true.
> removeTempOrDuplicateFiles may then fail because of the inconsistency, and the duplicate 
> file is retained. 
> (Note: the Driver first lists the dir, then the Task commits the file, then the Driver removes 
> the duplicate file. It is an inconsistency case.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-25561) Killed task should not commit file.

2021-09-29 Thread zhengchenyu (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17421994#comment-17421994
 ] 

zhengchenyu commented on HIVE-25561:


[~zabetak] When the bug is reproduced, the partition contains duplicate files: 02_0 
and 02_1. The two files are created by two different task attempts that 
belong to the same task. One is the normal task attempt, the other is the 
speculative task attempt. So the query returns duplicated lines.

> Killed task should not commit file.
> ---
>
> Key: HIVE-25561
> URL: https://issues.apache.org/jira/browse/HIVE-25561
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 1.2.1, 2.3.8, 2.4.0
>Reporter: zhengchenyu
>Assignee: zhengchenyu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> For the Tez engine in our cluster, I found some duplicated lines, especially when Tez 
> speculation is enabled. In the partition dir, both 02_0 and 02_1 
> exist.
> It is a very low probability event. HIVE-10429 fixed some bugs around 
> interrupts, but some exceptions are still not caught.
> In our cluster, the task receives SIGTERM, then ClientFinalizer (a Hadoop class) is 
> called and the HDFS client closes. An exception is then raised, but abort may not 
> be set to true.
> removeTempOrDuplicateFiles may then fail because of the inconsistency, and the duplicate 
> file is retained. 
> (Note: the Driver first lists the dir, then the Task commits the file, then the Driver removes 
> the duplicate file. It is an inconsistency case.)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25572) Exception while querying materialized view invalidation info

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25572?focusedWorklogId=657043=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-657043
 ]

ASF GitHub Bot logged work on HIVE-25572:
-

Author: ASF GitHub Bot
Created on: 29/Sep/21 07:49
Start Date: 29/Sep/21 07:49
Worklog Time Spent: 10m 
  Work Description: kasakrisz opened a new pull request #2682:
URL: https://github.com/apache/hive/pull/2682


   ### What changes were proposed in this pull request?
   Add the missing bracket when assembling the direct SQL query that retrieves 
materialization invalidation info.
   
   ### Why are the changes needed?
   The query was syntactically incorrect and blocked incremental materialized 
view rebuild when one or more MV source tables had uncommitted transactions at 
the time the MV was last rebuilt and the snapshot was taken; those transactions 
may be committed later and then have an effect on the next MV rebuild.
   
   ### Does this PR introduce _any_ user-facing change?
   No.
   
   ### How was this patch tested?
   ```
   mvn test -Dtest=TestTxnHandler -pl ql
   ```
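   
   A simplified illustration of the kind of bug being fixed (hypothetical query text, not the actual generated SQL): an IN list that is opened but never closed makes Derby report a syntax error at the very end of the statement, matching the stack trace on the ticket.
   ```java
   // Hypothetical illustration only; not the actual TxnHandler/SQLGenerator code.
   public class MissingBracketSketch {
     public static void main(String[] args) {
       String txnIds = "1, 2, 3";
       // buggy: the IN list is never closed, so the statement ends mid-expression
       String buggy = "select count(*) from \"COMPLETED_TXN_COMPONENTS\""
           + " where \"CTC_TXNID\" in (" + txnIds;
       // fixed: the closing bracket is restored
       String fixed = buggy + ")";
       System.out.println(buggy);
       System.out.println(fixed);
     }
   }
   ```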


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 657043)
Remaining Estimate: 0h
Time Spent: 10m

> Exception while querying materialized view invalidation info
> 
>
> Key: HIVE-25572
> URL: https://issues.apache.org/jira/browse/HIVE-25572
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:java}
> 2021-09-29T00:33:02,971  WARN [main] txn.TxnHandler: Unable to retrieve 
> materialization invalidation information: completed transaction components.
> java.sql.SQLSyntaxErrorException: Syntax error: Encountered "" at line 
> 1, column 234.
>   at 
> org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedPreparedStatement.(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedPreparedStatement42.(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.jdbc.Driver42.newEmbedPreparedStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> com.zaxxer.hikari.pool.ProxyConnection.prepareStatement(ProxyConnection.java:311)
>  ~[HikariCP-2.6.1.jar:?]
>   at 
> com.zaxxer.hikari.pool.HikariProxyConnection.prepareStatement(HikariProxyConnection.java)
>  ~[HikariCP-2.6.1.jar:?]
>   at 
> org.apache.hadoop.hive.metastore.tools.SQLGenerator.prepareStmtWithParameters(SQLGenerator.java:169)
>  ~[classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.executeBoolean(TxnHandler.java:2598)
>  [classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.getMaterializationInvalidationInfo(TxnHandler.java:2575)
>  [classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testGetMaterializationInvalidationInfo(TestTxnHandler.java:1910)
>  [test-classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testGetMaterializationInvalidationInfo(TestTxnHandler.java:1875)
>  [test-classes/:?]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_112]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_112]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_112]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
>   at 
> 

[jira] [Updated] (HIVE-25572) Exception while querying materialized view invalidation info

2021-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-25572:
--
Labels: pull-request-available  (was: )

> Exception while querying materialized view invalidation info
> 
>
> Key: HIVE-25572
> URL: https://issues.apache.org/jira/browse/HIVE-25572
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:java}
> 2021-09-29T00:33:02,971  WARN [main] txn.TxnHandler: Unable to retrieve 
> materialization invalidation information: completed transaction components.
> java.sql.SQLSyntaxErrorException: Syntax error: Encountered "" at line 
> 1, column 234.
>   at 
> org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedPreparedStatement.(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedPreparedStatement42.(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.jdbc.Driver42.newEmbedPreparedStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> com.zaxxer.hikari.pool.ProxyConnection.prepareStatement(ProxyConnection.java:311)
>  ~[HikariCP-2.6.1.jar:?]
>   at 
> com.zaxxer.hikari.pool.HikariProxyConnection.prepareStatement(HikariProxyConnection.java)
>  ~[HikariCP-2.6.1.jar:?]
>   at 
> org.apache.hadoop.hive.metastore.tools.SQLGenerator.prepareStmtWithParameters(SQLGenerator.java:169)
>  ~[classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.executeBoolean(TxnHandler.java:2598)
>  [classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.getMaterializationInvalidationInfo(TxnHandler.java:2575)
>  [classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testGetMaterializationInvalidationInfo(TestTxnHandler.java:1910)
>  [test-classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testGetMaterializationInvalidationInfo(TestTxnHandler.java:1875)
>  [test-classes/:?]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_112]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_112]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_112]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  [junit-4.13.jar:4.13]
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  [junit-4.13.jar:4.13]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  [junit-4.13.jar:4.13]
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  [junit-4.13.jar:4.13]
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> [junit-4.13.jar:4.13]
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> [junit-4.13.jar:4.13]
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) 
> [junit-4.13.jar:4.13]
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  [junit-4.13.jar:4.13]
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) 
> [junit-4.13.jar:4.13]
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  [junit-4.13.jar:4.13]
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  [junit-4.13.jar:4.13]
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) 
> [junit-4.13.jar:4.13]
>   at 

[jira] [Assigned] (HIVE-25572) Exception while querying materialized view invalidation info

2021-09-29 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa reassigned HIVE-25572:
-


> Exception while querying materialized view invalidation info
> 
>
> Key: HIVE-25572
> URL: https://issues.apache.org/jira/browse/HIVE-25572
> Project: Hive
>  Issue Type: Bug
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
>
> {code:java}
> 2021-09-29T00:33:02,971  WARN [main] txn.TxnHandler: Unable to retrieve 
> materialization invalidation information: completed transaction components.
> java.sql.SQLSyntaxErrorException: Syntax error: Encountered "" at line 
> 1, column 234.
>   at 
> org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedPreparedStatement.(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedPreparedStatement42.(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.jdbc.Driver42.newEmbedPreparedStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown 
> Source) ~[derby-10.14.1.0.jar:?]
>   at 
> com.zaxxer.hikari.pool.ProxyConnection.prepareStatement(ProxyConnection.java:311)
>  ~[HikariCP-2.6.1.jar:?]
>   at 
> com.zaxxer.hikari.pool.HikariProxyConnection.prepareStatement(HikariProxyConnection.java)
>  ~[HikariCP-2.6.1.jar:?]
>   at 
> org.apache.hadoop.hive.metastore.tools.SQLGenerator.prepareStmtWithParameters(SQLGenerator.java:169)
>  ~[classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.executeBoolean(TxnHandler.java:2598)
>  [classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.getMaterializationInvalidationInfo(TxnHandler.java:2575)
>  [classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testGetMaterializationInvalidationInfo(TestTxnHandler.java:1910)
>  [test-classes/:?]
>   at 
> org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testGetMaterializationInvalidationInfo(TestTxnHandler.java:1875)
>  [test-classes/:?]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_112]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_112]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_112]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  [junit-4.13.jar:4.13]
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  [junit-4.13.jar:4.13]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  [junit-4.13.jar:4.13]
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  [junit-4.13.jar:4.13]
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> [junit-4.13.jar:4.13]
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> [junit-4.13.jar:4.13]
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) 
> [junit-4.13.jar:4.13]
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  [junit-4.13.jar:4.13]
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) 
> [junit-4.13.jar:4.13]
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  [junit-4.13.jar:4.13]
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  [junit-4.13.jar:4.13]
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) 
> [junit-4.13.jar:4.13]
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) 
> [junit-4.13.jar:4.13]
>   at