[jira] [Commented] (HIVE-22865) Include data in replication staging directory

2020-03-05 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17053122#comment-17053122
 ] 

Hive QA commented on HIVE-22865:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
53s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
43s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
44s{color} | {color:red} ql: The patch generated 44 new + 155 unchanged - 5 
fixed = 199 total (was 160) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
26s{color} | {color:red} itests/hive-unit: The patch generated 16 new + 809 
unchanged - 9 fixed = 825 total (was 818) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
3s{color} | {color:green} ql generated 0 new + 1530 unchanged - 1 fixed = 1530 
total (was 1531) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} hive-unit in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20975/dev-support/hive-personality.sh
 |
| git revision | master / 3bed626 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20975/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20975/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20975/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20975/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Include data in replication staging directory
> -
>
> Key: HIVE-22865
> URL: https://issues.apache.org/jira/browse/HIVE-22865
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22865.1.patch, 

[jira] [Commented] (HIVE-21660) Wrong result when union all and later view with explode is used

2020-03-05 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17053090#comment-17053090
 ] 

Hive QA commented on HIVE-21660:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12967616/HIVE-21660.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 18103 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query24]
 (batchId=306)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20974/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20974/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20974/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12967616 - PreCommit-HIVE-Build

> Wrong result when union all and later view with explode is used
> ---
>
> Key: HIVE-21660
> URL: https://issues.apache.org/jira/browse/HIVE-21660
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 3.1.1
>Reporter: Ganesha Shreedhara
>Assignee: Ganesha Shreedhara
>Priority: Major
> Attachments: HIVE-21660.1.patch, HIVE-21660.patch
>
>
> There is data loss when data is inserted into a partitioned table using 
> union all and lateral view with explode. 
>  
> *Steps to reproduce:*
>  
> {code:java}
> create table t1 (id int, dt string);
> insert into t1 values (2, '2019-04-01');
> create table t2 (id int, dates array<string>);
> insert into t2 select 1 as id, array('2019-01-01','2019-01-02','2019-01-03') 
> as dates;
> create table dst (id int) partitioned by (dt string);
> set hive.exec.dynamic.partition.mode=nonstrict;
> set hive.exec.dynamic.partition=true;
> insert overwrite table dst partition (dt)
> select t.id, t.dt from (
> select id, dt from t1
> union all
> select id, dts as dt from t2 tt2 lateral view explode(tt2.dates) dd as dts ) 
> t;
> select * from dst;
> {code}
>  
>  
> *Actual Result:*
> {code:java}
> +--+--+
> | 2| 2019-04-01   |
> +--+--+{code}
>  
> *Expected Result* (Run only the select part from the above insert query)*:* 
> {code:java}
> +---++
> | 2     | 2019-04-01 |
> | 1     | 2019-01-01 |
> | 1     | 2019-01-02 |
> | 1     | 2019-01-03 |
> +---++{code}
>  
> Data retrieved from the second table using union all and lateral view with 
> explode is missing. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22907) Break up DDLSemanticAnalyzer - extract the rest of the Alter Table analyzers

2020-03-05 Thread Miklos Gergely (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17053072#comment-17053072
 ] 

Miklos Gergely commented on HIVE-22907:
---

Merged to master, thank you [~jcamachorodriguez]!

> Break up DDLSemanticAnalyzer - extract the rest of the Alter Table analyzers
> 
>
> Key: HIVE-22907
> URL: https://issues.apache.org/jira/browse/HIVE-22907
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Attachments: HIVE-22907.01.patch, HIVE-22907.02.patch, 
> HIVE-22907.03.patch
>
>
> DDLSemanticAnalyzer is a huge class, more than 4000 lines long. The goal is 
> to refactor it so that everything is cut into smaller, more maintainable classes 
> under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each analyzer
>  * have a package for each operation, containing an analyzer, a description, 
> and an operation, so the number of classes under a package is more manageable
> Step #15: extract the rest of the alter table analyzers from 
> DDLSemanticAnalyzer, and move them under the new package. Remove 
> DDLSemanticAnalyzer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21660) Wrong result when union all and later view with explode is used

2020-03-05 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17053064#comment-17053064
 ] 

Hive QA commented on HIVE-21660:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
49s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20974/dev-support/hive-personality.sh
 |
| git revision | master / 1fe0bd2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20974/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20974/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Wrong result when union all and later view with explode is used
> ---
>
> Key: HIVE-21660
> URL: https://issues.apache.org/jira/browse/HIVE-21660
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 3.1.1
>Reporter: Ganesha Shreedhara
>Assignee: Ganesha Shreedhara
>Priority: Major
> Attachments: HIVE-21660.1.patch, HIVE-21660.patch
>
>
> There is data loss when data is inserted into a partitioned table using 
> union all and lateral view with explode. 
>  
> *Steps to reproduce:*
>  
> {code:java}
> create table t1 (id int, dt string);
> insert into t1 values (2, '2019-04-01');
> create table t2 (id int, dates array<string>);
> insert into t2 select 1 as id, array('2019-01-01','2019-01-02','2019-01-03') 
> as dates;
> create table dst (id int) partitioned by (dt string);
> set hive.exec.dynamic.partition.mode=nonstrict;
> set hive.exec.dynamic.partition=true;
> insert overwrite table dst partition (dt)
> select t.id, t.dt from (
> select id, dt from t1
> union all
> select id, dts as dt from t2 tt2 lateral view explode(tt2.dates) dd as dts ) 
> t;
> select * from dst;
> {code}
>  
>  
> *Actual Result:*
> {code:java}
> +--+--+
> | 2| 2019-04-01   |
> +--+--+{code}
>  
> *Expected Result* (Run only the select part from the above insert query)*:* 
> {code:java}
> +---++
> | 2     | 2019-04-01 |
> | 1     | 2019-01-01 |
> | 1     | 

[jira] [Updated] (HIVE-22954) Schedule Repl Load using Hive Scheduler

2020-03-05 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HIVE-22954:
---
Status: In Progress  (was: Patch Available)

> Schedule Repl Load using Hive Scheduler
> ---
>
> Key: HIVE-22954
> URL: https://issues.apache.org/jira/browse/HIVE-22954
> Project: Hive
>  Issue Type: Task
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22954.01.patch, HIVE-22954.02.patch, 
> HIVE-22954.03.patch, HIVE-22954.04.patch, HIVE-22954.05.patch, 
> HIVE-22954.06.patch, HIVE-22954.07.patch, HIVE-22954.08.patch, 
> HIVE-22954.09.patch, HIVE-22954.10.patch, HIVE-22954.11.patch, 
> HIVE-22954.12.patch, HIVE-22954.13.patch, HIVE-22954.15.patch, 
> HIVE-22954.16.patch, HIVE-22954.17.patch, HIVE-22954.18.patch, 
> HIVE-22954.19.patch, HIVE-22954.20.patch, HIVE-22954.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> [https://github.com/apache/hive/pull/932]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22954) Schedule Repl Load using Hive Scheduler

2020-03-05 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HIVE-22954:
---
Attachment: HIVE-22954.20.patch
Status: Patch Available  (was: In Progress)

> Schedule Repl Load using Hive Scheduler
> ---
>
> Key: HIVE-22954
> URL: https://issues.apache.org/jira/browse/HIVE-22954
> Project: Hive
>  Issue Type: Task
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22954.01.patch, HIVE-22954.02.patch, 
> HIVE-22954.03.patch, HIVE-22954.04.patch, HIVE-22954.05.patch, 
> HIVE-22954.06.patch, HIVE-22954.07.patch, HIVE-22954.08.patch, 
> HIVE-22954.09.patch, HIVE-22954.10.patch, HIVE-22954.11.patch, 
> HIVE-22954.12.patch, HIVE-22954.13.patch, HIVE-22954.15.patch, 
> HIVE-22954.16.patch, HIVE-22954.17.patch, HIVE-22954.18.patch, 
> HIVE-22954.19.patch, HIVE-22954.20.patch, HIVE-22954.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> [https://github.com/apache/hive/pull/932]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22976) Oracle and MSSQL upgrade script missing the addition of WM_RESOURCEPLAN_FK1 constraint

2020-03-05 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17053051#comment-17053051
 ] 

Hive QA commented on HIVE-22976:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12995709/HIVE-22976.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 18102 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.spark.client.rpc.TestRpc.testClientTimeout (batchId=365)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20973/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20973/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20973/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12995709 - PreCommit-HIVE-Build

> Oracle and MSSQL upgrade script missing the addition of WM_RESOURCEPLAN_FK1 
> constraint
> --
>
> Key: HIVE-22976
> URL: https://issues.apache.org/jira/browse/HIVE-22976
> Project: Hive
>  Issue Type: Bug
>Reporter: Barnabas Maidics
>Assignee: Barnabas Maidics
>Priority: Minor
> Attachments: HIVE-22976.1.patch
>
>
> The schema init script (>=hive-schema-3.0.0) contains a constraint addition 
> which is missing from the upgrade scripts for Oracle and MSSQL. 
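>
> For illustration, a minimal sketch of the kind of statement the upgrade scripts 
> would need, assuming the constraint matches its definition in the init script; 
> the referenced columns (DEFAULT_POOL_ID, POOL_ID) are assumptions and should be 
> verified against hive-schema-3.0.0:
> {code:sql}
> -- Hypothetical upgrade-script addition; verify column names against the init script
> ALTER TABLE WM_RESOURCEPLAN ADD CONSTRAINT WM_RESOURCEPLAN_FK1
>   FOREIGN KEY (DEFAULT_POOL_ID) REFERENCES WM_POOL (POOL_ID);
> {code}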



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22948) QueryCache: Treat query cache locations as temporary storage

2020-03-05 Thread Gopal Vijayaraghavan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal Vijayaraghavan updated HIVE-22948:

Status: Open  (was: Patch Available)

> QueryCache: Treat query cache locations as temporary storage
> 
>
> Key: HIVE-22948
> URL: https://issues.apache.org/jira/browse/HIVE-22948
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 3.1.2, 4.0.0
>Reporter: Gopal Vijayaraghavan
>Assignee: Gopal Vijayaraghavan
>Priority: Major
> Attachments: HIVE-22948.1.patch, HIVE-22948.1.patch, 
> HIVE-22948.2.patch
>
>
> For queries served from the query cache, the WriteEntity for the cache location 
> is subject to user authorization even though users have no direct access to it.
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/command/CommandAuthorizerV2.java#L111
> {code}
>   if (privObject instanceof WriteEntity && 
> ((WriteEntity)privObject).isTempURI()) {
> // do not authorize temporary uris
> continue;
>   }
> {code}
> This check is not satisfied by queries that qualify for the query cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22948) QueryCache: Treat query cache locations as temporary storage

2020-03-05 Thread Gopal Vijayaraghavan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal Vijayaraghavan updated HIVE-22948:

Status: Patch Available  (was: Open)

> QueryCache: Treat query cache locations as temporary storage
> 
>
> Key: HIVE-22948
> URL: https://issues.apache.org/jira/browse/HIVE-22948
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 3.1.2, 4.0.0
>Reporter: Gopal Vijayaraghavan
>Assignee: Gopal Vijayaraghavan
>Priority: Major
> Attachments: HIVE-22948.1.patch, HIVE-22948.1.patch, 
> HIVE-22948.2.patch
>
>
> For queries served from the query cache, the WriteEntity for the cache location 
> is subject to user authorization even though users have no direct access to it.
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/command/CommandAuthorizerV2.java#L111
> {code}
>   if (privObject instanceof WriteEntity && 
> ((WriteEntity)privObject).isTempURI()) {
> // do not authorize temporary uris
> continue;
>   }
> {code}
> This check is not satisfied by queries that qualify for the query cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22948) QueryCache: Treat query cache locations as temporary storage

2020-03-05 Thread Gopal Vijayaraghavan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal Vijayaraghavan updated HIVE-22948:

Attachment: HIVE-22948.2.patch

> QueryCache: Treat query cache locations as temporary storage
> 
>
> Key: HIVE-22948
> URL: https://issues.apache.org/jira/browse/HIVE-22948
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 4.0.0, 3.1.2
>Reporter: Gopal Vijayaraghavan
>Assignee: Gopal Vijayaraghavan
>Priority: Major
> Attachments: HIVE-22948.1.patch, HIVE-22948.1.patch, 
> HIVE-22948.2.patch
>
>
> For queries served from the query cache, the WriteEntity for the cache location 
> is subject to user authorization even though users have no direct access to it.
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/security/authorization/command/CommandAuthorizerV2.java#L111
> {code}
>   if (privObject instanceof WriteEntity && 
> ((WriteEntity)privObject).isTempURI()) {
> // do not authorize temporary uris
> continue;
>   }
> {code}
> This check is not satisfied by queries that qualify for the query cache.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22865) Include data in replication staging directory

2020-03-05 Thread PRAVIN KUMAR SINHA (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PRAVIN KUMAR SINHA updated HIVE-22865:
--
Attachment: HIVE-22865.12.patch

> Include data in replication staging directory
> -
>
> Key: HIVE-22865
> URL: https://issues.apache.org/jira/browse/HIVE-22865
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22865.1.patch, HIVE-22865.10.patch, 
> HIVE-22865.11.patch, HIVE-22865.12.patch, HIVE-22865.2.patch, 
> HIVE-22865.3.patch, HIVE-22865.4.patch, HIVE-22865.5.patch, 
> HIVE-22865.6.patch, HIVE-22865.7.patch, HIVE-22865.8.patch, HIVE-22865.9.patch
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22976) Oracle and MSSQL upgrade script missing the addition of WM_RESOURCEPLAN_FK1 constraint

2020-03-05 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052837#comment-17052837
 ] 

Hive QA commented on HIVE-22976:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20973/dev-support/hive-personality.sh
 |
| git revision | master / 1fe0bd2 |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20973/yetus/patch-asflicense-problems.txt
 |
| modules | C: standalone-metastore/metastore-server U: 
standalone-metastore/metastore-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20973/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Oracle and MSSQL upgrade script missing the addition of WM_RESOURCEPLAN_FK1 
> constraint
> --
>
> Key: HIVE-22976
> URL: https://issues.apache.org/jira/browse/HIVE-22976
> Project: Hive
>  Issue Type: Bug
>Reporter: Barnabas Maidics
>Assignee: Barnabas Maidics
>Priority: Minor
> Attachments: HIVE-22976.1.patch
>
>
> The schema init script (>=hive-schema-3.0.0) contains a constraint addition 
> which is missing from the upgrade scripts for Oracle and MSSQL. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22977) Merge delta files instead of running a query in major/minor compaction

2020-03-05 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052736#comment-17052736
 ] 

Hive QA commented on HIVE-22977:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12995707/HIVE-22977.02.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 18114 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.mmTableBucketed 
(batchId=255)
org.apache.hadoop.hive.ql.txn.compactor.TestCompactor.mmTableOriginalsOrc 
(batchId=255)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20972/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20972/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20972/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12995707 - PreCommit-HIVE-Build

> Merge delta files instead of running a query in major/minor compaction
> --
>
> Key: HIVE-22977
> URL: https://issues.apache.org/jira/browse/HIVE-22977
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Pintér
>Assignee: László Pintér
>Priority: Major
> Attachments: HIVE-22977.01.patch, HIVE-22977.02.patch
>
>
> [Compaction Optimization]
> We should analyse the possibility of moving delta files instead of running a 
> major/minor compaction query.
> Please consider the following use cases:
>  - full acid table but only insert queries were run. This means that no 
> delete delta directories were created. Is it possible to merge the delta 
> directory contents without running a compaction query?
>  - full acid table, initiating queries through the streaming API. If there 
> are no abort transactions during the streaming, is it possible to merge the 
> delta directory contents without running a compaction query?
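>
> A minimal sketch of the first use case, assuming a full ACID (transactional) 
> table that only ever receives inserts; the table is hypothetical and the delta 
> directory names in the comments are illustrative examples of the usual delta 
> naming convention, not exact output:
> {code:sql}
> -- Hypothetical insert-only workload on a full ACID table: no delete deltas are
> -- produced, so in principle the delta contents could be merged/moved without
> -- running a compaction query.
> create table acid_t (id int, val string)
>   stored as orc
>   tblproperties ('transactional'='true');
>
> insert into acid_t values (1, 'a');  -- e.g. writes delta_0000001_0000001_0000/
> insert into acid_t values (2, 'b');  -- e.g. writes delta_0000002_0000002_0000/
> -- No DELETE/UPDATE has run, so there is no delete_delta_* directory to reconcile.
> {code}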



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21660) Wrong result when union all and later view with explode is used

2020-03-05 Thread Ganesha Shreedhara (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052726#comment-17052726
 ] 

Ganesha Shreedhara commented on HIVE-21660:
---

[~jcamachorodriguez] It looks like I do not have permission to create a PR. I 
have created an RB request ([https://reviews.apache.org/r/72203/]). Please 
review. 

> Wrong result when union all and later view with explode is used
> ---
>
> Key: HIVE-21660
> URL: https://issues.apache.org/jira/browse/HIVE-21660
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 3.1.1
>Reporter: Ganesha Shreedhara
>Assignee: Ganesha Shreedhara
>Priority: Major
> Attachments: HIVE-21660.1.patch, HIVE-21660.patch
>
>
> There is data loss when data is inserted into a partitioned table using 
> union all and lateral view with explode. 
>  
> *Steps to reproduce:*
>  
> {code:java}
> create table t1 (id int, dt string);
> insert into t1 values (2, '2019-04-01');
> create table t2 (id int, dates array<string>);
> insert into t2 select 1 as id, array('2019-01-01','2019-01-02','2019-01-03') 
> as dates;
> create table dst (id int) partitioned by (dt string);
> set hive.exec.dynamic.partition.mode=nonstrict;
> set hive.exec.dynamic.partition=true;
> insert overwrite table dst partition (dt)
> select t.id, t.dt from (
> select id, dt from t1
> union all
> select id, dts as dt from t2 tt2 lateral view explode(tt2.dates) dd as dts ) 
> t;
> select * from dst;
> {code}
>  
>  
> *Actual Result:*
> {code:java}
> +--+--+
> | 2| 2019-04-01   |
> +--+--+{code}
>  
> *Expected Result* (Run only the select part from the above insert query)*:* 
> {code:java}
> +---++
> | 2     | 2019-04-01 |
> | 1     | 2019-01-01 |
> | 1     | 2019-01-02 |
> | 1     | 2019-01-03 |
> +---++{code}
>  
> Data retrieved from the second table using union all and lateral view with 
> explode is missing. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22977) Merge delta files instead of running a query in major/minor compaction

2020-03-05 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052724#comment-17052724
 ] 

Hive QA commented on HIVE-22977:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
49s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
39s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
44s{color} | {color:red} ql: The patch generated 8 new + 37 unchanged - 0 fixed 
= 45 total (was 37) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
19s{color} | {color:red} itests/hive-unit: The patch generated 2 new + 7 
unchanged - 0 fixed = 9 total (was 7) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20972/dev-support/hive-personality.sh
 |
| git revision | master / 1fe0bd2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20972/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20972/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20972/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20972/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Merge delta files instead of running a query in major/minor compaction
> --
>
> Key: HIVE-22977
> URL: https://issues.apache.org/jira/browse/HIVE-22977
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Pintér
>Assignee: László Pintér
>Priority: Major
> Attachments: HIVE-22977.01.patch, HIVE-22977.02.patch
>
>
> [Compaction Optimization]
> We should analyse the possibility of moving delta files instead of running a 
> major/minor compaction query.
> Please consider the following use cases:
>  - full 

[jira] [Commented] (HIVE-22762) Leap day is incorrectly parsed during cast in Hive

2020-03-05 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052705#comment-17052705
 ] 

Hive QA commented on HIVE-22762:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12995778/HIVE-22762.06.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18102 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20971/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20971/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20971/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12995778 - PreCommit-HIVE-Build

> Leap day is incorrectly parsed during cast in Hive
> --
>
> Key: HIVE-22762
> URL: https://issues.apache.org/jira/browse/HIVE-22762
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22762.01.patch, HIVE-22762.01.patch, 
> HIVE-22762.01.patch, HIVE-22762.01.patch, HIVE-22762.02.patch, 
> HIVE-22762.03.patch, HIVE-22762.03.patch, HIVE-22762.04.patch, 
> HIVE-22762.05.patch, HIVE-22762.06.patch
>
>
> While casting a string to a date with a custom date format that has the day token 
> before the year and month tokens, the date is parsed incorrectly for leap days.
> h3. How to reproduce
> Execute {code}select cast("29 02 0" as date format "dd mm rr"){code} with 
> Hive. The query incorrectly results in *2020-02-28*.
> 
> However, another cast with a slightly modified representation of the 
> date (day preceded by year and month) is parsed correctly:
> {code}select cast("0 02 29" as date format "rr mm dd"){code}
> It returns *2020-02-29*.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22762) Leap day is incorrectly parsed during cast in Hive

2020-03-05 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052643#comment-17052643
 ] 

Hive QA commented on HIVE-22762:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
36s{color} | {color:blue} common in master has 63 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
13s{color} | {color:red} common: The patch generated 6 new + 0 unchanged - 0 
fixed = 6 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20971/dev-support/hive-personality.sh
 |
| git revision | master / 1fe0bd2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20971/yetus/diff-checkstyle-common.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20971/yetus/patch-asflicense-problems.txt
 |
| modules | C: common U: common |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20971/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Leap day is incorrectly parsed during cast in Hive
> --
>
> Key: HIVE-22762
> URL: https://issues.apache.org/jira/browse/HIVE-22762
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22762.01.patch, HIVE-22762.01.patch, 
> HIVE-22762.01.patch, HIVE-22762.01.patch, HIVE-22762.02.patch, 
> HIVE-22762.03.patch, HIVE-22762.03.patch, HIVE-22762.04.patch, 
> HIVE-22762.05.patch, HIVE-22762.06.patch
>
>
> While casting a string to a date with a custom date format that has the day token 
> before the year and month tokens, the date is parsed incorrectly for leap days.
> h3. How to reproduce
> Execute {code}select cast("29 02 0" as date format "dd mm rr"){code} with 
> Hive. The query incorrectly results in *2020-02-28*.
> 
> However, another cast with a slightly modified representation of the 
> date (day preceded by year and month) is parsed correctly:
> {code}select cast("0 02 29" as date format "rr mm dd"){code}
> It returns *2020-02-29*.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HIVE-21660) Wrong result when union all and later view with explode is used

2020-03-05 Thread Jesus Camacho Rodriguez (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052639#comment-17052639
 ] 

Jesus Camacho Rodriguez edited comment on HIVE-21660 at 3/6/20, 1:22 AM:
-

[~ganeshas], I will review it. Can you rebase it (if needed) and create a PR? 
Thanks


was (Author: jcamachorodriguez):
[~ganeshas], I will review it. Can you create a PR? Thanks

> Wrong result when union all and later view with explode is used
> ---
>
> Key: HIVE-21660
> URL: https://issues.apache.org/jira/browse/HIVE-21660
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 3.1.1
>Reporter: Ganesha Shreedhara
>Assignee: Ganesha Shreedhara
>Priority: Major
> Attachments: HIVE-21660.1.patch, HIVE-21660.patch
>
>
> There is data loss when data is inserted into a partitioned table using 
> union all and lateral view with explode. 
>  
> *Steps to reproduce:*
>  
> {code:java}
> create table t1 (id int, dt string);
> insert into t1 values (2, '2019-04-01');
> create table t2 (id int, dates array<string>);
> insert into t2 select 1 as id, array('2019-01-01','2019-01-02','2019-01-03') 
> as dates;
> create table dst (id int) partitioned by (dt string);
> set hive.exec.dynamic.partition.mode=nonstrict;
> set hive.exec.dynamic.partition=true;
> insert overwrite table dst partition (dt)
> select t.id, t.dt from (
> select id, dt from t1
> union all
> select id, dts as dt from t2 tt2 lateral view explode(tt2.dates) dd as dts ) 
> t;
> select * from dst;
> {code}
>  
>  
> *Actual Result:*
> {code:java}
> +--+--+
> | 2| 2019-04-01   |
> +--+--+{code}
>  
> *Expected Result* (Run only the select part from the above insert query)*:* 
> {code:java}
> +---++
> | 2     | 2019-04-01 |
> | 1     | 2019-01-01 |
> | 1     | 2019-01-02 |
> | 1     | 2019-01-03 |
> +---++{code}
>  
> Data retrieved from the second table using union all and lateral view with 
> explode is missing. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21660) Wrong result when union all and later view with explode is used

2020-03-05 Thread Jesus Camacho Rodriguez (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052639#comment-17052639
 ] 

Jesus Camacho Rodriguez commented on HIVE-21660:


[~ganeshas], I will review it. Can you create a PR? Thanks

> Wrong result when union all and later view with explode is used
> ---
>
> Key: HIVE-21660
> URL: https://issues.apache.org/jira/browse/HIVE-21660
> Project: Hive
>  Issue Type: Bug
>  Components: Physical Optimizer
>Affects Versions: 3.1.1
>Reporter: Ganesha Shreedhara
>Assignee: Ganesha Shreedhara
>Priority: Major
> Attachments: HIVE-21660.1.patch, HIVE-21660.patch
>
>
> There is data loss when data is inserted into a partitioned table using 
> union all and lateral view with explode. 
>  
> *Steps to reproduce:*
>  
> {code:java}
> create table t1 (id int, dt string);
> insert into t1 values (2, '2019-04-01');
> create table t2 (id int, dates array<string>);
> insert into t2 select 1 as id, array('2019-01-01','2019-01-02','2019-01-03') 
> as dates;
> create table dst (id int) partitioned by (dt string);
> set hive.exec.dynamic.partition.mode=nonstrict;
> set hive.exec.dynamic.partition=true;
> insert overwrite table dst partition (dt)
> select t.id, t.dt from (
> select id, dt from t1
> union all
> select id, dts as dt from t2 tt2 lateral view explode(tt2.dates) dd as dts ) 
> t;
> select * from dst;
> {code}
>  
>  
> *Actual Result:*
> {code:java}
> +--+--+
> | 2| 2019-04-01   |
> +--+--+{code}
>  
> *Expected Result* (Run only the select part from the above insert query)*:* 
> {code:java}
> +---++
> | 2     | 2019-04-01 |
> | 1     | 2019-01-01 |
> | 1     | 2019-01-02 |
> | 1     | 2019-01-03 |
> +---++{code}
>  
> Data retrieved from the second table using union all and lateral view with 
> explode is missing. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22907) Break up DDLSemanticAnalyzer - extract the rest of the Alter Table analyzers

2020-03-05 Thread Jesus Camacho Rodriguez (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052638#comment-17052638
 ] 

Jesus Camacho Rodriguez commented on HIVE-22907:


+1

> Break up DDLSemanticAnalyzer - extract the rest of the Alter Table analyzers
> 
>
> Key: HIVE-22907
> URL: https://issues.apache.org/jira/browse/HIVE-22907
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: refactor-ddl
> Attachments: HIVE-22907.01.patch, HIVE-22907.02.patch, 
> HIVE-22907.03.patch
>
>
> DDLSemanticAnalyzer is a huge class, more than 4000 lines long. The goal is 
> to refactor it so that everything is cut into smaller, more maintainable classes 
> under the package org.apache.hadoop.hive.ql.exec.ddl:
>  * have a separate class for each analyzer
>  * have a package for each operation, containing an analyzer, a description, 
> and an operation, so the number of classes under a package is more manageable
> Step #15: extract the rest of the alter table analyzers from 
> DDLSemanticAnalyzer, and move them under the new package. Remove 
> DDLSemanticAnalyzer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22968) Set hive.parquet.timestamp.time.unit default to micros

2020-03-05 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052634#comment-17052634
 ] 

Hive QA commented on HIVE-22968:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12995692/HIVE-22968.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18101 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20970/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20970/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20970/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12995692 - PreCommit-HIVE-Build

> Set hive.parquet.timestamp.time.unit default to micros
> --
>
> Key: HIVE-22968
> URL: https://issues.apache.org/jira/browse/HIVE-22968
> Project: Hive
>  Issue Type: Task
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-22968.2.patch, HIVE-22968.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22962) Reuse HiveRelFieldTrimmer instance across queries

2020-03-05 Thread Jesus Camacho Rodriguez (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-22962:
---
Attachment: HIVE-22962.05.patch

> Reuse HiveRelFieldTrimmer instance across queries
> -
>
> Key: HIVE-22962
> URL: https://issues.apache.org/jira/browse/HIVE-22962
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-22962.01.patch, HIVE-22962.02.patch, 
> HIVE-22962.03.patch, HIVE-22962.04.patch, HIVE-22962.05.patch, 
> HIVE-22962.patch
>
>
> Currently we create multiple {{HiveRelFieldTrimmer}} instances per query. 
> {{HiveRelFieldTrimmer}} uses a method dispatcher that has a built-in caching 
> mechanism: given a certain object, it stores the method that was called for 
> the object class. However, by instantiating the trimmer multiple times per 
> query and across queries, we create a new dispatcher with each instantiation, 
> thus effectively defeating the caching mechanism built into the 
> dispatcher.
> This issue is to reuse the same {{HiveRelFieldTrimmer}} instance within a 
> single query and across queries.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22978) Fix decimal precision and scale inference for aggregate rewriting in Calcite

2020-03-05 Thread Jesus Camacho Rodriguez (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052632#comment-17052632
 ] 

Jesus Camacho Rodriguez commented on HIVE-22978:


[~vgarg], can you take a look?
https://github.com/apache/hive/pull/938

To be clear, it is expected to have additional CAST expressions to match AVG 
type semantics.

> Fix decimal precision and scale inference for aggregate rewriting in Calcite
> 
>
> Key: HIVE-22978
> URL: https://issues.apache.org/jira/browse/HIVE-22978
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22978.01.patch, HIVE-22978.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Calcite rules can rewrite aggregate functions, e.g., {{avg}} into 
> {{sum/count}}. When the type of {{avg}} is decimal, inference of the intermediate 
> precision and scale for the division is not done correctly. The reason is 
> that we are missing support for some types in the {{getDefaultPrecision}} method in 
> {{HiveTypeSystemImpl}}. Additionally, {{deriveSumType}} should be overridden 
> in {{HiveTypeSystemImpl}} to abide by the Hive semantics for sum aggregate 
> type inference.
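>
> For illustration, a minimal sketch of the rewrite being discussed; the table and 
> column are hypothetical, and the result type shown in the CAST only indicates the 
> idea (the actual precision/scale come from Hive's decimal inference rules):
> {code:sql}
> -- Hypothetical decimal column
> create table sales (amount decimal(10,2));
>
> -- Original aggregate
> select avg(amount) from sales;
>
> -- Conceptual Calcite rewrite: avg -> sum/count, with an extra CAST so the
> -- result type matches AVG's decimal precision/scale semantics.
> select cast(sum(amount) / count(amount) as decimal(14,6)) from sales;
> {code}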



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22978) Fix decimal precision and scale inference for aggregate rewriting in Calcite

2020-03-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-22978:
--
Labels: pull-request-available  (was: )

> Fix decimal precision and scale inference for aggregate rewriting in Calcite
> 
>
> Key: HIVE-22978
> URL: https://issues.apache.org/jira/browse/HIVE-22978
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22978.01.patch, HIVE-22978.patch
>
>
> Calcite rules can rewrite aggregate functions, e.g., {{avg}} into 
> {{sum/count}}. When the type of {{avg}} is decimal, inference of the intermediate 
> precision and scale for the division is not done correctly. The reason is 
> that we are missing support for some types in the {{getDefaultPrecision}} method in 
> {{HiveTypeSystemImpl}}. Additionally, {{deriveSumType}} should be overridden 
> in {{HiveTypeSystemImpl}} to abide by the Hive semantics for sum aggregate 
> type inference.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22978) Fix decimal precision and scale inference for aggregate rewriting in Calcite

2020-03-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22978?focusedWorklogId=398827=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-398827
 ]

ASF GitHub Bot logged work on HIVE-22978:
-

Author: ASF GitHub Bot
Created on: 06/Mar/20 01:06
Start Date: 06/Mar/20 01:06
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #938: HIVE-22978
URL: https://github.com/apache/hive/pull/938
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 398827)
Remaining Estimate: 0h
Time Spent: 10m

> Fix decimal precision and scale inference for aggregate rewriting in Calcite
> 
>
> Key: HIVE-22978
> URL: https://issues.apache.org/jira/browse/HIVE-22978
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22978.01.patch, HIVE-22978.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Calcite rules can rewrite aggregate functions, e.g., {{avg}} into 
> {{sum/count}}. When the type of {{avg}} is decimal, inference of the intermediate 
> precision and scale for the division is not done correctly. The reason is 
> that we are missing support for some types in the {{getDefaultPrecision}} method in 
> {{HiveTypeSystemImpl}}. Additionally, {{deriveSumType}} should be overridden 
> in {{HiveTypeSystemImpl}} to abide by the Hive semantics for sum aggregate 
> type inference.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22978) Fix decimal precision and scale inference for aggregate rewriting in Calcite

2020-03-05 Thread Jesus Camacho Rodriguez (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-22978:
---
Attachment: HIVE-22978.01.patch

> Fix decimal precision and scale inference for aggregate rewriting in Calcite
> 
>
> Key: HIVE-22978
> URL: https://issues.apache.org/jira/browse/HIVE-22978
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-22978.01.patch, HIVE-22978.patch
>
>
> Calcite rules can do rewritings of aggregate functions, e.g., {{avg}} into 
> {{sum/count}}. When type of {{avg}} is decimal, inference of intermediate 
> precision and scale for the division is not done correctly. The reason is 
> that we miss support for some types in method {{getDefaultPrecision}} in 
> {{HiveTypeSystemImpl}}. Additionally, {{deriveSumType}} should be overridden 
> in {{HiveTypeSystemImpl}} to abide by the Hive semantics for sum aggregate 
> type inference.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22937) LLAP : Use unique names for the zip and tarball bundle for LLAP

2020-03-05 Thread Slim Bouguerra (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Slim Bouguerra updated HIVE-22937:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

https://git-wip-us.apache.org/repos/asf?p=hive.git;a=commit;h=1fe0bd2298ece4eb37a89c5d9e983d597e2b93eb

> LLAP : Use unique names for the zip and tarball bundle for LLAP
> ---
>
> Key: HIVE-22937
> URL: https://issues.apache.org/jira/browse/HIVE-22937
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-22937.1.patch
>
>
> LLAP : Use unique names for the zip and tarball bundle for LLAP



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22829) Decimal64: NVL in vectorization miss NPE with CBO on

2020-03-05 Thread Ashutosh Chauhan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-22829:

Fix Version/s: 4.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks, Ramesh!

> Decimal64: NVL in vectorization miss NPE with CBO on
> 
>
> Key: HIVE-22829
> URL: https://issues.apache.org/jira/browse/HIVE-22829
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Gopal Vijayaraghavan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22829.3.patch, HIVE-22829.4.patch
>
>
> {code}
> select  
> sum(NVL(ss_sales_price, 1.0BD))
> from store_sales where ss_sold_date_sk %  = 1;
> {code}
> {code}
> | notVectorizedReason: exception: 
> java.lang.NullPointerException stack trace: 
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4754),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4687),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.vectorizeSelectOperator(Vectorizer.java:4669),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperator(Vectorizer.java:5269),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChild(Vectorizer.java:977),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChildren(Vectorizer.java:864),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperatorTree(Vectorizer.java:834),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.access$2500(Vectorizer.java:245),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:2103),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:2055),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapWork(Vectorizer.java:2030),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.convertMapWork(Vectorizer.java:1185),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.dispatch(Vectorizer.java:1017),
>  
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111),
>  
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:180), 
> ... |
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22786) Vectorization: Agg with distinct can be optimised in HASH mode

2020-03-05 Thread Ashutosh Chauhan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-22786:

Fix Version/s: 4.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks, Ramesh and Rajesh!

> Vectorization: Agg with distinct can be optimised in HASH mode
> --
>
> Key: HIVE-22786
> URL: https://issues.apache.org/jira/browse/HIVE-22786
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Rajesh Balamohan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22786.1.patch, HIVE-22786.10.patch, 
> HIVE-22786.2.patch, HIVE-22786.3.patch, HIVE-22786.4.wip.patch, 
> HIVE-22786.5.patch, HIVE-22786.6.patch, HIVE-22786.7.patch, 
> HIVE-22786.8.patch, HIVE-22786.9.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22986) Prevent Decimal64 to Decimal conversion when other operations support Decimal64

2020-03-05 Thread Gopal Vijayaraghavan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052624#comment-17052624
 ] 

Gopal Vijayaraghavan commented on HIVE-22986:
-

Is 

{code}
+if (castTypeDecimal && inputTypeDecimal && commonPhysicalVariation == 
DataTypePhysicalVariation.DECIMAL_64) {
+  return null;
+}
{code}

the actual change needed? Waiting for the scale-up casting tests to confirm that 
this didn't break scale-up.
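For clarity, the condition in the quoted diff can be read as a small standalone
predicate; {{Decimal64CastCheck}} and {{castCanBeSkipped}} are hypothetical names
used only for illustration, not part of the attached patch.

{code:java}
import org.apache.hadoop.hive.common.type.DataTypePhysicalVariation;

final class Decimal64CastCheck {
  private Decimal64CastCheck() {}

  /**
   * Returns true when both the cast target and the input are decimals and the
   * common physical variation is DECIMAL_64, i.e. no conversion expression is
   * needed and the caller can return null instead of materializing a cast.
   */
  static boolean castCanBeSkipped(boolean castTypeDecimal, boolean inputTypeDecimal,
      DataTypePhysicalVariation commonPhysicalVariation) {
    return castTypeDecimal && inputTypeDecimal
        && commonPhysicalVariation == DataTypePhysicalVariation.DECIMAL_64;
  }
}
{code}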

> Prevent Decimal64 to Decimal conversion when other operations support 
> Decimal64
> ---
>
> Key: HIVE-22986
> URL: https://issues.apache.org/jira/browse/HIVE-22986
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-22986.1.patch
>
>
> Prevent Decimal64 to Decimal conversion when other operations support 
> Decimal64



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22829) Decimal64: NVL in vectorization miss NPE with CBO on

2020-03-05 Thread Ashutosh Chauhan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052622#comment-17052622
 ] 

Ashutosh Chauhan commented on HIVE-22829:
-

+1

> Decimal64: NVL in vectorization miss NPE with CBO on
> 
>
> Key: HIVE-22829
> URL: https://issues.apache.org/jira/browse/HIVE-22829
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Gopal Vijayaraghavan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-22829.3.patch, HIVE-22829.4.patch
>
>
> {code}
> select  
> sum(NVL(ss_sales_price, 1.0BD))
> from store_sales where ss_sold_date_sk %  = 1;
> {code}
> {code}
> | notVectorizedReason: exception: 
> java.lang.NullPointerException stack trace: 
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4754),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4687),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.vectorizeSelectOperator(Vectorizer.java:4669),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperator(Vectorizer.java:5269),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChild(Vectorizer.java:977),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChildren(Vectorizer.java:864),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperatorTree(Vectorizer.java:834),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.access$2500(Vectorizer.java:245),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:2103),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:2055),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapWork(Vectorizer.java:2030),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.convertMapWork(Vectorizer.java:1185),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.dispatch(Vectorizer.java:1017),
>  
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111),
>  
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:180), 
> ... |
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22986) Prevent Decimal64 to Decimal conversion when other operations support Decimal64

2020-03-05 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-22986:

Attachment: HIVE-22986.1.patch
Status: Patch Available  (was: Open)

> Prevent Decimal64 to Decimal conversion when other operations support 
> Decimal64
> ---
>
> Key: HIVE-22986
> URL: https://issues.apache.org/jira/browse/HIVE-22986
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-22986.1.patch
>
>
> Prevent Decimal64 to Decimal conversion when other operations support 
> Decimal64



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22987) ClassCastException in VectorCoalesce when DataTypePhysicalVariation is null

2020-03-05 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-22987:

Attachment: HIVE-22987.1.patch
Status: Patch Available  (was: Open)

> ClassCastException in VectorCoalesce when DataTypePhysicalVariation is null
> ---
>
> Key: HIVE-22987
> URL: https://issues.apache.org/jira/browse/HIVE-22987
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-22987.1.patch
>
>
> ClassCastException in VectorCoalesce when DataTypePhysicalVariation is null



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22983) Fix the comments on ConstantPropagate

2020-03-05 Thread Zhihua Deng (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052613#comment-17052613
 ] 

Zhihua Deng commented on HIVE-22983:


Can someone help review this code change? It is a simple fix to the comments of 
ConstantPropagate.

> Fix the comments on ConstantPropagate
> -
>
> Key: HIVE-22983
> URL: https://issues.apache.org/jira/browse/HIVE-22983
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HIVE-22983.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> ConstantPropagate traverses the DAG from root to children; a child is not 
> processed until all of its parents have been visited.
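As an illustration of the traversal order described above (a node is visited only
after all of its parents), here is a small self-contained sketch. It is not the
actual ConstantPropagate or graph-walker code; {{ParentsFirstWalk}} is a
hypothetical class used only to make the ordering concrete.

{code:java}
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Queue;

final class ParentsFirstWalk {

  /**
   * children maps each node of the DAG to its child nodes. Returns the nodes in
   * an order where every node appears only after all of its parents.
   */
  static List<String> walk(Map<String, List<String>> children) {
    // Count how many unvisited parents each node still has.
    Map<String, Integer> pendingParents = new HashMap<>();
    children.keySet().forEach(node -> pendingParents.putIfAbsent(node, 0));
    for (List<String> kids : children.values()) {
      for (String kid : kids) {
        pendingParents.merge(kid, 1, Integer::sum);
      }
    }

    // Roots (nodes with no parents) can be visited immediately.
    Queue<String> ready = new ArrayDeque<>();
    pendingParents.forEach((node, count) -> { if (count == 0) { ready.add(node); } });

    List<String> order = new ArrayList<>();
    while (!ready.isEmpty()) {
      String node = ready.poll();
      order.add(node); // safe: all of this node's parents have already been visited
      for (String child : children.getOrDefault(node, List.of())) {
        if (pendingParents.merge(child, -1, Integer::sum) == 0) {
          ready.add(child);
        }
      }
    }
    return order;
  }
}
{code}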



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-22987) ClassCastException in VectorCoalesce when DataTypePhysicalVariation is null

2020-03-05 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan reassigned HIVE-22987:
---


> ClassCastException in VectorCoalesce when DataTypePhysicalVariation is null
> ---
>
> Key: HIVE-22987
> URL: https://issues.apache.org/jira/browse/HIVE-22987
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
>
> ClassCastException in VectorCoalesce when DataTypePhysicalVariation is null



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22968) Set hive.parquet.timestamp.time.unit default to micros

2020-03-05 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052612#comment-17052612
 ] 

Hive QA commented on HIVE-22968:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
35s{color} | {color:blue} common in master has 63 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
40s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
38s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20970/dev-support/hive-personality.sh
 |
| git revision | master / 9b3ef2b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20970/yetus/patch-asflicense-problems.txt
 |
| modules | C: common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20970/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Set hive.parquet.timestamp.time.unit default to micros
> --
>
> Key: HIVE-22968
> URL: https://issues.apache.org/jira/browse/HIVE-22968
> Project: Hive
>  Issue Type: Task
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-22968.2.patch, HIVE-22968.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22983) Fix the comments on ConstantPropagate

2020-03-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22983?focusedWorklogId=398780=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-398780
 ]

ASF GitHub Bot logged work on HIVE-22983:
-

Author: ASF GitHub Bot
Created on: 06/Mar/20 00:16
Start Date: 06/Mar/20 00:16
Worklog Time Spent: 10m 
  Work Description: dengzhhu653 commented on pull request #937: HIVE-22983: 
Fix the comments on ConstantPropagate
URL: https://github.com/apache/hive/pull/937
 
 
   ConstantPropagate traverses the DAG from root to children; a child is not 
processed until all of its parents have been visited.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 398780)
Remaining Estimate: 0h
Time Spent: 10m

> Fix the comments on ConstantPropagate
> -
>
> Key: HIVE-22983
> URL: https://issues.apache.org/jira/browse/HIVE-22983
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HIVE-22983.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> ConstantPropagate traverses the DAG from root to children; a child is not 
> processed until all of its parents have been visited.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22983) Fix the comments on ConstantPropagate

2020-03-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-22983:
--
Labels: pull-request-available  (was: )

> Fix the comments on ConstantPropagate
> -
>
> Key: HIVE-22983
> URL: https://issues.apache.org/jira/browse/HIVE-22983
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HIVE-22983.patch
>
>
> ConstantPropagate traverses the DAG from root to children; a child is not 
> processed until all of its parents have been visited.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22829) Decimal64: NVL in vectorization miss NPE with CBO on

2020-03-05 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-22829:

Attachment: HIVE-22829.4.patch
Status: Patch Available  (was: Open)

> Decimal64: NVL in vectorization miss NPE with CBO on
> 
>
> Key: HIVE-22829
> URL: https://issues.apache.org/jira/browse/HIVE-22829
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Gopal Vijayaraghavan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-22829.3.patch, HIVE-22829.4.patch
>
>
> {code}
> select  
> sum(NVL(ss_sales_price, 1.0BD))
> from store_sales where ss_sold_date_sk %  = 1;
> {code}
> {code}
> | notVectorizedReason: exception: 
> java.lang.NullPointerException stack trace: 
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4754),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4687),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.vectorizeSelectOperator(Vectorizer.java:4669),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperator(Vectorizer.java:5269),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChild(Vectorizer.java:977),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChildren(Vectorizer.java:864),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperatorTree(Vectorizer.java:834),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.access$2500(Vectorizer.java:245),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:2103),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:2055),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapWork(Vectorizer.java:2030),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.convertMapWork(Vectorizer.java:1185),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.dispatch(Vectorizer.java:1017),
>  
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111),
>  
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:180), 
> ... |
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22829) Decimal64: NVL in vectorization miss NPE with CBO on

2020-03-05 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-22829:

Status: Open  (was: Patch Available)

> Decimal64: NVL in vectorization miss NPE with CBO on
> 
>
> Key: HIVE-22829
> URL: https://issues.apache.org/jira/browse/HIVE-22829
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Gopal Vijayaraghavan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-22829.3.patch, HIVE-22829.4.patch
>
>
> {code}
> select  
> sum(NVL(ss_sales_price, 1.0BD))
> from store_sales where ss_sold_date_sk %  = 1;
> {code}
> {code}
> | notVectorizedReason: exception: 
> java.lang.NullPointerException stack trace: 
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4754),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4687),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.vectorizeSelectOperator(Vectorizer.java:4669),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperator(Vectorizer.java:5269),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChild(Vectorizer.java:977),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChildren(Vectorizer.java:864),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperatorTree(Vectorizer.java:834),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.access$2500(Vectorizer.java:245),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:2103),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:2055),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapWork(Vectorizer.java:2030),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.convertMapWork(Vectorizer.java:1185),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.dispatch(Vectorizer.java:1017),
>  
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111),
>  
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:180), 
> ... |
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22983) Fix the comments on ConstantPropagate

2020-03-05 Thread Zhihua Deng (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihua Deng updated HIVE-22983:
---
Attachment: (was: HIVE-22983.patch)

> Fix the comments on ConstantPropagate
> -
>
> Key: HIVE-22983
> URL: https://issues.apache.org/jira/browse/HIVE-22983
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Minor
> Attachments: HIVE-22983.patch
>
>
> ConstantPropagate traverses the DAG from root to children; a child is not 
> processed until all of its parents have been visited.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22983) Fix the comments on ConstantPropagate

2020-03-05 Thread Zhihua Deng (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihua Deng updated HIVE-22983:
---
Attachment: HIVE-22983.patch

> Fix the comments on ConstantPropagate
> -
>
> Key: HIVE-22983
> URL: https://issues.apache.org/jira/browse/HIVE-22983
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Minor
> Attachments: HIVE-22983.patch
>
>
> ConstantPropagate traverses the DAG from root to children; a child is not 
> processed until all of its parents have been visited.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22983) Fix the comments on ConstantPropagate

2020-03-05 Thread Zhihua Deng (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihua Deng updated HIVE-22983:
---
Status: Open  (was: Patch Available)

> Fix the comments on ConstantPropagate
> -
>
> Key: HIVE-22983
> URL: https://issues.apache.org/jira/browse/HIVE-22983
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Minor
> Attachments: HIVE-22983.patch
>
>
> ConstantPropagate traverses the DAG from root to children; a child is not 
> processed until all of its parents have been visited.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22983) Fix the comments on ConstantPropagate

2020-03-05 Thread Zhihua Deng (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihua Deng updated HIVE-22983:
---
Status: Patch Available  (was: Open)

> Fix the comments on ConstantPropagate
> -
>
> Key: HIVE-22983
> URL: https://issues.apache.org/jira/browse/HIVE-22983
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Minor
> Attachments: HIVE-22983.patch
>
>
> ConstantPropagate traverses the DAG from root to children; a child is not 
> processed until all of its parents have been visited.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22983) Fix the comments on ConstantPropagate

2020-03-05 Thread Zhihua Deng (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihua Deng updated HIVE-22983:
---
Attachment: (was: HIVE-22983.1.patch)

> Fix the comments on ConstantPropagate
> -
>
> Key: HIVE-22983
> URL: https://issues.apache.org/jira/browse/HIVE-22983
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Minor
> Attachments: HIVE-22983.patch
>
>
> ConstantPropagate traverses the DAG from root to children; a child is not 
> processed until all of its parents have been visited.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22983) Fix the comments on ConstantPropagate

2020-03-05 Thread Zhihua Deng (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihua Deng updated HIVE-22983:
---
Attachment: HIVE-22983.patch

> Fix the comments on ConstantPropagate
> -
>
> Key: HIVE-22983
> URL: https://issues.apache.org/jira/browse/HIVE-22983
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Minor
> Attachments: HIVE-22983.patch
>
>
> ConstantPropagate traverses the DAG from root to children; a child is not 
> processed until all of its parents have been visited.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-22986) Prevent Decimal64 to Decimal conversion when other operations support Decimal64

2020-03-05 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan reassigned HIVE-22986:
---


> Prevent Decimal64 to Decimal conversion when other operations support 
> Decimal64
> ---
>
> Key: HIVE-22986
> URL: https://issues.apache.org/jira/browse/HIVE-22986
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
>
> Prevent Decimal64 to Decimal conversion when other operations support 
> Decimal64



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22972) Allow table id to be set for table creation requests

2020-03-05 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-22972:
--
Attachment: HIVE-22972.03.patch

> Allow table id to be set for table creation requests
> 
>
> Key: HIVE-22972
> URL: https://issues.apache.org/jira/browse/HIVE-22972
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22972.01.patch, HIVE-22972.02.patch, 
> HIVE-22972.03.patch
>
>
> Hive Metastore should accept requests for table creation where the id is set, 
> ignoring it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22972) Allow table id to be set for table creation requests

2020-03-05 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-22972:
--
Attachment: (was: HIVE-22972.03.patch)

> Allow table id to be set for table creation requests
> 
>
> Key: HIVE-22972
> URL: https://issues.apache.org/jira/browse/HIVE-22972
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22972.01.patch, HIVE-22972.02.patch, 
> HIVE-22972.03.patch
>
>
> Hive Metastore should accept requests for table creation where the id is set, 
> ignoring it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22983) Fix the comments on ConstantPropagate

2020-03-05 Thread Zhihua Deng (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihua Deng updated HIVE-22983:
---
Summary: Fix the comments on ConstantPropagate  (was: Address the comments 
on ConstantPropagate)

> Fix the comments on ConstantPropagate
> -
>
> Key: HIVE-22983
> URL: https://issues.apache.org/jira/browse/HIVE-22983
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Minor
> Attachments: HIVE-22983.1.patch
>
>
> ConstantPropagate traverses the DAG from root to children; a child is not 
> processed until all of its parents have been visited.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22972) Allow table id to be set for table creation requests

2020-03-05 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052602#comment-17052602
 ] 

Hive QA commented on HIVE-22972:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12995684/HIVE-22972.03.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 18101 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_llap_counters1]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_llap_counters]
 (batchId=165)
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query20]
 (batchId=306)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20969/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20969/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20969/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12995684 - PreCommit-HIVE-Build

> Allow table id to be set for table creation requests
> 
>
> Key: HIVE-22972
> URL: https://issues.apache.org/jira/browse/HIVE-22972
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22972.01.patch, HIVE-22972.02.patch, 
> HIVE-22972.03.patch
>
>
> Hive Metastore should accept requests for table creation where the id is set, 
> ignoring it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22945) Hive ACID Data Corruption: Update command mess the other column data and produces incorrect result

2020-03-05 Thread Denys Kuzmenko (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko updated HIVE-22945:
--
Status: Patch Available  (was: Open)

> Hive ACID Data Corruption: Update command mess the other column data and 
> produces incorrect result
> --
>
> Key: HIVE-22945
> URL: https://issues.apache.org/jira/browse/HIVE-22945
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Transactions
>Affects Versions: 3.2.0
>Reporter: Rajkumar Singh
>Assignee: Denys Kuzmenko
>Priority: Critical
> Attachments: HIVE-22945.1.patch
>
>
> Hive's UPDATE operation updates the other column incorrectly and produces 
> incorrect results:
> Steps to reproduce:
> {code:java}
> CREATE TABLE `test`(
>   `start_dt` timestamp, 
>   `stop_dt` timestamp
>   );
>   
> INSERT INTO test (start_dt, stop_dt) SELECT  CURRENT_TIMESTAMP, CAST(NULL AS 
> TIMESTAMP);
> select * from test; 
> +--+---+
> |  test.start_dt   | test.stop_dt  |
> +--+---+
> | 2020-02-28 20:06:29.116  | NULL  |
> +--+---+
> UPDATE test SET STOP_DT = CURRENT_TIMESTAMP WHERE CAST(START_DT AS DATE) = 
> CURRENT_DATE;
> ++--+
> | test.start_dt  |   test.stop_dt   |
> ++--+
> | 2020-02-28 00:00:00.0  | 2020-02-28 20:07:12.248  |
> ++--+
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22945) Hive ACID Data Corruption: Update command mess the other column data and produces incorrect result

2020-03-05 Thread Denys Kuzmenko (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko updated HIVE-22945:
--
Attachment: HIVE-22945.1.patch

> Hive ACID Data Corruption: Update command mess the other column data and 
> produces incorrect result
> --
>
> Key: HIVE-22945
> URL: https://issues.apache.org/jira/browse/HIVE-22945
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Transactions
>Affects Versions: 3.2.0
>Reporter: Rajkumar Singh
>Assignee: Denys Kuzmenko
>Priority: Critical
> Attachments: HIVE-22945.1.patch
>
>
> Hive's UPDATE operation updates the other column incorrectly and produces 
> incorrect results:
> Steps to reproduce:
> {code:java}
> CREATE TABLE `test`(
>   `start_dt` timestamp, 
>   `stop_dt` timestamp
>   );
>   
> INSERT INTO test (start_dt, stop_dt) SELECT  CURRENT_TIMESTAMP, CAST(NULL AS 
> TIMESTAMP);
> select * from test; 
> +--+---+
> |  test.start_dt   | test.stop_dt  |
> +--+---+
> | 2020-02-28 20:06:29.116  | NULL  |
> +--+---+
> UPDATE test SET STOP_DT = CURRENT_TIMESTAMP WHERE CAST(START_DT AS DATE) = 
> CURRENT_DATE;
> ++--+
> | test.start_dt  |   test.stop_dt   |
> ++--+
> | 2020-02-28 00:00:00.0  | 2020-02-28 20:07:12.248  |
> ++--+
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-22945) Hive ACID Data Corruption: Update command mess the other column data and produces incorrect result

2020-03-05 Thread Denys Kuzmenko (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko reassigned HIVE-22945:
-

Assignee: Denys Kuzmenko

> Hive ACID Data Corruption: Update command mess the other column data and 
> produces incorrect result
> --
>
> Key: HIVE-22945
> URL: https://issues.apache.org/jira/browse/HIVE-22945
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Transactions
>Affects Versions: 3.2.0
>Reporter: Rajkumar Singh
>Assignee: Denys Kuzmenko
>Priority: Critical
> Attachments: HIVE-22945.1.patch
>
>
> Hive's UPDATE operation updates the other column incorrectly and produces 
> incorrect results:
> Steps to reproduce:
> {code:java}
> CREATE TABLE `test`(
>   `start_dt` timestamp, 
>   `stop_dt` timestamp
>   );
>   
> INSERT INTO test (start_dt, stop_dt) SELECT  CURRENT_TIMESTAMP, CAST(NULL AS 
> TIMESTAMP);
> select * from test; 
> +--+---+
> |  test.start_dt   | test.stop_dt  |
> +--+---+
> | 2020-02-28 20:06:29.116  | NULL  |
> +--+---+
> UPDATE test SET STOP_DT = CURRENT_TIMESTAMP WHERE CAST(START_DT AS DATE) = 
> CURRENT_DATE;
> ++--+
> | test.start_dt  |   test.stop_dt   |
> ++--+
> | 2020-02-28 00:00:00.0  | 2020-02-28 20:07:12.248  |
> ++--+
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22972) Allow table id to be set for table creation requests

2020-03-05 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052567#comment-17052567
 ] 

Hive QA commented on HIVE-22972:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
17s{color} | {color:blue} standalone-metastore/metastore-server in master has 
185 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20969/dev-support/hive-personality.sh
 |
| git revision | master / 9b3ef2b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20969/yetus/patch-asflicense-problems.txt
 |
| modules | C: standalone-metastore/metastore-server U: 
standalone-metastore/metastore-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20969/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Allow table id to be set for table creation requests
> 
>
> Key: HIVE-22972
> URL: https://issues.apache.org/jira/browse/HIVE-22972
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22972.01.patch, HIVE-22972.02.patch, 
> HIVE-22972.03.patch
>
>
> Hive Metastore should accept requests for table creation where the id is set, 
> ignoring it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21218) KafkaSerDe doesn't support topics created via Confluent Avro serializer

2020-03-05 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052547#comment-17052547
 ] 

Hive QA commented on HIVE-21218:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12995749/HIVE-21218.7.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 18101 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniHiveKafkaCliDriver.testCliDriver[kafka_storage_handler]
 (batchId=306)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20968/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20968/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20968/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12995749 - PreCommit-HIVE-Build

> KafkaSerDe doesn't support topics created via Confluent Avro serializer
> ---
>
> Key: HIVE-21218
> URL: https://issues.apache.org/jira/browse/HIVE-21218
> Project: Hive
>  Issue Type: Bug
>  Components: kafka integration, Serializers/Deserializers
>Affects Versions: 3.1.1
>Reporter: Milan Baran
>Assignee: David McGinnis
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21218.2.patch, HIVE-21218.3.patch, 
> HIVE-21218.4.patch, HIVE-21218.5.patch, HIVE-21218.6.patch, 
> HIVE-21218.7.patch, HIVE-21218.patch
>
>  Time Spent: 13h
>  Remaining Estimate: 0h
>
> According to [Google 
> groups|https://groups.google.com/forum/#!topic/confluent-platform/JYhlXN0u9_A] 
> the Confluent Avro serializer uses a proprietary format for the Kafka value: 
> <magic byte><4 bytes of schema ID><Avro-serialized payload that conforms to 
> the schema>. 
> This format does not cause any problem for the Confluent Kafka deserializer, 
> which respects the format; however, for the Hive Kafka handler it is a problem 
> to correctly deserialize the Kafka value, because Hive uses a custom 
> deserializer from bytes to objects and ignores the Kafka consumer ser/deser 
> classes provided via table properties.
> It would be nice to support the Confluent format with the magic byte.
> Also it would be great to support Schema Registry as well.
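A minimal sketch of stripping the Confluent framing described above (magic byte,
4-byte schema id, then the Avro payload). {{ConfluentWireFormat}} is a
hypothetical helper for illustration only and is not part of the attached
patches.

{code:java}
import java.nio.ByteBuffer;

final class ConfluentWireFormat {
  static final byte MAGIC_BYTE = 0x0;

  /** Reads the framing and leaves {@code buf} positioned at the Avro payload. */
  static int readSchemaId(ByteBuffer buf) {
    byte magic = buf.get();
    if (magic != MAGIC_BYTE) {
      throw new IllegalArgumentException("Not a Confluent-framed record, magic=" + magic);
    }
    return buf.getInt(); // 4-byte big-endian schema id
  }
}
{code}

A deserializer could call {{readSchemaId(ByteBuffer.wrap(kafkaValue))}}, resolve
the returned id against Schema Registry, and then decode the remaining bytes of
the buffer as Avro.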



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21851) FireEventResponse should include event id when available

2020-03-05 Thread Vihang Karajgaonkar (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052546#comment-17052546
 ] 

Vihang Karajgaonkar commented on HIVE-21851:


Reattaching the patch.

> FireEventResponse should include event id when available
> 
>
> Key: HIVE-21851
> URL: https://issues.apache.org/jira/browse/HIVE-21851
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Minor
> Attachments: HIVE-21851.01.patch, HIVE-21851.02.patch, 
> HIVE-21851.03.patch, HIVE-21851.04.patch, HIVE-21851.05.patch, 
> HIVE-21851.06.patch
>
>
> The metastore API {{fire_listener_event}} gives clients the ability to fire an 
> INSERT event on DML operations. However, the returned response is an empty 
> struct. It would be useful to send back the event id information in the 
> response so that clients can take actions based on the event id.
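A hedged sketch of how a client could use the proposed response field.
{{fireListenerEvent}}, {{FireEventRequest}}, {{FireEventRequestData}},
{{InsertEventRequestData}} and {{FireEventResponse}} are the existing metastore
API; {{getEventId()}} is hypothetical, since the accessor added by the actual
patch may be named differently.

{code:java}
import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.api.FireEventRequest;
import org.apache.hadoop.hive.metastore.api.FireEventRequestData;
import org.apache.hadoop.hive.metastore.api.FireEventResponse;
import org.apache.hadoop.hive.metastore.api.InsertEventRequestData;

final class FireEventExample {

  /** Fires an INSERT event for db.table and returns the id the server assigned to it. */
  static long fireInsertAndGetId(IMetaStoreClient client, String db, String table)
      throws Exception {
    FireEventRequestData data = new FireEventRequestData();
    data.setInsertData(new InsertEventRequestData());

    FireEventRequest rqst = new FireEventRequest(true /* successful */, data);
    rqst.setDbName(db);
    rqst.setTableName(table);

    FireEventResponse response = client.fireListenerEvent(rqst);
    // Hypothetical accessor proposed by HIVE-21851; today the response carries no id.
    return response.getEventId();
  }
}
{code}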



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-21851) FireEventResponse should include event id when available

2020-03-05 Thread Vihang Karajgaonkar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-21851:
---
Attachment: HIVE-21851.06.patch

> FireEventResponse should include event id when available
> 
>
> Key: HIVE-21851
> URL: https://issues.apache.org/jira/browse/HIVE-21851
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Minor
> Attachments: HIVE-21851.01.patch, HIVE-21851.02.patch, 
> HIVE-21851.03.patch, HIVE-21851.04.patch, HIVE-21851.05.patch, 
> HIVE-21851.06.patch
>
>
> The metastore API {{fire_listener_event}} gives clients the ability to fire an 
> INSERT event on DML operations. However, the returned response is an empty 
> struct. It would be useful to send back the event id information in the 
> response so that clients can take actions based on the event id.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22955) PreUpgradeTool can fail because access to CharsetDecoder is not synchronized

2020-03-05 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-22955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hankó Gergely updated HIVE-22955:
-
  Component/s: Transactions
Affects Version/s: 4.0.0
  Description: 
{code:java}
2020-02-26 20:22:49,683 ERROR [main] acid.PreUpgradeTool 
(PreUpgradeTool.java:main(150)) - PreUpgradeTool failed 
org.apache.hadoop.hive.ql.metadata.HiveException at 
org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.prepareAcidUpgradeInternal(PreUpgradeTool.java:283)
 at 
org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.main(PreUpgradeTool.java:146)
 Caused by: java.lang.RuntimeException: 
java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
java.lang.RuntimeException: java.lang.RuntimeException: 
java.lang.IllegalStateException: Current state = RESET, new state = FLUSHED
...
Caused by: java.lang.IllegalStateException: Current state = RESET, new state = 
FLUSHED at 
java.nio.charset.CharsetDecoder.throwIllegalStateException(CharsetDecoder.java:992)
 at java.nio.charset.CharsetDecoder.flush(CharsetDecoder.java:675) at 
java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:804) at 
org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.needsCompaction(PreUpgradeTool.java:606)
 at 
org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.needsCompaction(PreUpgradeTool.java:567)
 at 
org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.getCompactionCommands(PreUpgradeTool.java:464)
 at 
org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.processTable(PreUpgradeTool.java:374)
{code}

This is probably caused by HIVE-21948.
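A minimal sketch of the general fix pattern for the failure above, assuming the
decoder can simply be confined to one thread: CharsetDecoder is not thread-safe,
so either synchronize every use of a shared instance or give each thread its own
decoder. This is not the actual HIVE-22955 patch, and {{SafeUtf8Decode}} is a
hypothetical helper.

{code:java}
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.StandardCharsets;

final class SafeUtf8Decode {

  // One decoder per thread avoids the RESET/FLUSHED state race entirely.
  private static final ThreadLocal<CharsetDecoder> DECODER =
      ThreadLocal.withInitial(() -> StandardCharsets.UTF_8.newDecoder());

  /** Decodes the bytes as UTF-8; decode() runs the full reset/decode/flush cycle. */
  static String decode(ByteBuffer bytes) throws CharacterCodingException {
    return DECODER.get().decode(bytes).toString();
  }
}
{code}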

  was:

{code:java}
2020-02-26 20:22:49,683 ERROR [main] acid.PreUpgradeTool 
(PreUpgradeTool.java:main(150)) - PreUpgradeTool failed 
org.apache.hadoop.hive.ql.metadata.HiveException at 
org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.prepareAcidUpgradeInternal(PreUpgradeTool.java:283)
 at 
org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.main(PreUpgradeTool.java:146)
 Caused by: java.lang.RuntimeException: 
java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
java.lang.RuntimeException: java.lang.RuntimeException: 
java.lang.IllegalStateException: Current state = RESET, new state = FLUSHED
...
Caused by: java.lang.IllegalStateException: Current state = RESET, new state = 
FLUSHED at 
java.nio.charset.CharsetDecoder.throwIllegalStateException(CharsetDecoder.java:992)
 at java.nio.charset.CharsetDecoder.flush(CharsetDecoder.java:675) at 
java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:804) at 
org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.needsCompaction(PreUpgradeTool.java:606)
 at 
org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.needsCompaction(PreUpgradeTool.java:567)
 at 
org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.getCompactionCommands(PreUpgradeTool.java:464)
 at 
org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.processTable(PreUpgradeTool.java:374)
{code}

This is probably caused by HIVE-21948.


> PreUpgradeTool can fail because access to CharsetDecoder is not synchronized
> 
>
> Key: HIVE-22955
> URL: https://issues.apache.org/jira/browse/HIVE-22955
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Hankó Gergely
>Assignee: Hankó Gergely
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22955.1.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:java}
> 2020-02-26 20:22:49,683 ERROR [main] acid.PreUpgradeTool 
> (PreUpgradeTool.java:main(150)) - PreUpgradeTool failed 
> org.apache.hadoop.hive.ql.metadata.HiveException at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.prepareAcidUpgradeInternal(PreUpgradeTool.java:283)
>  at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.main(PreUpgradeTool.java:146)
>  Caused by: java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
> java.lang.RuntimeException: java.lang.RuntimeException: 
> java.lang.IllegalStateException: Current state = RESET, new state = FLUSHED
> ...
> Caused by: java.lang.IllegalStateException: Current state = RESET, new state 
> = FLUSHED at 
> java.nio.charset.CharsetDecoder.throwIllegalStateException(CharsetDecoder.java:992)
>  at java.nio.charset.CharsetDecoder.flush(CharsetDecoder.java:675) at 
> java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:804) at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.needsCompaction(PreUpgradeTool.java:606)
>  at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.needsCompaction(PreUpgradeTool.java:567)
>  at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.getCompactionCommands(PreUpgradeTool.java:464)
>  at 
> 

[jira] [Commented] (HIVE-21218) KafkaSerDe doesn't support topics created via Confluent Avro serializer

2020-03-05 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052521#comment-17052521
 ] 

Hive QA commented on HIVE-21218:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
56s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
40s{color} | {color:blue} serde in master has 197 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
15s{color} | {color:red} kafka-handler in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
17s{color} | {color:red} kafka-handler in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 17s{color} 
| {color:red} kafka-handler in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
10s{color} | {color:red} kafka-handler: The patch generated 27 new + 1 
unchanged - 0 fixed = 28 total (was 1) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 10 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
16s{color} | {color:red} kafka-handler in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  findbugs  
checkstyle  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20968/dev-support/hive-personality.sh
 |
| git revision | master / 9b3ef2b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20968/yetus/patch-mvninstall-kafka-handler.txt
 |
| compile | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20968/yetus/patch-compile-kafka-handler.txt
 |
| javac | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20968/yetus/patch-compile-kafka-handler.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20968/yetus/diff-checkstyle-kafka-handler.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20968/yetus/whitespace-eol.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20968/yetus/whitespace-tabs.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20968/yetus/patch-findbugs-kafka-handler.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20968/yetus/patch-asflicense-problems.txt
 |
| modules | C: serde kafka-handler U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20968/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> KafkaSerDe doesn't support topics created via Confluent Avro serializer
> 

[jira] [Commented] (HIVE-22978) Fix decimal precision and scale inference for aggregate rewriting in Calcite

2020-03-05 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052500#comment-17052500
 ] 

Hive QA commented on HIVE-22978:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12995679/HIVE-22978.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 95 failed/errored test(s), 18101 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[decimal_precision] 
(batchId=60)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[decimal_udf] (batchId=10)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_cast_constant] 
(batchId=10)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_decimal_aggregate]
 (batchId=21)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_decimal_precision]
 (batchId=58)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[external_jdbc_table_perf]
 (batchId=192)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_scalar]
 (batchId=178)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_cast_constant]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_decimal_aggregate]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_decimal_precision]
 (batchId=181)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_decimal_udf]
 (batchId=194)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_groupby_grouping_sets3_dec]
 (batchId=189)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_scalar] 
(batchId=136)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vector_cast_constant]
 (batchId=122)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vector_decimal_aggregate]
 (batchId=126)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query13] 
(batchId=308)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query18] 
(batchId=308)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query1] 
(batchId=308)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query24] 
(batchId=308)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query26] 
(batchId=308)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query27] 
(batchId=308)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query28] 
(batchId=308)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query30] 
(batchId=308)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query32] 
(batchId=308)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query44] 
(batchId=308)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query65] 
(batchId=308)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query6] 
(batchId=308)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query7] 
(batchId=308)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query81] 
(batchId=308)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query85] 
(batchId=308)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query92] 
(batchId=308)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query9] 
(batchId=308)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_ext_query1] 
(batchId=306)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query13] 
(batchId=306)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query14] 
(batchId=306)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query18] 
(batchId=306)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query1] 
(batchId=306)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query22] 
(batchId=306)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query24] 
(batchId=306)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query26] 
(batchId=306)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query27] 
(batchId=306)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query28] 
(batchId=306)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query30] 
(batchId=306)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query32] 
(batchId=306)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query65] 
(batchId=306)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query6] 
(batchId=306)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query7] 
(batchId=306)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query81] 
(batchId=306)

[jira] [Work logged] (HIVE-21218) KafkaSerDe doesn't support topics created via Confluent Avro serializer

2020-03-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21218?focusedWorklogId=398661=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-398661
 ]

ASF GitHub Bot logged work on HIVE-21218:
-

Author: ASF GitHub Bot
Created on: 05/Mar/20 20:19
Start Date: 05/Mar/20 20:19
Worklog Time Spent: 10m 
  Work Description: davidov541 commented on pull request #933: HIVE-21218: 
Adding support for Confluent Kafka Avro message format
URL: https://github.com/apache/hive/pull/933#discussion_r388540929
 
 

 ##
 File path: kafka-handler/src/java/org/apache/hadoop/hive/kafka/KafkaSerDe.java
 ##
 @@ -133,12 +134,44 @@
   Preconditions.checkArgument(!schemaFromProperty.isEmpty(), "Avro Schema 
is empty Can not go further");
   Schema schema = AvroSerdeUtils.getSchemaFor(schemaFromProperty);
   LOG.debug("Building Avro Reader with schema {}", schemaFromProperty);
-  bytesConverter = new AvroBytesConverter(schema);
+  bytesConverter = getByteConverterForAvroDelegate(schema, tbl);
 } else {
   bytesConverter = new BytesWritableConverter();
 }
   }
 
+  enum BytesConverterType {
+SKIP,
+NONE;
+
+static BytesConverterType fromString(String value) {
+  try {
+return BytesConverterType.valueOf(value.trim().toUpperCase());
+  } catch (Exception e){
+return NONE;
+  }
+}
+  }
+
+  BytesConverter getByteConverterForAvroDelegate(Schema schema, Properties 
tbl) throws SerDeException {
+String avroBytesConverterPropertyName = 
AvroSerdeUtils.AvroTableProperties.AVRO_SERDE_TYPE.getPropName();
+String avroBytesConverterProperty = 
tbl.getProperty(avroBytesConverterPropertyName, 
+  BytesConverterType.NONE.toString());
+BytesConverterType avroByteConverterType = 
BytesConverterType.fromString(avroBytesConverterProperty);
+String avroSkipBytesPropertyName = 
AvroSerdeUtils.AvroTableProperties.AVRO_SERDE_SKIP_BYTES.getPropName();
+Integer avroSkipBytes = 0;
+try {
+  Integer.parseInt(tbl.getProperty(avroSkipBytesPropertyName));
 
 Review comment:
   Dangit, you're right. I'll fix this and get a test for this too, since we 
should be catching these sorts of things in tests. I've got an old build around 
here of a Hive test cluster. I'll see if I can bring that up and give it a try.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 398661)
Time Spent: 13h  (was: 12h 50m)

> KafkaSerDe doesn't support topics created via Confluent Avro serializer
> ---
>
> Key: HIVE-21218
> URL: https://issues.apache.org/jira/browse/HIVE-21218
> Project: Hive
>  Issue Type: Bug
>  Components: kafka integration, Serializers/Deserializers
>Affects Versions: 3.1.1
>Reporter: Milan Baran
>Assignee: David McGinnis
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21218.2.patch, HIVE-21218.3.patch, 
> HIVE-21218.4.patch, HIVE-21218.5.patch, HIVE-21218.6.patch, 
> HIVE-21218.7.patch, HIVE-21218.patch
>
>  Time Spent: 13h
>  Remaining Estimate: 0h
>
> According to [Google 
> groups|https://groups.google.com/forum/#!topic/confluent-platform/JYhlXN0u9_A]
>  the Confluent Avro serializer uses a proprietary format for the Kafka value: 
> <magic byte> <4 bytes of schema ID> <data that conforms to schema>. 
> This format does not cause any problem for the Confluent Kafka deserializer, which 
> respects the format; however, for the Hive Kafka handler it is a bit of a problem to 
> correctly deserialize the Kafka value, because Hive uses a custom deserializer from 
> bytes to objects and ignores the Kafka consumer ser/deser classes provided via 
> table property.
> It would be nice to support the Confluent format with the magic byte.
> Also it would be great to support the Schema Registry as well.
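For reference, a minimal sketch of the framing described above (an illustration only, not Hive's KafkaSerDe and not the Confluent client; the class and method names here are made up): strip the magic byte and the 4-byte schema ID, then hand the remaining bytes to an Avro decoder.
{code:java}
// Sketch only: split a Confluent-framed Kafka value into its header and Avro payload.
// Wire format assumed from the description above: 1 magic byte (0x0), then a 4-byte
// big-endian schema registry ID, then the Avro-encoded data.
import java.nio.ByteBuffer;

public class ConfluentFramingSketch {
  public static ByteBuffer avroPayload(byte[] kafkaValue) {
    ByteBuffer buf = ByteBuffer.wrap(kafkaValue);
    byte magic = buf.get();            // expected to be 0x0 for Confluent-framed records
    if (magic != 0) {
      throw new IllegalArgumentException("Not a Confluent-framed Avro record");
    }
    int schemaId = buf.getInt();       // a real reader would resolve this against the Schema Registry
    return buf.slice();                // remaining bytes: Avro data conforming to that schema
  }
}
{code}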



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-21218) KafkaSerDe doesn't support topics created via Confluent Avro serializer

2020-03-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21218?focusedWorklogId=398650=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-398650
 ]

ASF GitHub Bot logged work on HIVE-21218:
-

Author: ASF GitHub Bot
Created on: 05/Mar/20 20:09
Start Date: 05/Mar/20 20:09
Worklog Time Spent: 10m 
  Work Description: b-slim commented on pull request #933: HIVE-21218: 
Adding support for Confluent Kafka Avro message format
URL: https://github.com/apache/hive/pull/933#discussion_r388535436
 
 

 ##
 File path: kafka-handler/src/java/org/apache/hadoop/hive/kafka/KafkaSerDe.java
 ##
 @@ -133,12 +134,44 @@
   Preconditions.checkArgument(!schemaFromProperty.isEmpty(), "Avro Schema 
is empty Can not go further");
   Schema schema = AvroSerdeUtils.getSchemaFor(schemaFromProperty);
   LOG.debug("Building Avro Reader with schema {}", schemaFromProperty);
-  bytesConverter = new AvroBytesConverter(schema);
+  bytesConverter = getByteConverterForAvroDelegate(schema, tbl);
 } else {
   bytesConverter = new BytesWritableConverter();
 }
   }
 
+  enum BytesConverterType {
+SKIP,
+NONE;
+
+static BytesConverterType fromString(String value) {
+  try {
+return BytesConverterType.valueOf(value.trim().toUpperCase());
+  } catch (Exception e){
+return NONE;
+  }
+}
+  }
+
+  BytesConverter getByteConverterForAvroDelegate(Schema schema, Properties 
tbl) throws SerDeException {
+String avroBytesConverterPropertyName = 
AvroSerdeUtils.AvroTableProperties.AVRO_SERDE_TYPE.getPropName();
+String avroBytesConverterProperty = 
tbl.getProperty(avroBytesConverterPropertyName, 
+  BytesConverterType.NONE.toString());
+BytesConverterType avroByteConverterType = 
BytesConverterType.fromString(avroBytesConverterProperty);
+String avroSkipBytesPropertyName = 
AvroSerdeUtils.AvroTableProperties.AVRO_SERDE_SKIP_BYTES.getPropName();
+Integer avroSkipBytes = 0;
+try {
+  Integer.parseInt(tbl.getProperty(avroSkipBytesPropertyName));
 
 Review comment:
   Seems to me that this is broken. The parsed value is never used... did you try 
this code on actual machines?
   Can you please run this code against actual Confluent-based Avro records?
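A minimal sketch of the kind of fix being asked for here, assuming the property lookup and default value shown in the diff above (an illustration, not the committed patch):
{code:java}
// Sketch only: keep the parsed skip-bytes value instead of discarding it, and fall
// back to the default when the table property is missing or malformed.
import java.util.Properties;

public final class SkipBytesParserSketch {
  static int parseSkipBytes(Properties tbl, String propertyName, int defaultValue) {
    String raw = tbl.getProperty(propertyName);
    if (raw == null) {
      return defaultValue;
    }
    try {
      return Integer.parseInt(raw.trim());
    } catch (NumberFormatException e) {
      return defaultValue;   // tolerate bad values rather than failing SerDe initialization
    }
  }
}
{code}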
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 398650)
Time Spent: 12h 50m  (was: 12h 40m)

> KafkaSerDe doesn't support topics created via Confluent Avro serializer
> ---
>
> Key: HIVE-21218
> URL: https://issues.apache.org/jira/browse/HIVE-21218
> Project: Hive
>  Issue Type: Bug
>  Components: kafka integration, Serializers/Deserializers
>Affects Versions: 3.1.1
>Reporter: Milan Baran
>Assignee: David McGinnis
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21218.2.patch, HIVE-21218.3.patch, 
> HIVE-21218.4.patch, HIVE-21218.5.patch, HIVE-21218.6.patch, 
> HIVE-21218.7.patch, HIVE-21218.patch
>
>  Time Spent: 12h 50m
>  Remaining Estimate: 0h
>
> According to [Google 
> groups|https://groups.google.com/forum/#!topic/confluent-platform/JYhlXN0u9_A]
>  the Confluent Avro serializer uses a proprietary format for the Kafka value: 
> <magic byte> <4 bytes of schema ID> <data that conforms to schema>. 
> This format does not cause any problem for the Confluent Kafka deserializer, which 
> respects the format; however, for the Hive Kafka handler it is a bit of a problem to 
> correctly deserialize the Kafka value, because Hive uses a custom deserializer from 
> bytes to objects and ignores the Kafka consumer ser/deser classes provided via 
> table property.
> It would be nice to support the Confluent format with the magic byte.
> Also it would be great to support the Schema Registry as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22978) Fix decimal precision and scale inference for aggregate rewriting in Calcite

2020-03-05 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052459#comment-17052459
 ] 

Hive QA commented on HIVE-22978:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
48s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
41s{color} | {color:red} ql: The patch generated 4 new + 19 unchanged - 1 fixed 
= 23 total (was 20) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20967/dev-support/hive-personality.sh
 |
| git revision | master / 9b3ef2b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20967/yetus/diff-checkstyle-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20967/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20967/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Fix decimal precision and scale inference for aggregate rewriting in Calcite
> 
>
> Key: HIVE-22978
> URL: https://issues.apache.org/jira/browse/HIVE-22978
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-22978.patch
>
>
> Calcite rules can do rewritings of aggregate functions, e.g., {{avg}} into 
> {{sum/count}}. When the type of {{avg}} is decimal, inference of the intermediate 
> precision and scale for the division is not done correctly. The reason is 
> that we miss support for some types in the {{getDefaultPrecision}} method in 
> {{HiveTypeSystemImpl}}. Additionally, {{deriveSumType}} should be overridden 
> in {{HiveTypeSystemImpl}} to abide by the Hive semantics for sum aggregate 
> type inference.
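For context, a rough illustration of the sum-type widening the description refers to: when {{avg}} is rewritten as {{sum/count}}, the intermediate decimal type of the sum has to be widened before the division's precision and scale are inferred. The widening rule below (precision + 10, capped at 38, scale unchanged) is an assumption made for the example, not a quote of the patch:
{code:java}
// Illustration only, with an assumed widening rule (precision + 10, capped at 38).
public final class DecimalSumTypeExample {
  static final int MAX_DECIMAL_PRECISION = 38;   // assumed system maximum

  static int[] deriveSumType(int precision, int scale) {
    return new int[] { Math.min(MAX_DECIMAL_PRECISION, precision + 10), scale };
  }

  public static void main(String[] args) {
    // avg(col DECIMAL(10,2)) rewritten as sum(col) / count(col):
    int[] sumType = deriveSumType(10, 2);
    System.out.printf("sum intermediate type: DECIMAL(%d,%d)%n", sumType[0], sumType[1]);
  }
}
{code}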



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22762) Leap day is incorrectly parsed during cast in Hive

2020-03-05 Thread Karen Coppage (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052454#comment-17052454
 ] 

Karen Coppage commented on HIVE-22762:
--

Gotcha, latest patch (06) has:
{code}for (Pair pair : tokenValueList) {
 TemporalField temporalField = pair.getLeft().temporalField;
 int value = pair.getRight();
{code}

> Leap day is incorrectly parsed during cast in Hive
> --
>
> Key: HIVE-22762
> URL: https://issues.apache.org/jira/browse/HIVE-22762
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22762.01.patch, HIVE-22762.01.patch, 
> HIVE-22762.01.patch, HIVE-22762.01.patch, HIVE-22762.02.patch, 
> HIVE-22762.03.patch, HIVE-22762.03.patch, HIVE-22762.04.patch, 
> HIVE-22762.05.patch, HIVE-22762.06.patch
>
>
> While casting a string to a date with a custom date format that has the day token 
> before the year and month tokens, the date is parsed incorrectly for leap days.
> h3. How to reproduce
> Execute {code}select cast("29 02 0" as date format "dd mm rr"){code} with 
> Hive. The query results in *2020-02-28*, incorrectly.
> 
> Another cast with a slightly modified representation of the 
> date (the day is preceded by the year and month tokens) is, however, parsed correctly:
> {code}select cast("0 02 29" as date format "rr mm dd"){code}
> It returns *2020-02-29*.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22762) Leap day is incorrectly parsed during cast in Hive

2020-03-05 Thread Karen Coppage (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-22762:
-
Attachment: HIVE-22762.06.patch

> Leap day is incorrectly parsed during cast in Hive
> --
>
> Key: HIVE-22762
> URL: https://issues.apache.org/jira/browse/HIVE-22762
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22762.01.patch, HIVE-22762.01.patch, 
> HIVE-22762.01.patch, HIVE-22762.01.patch, HIVE-22762.02.patch, 
> HIVE-22762.03.patch, HIVE-22762.03.patch, HIVE-22762.04.patch, 
> HIVE-22762.05.patch, HIVE-22762.06.patch
>
>
> While casting a string to a date with a custom date format that has the day token 
> before the year and month tokens, the date is parsed incorrectly for leap days.
> h3. How to reproduce
> Execute {code}select cast("29 02 0" as date format "dd mm rr"){code} with 
> Hive. The query results in *2020-02-28*, incorrectly.
> 
> Another cast with a slightly modified representation of the 
> date (the day is preceded by the year and month tokens) is, however, parsed correctly:
> {code}select cast("0 02 29" as date format "rr mm dd"){code}
> It returns *2020-02-29*.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22954) Schedule Repl Load using Hive Scheduler

2020-03-05 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052433#comment-17052433
 ] 

Hive QA commented on HIVE-22954:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12995713/HIVE-22954.18.patch

{color:green}SUCCESS:{color} +1 due to 23 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 18092 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.parse.TestReplAcidTablesBootstrapWithJsonMessage.testRetryAcidTablesBootstrapFromDifferentDump
 (batchId=259)
org.apache.hadoop.hive.ql.parse.TestReplAcidTablesWithJsonMessage.testAbortTxnEvent
 (batchId=277)
org.apache.hadoop.hive.ql.parse.TestReplAcidTablesWithJsonMessage.testOpenTxnEvent
 (batchId=277)
org.apache.hadoop.hive.ql.parse.TestReplAcrossInstancesWithJsonMessageFormat.testCreateFunctionIncrementalReplication
 (batchId=268)
org.apache.hadoop.hive.ql.parse.TestReplAcrossInstancesWithJsonMessageFormat.testDropFunctionIncrementalReplication
 (batchId=268)
org.apache.hadoop.hive.ql.parse.TestReplTableMigrationWithJsonFormat.testIncrementalLoadMigrationManagedToAcidAllOp
 (batchId=275)
org.apache.hadoop.hive.ql.parse.TestReplicationScenariosAcidTables.testAbortTxnEvent
 (batchId=279)
org.apache.hadoop.hive.ql.parse.TestReplicationScenariosAcidTables.testOpenTxnEvent
 (batchId=279)
org.apache.hadoop.hive.ql.parse.TestReplicationScenariosAcidTablesBootstrap.testRetryAcidTablesBootstrapFromDifferentDump
 (batchId=257)
org.apache.hadoop.hive.ql.parse.TestReplicationScenariosAcrossInstances.testCreateFunctionIncrementalReplication
 (batchId=273)
org.apache.hadoop.hive.ql.parse.TestReplicationScenariosAcrossInstances.testDropFunctionIncrementalReplication
 (batchId=273)
org.apache.hadoop.hive.ql.parse.TestReplicationScenariosExternalTables.retryBootstrapExternalTablesFromDifferentDump
 (batchId=267)
org.apache.hadoop.hive.ql.parse.TestReplicationScenariosIncrementalLoadAcidTables.testAcidTableIncrementalReplication
 (batchId=280)
org.apache.hadoop.hive.ql.parse.TestReplicationWithTableMigration.testIncrementalLoadMigrationManagedToAcidAllOp
 (batchId=263)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20966/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20966/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20966/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 14 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12995713 - PreCommit-HIVE-Build

> Schedule Repl Load using Hive Scheduler
> ---
>
> Key: HIVE-22954
> URL: https://issues.apache.org/jira/browse/HIVE-22954
> Project: Hive
>  Issue Type: Task
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22954.01.patch, HIVE-22954.02.patch, 
> HIVE-22954.03.patch, HIVE-22954.04.patch, HIVE-22954.05.patch, 
> HIVE-22954.06.patch, HIVE-22954.07.patch, HIVE-22954.08.patch, 
> HIVE-22954.09.patch, HIVE-22954.10.patch, HIVE-22954.11.patch, 
> HIVE-22954.12.patch, HIVE-22954.13.patch, HIVE-22954.15.patch, 
> HIVE-22954.16.patch, HIVE-22954.17.patch, HIVE-22954.18.patch, 
> HIVE-22954.19.patch, HIVE-22954.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> [https://github.com/apache/hive/pull/932]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HIVE-15079) Hive cannot read Parquet string timetamps as TIMESTAMP data type

2020-03-05 Thread Siddhesh (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052426#comment-17052426
 ] 

Siddhesh edited comment on HIVE-15079 at 3/5/20, 6:50 PM:
--

I was able to solve this issue, Please refer my StackOverflow post link  
[https://stackoverflow.com/questions/60492836/timestamp-not-behaving-as-intended-with-parquet-in-hive
 
|https://stackoverflow.com/questions/60492836/timestamp-not-behaving-as-intended-with-parquet-in-hive]
 for further details.


was (Author: sid_k):
I was able to solve this kind of issue, Please refer my StackOverflow post link 
 
[https://stackoverflow.com/questions/60492836/timestamp-not-behaving-as-intended-with-parquet-in-hive
 
|https://stackoverflow.com/questions/60492836/timestamp-not-behaving-as-intended-with-parquet-in-hive]
 for further details.

> Hive cannot read Parquet string timetamps as TIMESTAMP data type
> 
>
> Key: HIVE-15079
> URL: https://issues.apache.org/jira/browse/HIVE-15079
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Sergio Peña
>Priority: Major
>
> The Hive Wiki for timestamps specifies that string timestamps can be read by 
> Hive. 
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-Timestamps
> {noformat}
> Supported conversions:
> Integer numeric types: Interpreted as UNIX timestamp in seconds
> Floating point numeric types: Interpreted as UNIX timestamp in seconds with 
> decimal precision
> Strings: JDBC compliant java.sql.Timestamp format "YYYY-MM-DD 
> HH:MM:SS.fffffffff" (9 decimal place precision)
> {noformat}
> This works fine with Text table formats, but when Parquet is used, then it 
> throws the following exception:
> {noformat}
> java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to 
> org.apache.hadoop.hive.serde2.io.TimestampWritable
> {noformat}
> How to reproduce
> {noformat}
> > create table t1 (id int, time string) stored as parquet;
> > insert into table t1 values (1,'2016-07-17 14:42:18');
> > alter table t1 replace columns (id int, time timestamp);
> > select * from t1
> {noformat}
> The above example will run fine if you use a TEXT format instead of PARQUET.
> This issue was raised on PARQUET-723



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HIVE-15079) Hive cannot read Parquet string timetamps as TIMESTAMP data type

2020-03-05 Thread Siddhesh (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052426#comment-17052426
 ] 

Siddhesh edited comment on HIVE-15079 at 3/5/20, 6:49 PM:
--

I was able to solve this kind of issue, Please refer my StackOverflow post link 
 
[https://stackoverflow.com/questions/60492836/timestamp-not-behaving-as-intended-with-parquet-in-hive
 
|https://stackoverflow.com/questions/60492836/timestamp-not-behaving-as-intended-with-parquet-in-hive]
 for further details.


was (Author: sid_k):
I was able to solve this kind of issue, Please refer my 
[stackoverflow|[https://stackoverflow.com/questions/60492836/timestamp-not-behaving-as-intended-with-parquet-in-hive]]
 link for further details.

> Hive cannot read Parquet string timetamps as TIMESTAMP data type
> 
>
> Key: HIVE-15079
> URL: https://issues.apache.org/jira/browse/HIVE-15079
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Sergio Peña
>Priority: Major
>
> The Hive Wiki for timestamps specifies that string timestamps can be read by 
> Hive. 
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-Timestamps
> {noformat}
> Supported conversions:
> Integer numeric types: Interpreted as UNIX timestamp in seconds
> Floating point numeric types: Interpreted as UNIX timestamp in seconds with 
> decimal precision
> Strings: JDBC compliant java.sql.Timestamp format "YYYY-MM-DD 
> HH:MM:SS.fffffffff" (9 decimal place precision)
> {noformat}
> This works fine with Text table formats, but when Parquet is used, then it 
> throws the following exception:
> {noformat}
> java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to 
> org.apache.hadoop.hive.serde2.io.TimestampWritable
> {noformat}
> How to reproduce
> {noformat}
> > create table t1 (id int, time string) stored as parquet;
> > insert into table t1 values (1,'2016-07-17 14:42:18');
> > alter table t1 replace columns (id int, time timestamp);
> > select * from t1
> {noformat}
> The above example will run fine if you use a TEXT format instead of PARQUET.
> This issue was raised on PARQUET-723



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-15079) Hive cannot read Parquet string timetamps as TIMESTAMP data type

2020-03-05 Thread Siddhesh (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-15079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052426#comment-17052426
 ] 

Siddhesh commented on HIVE-15079:
-

I was able to solve this kind of issue, Please refer my 
[stackoverflow|[https://stackoverflow.com/questions/60492836/timestamp-not-behaving-as-intended-with-parquet-in-hive]]
 link for further details.

> Hive cannot read Parquet string timetamps as TIMESTAMP data type
> 
>
> Key: HIVE-15079
> URL: https://issues.apache.org/jira/browse/HIVE-15079
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Sergio Peña
>Priority: Major
>
> The Hive Wiki for timestamps specifies that string timestamps can be read by 
> Hive. 
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Types#LanguageManualTypes-Timestamps
> {noformat}
> Supported conversions:
> Integer numeric types: Interpreted as UNIX timestamp in seconds
> Floating point numeric types: Interpreted as UNIX timestamp in seconds with 
> decimal precision
> Strings: JDBC compliant java.sql.Timestamp format "YYYY-MM-DD 
> HH:MM:SS.fffffffff" (9 decimal place precision)
> {noformat}
> This works fine with Text table formats, but when Parquet is used, then it 
> throws the following exception:
> {noformat}
> java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.ClassCastException: org.apache.hadoop.io.Text cannot be cast to 
> org.apache.hadoop.hive.serde2.io.TimestampWritable
> {noformat}
> How to reproduce
> {noformat}
> > create table t1 (id int, time string) stored as parquet;
> > insert into table t1 values (1,'2016-07-17 14:42:18');
> > alter table t1 replace columns (id int, time timestamp);
> > select * from t1
> {noformat}
> The above example will run fine if you use a TEXT format instead of PARQUET.
> This issue was raised on PARQUET-723



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22954) Schedule Repl Load using Hive Scheduler

2020-03-05 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052412#comment-17052412
 ] 

Hive QA commented on HIVE-22954:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
5s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
29s{color} | {color:blue} parser in master has 3 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  5m 
45s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m  
5s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
32s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} The patch parser passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} ql: The patch generated 0 new + 38 unchanged - 6 
fixed = 38 total (was 44) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} itests/hive-unit: The patch generated 0 new + 1317 
unchanged - 11 fixed = 1317 total (was 1328) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
16s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20966/dev-support/hive-personality.sh
 |
| git revision | master / 9b3ef2b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20966/yetus/patch-asflicense-problems.txt
 |
| modules | C: parser ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20966/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Schedule Repl Load using Hive Scheduler
> ---
>
> Key: HIVE-22954
> URL: https://issues.apache.org/jira/browse/HIVE-22954
> Project: Hive
>  Issue Type: Task
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22954.01.patch, HIVE-22954.02.patch, 
> HIVE-22954.03.patch, HIVE-22954.04.patch, HIVE-22954.05.patch, 
> HIVE-22954.06.patch, 

[jira] [Work logged] (HIVE-22865) Include data in replication staging directory

2020-03-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22865?focusedWorklogId=398588=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-398588
 ]

ASF GitHub Bot logged work on HIVE-22865:
-

Author: ASF GitHub Bot
Created on: 05/Mar/20 18:28
Start Date: 05/Mar/20 18:28
Worklog Time Spent: 10m 
  Work Description: pkumarsinha commented on pull request #911: HIVE-22865 
Include data in replication staging directory
URL: https://github.com/apache/hive/pull/911#discussion_r388480128
 
 

 ##
 File path: 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosExternalTables.java
 ##
 @@ -904,8 +901,20 @@ public void replicationWithTableNameContainsKeywords() 
throws Throwable {
 return 
ReplicationTestUtils.externalTableBasePathWithClause(REPLICA_EXTERNAL_BASE, 
replica);
   }
 
-  private void assertExternalFileInfo(List expected, Path 
externalTableInfoFile)
+  private void assertExternalFileInfo(List expected, String 
dumplocation) throws IOException {
+assertExternalFileInfo(expected, dumplocation, null);
+  }
+  private void assertExternalFileInfo(List expected, String 
dumplocation, String dbName)
   throws IOException {
+Path externalTableInfoFile = new Path(dumplocation, 
relativeExtInfoPath(dbName));
 ReplicationTestUtils.assertExternalFileInfo(primary, expected, 
externalTableInfoFile);
   }
+  private String relativeExtInfoPath(String dbName) {
+
+if (dbName == null) {
 
 Review comment:
   No, the location of the external table info file is different in the bootstrap and 
incremental cases.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 398588)
Time Spent: 4.5h  (was: 4h 20m)

> Include data in replication staging directory
> -
>
> Key: HIVE-22865
> URL: https://issues.apache.org/jira/browse/HIVE-22865
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22865.1.patch, HIVE-22865.10.patch, 
> HIVE-22865.11.patch, HIVE-22865.2.patch, HIVE-22865.3.patch, 
> HIVE-22865.4.patch, HIVE-22865.5.patch, HIVE-22865.6.patch, 
> HIVE-22865.7.patch, HIVE-22865.8.patch, HIVE-22865.9.patch
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22865) Include data in replication staging directory

2020-03-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22865?focusedWorklogId=398589=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-398589
 ]

ASF GitHub Bot logged work on HIVE-22865:
-

Author: ASF GitHub Bot
Created on: 05/Mar/20 18:28
Start Date: 05/Mar/20 18:28
Worklog Time Spent: 10m 
  Work Description: pkumarsinha commented on pull request #911: HIVE-22865 
Include data in replication staging directory
URL: https://github.com/apache/hive/pull/911#discussion_r388480221
 
 

 ##
 File path: 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/parse/TestReplicationScenariosAcidTablesBootstrap.java
 ##
 @@ -264,7 +266,7 @@ public void 
testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites() throw
 prepareIncAcidData(primaryDbName);
 // Perform concurrent writes. Bootstrap won't see the written data but the 
subsequent
 // incremental repl should see it. We can not inject callerVerifier since 
an incremental dump
-// would not cause an ALTER DATABASE event. Instead we piggy back on
+// would not cause an ALTER DATABASE event. Instead we piggy bEHANack on
 
 Review comment:
   Fixed
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 398589)
Time Spent: 4h 40m  (was: 4.5h)

> Include data in replication staging directory
> -
>
> Key: HIVE-22865
> URL: https://issues.apache.org/jira/browse/HIVE-22865
> Project: Hive
>  Issue Type: Task
>Reporter: PRAVIN KUMAR SINHA
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22865.1.patch, HIVE-22865.10.patch, 
> HIVE-22865.11.patch, HIVE-22865.2.patch, HIVE-22865.3.patch, 
> HIVE-22865.4.patch, HIVE-22865.5.patch, HIVE-22865.6.patch, 
> HIVE-22865.7.patch, HIVE-22865.8.patch, HIVE-22865.9.patch
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22977) Merge delta files instead of running a query in major/minor compaction

2020-03-05 Thread Gopal Vijayaraghavan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052403#comment-17052403
 ] 

Gopal Vijayaraghavan commented on HIVE-22977:
-

Actually, I had a deeper read of the patch & I see that the Merger here (an 
overloaded term) works without using the MergeFileTask for ORC.

So this does not have the 2nd problem I mentioned, but we're relying on the 
first file to tell us how to merge things & unlike FileMergeTask, this looks 
like it is running as a thread on HiveServer2?

{code}
+  private Writer setupWriter(Reader reader, Path outPath) throws IOException {
+OrcFile.WriterOptions options =
+
OrcFile.writerOptions(conf).compress(reader.getCompression()).version(reader.getFileVersion())
+
.rowIndexStride(reader.getRowIndexStride()).inspector(reader.getObjectInspector());
+if (CompressionKind.NONE != reader.getCompression()) {
+  options.bufferSize(reader.getCompressionSize()).enforceBufferSize();
+}
{code}

> Merge delta files instead of running a query in major/minor compaction
> --
>
> Key: HIVE-22977
> URL: https://issues.apache.org/jira/browse/HIVE-22977
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Pintér
>Assignee: László Pintér
>Priority: Major
> Attachments: HIVE-22977.01.patch, HIVE-22977.02.patch
>
>
> [Compaction Optimiziation]
> We should analyse the possibility to move a delta file instead of running a 
> major/minor compaction query.
> Please consider the following use cases:
>  - full acid table but only insert queries were run. This means that no 
> delete delta directories were created. Is it possible to merge the delta 
> directory contents without running a compaction query?
>  - full acid table, initiating queries through the streaming API. If there 
> are no abort transactions during the streaming, is it possible to merge the 
> delta directory contents without running a compaction query?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22126) hive-exec packaging should shade guava

2020-03-05 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052402#comment-17052402
 ] 

Hive QA commented on HIVE-22126:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  5m 
56s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
37s{color} | {color:blue} common in master has 63 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
44s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
39s{color} | {color:blue} service in master has 51 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
32s{color} | {color:blue} beeline in master has 48 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
29s{color} | {color:blue} cli in master has 9 extant Findbugs warnings. {color} 
|
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
39s{color} | {color:blue} hcatalog/core in master has 37 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
28s{color} | {color:blue} hcatalog/hcatalog-pig-adapter in master has 2 extant 
Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
28s{color} | {color:blue} hcatalog/webhcat/java-client in master has 3 extant 
Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
28s{color} | {color:blue} hcatalog/streaming in master has 11 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
28s{color} | {color:blue} streaming in master has 2 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
41s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 12m  
6s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
11s{color} | {color:red} hcatalog-pig-adapter in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
14s{color} | {color:red} java-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
19s{color} | {color:red} ql: The patch generated 1 new + 44 unchanged - 1 fixed 
= 45 total (was 45) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  4m  
2s{color} | {color:red} root: The patch generated 1 new + 50 unchanged - 1 
fixed = 51 total (was 51) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
30s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
52s{color} | {color:red} patch/common cannot run setBugDatabaseInfo from 
findbugs {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
58s{color} | {color:red} patch/ql cannot run setBugDatabaseInfo from findbugs 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
50s{color} | {color:red} patch/service cannot run 

[jira] [Comment Edited] (HIVE-22977) Merge delta files instead of running a query in major/minor compaction

2020-03-05 Thread Gopal Vijayaraghavan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052398#comment-17052398
 ] 

Gopal Vijayaraghavan edited comment on HIVE-22977 at 3/5/20, 6:15 PM:
--

This is most likely not an optimization & might make read queries worse.

{code}
HIVE_ORC_BASE_DELTA_RATIO("hive.exec.orc.base.delta.ratio", 8, "The ratio 
of base writer and\n" +
"delta writer in terms of STRIPE_SIZE and BUFFER_SIZE."),

HIVE_ORC_DELTA_STREAMING_OPTIMIZATIONS_ENABLED("hive.exec.orc.delta.streaming.optimizations.enabled",
 false,
  "Whether to enable streaming optimizations for ORC delta files. This will 
disable ORC's internal indexes,\n" +
"disable compression, enable fast encoding and disable dictionary 
encoding."),
{code}

https://github.com/apache/hive/blob/master/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java#L2043

The Stripe sizing for the deltas is 8x smaller than for the regular base files, 
with the assumption that a compactor will go fix it after inserts are done - 
merging them would result in the bad striping becoming permanent.

The streaming inserts do not write any ORC indexes for the same reason, to make 
streaming faster with the assumption that a compactor will rebuild the 
min/max/bloom when it runs in the background asynchronously. Merging stripes 
without rebuilding indexes will result in compacted data having no ability to 
do predicate push-down. 

The 10% of data in deltas can behave under-par for read throughput, but making 
these two permanent by running MergeTask instead is probably going to make the 
compactor faster and everything else slower.


was (Author: gopalv):
This is most likely not an optimization & might make read queries worse.

The Stripe sizing for the deltas is 8x smaller than for the regular base files, 
with the assumption that a compactor will go fix it after inserts are done - 
merging them would result in the bad striping becoming permanent.

The streaming inserts do not write any ORC indexes for the same reason, to make 
streaming faster with the assumption that a compactor will rebuild the 
min/max/bloom when it runs in the background asynchronously. Merging stripes 
without rebuilding indexes will result in compacted data having no ability to 
do predicate push-down. 

The 10% of data in deltas can behave under-par for read throughput, but making 
these two permanent by running MergeTask instead is probably going to make the 
compactor faster and everything else slower.

> Merge delta files instead of running a query in major/minor compaction
> --
>
> Key: HIVE-22977
> URL: https://issues.apache.org/jira/browse/HIVE-22977
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Pintér
>Assignee: László Pintér
>Priority: Major
> Attachments: HIVE-22977.01.patch, HIVE-22977.02.patch
>
>
> [Compaction Optimiziation]
> We should analyse the possibility to move a delta file instead of running a 
> major/minor compaction query.
> Please consider the following use cases:
>  - full acid table but only insert queries were run. This means that no 
> delete delta directories were created. Is it possible to merge the delta 
> directory contents without running a compaction query?
>  - full acid table, initiating queries through the streaming API. If there 
> are no abort transactions during the streaming, is it possible to merge the 
> delta directory contents without running a compaction query?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22977) Merge delta files instead of running a query in major/minor compaction

2020-03-05 Thread Gopal Vijayaraghavan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052398#comment-17052398
 ] 

Gopal Vijayaraghavan commented on HIVE-22977:
-

This is most likely not an optimization & might make read queries worse.

The Stripe sizing for the deltas is 8x smaller than for the regular base files, 
with the assumption that a compactor will go fix it after inserts are done - 
merging them would result in the bad striping becoming permanent.

The streaming inserts do not write any ORC indexes for the same reason, to make 
streaming faster with the assumption that a compactor will rebuild the 
min/max/bloom when it runs in the background asynchronously. Merging stripes 
without rebuilding indexes will result in compacted data having no ability to 
do predicate push-down. 

The 10% of data in deltas can behave under-par for read throughput, but making 
these two permanent by running MergeTask instead is probably going to make the 
compactor faster and everything else slower.

> Merge delta files instead of running a query in major/minor compaction
> --
>
> Key: HIVE-22977
> URL: https://issues.apache.org/jira/browse/HIVE-22977
> Project: Hive
>  Issue Type: Improvement
>Reporter: László Pintér
>Assignee: László Pintér
>Priority: Major
> Attachments: HIVE-22977.01.patch, HIVE-22977.02.patch
>
>
> [Compaction Optimiziation]
> We should analyse the possibility to move a delta file instead of running a 
> major/minor compaction query.
> Please consider the following use cases:
>  - full acid table but only insert queries were run. This means that no 
> delete delta directories were created. Is it possible to merge the delta 
> directory contents without running a compaction query?
>  - full acid table, initiating queries through the streaming API. If there 
> are no abort transactions during the streaming, is it possible to merge the 
> delta directory contents without running a compaction query?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22955) PreUpgradeTool can fail because access to CharsetDecoder is not synchronized

2020-03-05 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-22955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hankó Gergely updated HIVE-22955:
-
Attachment: HIVE-22955.1.patch
Status: Patch Available  (was: In Progress)

> PreUpgradeTool can fail because access to CharsetDecoder is not synchronized
> 
>
> Key: HIVE-22955
> URL: https://issues.apache.org/jira/browse/HIVE-22955
> Project: Hive
>  Issue Type: Bug
>Reporter: Hankó Gergely
>Assignee: Hankó Gergely
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22955.1.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:java}
> 2020-02-26 20:22:49,683 ERROR [main] acid.PreUpgradeTool 
> (PreUpgradeTool.java:main(150)) - PreUpgradeTool failed 
> org.apache.hadoop.hive.ql.metadata.HiveException at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.prepareAcidUpgradeInternal(PreUpgradeTool.java:283)
>  at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.main(PreUpgradeTool.java:146)
>  Caused by: java.lang.RuntimeException: 
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: 
> java.lang.RuntimeException: java.lang.RuntimeException: 
> java.lang.IllegalStateException: Current state = RESET, new state = FLUSHED
> ...
> Caused by: java.lang.IllegalStateException: Current state = RESET, new state 
> = FLUSHED at 
> java.nio.charset.CharsetDecoder.throwIllegalStateException(CharsetDecoder.java:992)
>  at java.nio.charset.CharsetDecoder.flush(CharsetDecoder.java:675) at 
> java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:804) at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.needsCompaction(PreUpgradeTool.java:606)
>  at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.needsCompaction(PreUpgradeTool.java:567)
>  at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.getCompactionCommands(PreUpgradeTool.java:464)
>  at 
> org.apache.hadoop.hive.upgrade.acid.PreUpgradeTool.processTable(PreUpgradeTool.java:374)
> {code}
> This is probably caused by HIVE-21948.
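
The {{Current state = RESET, new state = FLUSHED}} failure above is 
characteristic of a stateful {{CharsetDecoder}} being shared between threads. 
A minimal sketch of one way to avoid the race, assuming a per-thread decoder 
is acceptable (this is not necessarily the fix in the attached patch):

{code:java}
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

// Hedged sketch: CharsetDecoder keeps internal state (RESET/CODING/FLUSHED),
// so a single shared instance must not be used from several threads at once.
// Giving each thread its own decoder sidesteps the IllegalStateException.
public final class ThreadSafeUtf8Check {
  private static final ThreadLocal<CharsetDecoder> UTF8_DECODER =
      ThreadLocal.withInitial(() -> StandardCharsets.UTF_8.newDecoder()
          .onMalformedInput(CodingErrorAction.REPORT)
          .onUnmappableCharacter(CodingErrorAction.REPORT));

  private ThreadSafeUtf8Check() {}

  /** Returns true if the bytes decode cleanly as UTF-8 text. */
  public static boolean isValidUtf8(byte[] data) {
    try {
      UTF8_DECODER.get().decode(ByteBuffer.wrap(data));  // decode() resets the decoder itself
      return true;
    } catch (CharacterCodingException e) {
      return false;
    }
  }
}
{code}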



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22974) Metastore's table location check should be applied when location changed

2020-03-05 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-22974:
-
Status: Patch Available  (was: Open)

> Metastore's table location check should be applied when location changed
> 
>
> Key: HIVE-22974
> URL: https://issues.apache.org/jira/browse/HIVE-22974
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22974.2.patch, HIVE-22974.3.patch
>
>
> In HIVE-22189 a check was introduced to make sure managed and external tables 
> are located at the proper space. This condition cannot be satisfied during an 
> upgrade.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22954) Schedule Repl Load using Hive Scheduler

2020-03-05 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HIVE-22954:
---
Attachment: HIVE-22954.19.patch
Status: Patch Available  (was: In Progress)

> Schedule Repl Load using Hive Scheduler
> ---
>
> Key: HIVE-22954
> URL: https://issues.apache.org/jira/browse/HIVE-22954
> Project: Hive
>  Issue Type: Task
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22954.01.patch, HIVE-22954.02.patch, 
> HIVE-22954.03.patch, HIVE-22954.04.patch, HIVE-22954.05.patch, 
> HIVE-22954.06.patch, HIVE-22954.07.patch, HIVE-22954.08.patch, 
> HIVE-22954.09.patch, HIVE-22954.10.patch, HIVE-22954.11.patch, 
> HIVE-22954.12.patch, HIVE-22954.13.patch, HIVE-22954.15.patch, 
> HIVE-22954.16.patch, HIVE-22954.17.patch, HIVE-22954.18.patch, 
> HIVE-22954.19.patch, HIVE-22954.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> [https://github.com/apache/hive/pull/932]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22966) LLAP: Consider including waitTime for comparing attempts in same vertex

2020-03-05 Thread Gopal Vijayaraghavan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052380#comment-17052380
 ] 

Gopal Vijayaraghavan commented on HIVE-22966:
-

bq. even though this patch takes into account task aging we do not cure the 
long-tail task issue and we need to properly take care of it.

This entire patch is hiding in the shadow of the YARN FIFO assumptions in the 
long-tail task scheduling order code inside Tez:

https://github.com/apache/tez/blob/master/tez-runtime-library/src/main/java/org/apache/tez/dag/library/vertexmanager/ShuffleVertexManager.java#L591

There is a somewhat equivalent version for the splits as well:

https://github.com/apache/tez/blob/master/tez-mapreduce/src/main/java/org/apache/tez/mapreduce/hadoop/MRInputHelpers.java#L501

So Tez explicitly picks the biggest splits and the heaviest skewed reducers to 
start first, which is mostly relevant for query latency when we have a large 
number of tasks and a low number of executors.

That is why this patch makes a difference: at the same priority, we get 
FIFO back.
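
As a rough illustration of that ordering (hypothetical names, not the actual 
LLAP TaskExecutorService comparator): compare by priority first, and at equal 
priority prefer the attempt that has waited longest, which restores FIFO 
within the same vertex.

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of the ordering idea; PendingAttempt is a stand-in type.
public class AttemptOrderSketch {
  static final class PendingAttempt {
    final String name;
    final int priority;      // lower value = more urgent (vertex priority from Tez)
    final long waitMillis;   // time the attempt has spent queued
    PendingAttempt(String name, int priority, long waitMillis) {
      this.name = name;
      this.priority = priority;
      this.waitMillis = waitMillis;
    }
  }

  public static void main(String[] args) {
    // Priority first; at equal priority the longest-waiting attempt wins (FIFO).
    Comparator<PendingAttempt> order =
        Comparator.comparingInt((PendingAttempt a) -> a.priority)
            .thenComparing(Comparator.comparingLong((PendingAttempt a) -> a.waitMillis).reversed());

    List<PendingAttempt> queue = new ArrayList<>(Arrays.asList(
        new PendingAttempt("Map1_attempt_0", 3, 200),
        new PendingAttempt("Map1_attempt_1", 3, 900),     // same vertex, waited longer
        new PendingAttempt("Reducer2_attempt_0", 5, 5000)));
    queue.sort(order);
    queue.forEach(a -> System.out.println(a.name));       // Map1_attempt_1 comes first
  }
}
{code}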

> LLAP: Consider including waitTime for comparing attempts in same vertex
> ---
>
> Key: HIVE-22966
> URL: https://issues.apache.org/jira/browse/HIVE-22966
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22966.3.patch, HIVE-22966.4.patch
>
>
> When attempts are compared within same vertex, it should pick up the attempt 
> with longest wait time to avoid starvation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22954) Schedule Repl Load using Hive Scheduler

2020-03-05 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HIVE-22954:
---
Status: In Progress  (was: Patch Available)

> Schedule Repl Load using Hive Scheduler
> ---
>
> Key: HIVE-22954
> URL: https://issues.apache.org/jira/browse/HIVE-22954
> Project: Hive
>  Issue Type: Task
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22954.01.patch, HIVE-22954.02.patch, 
> HIVE-22954.03.patch, HIVE-22954.04.patch, HIVE-22954.05.patch, 
> HIVE-22954.06.patch, HIVE-22954.07.patch, HIVE-22954.08.patch, 
> HIVE-22954.09.patch, HIVE-22954.10.patch, HIVE-22954.11.patch, 
> HIVE-22954.12.patch, HIVE-22954.13.patch, HIVE-22954.15.patch, 
> HIVE-22954.16.patch, HIVE-22954.17.patch, HIVE-22954.18.patch, 
> HIVE-22954.19.patch, HIVE-22954.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> [https://github.com/apache/hive/pull/932]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22954) Schedule Repl Load using Hive Scheduler

2020-03-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22954?focusedWorklogId=398548=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-398548
 ]

ASF GitHub Bot logged work on HIVE-22954:
-

Author: ASF GitHub Bot
Created on: 05/Mar/20 17:49
Start Date: 05/Mar/20 17:49
Worklog Time Spent: 10m 
  Work Description: aasha commented on pull request #932: HIVE-22954 Repl 
Load using scheduler
URL: https://github.com/apache/hive/pull/932#discussion_r388459135
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/parse/ReplicationSemanticAnalyzer.java
 ##
 @@ -466,29 +520,30 @@ private void initReplStatus(ASTNode ast) throws 
SemanticException{
 
   private void analyzeReplStatus(ASTNode ast) throws SemanticException {
 initReplStatus(ast);
-
 String dbNameOrPattern = replScope.getDbName();
-String replLastId = null;
+String replLastId = getReplStatus(dbNameOrPattern);
+prepareReturnValues(Collections.singletonList(replLastId), 
"last_repl_id#string");
+setFetchTask(createFetchTask("last_repl_id#string"));
+LOG.debug("ReplicationSemanticAnalyzer.analyzeReplStatus: writing 
repl.last.id={} out to {}",
 
 Review comment:
  Yes, printing conf in debug mode. toString() is already implemented in 
Configuration; I missed it.
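
For reference, a self-contained sketch of the guarded debug logging being 
discussed (illustrative names, not the patch itself); with slf4j parameterized 
logging, Configuration.toString() is only evaluated when debug is enabled.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: parameterized logging defers the potentially large
// Configuration dump, and the explicit guard makes that intent obvious.
class ReplStatusLogging {
  private static final Logger LOG = LoggerFactory.getLogger(ReplStatusLogging.class);

  static void logReplStatus(String replLastId, Configuration conf) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("analyzeReplStatus: repl.last.id={} conf: {}", replLastId, conf);
    }
  }
}
{code}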
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 398548)
Time Spent: 1h 10m  (was: 1h)

> Schedule Repl Load using Hive Scheduler
> ---
>
> Key: HIVE-22954
> URL: https://issues.apache.org/jira/browse/HIVE-22954
> Project: Hive
>  Issue Type: Task
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22954.01.patch, HIVE-22954.02.patch, 
> HIVE-22954.03.patch, HIVE-22954.04.patch, HIVE-22954.05.patch, 
> HIVE-22954.06.patch, HIVE-22954.07.patch, HIVE-22954.08.patch, 
> HIVE-22954.09.patch, HIVE-22954.10.patch, HIVE-22954.11.patch, 
> HIVE-22954.12.patch, HIVE-22954.13.patch, HIVE-22954.15.patch, 
> HIVE-22954.16.patch, HIVE-22954.17.patch, HIVE-22954.18.patch, 
> HIVE-22954.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> [https://github.com/apache/hive/pull/932]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22974) Metastore's table location check should be applied when location changed

2020-03-05 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-22974:
-
Attachment: (was: HIVE-22974.3.patch)

> Metastore's table location check should be applied when location changed
> 
>
> Key: HIVE-22974
> URL: https://issues.apache.org/jira/browse/HIVE-22974
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22974.2.patch, HIVE-22974.3.patch
>
>
> In HIVE-22189 a check was introduced to make sure managed and external tables 
> are located at the proper space. This condition cannot be satisfied during an 
> upgrade.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22974) Metastore's table location check should be applied when location changed

2020-03-05 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-22974:
-
Attachment: HIVE-22974.3.patch

> Metastore's table location check should be applied when location changed
> 
>
> Key: HIVE-22974
> URL: https://issues.apache.org/jira/browse/HIVE-22974
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22974.2.patch, HIVE-22974.3.patch
>
>
> In HIVE-22189 a check was introduced to make sure managed and external tables 
> are located at the proper space. This condition cannot be satisfied during an 
> upgrade.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22974) Metastore's table location check should be applied when location changed

2020-03-05 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-22974:
-
Attachment: (was: HIVE-22925.3.patch)

> Metastore's table location check should be applied when location changed
> 
>
> Key: HIVE-22974
> URL: https://issues.apache.org/jira/browse/HIVE-22974
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22974.2.patch, HIVE-22974.3.patch
>
>
> In HIVE-22189 a check was introduced to make sure managed and external tables 
> are located at the proper space. This condition cannot be satisfied during an 
> upgrade.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22974) Metastore's table location check should be applied when location changed

2020-03-05 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-22974:
-
Status: Open  (was: Patch Available)

> Metastore's table location check should be applied when location changed
> 
>
> Key: HIVE-22974
> URL: https://issues.apache.org/jira/browse/HIVE-22974
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22974.2.patch, HIVE-22974.3.patch
>
>
> In HIVE-22189 a check was introduced to make sure managed and external tables 
> are located at the proper space. This condition cannot be satisfied during an 
> upgrade.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22974) Metastore's table location check should be applied when location changed

2020-03-05 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-22974:
-
Attachment: HIVE-22925.3.patch

> Metastore's table location check should be applied when location changed
> 
>
> Key: HIVE-22974
> URL: https://issues.apache.org/jira/browse/HIVE-22974
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22974.2.patch, HIVE-22974.3.patch
>
>
> In HIVE-22189 a check was introduced to make sure managed and external tables 
> are located at the proper space. This condition cannot be satisfied during an 
> upgrade.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22974) Metastore's table location check should be applied when location changed

2020-03-05 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-22974:
-
Status: Open  (was: Patch Available)

> Metastore's table location check should be applied when location changed
> 
>
> Key: HIVE-22974
> URL: https://issues.apache.org/jira/browse/HIVE-22974
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22974.2.patch, HIVE-22974.3.patch
>
>
> In HIVE-22189 a check was introduced to make sure managed and external tables 
> are located at the proper space. This condition cannot be satisfied during an 
> upgrade.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22974) Metastore's table location check should be applied when location changed

2020-03-05 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-22974:
-
Attachment: HIVE-22974.3.patch

> Metastore's table location check should be applied when location changed
> 
>
> Key: HIVE-22974
> URL: https://issues.apache.org/jira/browse/HIVE-22974
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22974.2.patch, HIVE-22974.3.patch
>
>
> In HIVE-22189 a check was introduced to make sure managed and external tables 
> are located at the proper space. This condition cannot be satisfied during an 
> upgrade.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22974) Metastore's table location check should be applied when location changed

2020-03-05 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-22974:
-
Status: Patch Available  (was: Open)

> Metastore's table location check should be applied when location changed
> 
>
> Key: HIVE-22974
> URL: https://issues.apache.org/jira/browse/HIVE-22974
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22974.2.patch, HIVE-22974.3.patch
>
>
> In HIVE-22189 a check was introduced to make sure managed and external tables 
> are located at the proper space. This condition cannot be satisfied during an 
> upgrade.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22126) hive-exec packaging should shade guava

2020-03-05 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052365#comment-17052365
 ] 

Hive QA commented on HIVE-22126:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12995668/HIVE-22126.07.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18101 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20965/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20965/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20965/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12995668 - PreCommit-HIVE-Build

> hive-exec packaging should shade guava
> --
>
> Key: HIVE-22126
> URL: https://issues.apache.org/jira/browse/HIVE-22126
> Project: Hive
>  Issue Type: Bug
>Reporter: Vihang Karajgaonkar
>Assignee: Eugene Chung
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22126.01.patch, HIVE-22126.02.patch, 
> HIVE-22126.03.patch, HIVE-22126.04.patch, HIVE-22126.05.patch, 
> HIVE-22126.06.patch, HIVE-22126.07.patch
>
>
> The ql/pom.xml includes complete guava library into hive-exec.jar 
> https://github.com/apache/hive/blob/master/ql/pom.xml#L990 This causes a 
> problems for downstream clients of hive which have hive-exec.jar in their 
> classpath since they are pinned to the same guava version as that of hive. 
> We should shade guava classes so that other components which depend on 
> hive-exec can independently use a different version of guava as needed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-21778) CBO: "Struct is not null" gets evaluated as `nullable` always causing filter miss in the query

2020-03-05 Thread Vineet Garg (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21778:
---
Attachment: HIVE-21778.7.patch

> CBO: "Struct is not null" gets evaluated as `nullable` always causing filter 
> miss in the query
> --
>
> Key: HIVE-21778
> URL: https://issues.apache.org/jira/browse/HIVE-21778
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 4.0.0, 2.3.5
>Reporter: Rajesh Balamohan
>Assignee: Vineet Garg
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21778.1.patch, HIVE-21778.2.patch, 
> HIVE-21778.3.patch, HIVE-21778.4.patch, HIVE-21778.5.patch, 
> HIVE-21778.6.patch, HIVE-21778.7.patch, test_null.q, test_null.q.out
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {noformat}
> drop table if exists test_struct;
> CREATE external TABLE test_struct
> (
>   f1 string,
>   demo_struct struct,
>   datestr string
> );
> set hive.cbo.enable=true;
> explain select * from etltmp.test_struct where datestr='2019-01-01' and 
> demo_struct is not null;
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: test_struct
>   filterExpr: (datestr = '2019-01-01') (type: boolean) <- Note 
> that demo_struct filter is not added here
>   Filter Operator
> predicate: (datestr = '2019-01-01') (type: boolean)
> Select Operator
>   expressions: f1 (type: string), demo_struct (type: 
> struct), '2019-01-01' (type: string)
>   outputColumnNames: _col0, _col1, _col2
>   ListSink
> set hive.cbo.enable=false;
> explain select * from etltmp.test_struct where datestr='2019-01-01' and 
> demo_struct is not null;
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: test_struct
>   filterExpr: ((datestr = '2019-01-01') and demo_struct is not null) 
> (type: boolean) <- Note that demo_struct filter is added when CBO is 
> turned off
>   Filter Operator
> predicate: ((datestr = '2019-01-01') and demo_struct is not null) 
> (type: boolean)
> Select Operator
>   expressions: f1 (type: string), demo_struct (type: 
> struct), '2019-01-01' (type: string)
>   outputColumnNames: _col0, _col1, _col2
>   ListSink
> {noformat}
> In CalcitePlanner::genFilterRelNode, the following code fails to evaluate 
> this filter. 
> {noformat}
> RexNode factoredFilterExpr = RexUtil
>   .pullFactors(cluster.getRexBuilder(), convertedFilterExpr);
> {noformat}
> Note that even if we add `demo_struct.f1` it would end up pushing the filter 
> correctly. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-21778) CBO: "Struct is not null" gets evaluated as `nullable` always causing filter miss in the query

2020-03-05 Thread Vineet Garg (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21778:
---
Status: Patch Available  (was: Open)

> CBO: "Struct is not null" gets evaluated as `nullable` always causing filter 
> miss in the query
> --
>
> Key: HIVE-21778
> URL: https://issues.apache.org/jira/browse/HIVE-21778
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.3.5, 4.0.0
>Reporter: Rajesh Balamohan
>Assignee: Vineet Garg
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21778.1.patch, HIVE-21778.2.patch, 
> HIVE-21778.3.patch, HIVE-21778.4.patch, HIVE-21778.5.patch, 
> HIVE-21778.6.patch, HIVE-21778.7.patch, test_null.q, test_null.q.out
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {noformat}
> drop table if exists test_struct;
> CREATE external TABLE test_struct
> (
>   f1 string,
>   demo_struct struct,
>   datestr string
> );
> set hive.cbo.enable=true;
> explain select * from etltmp.test_struct where datestr='2019-01-01' and 
> demo_struct is not null;
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: test_struct
>   filterExpr: (datestr = '2019-01-01') (type: boolean) <- Note 
> that demo_struct filter is not added here
>   Filter Operator
> predicate: (datestr = '2019-01-01') (type: boolean)
> Select Operator
>   expressions: f1 (type: string), demo_struct (type: 
> struct), '2019-01-01' (type: string)
>   outputColumnNames: _col0, _col1, _col2
>   ListSink
> set hive.cbo.enable=false;
> explain select * from etltmp.test_struct where datestr='2019-01-01' and 
> demo_struct is not null;
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: test_struct
>   filterExpr: ((datestr = '2019-01-01') and demo_struct is not null) 
> (type: boolean) <- Note that demo_struct filter is added when CBO is 
> turned off
>   Filter Operator
> predicate: ((datestr = '2019-01-01') and demo_struct is not null) 
> (type: boolean)
> Select Operator
>   expressions: f1 (type: string), demo_struct (type: 
> struct), '2019-01-01' (type: string)
>   outputColumnNames: _col0, _col1, _col2
>   ListSink
> {noformat}
> In CalcitePlanner::genFilterRelNode, the following code fails to evaluate 
> this filter. 
> {noformat}
> RexNode factoredFilterExpr = RexUtil
>   .pullFactors(cluster.getRexBuilder(), convertedFilterExpr);
> {noformat}
> Note that even if we add `demo_struct.f1` it would end up pushing the filter 
> correctly. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-21778) CBO: "Struct is not null" gets evaluated as `nullable` always causing filter miss in the query

2020-03-05 Thread Vineet Garg (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21778:
---
Status: Open  (was: Patch Available)

> CBO: "Struct is not null" gets evaluated as `nullable` always causing filter 
> miss in the query
> --
>
> Key: HIVE-21778
> URL: https://issues.apache.org/jira/browse/HIVE-21778
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.3.5, 4.0.0
>Reporter: Rajesh Balamohan
>Assignee: Vineet Garg
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21778.1.patch, HIVE-21778.2.patch, 
> HIVE-21778.3.patch, HIVE-21778.4.patch, HIVE-21778.5.patch, 
> HIVE-21778.6.patch, HIVE-21778.7.patch, test_null.q, test_null.q.out
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {noformat}
> drop table if exists test_struct;
> CREATE external TABLE test_struct
> (
>   f1 string,
>   demo_struct struct,
>   datestr string
> );
> set hive.cbo.enable=true;
> explain select * from etltmp.test_struct where datestr='2019-01-01' and 
> demo_struct is not null;
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: test_struct
>   filterExpr: (datestr = '2019-01-01') (type: boolean) <- Note 
> that demo_struct filter is not added here
>   Filter Operator
> predicate: (datestr = '2019-01-01') (type: boolean)
> Select Operator
>   expressions: f1 (type: string), demo_struct (type: 
> struct), '2019-01-01' (type: string)
>   outputColumnNames: _col0, _col1, _col2
>   ListSink
> set hive.cbo.enable=false;
> explain select * from etltmp.test_struct where datestr='2019-01-01' and 
> demo_struct is not null;
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: -1
>   Processor Tree:
> TableScan
>   alias: test_struct
>   filterExpr: ((datestr = '2019-01-01') and demo_struct is not null) 
> (type: boolean) <- Note that demo_struct filter is added when CBO is 
> turned off
>   Filter Operator
> predicate: ((datestr = '2019-01-01') and demo_struct is not null) 
> (type: boolean)
> Select Operator
>   expressions: f1 (type: string), demo_struct (type: 
> struct), '2019-01-01' (type: string)
>   outputColumnNames: _col0, _col1, _col2
>   ListSink
> {noformat}
> In CalcitePlanner::genFilterRelNode, the following code fails to evaluate 
> this filter. 
> {noformat}
> RexNode factoredFilterExpr = RexUtil
>   .pullFactors(cluster.getRexBuilder(), convertedFilterExpr);
> {noformat}
> Note that even if we add `demo_struct.f1` it would end up pushing the filter 
> correctly. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22762) Leap day is incorrectly parsed during cast in Hive

2020-03-05 Thread David Mollitor (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052295#comment-17052295
 ] 

David Mollitor commented on HIVE-22762:
---

[~klcopp] Hey, sorry, one more correction:

{code:java}
+// Create Timestamp
+LocalDateTime ldt = LocalDateTime.ofInstant(Instant.EPOCH, ZoneOffset.UTC);
+for (Pair pair : tokenValueList) {
+  TemporalField temporalField = ((Token) pair.getLeft()).temporalField;
+  int value = (int) pair.getRight();
{code}

You shouldn't need to cast any more since you're using Pair
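
A tiny standalone illustration of the point (not the Hive patch itself): once 
the Pair is parameterized, getRight() already returns the boxed value, so the 
explicit (int) cast is unnecessary.

{code:java}
import java.time.temporal.ChronoField;
import java.time.temporal.TemporalField;
import org.apache.commons.lang3.tuple.Pair;

// Illustrative only: with a parameterized Pair the cast disappears.
public class PairNoCast {
  public static void main(String[] args) {
    Pair<TemporalField, Integer> pair = Pair.of(ChronoField.YEAR, 2020);
    int value = pair.getRight();   // auto-unboxed Integer, no (int) cast
    System.out.println(pair.getLeft() + " -> " + value);
  }
}
{code}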

> Leap day is incorrectly parsed during cast in Hive
> --
>
> Key: HIVE-22762
> URL: https://issues.apache.org/jira/browse/HIVE-22762
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22762.01.patch, HIVE-22762.01.patch, 
> HIVE-22762.01.patch, HIVE-22762.01.patch, HIVE-22762.02.patch, 
> HIVE-22762.03.patch, HIVE-22762.03.patch, HIVE-22762.04.patch, 
> HIVE-22762.05.patch
>
>
> While casting a string to a date with a custom date format that has the day 
> token before the year and month tokens, the date is parsed incorrectly for leap days.
> h3. How to reproduce
> Execute {code}select cast("29 02 0" as date format "dd mm rr"){code} with 
> Hive. The query incorrectly returns *2020-02-28*.
> 
> Another cast with a slightly modified representation of the 
> date (the day is preceded by the year and month) is, however, parsed correctly:
> {code}select cast("0 02 29" as date format "rr mm dd"){code}
> It returns *2020-02-29*.
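
The asymmetry is easy to reproduce with plain java.time, assuming the fields 
are applied one by one onto the epoch date as in the formatter code shown in 
the comment above; this sketch is illustrative and is not Hive code.

{code:java}
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.time.temporal.ChronoField;

// Applying DAY_OF_MONTH before MONTH_OF_YEAR/YEAR lets the intermediate
// 1970-02-29 get clamped to Feb 28, which then survives the year change.
public class LeapDayOrderSketch {
  public static void main(String[] args) {
    LocalDateTime epoch = LocalDateTime.ofInstant(Instant.EPOCH, ZoneOffset.UTC);

    // "dd mm rr" order: day first -> Jan 29 -> clamped to Feb 28 -> 2020-02-28
    LocalDateTime dayFirst = epoch
        .with(ChronoField.DAY_OF_MONTH, 29)
        .with(ChronoField.MONTH_OF_YEAR, 2)
        .with(ChronoField.YEAR, 2020);

    // "rr mm dd" order: year and month first -> 2020-02-29 stays valid
    LocalDateTime yearFirst = epoch
        .with(ChronoField.YEAR, 2020)
        .with(ChronoField.MONTH_OF_YEAR, 2)
        .with(ChronoField.DAY_OF_MONTH, 29);

    System.out.println(dayFirst.toLocalDate());   // 2020-02-28
    System.out.println(yearFirst.toLocalDate());  // 2020-02-29
  }
}
{code}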



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22901) Variable substitution can lead to OOM on circular references

2020-03-05 Thread Daniel Voros (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052287#comment-17052287
 ] 

Daniel Voros commented on HIVE-22901:
-

Attached patch #2 that:
- fixes the broken test case by adding the new option to the list of expected 
restricted flags

> Variable substitution can lead to OOM on circular references
> 
>
> Key: HIVE-22901
> URL: https://issues.apache.org/jira/browse/HIVE-22901
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.1.2
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
> Attachments: HIVE-22901.1.patch, HIVE-22901.2.patch
>
>
> {{SystemVariables#substitute()}} deals with circular references between 
> variables by doing the substitution at most 40 times by default. If the 
> substituted part is sufficiently large though, it's possible that the 
> substitution will produce a string bigger than the heap size within the 40 
> executions.
> Take the following test case that fails with OOM in current master (third 
> round of execution would need 10G heap, while running with only 2G):
> {code}
> @Test
> public void testSubstitute() {
> String randomPart = RandomStringUtils.random(100_000);
> String reference = "${hiveconf:myTestVariable}";
> StringBuilder longStringWithReferences = new StringBuilder();
> for(int i = 0; i < 10; i ++) {
> longStringWithReferences.append(randomPart).append(reference);
> }
> SystemVariables uut = new SystemVariables();
> HiveConf conf = new HiveConf();
> conf.set("myTestVariable", longStringWithReferences.toString());
> uut.substitute(conf, longStringWithReferences.toString(), 40);
> }
> {code}
> Produces:
> {code}
> java.lang.OutOfMemoryError: Java heap space
>   at java.util.Arrays.copyOf(Arrays.java:3332)
>   at 
> java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
>   at 
> java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)
>   at java.lang.StringBuilder.append(StringBuilder.java:136)
>   at 
> org.apache.hadoop.hive.conf.SystemVariables.substitute(SystemVariables.java:110)
>   at 
> org.apache.hadoop.hive.conf.SystemVariablesTest.testSubstitute(SystemVariablesTest.java:27)
> {code}
> We should check the size of the substituted query and bail out earlier.
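
A minimal sketch of the kind of guard proposed in that last sentence, with 
hypothetical names rather than the actual SystemVariables change: cap the 
expanded size as well as the iteration count.

{code:java}
import java.util.function.UnaryOperator;

// Illustrative only: besides bounding the number of iterations, stop as soon
// as the expanded text exceeds a size limit, so a circular reference fails
// fast instead of exhausting the heap.
public final class BoundedSubstitution {
  private BoundedSubstitution() {}

  public static String substitute(String expr, UnaryOperator<String> expandOnce,
                                  int maxIterations, int maxLength) {
    String current = expr;
    for (int i = 0; i < maxIterations; i++) {
      String next = expandOnce.apply(current);
      if (next.length() > maxLength) {
        throw new IllegalStateException("Substituted text exceeds " + maxLength
            + " characters; possible circular variable reference");
      }
      if (next.equals(current)) {
        return next;       // fixed point: nothing left to expand
      }
      current = next;
    }
    return current;
  }
}
{code}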



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22962) Reuse HiveRelFieldTrimmer instance across queries

2020-03-05 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17052285#comment-17052285
 ] 

Hive QA commented on HIVE-22962:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12995667/HIVE-22962.04.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18101 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20964/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20964/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20964/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12995667 - PreCommit-HIVE-Build

> Reuse HiveRelFieldTrimmer instance across queries
> -
>
> Key: HIVE-22962
> URL: https://issues.apache.org/jira/browse/HIVE-22962
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-22962.01.patch, HIVE-22962.02.patch, 
> HIVE-22962.03.patch, HIVE-22962.04.patch, HIVE-22962.patch
>
>
> Currently we create multiple {{HiveRelFieldTrimmer}} instances per query. 
> {{HiveRelFieldTrimmer}} uses a method dispatcher that has a built-in caching 
> mechanism: given a certain object, it stores the method that was called for 
> the object class. However, by instantiating the trimmer multiple times per 
> query and across queries, we create a new dispatcher with each instantiation, 
> thus effectively removing the caching mechanism that is built within the 
> dispatcher.
> This issue is to reutilize the same {{HiveRelFieldTrimmer}} instance within a 
> single query and across queries.
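
As a generic illustration of why reuse matters here (hypothetical names, not 
the HiveRelFieldTrimmer or Calcite API): a dispatcher whose per-class lookup 
cache lives on the instance only pays off when that instance survives across 
queries.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: the cache is per instance, so re-instantiating the
// owner for every query throws the cache away and every lookup starts cold.
final class CachingDispatcher {
  private final Map<Class<?>, String> methodCache = new ConcurrentHashMap<>();

  // computeIfAbsent stands in for the reflective "find the visit method" lookup
  String dispatch(Object node) {
    return methodCache.computeIfAbsent(node.getClass(), c -> "visit" + c.getSimpleName());
  }

  private static final ThreadLocal<CachingDispatcher> SHARED =
      ThreadLocal.withInitial(CachingDispatcher::new);

  static CachingDispatcher shared() {
    return SHARED.get();   // reused across queries; a fresh instance would start with a cold cache
  }
}
{code}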



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22901) Variable substitution can lead to OOM on circular references

2020-03-05 Thread Daniel Voros (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Voros updated HIVE-22901:

Attachment: HIVE-22901.2.patch

> Variable substitution can lead to OOM on circular references
> 
>
> Key: HIVE-22901
> URL: https://issues.apache.org/jira/browse/HIVE-22901
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.1.2
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
> Attachments: HIVE-22901.1.patch, HIVE-22901.2.patch
>
>
> {{SystemVariables#substitute()}} deals with circular references between 
> variables by doing the substitution at most 40 times by default. If the 
> substituted part is sufficiently large though, it's possible that the 
> substitution will produce a string bigger than the heap size within the 40 
> executions.
> Take the following test case that fails with OOM in current master (third 
> round of execution would need 10G heap, while running with only 2G):
> {code}
> @Test
> public void testSubstitute() {
> String randomPart = RandomStringUtils.random(100_000);
> String reference = "${hiveconf:myTestVariable}";
> StringBuilder longStringWithReferences = new StringBuilder();
> for(int i = 0; i < 10; i ++) {
> longStringWithReferences.append(randomPart).append(reference);
> }
> SystemVariables uut = new SystemVariables();
> HiveConf conf = new HiveConf();
> conf.set("myTestVariable", longStringWithReferences.toString());
> uut.substitute(conf, longStringWithReferences.toString(), 40);
> }
> {code}
> Produces:
> {code}
> java.lang.OutOfMemoryError: Java heap space
>   at java.util.Arrays.copyOf(Arrays.java:3332)
>   at 
> java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
>   at 
> java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)
>   at java.lang.StringBuilder.append(StringBuilder.java:136)
>   at 
> org.apache.hadoop.hive.conf.SystemVariables.substitute(SystemVariables.java:110)
>   at 
> org.apache.hadoop.hive.conf.SystemVariablesTest.testSubstitute(SystemVariablesTest.java:27)
> {code}
> We should check the size of the substituted query and bail out earlier.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22954) Schedule Repl Load using Hive Scheduler

2020-03-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22954?focusedWorklogId=398439=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-398439
 ]

ASF GitHub Bot logged work on HIVE-22954:
-

Author: ASF GitHub Bot
Created on: 05/Mar/20 15:36
Start Date: 05/Mar/20 15:36
Worklog Time Spent: 10m 
  Work Description: anishek commented on pull request #932: HIVE-22954 Repl 
Load using scheduler
URL: https://github.com/apache/hive/pull/932#discussion_r388373650
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/parse/ReplicationSemanticAnalyzer.java
 ##
 @@ -466,29 +520,30 @@ private void initReplStatus(ASTNode ast) throws 
SemanticException{
 
   private void analyzeReplStatus(ASTNode ast) throws SemanticException {
 initReplStatus(ast);
-
 String dbNameOrPattern = replScope.getDbName();
-String replLastId = null;
+String replLastId = getReplStatus(dbNameOrPattern);
+prepareReturnValues(Collections.singletonList(replLastId), 
"last_repl_id#string");
+setFetchTask(createFetchTask("last_repl_id#string"));
+LOG.debug("ReplicationSemanticAnalyzer.analyzeReplStatus: writing 
repl.last.id={} out to {}",
 
 Review comment:
  Having the files printed might not be useful; it might be good to additionally 
have the full configs printed in debug or maybe trace mode.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 398439)
Time Spent: 1h  (was: 50m)

> Schedule Repl Load using Hive Scheduler
> ---
>
> Key: HIVE-22954
> URL: https://issues.apache.org/jira/browse/HIVE-22954
> Project: Hive
>  Issue Type: Task
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22954.01.patch, HIVE-22954.02.patch, 
> HIVE-22954.03.patch, HIVE-22954.04.patch, HIVE-22954.05.patch, 
> HIVE-22954.06.patch, HIVE-22954.07.patch, HIVE-22954.08.patch, 
> HIVE-22954.09.patch, HIVE-22954.10.patch, HIVE-22954.11.patch, 
> HIVE-22954.12.patch, HIVE-22954.13.patch, HIVE-22954.15.patch, 
> HIVE-22954.16.patch, HIVE-22954.17.patch, HIVE-22954.18.patch, 
> HIVE-22954.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> [https://github.com/apache/hive/pull/932]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

