[jira] [Commented] (HIVE-22821) Add necessary endpoints for proactive cache eviction

2020-04-09 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17080276#comment-17080276
 ] 

Hive QA commented on HIVE-22821:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
46s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
34s{color} | {color:blue} common in master has 63 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
29s{color} | {color:blue} llap-common in master has 90 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
25s{color} | {color:blue} llap-client in master has 27 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
46s{color} | {color:blue} ql in master has 1527 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
43s{color} | {color:blue} llap-server in master has 90 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
40s{color} | {color:red} ql: The patch generated 3 new + 2 unchanged - 0 fixed 
= 5 total (was 2) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
13s{color} | {color:red} llap-server: The patch generated 2 new + 54 unchanged 
- 0 fixed = 56 total (was 54) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 17 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
39s{color} | {color:red} llap-common generated 8 new + 90 unchanged - 0 fixed = 
98 total (was 90) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:llap-common |
|  |  
org.apache.hadoop.hive.llap.daemon.rpc.LlapDaemonProtocolProtos$EvictEntityRequestProto.PARSER
 isn't final but should be  At LlapDaemonProtocolProtos.java:be  At 
LlapDaemonProtocolProtos.java:[line 22695] |
|  |  Class 
org.apache.hadoop.hive.llap.daemon.rpc.LlapDaemonProtocolProtos$EvictEntityRequestProto
 defines non-transient non-serializable instance field unknownFields  In 
LlapDaemonProtocolProtos.java:instance field unknownFields  In 
LlapDaemonProtocolProtos.java |
|  |  
org.apache.hadoop.hive.llap.daemon.rpc.LlapDaemonProtocolProtos$EvictEntityResponseProto.PARSER
 isn't final but should be  At LlapDaemonProtocolProtos.java:be  At 
LlapDaemonProtocolProtos.java:[line 24450] |
|  |  Class 
org.apache.hadoop.hive.llap.daemon.rpc.LlapDaemonProtocolProtos$EvictEntityResponseProto
 defines non-transient non-serializable instance field unknownFields  In 
LlapDaemonProtocolProtos.java:instance field unknownFields  In 
LlapDaemonProtocolProtos.java |
|  |  Useless contro

[jira] [Updated] (HIVE-23151) LLAP: default hive.llap.file.cleanup.delay.seconds=0s

2020-04-09 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-23151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

László Bodor updated HIVE-23151:

Attachment: HIVE-23151.01.patch

> LLAP: default hive.llap.file.cleanup.delay.seconds=0s
> -
>
> Key: HIVE-23151
> URL: https://issues.apache.org/jira/browse/HIVE-23151
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23151.01.patch, HIVE-23151.01.patch, 
> HIVE-23151.01.patch, HIVE-23151.01.patch
>
>
> The current default value (300s) reflects a debugging scenario more than 
> production use; let's set it to 0s so that shuffle local files are cleaned up 
> immediately after the DAG completes.
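For reference, the change described above amounts to a hive-site.xml override like the following (the property name and values come from the issue title and description; shown only as an illustrative fragment):

```xml
<property>
  <name>hive.llap.file.cleanup.delay.seconds</name>
  <!-- 0s: delete LLAP shuffle local files immediately after DAG completion;
       the previous default of 300s kept them around, which mainly helps debugging -->
  <value>0s</value>
</property>
```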



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23151) LLAP: default hive.llap.file.cleanup.delay.seconds=0s

2020-04-09 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17080263#comment-17080263
 ] 

Hive QA commented on HIVE-23151:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12999416/HIVE-23151.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 18207 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestTezPerfConstraintsCliDriver.testCliDriver[cbo_query26]
 (batchId=307)
org.apache.hadoop.hive.metastore.TestGetPartitionsUsingProjectionAndFilterSpecs.testGetPartitionsUsingValuesWithJDO
 (batchId=234)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21542/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21542/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21542/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12999416 - PreCommit-HIVE-Build

> LLAP: default hive.llap.file.cleanup.delay.seconds=0s
> -
>
> Key: HIVE-23151
> URL: https://issues.apache.org/jira/browse/HIVE-23151
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23151.01.patch, HIVE-23151.01.patch, 
> HIVE-23151.01.patch
>
>
> The current default value (300s) reflects a debugging scenario more than 
> production use; let's set it to 0s so that shuffle local files are cleaned up 
> immediately after the DAG completes.





[jira] [Updated] (HIVE-23154) Fix race condition in Utilities::mvFileToFinalPath

2020-04-09 Thread Rajesh Balamohan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Balamohan updated HIVE-23154:

  Assignee: Rajesh Balamohan
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~ashutoshc]. Committed to master.

> Fix race condition in Utilities::mvFileToFinalPath
> --
>
> Key: HIVE-23154
> URL: https://issues.apache.org/jira/browse/HIVE-23154
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Major
> Attachments: HIVE-23154.1.patch, HIVE-23154.3.patch
>
>
> Utilities::mvFileToFinalPath is used for moving files from the "/_tmp.-ext" 
> folder to the "/-ext" folder. Tasks write data to "_tmp"; before being written 
> to the final destination, the files are moved to the "-ext" folder. As part of 
> this, there are checks to ensure that runaway task outputs are not copied to 
> the "-ext" folder.
> Currently, there is a race condition between computing the snapshot of files 
> to be copied and the rename operation. The same issue persists in the "insert 
> into" case as well.
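The pattern at issue can be illustrated with a minimal, hypothetical sketch using plain java.nio.file (this is not the actual Hive code, and the class and file names are invented): capture an explicit snapshot of the temporary directory's contents first, then move exactly that snapshot, so files a runaway task writes after the snapshot are never silently included.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class SnapshotMove {
    // Move only the files captured in an explicit snapshot of tmpDir into finalDir.
    // Anything written into tmpDir after the snapshot (e.g. by a runaway task
    // attempt) is deliberately left behind instead of being picked up.
    public static List<Path> moveSnapshot(Path tmpDir, Path finalDir) throws IOException {
        List<Path> snapshot = new ArrayList<>();
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(tmpDir)) {
            for (Path p : ds) {
                snapshot.add(p);                     // 1. capture the snapshot
            }
        }
        Files.createDirectories(finalDir);
        List<Path> moved = new ArrayList<>();
        for (Path src : snapshot) {                  // 2. move exactly that snapshot
            Path dst = finalDir.resolve(src.getFileName());
            Files.move(src, dst);
            moved.add(dst);
        }
        return moved;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("_tmp-ext");
        Path fin = Files.createTempDirectory("dest").resolve("-ext-10000");
        Files.writeString(tmp.resolve("000000_0"), "rowdata");
        List<Path> moved = moveSnapshot(tmp, fin);
        System.out.println(moved.size()); // prints 1
    }
}
```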





[jira] [Commented] (HIVE-23154) Fix race condition in Utilities::mvFileToFinalPath

2020-04-09 Thread Ashutosh Chauhan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17080251#comment-17080251
 ] 

Ashutosh Chauhan commented on HIVE-23154:
-

+1

> Fix race condition in Utilities::mvFileToFinalPath
> --
>
> Key: HIVE-23154
> URL: https://issues.apache.org/jira/browse/HIVE-23154
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Rajesh Balamohan
>Priority: Major
> Attachments: HIVE-23154.1.patch, HIVE-23154.3.patch
>
>
> Utilities::mvFileToFinalPath is used for moving files from the "/_tmp.-ext" 
> folder to the "/-ext" folder. Tasks write data to "_tmp"; before being written 
> to the final destination, the files are moved to the "-ext" folder. As part of 
> this, there are checks to ensure that runaway task outputs are not copied to 
> the "-ext" folder.
> Currently, there is a race condition between computing the snapshot of files 
> to be copied and the rename operation. The same issue persists in the "insert 
> into" case as well.





[jira] [Commented] (HIVE-23151) LLAP: default hive.llap.file.cleanup.delay.seconds=0s

2020-04-09 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17080227#comment-17080227
 ] 

Hive QA commented on HIVE-23151:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
32s{color} | {color:blue} common in master has 63 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21542/dev-support/hive-personality.sh
 |
| git revision | master / 796a9c5 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: common U: common |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21542/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> LLAP: default hive.llap.file.cleanup.delay.seconds=0s
> -
>
> Key: HIVE-23151
> URL: https://issues.apache.org/jira/browse/HIVE-23151
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: László Bodor
>Assignee: László Bodor
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23151.01.patch, HIVE-23151.01.patch, 
> HIVE-23151.01.patch
>
>
> The current default value (300s) reflects a debugging scenario more than 
> production use; let's set it to 0s so that shuffle local files are cleaned up 
> immediately after the DAG completes.





[jira] [Commented] (HIVE-23175) Skip serializing hadoop and tez config on HS side

2020-04-09 Thread Mustafa Iman (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17080216#comment-17080216
 ] 

Mustafa Iman commented on HIVE-23175:
-

[~ashutoshc] [~gopalv]

> Skip serializing hadoop and tez config on HS side
> -
>
> Key: HIVE-23175
> URL: https://issues.apache.org/jira/browse/HIVE-23175
> Project: Hive
>  Issue Type: Improvement
>  Components: Tez
>Reporter: Mustafa Iman
>Assignee: Mustafa Iman
>Priority: Major
> Attachments: HIVE-23175.1.patch
>
>
> HiveServer spends a lot of time serializing configuration objects. We can 
> skip putting the hadoop and tez config xml files in the payload, assuming the 
> configs are the same on both the HS and AM side. This depends on Tez loading 
> local xml configs when creating config objects: 
> https://issues.apache.org/jira/browse/TEZ-4141
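The idea of assuming identical configs on both sides can be sketched generically (a hypothetical illustration using java.util.Properties; the real patch deals with Hadoop/Tez Configuration objects and XML resources, not this class): ship only the entries that differ from the defaults both sides already have locally, and rebuild on the receiver.

```java
import java.util.Properties;

public class ConfigPayload {
    // Instead of serializing the full merged configuration, ship just the
    // entries that differ from the locally available defaults. (Note: this
    // simple diff does not represent deleted keys.)
    public static Properties delta(Properties defaults, Properties effective) {
        Properties diff = new Properties();
        for (String key : effective.stringPropertyNames()) {
            String v = effective.getProperty(key);
            if (!v.equals(defaults.getProperty(key))) {
                diff.setProperty(key, v);
            }
        }
        return diff;
    }

    // Receiver re-applies the shipped overrides on top of its local defaults.
    public static Properties rebuild(Properties defaults, Properties diff) {
        Properties out = new Properties();
        out.putAll(defaults);
        out.putAll(diff);
        return out;
    }

    public static void main(String[] args) {
        Properties defaults = new Properties();
        defaults.setProperty("tez.am.resource.memory.mb", "1024");
        defaults.setProperty("hive.execution.engine", "tez");

        Properties effective = new Properties();
        effective.putAll(defaults);
        effective.setProperty("tez.am.resource.memory.mb", "4096"); // session override

        Properties payload = delta(defaults, effective);
        System.out.println(payload.size());                         // prints 1
        System.out.println(rebuild(defaults, payload).equals(effective)); // prints true
    }
}
```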





[jira] [Commented] (HIVE-23006) Basic compiler support for Probe MapJoin

2020-04-09 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17080214#comment-17080214
 ] 

Hive QA commented on HIVE-23006:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12999414/HIVE-23006.03.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18209 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21541/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21541/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21541/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12999414 - PreCommit-HIVE-Build

> Basic compiler support for Probe MapJoin
> 
>
> Key: HIVE-23006
> URL: https://issues.apache.org/jira/browse/HIVE-23006
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23006.01.patch, HIVE-23006.02.patch, 
> HIVE-23006.03.patch
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> The decision of pushing down information to the Record reader (potentially 
> reducing decoding time by row-level filtering) should be made at query 
> compilation time.
> This patch adds an extra optimisation step with the goal of finding Table 
> Scan operators that could reduce the number of rows decoded at runtime using 
> extra available information.
> It currently looks for all the available MapJoin operators that could use the 
> smaller HashTable on the probing side (where the TS is) to filter out rows 
> that would never match. 
> To do so, the HashTable information is pushed down to the TS properties and 
> then propagated as part of MapWork.
> If a single TS is used by multiple operators (shared work), this rule cannot 
> be applied.
> This rule can be extended to support static filter expressions like:
> _select * from sales where sold_state = 'PR';_
> This optimisation mainly targets the Tez execution engine running on LLAP.
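Stripped of the compiler plumbing, the runtime effect being enabled here can be sketched with plain collections (a hypothetical illustration, not Hive's actual operator code): the small build side's join keys act as a filter on the probe side, so rows that can never match are dropped before any further, more expensive processing.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class ProbeFilterSketch {
    // Keep only probe-side rows whose join key (row[0] here) exists in the
    // small build-side table; everything else can never match and is skipped.
    public static List<int[]> probeFilter(List<int[]> probeRows, Map<Integer, String> buildSide) {
        List<int[]> survivors = new ArrayList<>();
        for (int[] row : probeRows) {
            if (buildSide.containsKey(row[0])) {
                survivors.add(row);
            }
        }
        return survivors;
    }

    public static void main(String[] args) {
        Map<Integer, String> build = Map.of(1, "PR", 3, "NY");
        List<int[]> probe = List.of(new int[]{1, 10}, new int[]{2, 20}, new int[]{3, 30});
        System.out.println(probeFilter(probe, build).size()); // prints 2
    }
}
```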





[jira] [Updated] (HIVE-23175) Skip serializing hadoop and tez config on HS side

2020-04-09 Thread Mustafa Iman (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mustafa Iman updated HIVE-23175:

Component/s: Tez

> Skip serializing hadoop and tez config on HS side
> -
>
> Key: HIVE-23175
> URL: https://issues.apache.org/jira/browse/HIVE-23175
> Project: Hive
>  Issue Type: Improvement
>  Components: Tez
>Reporter: Mustafa Iman
>Assignee: Mustafa Iman
>Priority: Major
> Attachments: HIVE-23175.1.patch
>
>
> HiveServer spends a lot of time serializing configuration objects. We can 
> skip putting the hadoop and tez config xml files in the payload, assuming the 
> configs are the same on both the HS and AM side. This depends on Tez loading 
> local xml configs when creating config objects: 
> https://issues.apache.org/jira/browse/TEZ-4141





[jira] [Updated] (HIVE-23175) Skip serializing hadoop and tez config on HS side

2020-04-09 Thread Mustafa Iman (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mustafa Iman updated HIVE-23175:

Attachment: HIVE-23175.1.patch
Status: Patch Available  (was: Open)

> Skip serializing hadoop and tez config on HS side
> -
>
> Key: HIVE-23175
> URL: https://issues.apache.org/jira/browse/HIVE-23175
> Project: Hive
>  Issue Type: Improvement
>Reporter: Mustafa Iman
>Assignee: Mustafa Iman
>Priority: Major
> Attachments: HIVE-23175.1.patch
>
>
> HiveServer spends a lot of time serializing configuration objects. We can 
> skip putting the hadoop and tez config xml files in the payload, assuming the 
> configs are the same on both the HS and AM side. This depends on Tez loading 
> local xml configs when creating config objects: 
> https://issues.apache.org/jira/browse/TEZ-4141





[jira] [Assigned] (HIVE-23175) Skip serializing hadoop and tez config on HS side

2020-04-09 Thread Mustafa Iman (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mustafa Iman reassigned HIVE-23175:
---


> Skip serializing hadoop and tez config on HS side
> -
>
> Key: HIVE-23175
> URL: https://issues.apache.org/jira/browse/HIVE-23175
> Project: Hive
>  Issue Type: Improvement
>Reporter: Mustafa Iman
>Assignee: Mustafa Iman
>Priority: Major
>
> HiveServer spends a lot of time serializing configuration objects. We can 
> skip putting the hadoop and tez config xml files in the payload, assuming the 
> configs are the same on both the HS and AM side. This depends on Tez loading 
> local xml configs when creating config objects: 
> https://issues.apache.org/jira/browse/TEZ-4141





[jira] [Updated] (HIVE-23163) Class TrustDomainAuthenticationTest should be abstract

2020-04-09 Thread Yikun Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yikun Jiang updated HIVE-23163:
---
Status: Open  (was: Patch Available)

> Class TrustDomainAuthenticationTest should be abstract
> --
>
> Key: HIVE-23163
> URL: https://issues.apache.org/jira/browse/HIVE-23163
> Project: Hive
>  Issue Type: Bug
>Reporter: Zhenyu Zheng
>Assignee: Yikun Jiang
>Priority: Major
> Attachments: HIVE-23163.1.patch, HIVE-23163.2.patch, 
> HIVE-23163.3.patch
>
>
> When running tests in pre-commit CI, the test parser only identifies test 
> classes whose names start with 'Test':
> [https://github.com/apache/hive/blob/d2163cbfb8bacf859fa8572e24c8533bb2dcb0f3/testutils/ptest2/src/main/java/org/apache/hive/ptest/execution/conf/UnitTestPropertiesParser.java#L406]
> But when running with `mvn test`, Surefire also picks up classes matching 
> '*Test.java':
> [http://maven.apache.org/plugins-archives/maven-surefire-plugin-2.12.4/examples/inclusion-exclusion.html]
> So 
> [https://github.com/apache/hive/blob/d2163cbfb8bacf859fa8572e24c8533bb2dcb0f3/itests/hive-unit/src/test/java/org/apache/hive/service/auth/TrustDomainAuthenticationTest.java#L38]
> is also included in the test run, for example:
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/23/testReport/junit/org.apache.hive.service.auth/TrustDomainAuthenticationTest/testTrustedDomainAuthentication/]
> This is because the class TrustDomainAuthenticationTest is actually a parent 
> class that does not have the parameters for init. The actual tests are its 
> child classes, like the rest in 
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/23/testReport/junit/org.apache.hive.service.auth/]
> which pass the actual parameters for init: 
> [https://github.com/apache/hive/blob/d2163cbfb8bacf859fa8572e24c8533bb2dcb0f3/itests/hive-unit/src/test/java/org/apache/hive/service/auth/TestTrustDomainAuthenticationBinary.java#L26]
>  
> We can make this class abstract so that it won't be included when running 
> 'mvn test', like 
> [https://github.com/apache/hive/blob/d2163cbfb8bacf859fa8572e24c8533bb2dcb0f3/itests/hive-unit/src/test/java/org/apache/hive/jdbc/AbstractJdbcTriggersTest.java#L54]
>  
>  
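The proposed pattern, sketched with illustrative names (these are not the actual Hive test classes): the abstract parent holds the shared logic and takes the init parameter through its constructor, and only the concrete 'Test'-prefixed subclasses can be instantiated by a test runner. Surefire skips abstract classes, which is exactly what the fix relies on.

```java
// Abstract parent: cannot be instantiated by the test runner, so it never
// runs without the init parameters its children are supposed to supply.
abstract class TrustDomainAuthBase {
    protected final String transportMode;

    protected TrustDomainAuthBase(String transportMode) {
        this.transportMode = transportMode; // subclasses supply the init parameter
    }

    String connect() { // shared "test" logic lives in the parent
        return "connected via " + transportMode;
    }
}

// Concrete subclass: name starts with "Test", so both the ptest parser and
// Surefire would pick it up, and it is fully initialized.
class TestTrustDomainAuthBinary extends TrustDomainAuthBase {
    TestTrustDomainAuthBinary() {
        super("binary");
    }
}

public class AbstractTestSketch {
    public static void main(String[] args) {
        System.out.println(new TestTrustDomainAuthBinary().connect()); // prints connected via binary
    }
}
```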





[jira] [Updated] (HIVE-23163) Class TrustDomainAuthenticationTest should be abstract

2020-04-09 Thread Yikun Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yikun Jiang updated HIVE-23163:
---
Attachment: HIVE-23163.3.patch
Status: Patch Available  (was: Open)

> Class TrustDomainAuthenticationTest should be abstract
> --
>
> Key: HIVE-23163
> URL: https://issues.apache.org/jira/browse/HIVE-23163
> Project: Hive
>  Issue Type: Bug
>Reporter: Zhenyu Zheng
>Assignee: Yikun Jiang
>Priority: Major
> Attachments: HIVE-23163.1.patch, HIVE-23163.2.patch, 
> HIVE-23163.3.patch
>
>
> When running tests in pre-commit CI, the test parser only identifies test 
> classes whose names start with 'Test':
> [https://github.com/apache/hive/blob/d2163cbfb8bacf859fa8572e24c8533bb2dcb0f3/testutils/ptest2/src/main/java/org/apache/hive/ptest/execution/conf/UnitTestPropertiesParser.java#L406]
> But when running with `mvn test`, Surefire also picks up classes matching 
> '*Test.java':
> [http://maven.apache.org/plugins-archives/maven-surefire-plugin-2.12.4/examples/inclusion-exclusion.html]
> So 
> [https://github.com/apache/hive/blob/d2163cbfb8bacf859fa8572e24c8533bb2dcb0f3/itests/hive-unit/src/test/java/org/apache/hive/service/auth/TrustDomainAuthenticationTest.java#L38]
> is also included in the test run, for example:
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/23/testReport/junit/org.apache.hive.service.auth/TrustDomainAuthenticationTest/testTrustedDomainAuthentication/]
> This is because the class TrustDomainAuthenticationTest is actually a parent 
> class that does not have the parameters for init. The actual tests are its 
> child classes, like the rest in 
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/23/testReport/junit/org.apache.hive.service.auth/]
> which pass the actual parameters for init: 
> [https://github.com/apache/hive/blob/d2163cbfb8bacf859fa8572e24c8533bb2dcb0f3/itests/hive-unit/src/test/java/org/apache/hive/service/auth/TestTrustDomainAuthenticationBinary.java#L26]
>  
> We can make this class abstract so that it won't be included when running 
> 'mvn test', like 
> [https://github.com/apache/hive/blob/d2163cbfb8bacf859fa8572e24c8533bb2dcb0f3/itests/hive-unit/src/test/java/org/apache/hive/jdbc/AbstractJdbcTriggersTest.java#L54]
>  
>  





[jira] [Updated] (HIVE-23133) Numeric operations can have different result across hardware archs

2020-04-09 Thread Yikun Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yikun Jiang updated HIVE-23133:
---
Attachment: HIVE-23133.2.patch
Status: Patch Available  (was: Open)

> Numeric operations can have different result across hardware archs
> --
>
> Key: HIVE-23133
> URL: https://issues.apache.org/jira/browse/HIVE-23133
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zhenyu Zheng
>Assignee: Yikun Jiang
>Priority: Major
> Attachments: HIVE-23133.1.patch, HIVE-23133.2.patch
>
>
> Currently, we have set up an ARM CI to test how Hive works on the ARM 
> platform:
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/]
> Among the failures, we have observed that some numeric operations can have 
> different results across hardware archs, such as:
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/25/testReport/org.apache.hadoop.hive.cli/TestCliDriver/testCliDriver_vector_decimal_udf2_/]
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/25/testReport/org.apache.hadoop.hive.cli/TestSparkCliDriver/testCliDriver_subquery_select_/]
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/25/testReport/org.apache.hadoop.hive.cli/TestSparkCliDriver/testCliDriver_vectorized_math_funcs_/]
> We can see that the calculation results of log, exp, cos, toRadians, etc. are 
> slightly different from the .out file results being compared against (those 
> were generated and recorded on x86 machines). This is because we use the 
> [Math 
> library|https://docs.oracle.com/javase/6/docs/api/java/lang/Math.html] for 
> these kinds of calculations,
> and according to its 
> [documentation|https://docs.oracle.com/javase/6/docs/api/java/lang/Math.html]:
> _Unlike some of the numeric methods of class StrictMath, all implementations 
> of the equivalent functions of class Math are not_
> _defined to return the bit-for-bit same results. This relaxation permits 
> better-performing implementations where strict reproducibility_
> _is not required._
> _By default many of the Math methods simply call the equivalent method in 
> StrictMath for their implementation._
> _Code generators are encouraged to use platform-specific native libraries or 
> microprocessor instructions, where available,_
> _to provide higher-performance implementations of Math methods._
> So the results can differ across hardware archs.
> On the other hand, Java provides another class, 
> [StrictMath|https://docs.oracle.com/javase/6/docs/api/java/lang/StrictMath.html],
>  which does not have this problem, according to its 
> [documentation|https://docs.oracle.com/javase/6/docs/api/java/lang/StrictMath.html]:
> _To help ensure portability of Java programs, the definitions of some of the 
> numeric functions in this package require that they produce_
> _the same results as certain published algorithms._
> So in order to fix the above-mentioned problem, we should consider switching 
> to StrictMath instead of Math.
>  
>  
>  
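The contrast described above is easy to demonstrate in a small self-contained example. Per the Javadoc, both `Math` and `StrictMath` results stay within about 1 ulp of the exact value, so any cross-implementation difference is a few ulps at most, while `StrictMath` (fdlibm) is bit-for-bit reproducible everywhere; whether `Math` and `StrictMath` agree bitwise on a given machine is platform-dependent.

```java
public class MathVsStrictMath {
    public static void main(String[] args) {
        double x = 0.7;
        double m = Math.cos(x);       // may use platform-specific intrinsics
        double s = StrictMath.cos(x); // fdlibm algorithm, bit-identical everywhere

        // Both are within ~1 ulp of the exact result, so they can differ by
        // at most a couple of ulps from each other.
        System.out.println(Math.abs(m - s) <= 2 * Math.ulp(s)); // prints true

        // Whether they are bitwise identical depends on the platform/JVM;
        // on x86 they often are, because Math frequently delegates to StrictMath.
        System.out.println(Double.doubleToLongBits(m) == Double.doubleToLongBits(s));
    }
}
```

This is why .out files recorded on one architecture can mismatch on another: the comparison is textual and exact, but the `Math` results are only guaranteed to be close, not identical.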





[jira] [Updated] (HIVE-23133) Numeric operations can have different result across hardware archs

2020-04-09 Thread Yikun Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yikun Jiang updated HIVE-23133:
---
Status: Open  (was: Patch Available)

> Numeric operations can have different result across hardware archs
> --
>
> Key: HIVE-23133
> URL: https://issues.apache.org/jira/browse/HIVE-23133
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zhenyu Zheng
>Assignee: Yikun Jiang
>Priority: Major
> Attachments: HIVE-23133.1.patch
>
>
> Currently, we have set up an ARM CI to test out how Hive works on ARM 
> platform:
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/]
> Among the failures, we have observed that some numeric operations can have 
> different result across hardware archs, such as:
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/25/testReport/org.apache.hadoop.hive.cli/TestCliDriver/testCliDriver_vector_decimal_udf2_/]
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/25/testReport/org.apache.hadoop.hive.cli/TestSparkCliDriver/testCliDriver_subquery_select_/]
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/25/testReport/org.apache.hadoop.hive.cli/TestSparkCliDriver/testCliDriver_vectorized_math_funcs_/]
> we can see that the calculation results of log, exp, cos, toRadians etc is 
> slitly different than the .out file results that we are
> comparing(they are tested and wrote on X86 machines), this is because of we 
> use [Math 
> Library|https://docs.oracle.com/javase/6/docs/api/java/lang/Math.html] for 
> these kind of calculations.
> and according to the 
> [illustration|https://docs.oracle.com/javase/6/docs/api/java/lang/Math.html]:
> _Unlike some of the numeric methods of class StrictMath, all implementations 
> of the equivalent functions of class Math are not_
> _defined to return the bit-for-bit same results. This relaxation permits 
> better-performing implementations where strict reproducibility_
> _is not required._
> _By default many of the Math methods simply call the equivalent method in 
> StrictMath for their implementation._
> _Code generators are encouraged to use platform-specific native libraries or 
> microprocessor instructions, where available,_
> _to provide higher-performance implementations of Math methods._
> so results can differ across hardware architectures.
> On the other hand, Java provides another class, 
> [StrictMath|https://docs.oracle.com/javase/6/docs/api/java/lang/StrictMath.html],
>  which does not have this problem; according to its 
> [reference|https://docs.oracle.com/javase/6/docs/api/java/lang/StrictMath.html]:
> _To help ensure portability of Java programs, the definitions of some of the 
> numeric functions in this package require that they produce_
> _the same results as certain published algorithms._
> So to fix the problem described above, we should consider switching from 
> Math to StrictMath.
>  
>  
>  
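To make the Math vs. StrictMath contract concrete, here is a minimal, hypothetical sketch (the class name and input value are illustrative, not taken from the Hive tests). StrictMath is defined to reproduce the published fdlibm algorithms bit-for-bit on every architecture, while Math is permitted to use faster platform-specific implementations whose last bits may differ, e.g. between x86 and ARM.

```java
class StrictMathPortability {
    public static void main(String[] args) {
        double x = 0.7;
        // Math may use platform intrinsics; the last ulp can differ by architecture.
        long mathBits = Double.doubleToLongBits(Math.cos(x));
        // StrictMath is specified to match fdlibm bit-for-bit everywhere.
        long strictBits = Double.doubleToLongBits(StrictMath.cos(x));
        System.out.println(Long.toHexString(mathBits));
        System.out.println(Long.toHexString(strictBits));
    }
}
```

Comparing raw bit patterns rather than printed decimals is what exposes the cross-architecture differences that the .q.out comparisons run into.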



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23006) Basic compiler support for Probe MapJoin

2020-04-09 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17080196#comment-17080196
 ] 

Hive QA commented on HIVE-23006:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
34s{color} | {color:blue} common in master has 63 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
44s{color} | {color:blue} ql in master has 1527 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
15s{color} | {color:red} common: The patch generated 4 new + 374 unchanged - 0 
fixed = 378 total (was 374) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
41s{color} | {color:red} ql: The patch generated 9 new + 112 unchanged - 0 
fixed = 121 total (was 112) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
53s{color} | {color:red} ql generated 3 new + 1526 unchanged - 1 fixed = 1529 
total (was 1527) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Class org.apache.hadoop.hive.ql.exec.TableScanOperator defines 
non-transient non-serializable instance field probeDecodeContextSet  In 
TableScanOperator.java:instance field probeDecodeContextSet  In 
TableScanOperator.java |
|  |  Class org.apache.hadoop.hive.ql.plan.MapWork defines non-transient 
non-serializable instance field probeDecodeContext  In MapWork.java:instance 
field probeDecodeContext  In MapWork.java |
|  |  Class org.apache.hadoop.hive.ql.plan.TableScanDesc defines non-transient 
non-serializable instance field probeDecodeContext  In 
TableScanDesc.java:instance field probeDecodeContext  In TableScanDesc.java |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21541/dev-support/hive-personality.sh
 |
| git revision | master / 796a9c5 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21541/yetus/diff-checkstyle-common.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21541/yetus/diff-checkstyle-ql.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21541/yetus/new-findbugs-ql.html
 |
| modules | C: common ql itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21541/yetus.txt |
| Powered by | Apache Yetus http://yetus.apa
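The three FindBugs hits above are the classic SE_BAD_FIELD pattern: a Serializable plan class holding a non-transient field of a non-serializable type. A hedged, self-contained sketch of the usual fix (the class names here are hypothetical stand-ins, not the actual Hive types):

```java
import java.io.Serializable;

// Stand-in for a non-serializable type such as a probe-decode context.
class ProbeContext { }

class PlanNode implements Serializable {
    private static final long serialVersionUID = 1L;

    // transient: excluded from serialization, so the non-serializable
    // ProbeContext no longer breaks writeObject(); it must be rebuilt
    // after deserialization if it is still needed.
    private transient ProbeContext probeContext;

    void setProbeContext(ProbeContext ctx) { this.probeContext = ctx; }
    ProbeContext probeContext() { return probeContext; }
}
```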

[jira] [Updated] (HIVE-23163) Class TrustDomainAuthenticationTest should be abstract

2020-04-09 Thread Zhenyu Zheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhenyu Zheng updated HIVE-23163:

Attachment: HIVE-23163.2.patch
Status: Patch Available  (was: Open)

> Class TrustDomainAuthenticationTest should be abstract
> --
>
> Key: HIVE-23163
> URL: https://issues.apache.org/jira/browse/HIVE-23163
> Project: Hive
>  Issue Type: Bug
>Reporter: Zhenyu Zheng
>Assignee: Yikun Jiang
>Priority: Major
> Attachments: HIVE-23163.1.patch, HIVE-23163.2.patch
>
>
> When running tests in the pre-commit CI, the test parser only identifies 
> test classes whose names start with 'Test':
> [https://github.com/apache/hive/blob/d2163cbfb8bacf859fa8572e24c8533bb2dcb0f3/testutils/ptest2/src/main/java/org/apache/hive/ptest/execution/conf/UnitTestPropertiesParser.java#L406]
> But when running with `mvn test`, Surefire also picks up classes matching 
> '*Test.java':
> [http://maven.apache.org/plugins-archives/maven-surefire-plugin-2.12.4/examples/inclusion-exclusion.html]
> So
> [https://github.com/apache/hive/blob/d2163cbfb8bacf859fa8572e24c8533bb2dcb0f3/itests/hive-unit/src/test/java/org/apache/hive/service/auth/TrustDomainAuthenticationTest.java#L38]
> is also included in the test run, for example:
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/23/testReport/junit/org.apache.hive.service.auth/TrustDomainAuthenticationTest/testTrustedDomainAuthentication/]
> This is because the class TrustDomainAuthenticationTest is actually a parent 
> class that does not take the parameters needed for initialization. The 
> actual tests are its child classes, like the rest in
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/23/testReport/junit/org.apache.hive.service.auth/]
> which pass the actual parameters for initialization:
> [https://github.com/apache/hive/blob/d2163cbfb8bacf859fa8572e24c8533bb2dcb0f3/itests/hive-unit/src/test/java/org/apache/hive/service/auth/TestTrustDomainAuthenticationBinary.java#L26]
>  
> We can make this class abstract so that it is not picked up when running 
> `mvn test`, like
> [https://github.com/apache/hive/blob/d2163cbfb8bacf859fa8572e24c8533bb2dcb0f3/itests/hive-unit/src/test/java/org/apache/hive/jdbc/AbstractJdbcTriggersTest.java#L54]
>  
>  
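The proposed fix can be sketched as follows (a hedged, simplified illustration; the constructor parameter is a hypothetical stand-in for the real test hierarchy's init arguments). The parent keeps its *Test name, so Surefire's default includes still match it, but Surefire does not instantiate abstract classes, so only the concrete subclasses run:

```java
// Declaring the parameterized parent abstract keeps it out of `mvn test`
// while concrete subclasses still execute.
abstract class TrustDomainAuthenticationTest {
    protected final String transportMode;

    protected TrustDomainAuthenticationTest(String transportMode) {
        this.transportMode = transportMode; // init parameter supplied by subclasses
    }

    String transportMode() { return transportMode; }
}

// Concrete child classes like this one are what actually run.
class TestTrustDomainAuthenticationBinary extends TrustDomainAuthenticationTest {
    TestTrustDomainAuthenticationBinary() {
        super("binary");
    }
}
```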





[jira] [Updated] (HIVE-23163) Class TrustDomainAuthenticationTest should be abstract

2020-04-09 Thread Zhenyu Zheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhenyu Zheng updated HIVE-23163:

Status: Open  (was: Patch Available)

> Class TrustDomainAuthenticationTest should be abstract
> --
>
> Key: HIVE-23163
> URL: https://issues.apache.org/jira/browse/HIVE-23163
> Project: Hive
>  Issue Type: Bug
>Reporter: Zhenyu Zheng
>Assignee: Yikun Jiang
>Priority: Major
> Attachments: HIVE-23163.1.patch
>
>
> When running tests in the pre-commit CI, the test parser only identifies 
> test classes whose names start with 'Test':
> [https://github.com/apache/hive/blob/d2163cbfb8bacf859fa8572e24c8533bb2dcb0f3/testutils/ptest2/src/main/java/org/apache/hive/ptest/execution/conf/UnitTestPropertiesParser.java#L406]
> But when running with `mvn test`, Surefire also picks up classes matching 
> '*Test.java':
> [http://maven.apache.org/plugins-archives/maven-surefire-plugin-2.12.4/examples/inclusion-exclusion.html]
> So
> [https://github.com/apache/hive/blob/d2163cbfb8bacf859fa8572e24c8533bb2dcb0f3/itests/hive-unit/src/test/java/org/apache/hive/service/auth/TrustDomainAuthenticationTest.java#L38]
> is also included in the test run, for example:
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/23/testReport/junit/org.apache.hive.service.auth/TrustDomainAuthenticationTest/testTrustedDomainAuthentication/]
> This is because the class TrustDomainAuthenticationTest is actually a parent 
> class that does not take the parameters needed for initialization. The 
> actual tests are its child classes, like the rest in
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/23/testReport/junit/org.apache.hive.service.auth/]
> which pass the actual parameters for initialization:
> [https://github.com/apache/hive/blob/d2163cbfb8bacf859fa8572e24c8533bb2dcb0f3/itests/hive-unit/src/test/java/org/apache/hive/service/auth/TestTrustDomainAuthenticationBinary.java#L26]
>  
> We can make this class abstract so that it is not picked up when running 
> `mvn test`, like
> [https://github.com/apache/hive/blob/d2163cbfb8bacf859fa8572e24c8533bb2dcb0f3/itests/hive-unit/src/test/java/org/apache/hive/jdbc/AbstractJdbcTriggersTest.java#L54]
>  
>  





[jira] [Commented] (HIVE-23163) Class TrustDomainAuthenticationTest should be abstract

2020-04-09 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17080182#comment-17080182
 ] 

Hive QA commented on HIVE-23163:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12999408/HIVE-23163.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 18192 tests 
executed
*Failed tests:*
{noformat}
TestMiniLlapCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=165)

[cte_4.q,file_with_header_footer.q,results_cache_with_auth.q,alter_table_location2.q,parquet_map_type_vectorization.q,orc_merge2.q,insert_into2.q,reduce_deduplicate.q,orc_llap_counters.q,schemeAuthority2.q,reduce_deduplicate_distinct.q,rcfile_merge3.q,intersect_distinct.q,add_part_with_loc.q,multi_count_distinct_null.q]
org.apache.hive.beeline.TestBeeLineWithArgs.testRowsAffected (batchId=286)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21540/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21540/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21540/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12999408 - PreCommit-HIVE-Build

> Class TrustDomainAuthenticationTest should be abstract
> --
>
> Key: HIVE-23163
> URL: https://issues.apache.org/jira/browse/HIVE-23163
> Project: Hive
>  Issue Type: Bug
>Reporter: Zhenyu Zheng
>Assignee: Yikun Jiang
>Priority: Major
> Attachments: HIVE-23163.1.patch
>
>
> When running tests in the pre-commit CI, the test parser only identifies 
> test classes whose names start with 'Test':
> [https://github.com/apache/hive/blob/d2163cbfb8bacf859fa8572e24c8533bb2dcb0f3/testutils/ptest2/src/main/java/org/apache/hive/ptest/execution/conf/UnitTestPropertiesParser.java#L406]
> But when running with `mvn test`, Surefire also picks up classes matching 
> '*Test.java':
> [http://maven.apache.org/plugins-archives/maven-surefire-plugin-2.12.4/examples/inclusion-exclusion.html]
> So
> [https://github.com/apache/hive/blob/d2163cbfb8bacf859fa8572e24c8533bb2dcb0f3/itests/hive-unit/src/test/java/org/apache/hive/service/auth/TrustDomainAuthenticationTest.java#L38]
> is also included in the test run, for example:
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/23/testReport/junit/org.apache.hive.service.auth/TrustDomainAuthenticationTest/testTrustedDomainAuthentication/]
> This is because the class TrustDomainAuthenticationTest is actually a parent 
> class that does not take the parameters needed for initialization. The 
> actual tests are its child classes, like the rest in
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/23/testReport/junit/org.apache.hive.service.auth/]
> which pass the actual parameters for initialization:
> [https://github.com/apache/hive/blob/d2163cbfb8bacf859fa8572e24c8533bb2dcb0f3/itests/hive-unit/src/test/java/org/apache/hive/service/auth/TestTrustDomainAuthenticationBinary.java#L26]
>  
> We can make this class abstract so that it is not picked up when running 
> `mvn test`, like
> [https://github.com/apache/hive/blob/d2163cbfb8bacf859fa8572e24c8533bb2dcb0f3/itests/hive-unit/src/test/java/org/apache/hive/jdbc/AbstractJdbcTriggersTest.java#L54]
>  
>  





[jira] [Commented] (HIVE-23174) Remove TOK_TRUNCATETABLE

2020-04-09 Thread Miklos Gergely (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17080179#comment-17080179
 ] 

Miklos Gergely commented on HIVE-23174:
---

[~belugabehr] keywords and tokens are different concepts. In the future we may 
use the truncate word as part of other commands as well. What would be the 
benefit of such a change?

> Remove TOK_TRUNCATETABLE
> 
>
> Key: HIVE-23174
> URL: https://issues.apache.org/jira/browse/HIVE-23174
> Project: Hive
>  Issue Type: Sub-task
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HIVE-23174.1.patch
>
>






[jira] [Updated] (HIVE-23100) Create RexNode factory and use it in CalcitePlanner

2020-04-09 Thread Jesus Camacho Rodriguez (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-23100:
---
Attachment: HIVE-23100.08.patch

> Create RexNode factory and use it in CalcitePlanner
> ---
>
> Key: HIVE-23100
> URL: https://issues.apache.org/jira/browse/HIVE-23100
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-23100.01.patch, HIVE-23100.02.patch, 
> HIVE-23100.03.patch, HIVE-23100.04.patch, HIVE-23100.05.patch, 
> HIVE-23100.06.patch, HIVE-23100.07.patch, HIVE-23100.08.patch, 
> HIVE-23100.patch
>
>
> Follow-up of HIVE-22746.
> This will allow us to generate directly the RexNode from the AST nodes.





[jira] [Comment Edited] (HIVE-15577) Simplify current parser

2020-04-09 Thread David Mollitor (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-15577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17080172#comment-17080172
 ] 

David Mollitor edited comment on HIVE-15577 at 4/10/20, 2:22 AM:
-

I am investigating [HIVE-23172], and I am blocked because compiling the 
grammar fails with the following error:
 
{code:none}
hive-parser: Compilation failure
[ERROR] 
/home/apache/hive/hive/parser/target/generated-sources/antlr3/org/apache/hadoop/hive/ql/parse/HiveParser.java:[40,38]
 code too large
 {code}

I traced it down to the fact that there are too many tokens defined.  In 
HiveParser.java, it has the following:

{code:java} 
public static final String[] tokenNames = new String[] \{ ... };
{code}
 
That list is so long, it's breaking Java compilation.
 
I observed that the parser defines two tokens for most elements, for example:
 
KW_TRUNCATE / TOK_TRUNCATETABLE
 
I propose consolidating these down to one token to conserve some space: just 
use KW_TRUNCATE and get rid of the TOK version.  The same can be applied to 
quite a few ANTLR tokens.
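As background for why this helps: javac's "code too large" error means a single method exceeded the JVM's 64 KiB bytecode limit, and a giant static array initializer compiles into the synthetic `<clinit>` method. A separate general workaround, distinct from the token-consolidation proposal above, is to split the initialization across several smaller helper methods. A hedged sketch (the class and entries are hypothetical, not generated ANTLR code):

```java
import java.util.ArrayList;
import java.util.List;

class TokenNames {
    // Built through small helper methods instead of one giant array literal,
    // so no single generated method approaches the 64 KiB bytecode limit.
    static final String[] TOKEN_NAMES = build();

    private static String[] build() {
        List<String> names = new ArrayList<>();
        part1(names);
        part2(names);
        return names.toArray(new String[0]);
    }

    // In generated code, each partN() would hold a bounded slice of the names.
    private static void part1(List<String> n) { n.add("KW_TRUNCATE"); n.add("KW_TABLE"); }
    private static void part2(List<String> n) { n.add("KW_SELECT"); }
}
```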


was (Author: belugabehr):
I am investigating [HIVE-23172], and I am blocked because compiling the 
grammar fails with the following error:
 
{code:none}
hive-parser: Compilation failure
[ERROR] 
/home/apache/hive/hive/parser/target/generated-sources/antlr3/org/apache/hadoop/hive/ql/parse/HiveParser.java:[40,38]
 code too large
 {code}

I traced it down to the fact that there are too many tokens defined.  In 
HiveParser.java, it has the following:

{code:java} 
public static final String[] tokenNames = new String[] \{ ... };
{code}
 
That list is so long, it's breaking Java compilation.
 
I observed that the parser defines two tokens for most elements, for example:
 
KW_TRUNCATE / TOK_TRUNCATETABLE
 
I propose consolidating these down to one token to conserve some space: just 
use KW_TRUNCATE and get rid of the TOK version.

> Simplify current parser
> ---
>
> Key: HIVE-15577
> URL: https://issues.apache.org/jira/browse/HIVE-15577
> Project: Hive
>  Issue Type: Bug
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>Priority: Major
>
> We encountered "code too large" problem frequently. We need to reduce the 
> code size.





[jira] [Commented] (HIVE-15577) Simplify current parser

2020-04-09 Thread David Mollitor (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-15577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17080172#comment-17080172
 ] 

David Mollitor commented on HIVE-15577:
---

I am investigating [HIVE-23172], and I am blocked because compiling the 
grammar fails with the following error:
 
{code:none}
hive-parser: Compilation failure
[ERROR] 
/home/apache/hive/hive/parser/target/generated-sources/antlr3/org/apache/hadoop/hive/ql/parse/HiveParser.java:[40,38]
 code too large
 {code}

I traced it down to the fact that there are too many tokens defined.  In 
HiveParser.java, it has the following:

{code:java} 
public static final String[] tokenNames = new String[] \{ ... };
{code}
 
That list is so long, it's breaking Java compilation.
 
I observed that the parser defines two tokens for most elements, for example:
 
KW_TRUNCATE / TOK_TRUNCATETABLE
 
I propose consolidating these down to one token to conserve some space: just 
use KW_TRUNCATE and get rid of the TOK version.

> Simplify current parser
> ---
>
> Key: HIVE-15577
> URL: https://issues.apache.org/jira/browse/HIVE-15577
> Project: Hive
>  Issue Type: Bug
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>Priority: Major
>
> We encountered "code too large" problem frequently. We need to reduce the 
> code size.





[jira] [Commented] (HIVE-23163) Class TrustDomainAuthenticationTest should be abstract

2020-04-09 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17080170#comment-17080170
 ] 

Hive QA commented on HIVE-23163:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
41s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
17s{color} | {color:red} itests/hive-unit: The patch generated 1 new + 2 
unchanged - 1 fixed = 3 total (was 3) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21540/dev-support/hive-personality.sh
 |
| git revision | master / 796a9c5 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21540/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| modules | C: itests/hive-unit U: itests/hive-unit |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21540/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Class TrustDomainAuthenticationTest should be abstract
> --
>
> Key: HIVE-23163
> URL: https://issues.apache.org/jira/browse/HIVE-23163
> Project: Hive
>  Issue Type: Bug
>Reporter: Zhenyu Zheng
>Assignee: Yikun Jiang
>Priority: Major
> Attachments: HIVE-23163.1.patch
>
>
> When running tests in the pre-commit CI, the test parser only identifies 
> test classes whose names start with 'Test':
> [https://github.com/apache/hive/blob/d2163cbfb8bacf859fa8572e24c8533bb2dcb0f3/testutils/ptest2/src/main/java/org/apache/hive/ptest/execution/conf/UnitTestPropertiesParser.java#L406]
> But when running with `mvn test`, Surefire also picks up classes matching 
> '*Test.java':
> [http://maven.apache.org/plugins-archives/maven-surefire-plugin-2.12.4/examples/inclusion-exclusion.html]
> So
> [https://github.com/apache/hive/blob/d2163cbfb8bacf859fa8572e24c8533bb2dcb0f3/itests/hive-unit/src/test/java/org/apache/hive/service/auth/TrustDomainAuthenticationTest.java#L38]
> is also included in the test run, for example:
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/23/testReport/junit/org.apache.hive.service.auth/TrustDomainAuthenticationTest/testTrustedDomainAuthentication/]
> This is because the class TrustDomainAuthenticationTest is actually a parent 
> class that does not take the parameters needed for initialization. The actual tests 

[jira] [Commented] (HIVE-23104) Minimize critical paths of TxnHandler::commitTxn and abortTxn

2020-04-09 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17080163#comment-17080163
 ] 

Hive QA commented on HIVE-23104:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12999422/HIVE-23104.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18207 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21539/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21539/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21539/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12999422 - PreCommit-HIVE-Build

> Minimize critical paths of TxnHandler::commitTxn and abortTxn
> -
>
> Key: HIVE-23104
> URL: https://issues.apache.org/jira/browse/HIVE-23104
> Project: Hive
>  Issue Type: Improvement
>Reporter: Marton Bod
>Assignee: Marton Bod
>Priority: Major
> Attachments: HIVE-23104.1.patch, HIVE-23104.1.patch, 
> HIVE-23104.1.patch, HIVE-23104.2.patch, HIVE-23104.2.patch, HIVE-23104.3.patch
>
>
> Investigate whether any code sections in TxnHandler::commitTxn and abortTxn 
> can be lifted out/executed async in order to reduce the overall execution 
> time of these methods.





[jira] [Updated] (HIVE-23174) Remove TOK_TRUNCATETABLE

2020-04-09 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HIVE-23174:
--
Attachment: HIVE-23174.1.patch

> Remove TOK_TRUNCATETABLE
> 
>
> Key: HIVE-23174
> URL: https://issues.apache.org/jira/browse/HIVE-23174
> Project: Hive
>  Issue Type: Sub-task
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HIVE-23174.1.patch
>
>






[jira] [Updated] (HIVE-23174) Remove TOK_TRUNCATETABLE

2020-04-09 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HIVE-23174:
--
Status: Patch Available  (was: Open)

> Remove TOK_TRUNCATETABLE
> 
>
> Key: HIVE-23174
> URL: https://issues.apache.org/jira/browse/HIVE-23174
> Project: Hive
>  Issue Type: Sub-task
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HIVE-23174.1.patch
>
>






[jira] [Assigned] (HIVE-23174) Remove TOK_TRUNCATETABLE

2020-04-09 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor reassigned HIVE-23174:
-


> Remove TOK_TRUNCATETABLE
> 
>
> Key: HIVE-23174
> URL: https://issues.apache.org/jira/browse/HIVE-23174
> Project: Hive
>  Issue Type: Sub-task
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
>






[jira] [Commented] (HIVE-23104) Minimize critical paths of TxnHandler::commitTxn and abortTxn

2020-04-09 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17080126#comment-17080126
 ] 

Hive QA commented on HIVE-23104:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
12s{color} | {color:blue} standalone-metastore/metastore-server in master has 
190 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
23s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 11 new + 531 unchanged - 11 fixed = 542 total (was 542) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
23s{color} | {color:red} standalone-metastore/metastore-server generated 3 new 
+ 190 unchanged - 0 fixed = 193 total (was 190) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:standalone-metastore/metastore-server |
|  |  
org.apache.hadoop.hive.metastore.txn.TxnHandler.moveTxnComponentsToCompleted(Statement,
 long, char) passes a nonconstant String to an execute or addBatch method on an 
SQL statement  At TxnHandler.java:nonconstant String to an execute or addBatch 
method on an SQL statement  At TxnHandler.java:[line 1366] |
|  |  
org.apache.hadoop.hive.metastore.txn.TxnHandler.checkForWriteConflict(Statement,
 long) passes a nonconstant String to an execute or addBatch method on an SQL 
statement  At TxnHandler.java:String to an execute or addBatch method on an SQL 
statement  At TxnHandler.java:[line 1327] |
|  |  
org.apache.hadoop.hive.metastore.txn.TxnHandler.updateKeyValueAssociatedWithTxn(CommitTxnRequest,
 Statement) passes a nonconstant String to an execute or addBatch method on an 
SQL statement  At TxnHandler.java:String to an execute or addBatch method on an 
SQL statement  At TxnHandler.java:[line 1411] |
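The three FindBugs hits above all flag the same pattern: a nonconstant SQL string built by concatenation is handed to Statement.execute/addBatch. A minimal, hypothetical sketch of the contrast FindBugs is pushing for follows; the table and column names are illustrative stand-ins, not necessarily TxnHandler's real statements, and the PreparedStatement usage in the comment is the standard JDBC remedy, not the actual patch.

```java
// Illustrative only: contrasts the flagged pattern with the parameterized form.
public class NonconstantSqlExample {

  // Flagged pattern: the txn id is concatenated into the SQL text, so a
  // nonconstant String reaches Statement.execute()/addBatch().
  static String flagged(long txnId) {
    return "DELETE FROM TXN_COMPONENTS WHERE TC_TXNID = " + txnId;
  }

  // Preferred pattern: constant SQL with a '?' placeholder. In real code the
  // value is bound separately, e.g.:
  //   try (PreparedStatement ps = conn.prepareStatement(PREFERRED)) {
  //     ps.setLong(1, txnId);
  //     ps.executeUpdate();
  //   }
  static final String PREFERRED = "DELETE FROM TXN_COMPONENTS WHERE TC_TXNID = ?";

  public static void main(String[] args) {
    System.out.println(flagged(1366L));
    System.out.println(PREFERRED);
  }
}
```

Parameter binding keeps the SQL text constant (which silences the warning) and removes the injection surface when the bound value is not a trusted numeric.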
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21539/dev-support/hive-personality.sh
 |
| git revision | master / 796a9c5 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21539/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21539/yetus/new-findbugs-standalone-metastore_metastore-server.html
 |
| modules | C: standalone-metastore/metastore-server U: 
standalone-metastore/metastore-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21539/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Minimize critical paths of TxnHandler::commitTxn and abortTxn
> -
>
> Key

[jira] [Updated] (HIVE-23145) get_partitions_with_specs fails if filter expression is not parsable

2020-04-09 Thread Vineet Garg (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-23145:
---
Status: Patch Available  (was: Open)

> get_partitions_with_specs fails if filter expression is not parsable
> 
>
> Key: HIVE-23145
> URL: https://issues.apache.org/jira/browse/HIVE-23145
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-23145.1.patch, HIVE-23145.2.patch
>
>
> The expression is not parsable in most cases. The current API 
> *get_partitions_by_expr* anticipates this and provides a fallback mechanism: 
> it deserializes the provided expression, fetches all partition names for the 
> table, prunes the partition names using the expression, and then uses the 
> surviving names to fetch the required partition data.
>  Note that this expects a serialized expression instead of a string.
> This needs to be done for both the Direct SQL and JDO paths.
> e.g. the following error is thrown for TPC-DS query 55, which provides an 
> *IS NOT NULL filter* expression:
> *ERROR*
> {code:java}
> MetaException(message:Error parsing partition filter; lexer error: null; 
> exception NoViableAltException(13@[]))MetaException(message:Error parsing 
> partition filter; lexer error: null; exception NoViableAltException(13@[])) 
> at 
> org.apache.hadoop.hive.metastore.PartFilterExprUtil.getFilterParser(PartFilterExprUtil.java:154)
>  at 
> org.apache.hadoop.hive.metastore.ObjectStore$15.initExpressionTree(ObjectStore.java:4339)
>  at 
> org.apache.hadoop.hive.metastore.ObjectStore$15.canUseDirectSql(ObjectStore.java:4319)
>  at 
> org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.start(ObjectStore.java:4021)
>  at 
> org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:3985)
>  at 
> org.apache.hadoop.hive.metastore.ObjectStore.getPartitionSpecsByFilterAndProjection(ObjectStore.java:4395)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) 
> at com.sun.proxy.$Proxy26.getPartitionSpecsByFilterAndProjection(Unknown 
> Source) at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_with_specs(HiveMetaStore.java:5356)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
>  at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
>  at com.sun.proxy.$Proxy27.get_partitions_with_specs(Unknown Source) at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partitions_with_specs.getResult(ThriftHiveMetastore.java:21620)
>  at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partitions_with_specs.getResult(ThriftHiveMetastore.java:21604)
>  at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at 
> org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:643)
>  at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:638)
>  at java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
>  at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:638)
>  at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
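The fallback described in HIVE-23145 above amounts to: fetch all partition names, evaluate the deserialized expression against each name client-side, then fetch only the surviving partitions by name. A rough standalone sketch of that shape is below; it is illustrative only (the predicate stands in for the deserialized expression evaluator, and none of these names are Hive's actual ObjectStore API).

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

/** Illustrative sketch of the "prune by name" fallback (not Hive's real API). */
public class PartitionPruneFallback {

  /**
   * When the filter string cannot be parsed into an expression tree, fall back
   * to evaluating the (deserialized) expression against every partition name
   * and keeping only the matches; the matches then drive the data fetch.
   */
  static List<String> pruneByName(List<String> allPartitionNames,
                                  Predicate<String> deserializedExpr) {
    return allPartitionNames.stream()
        .filter(deserializedExpr)        // evaluate the expression per name
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<String> names = List.of(
        "ds=2020-04-08", "ds=2020-04-09", "ds=__HIVE_DEFAULT_PARTITION__");
    // Stand-in for an "ds IS NOT NULL" expression: drop the default partition.
    List<String> pruned =
        pruneByName(names, n -> !n.endsWith("__HIVE_DEFAULT_PARTITION__"));
    System.out.println(pruned);  // → [ds=2020-04-08, ds=2020-04-09]
  }
}
```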


[jira] [Updated] (HIVE-23145) get_partitions_with_specs fails if filter expression is not parsable

2020-04-09 Thread Vineet Garg (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-23145:
---
Attachment: HIVE-23145.2.patch

> get_partitions_with_specs fails if filter expression is not parsable
> 
>
> Key: HIVE-23145
> URL: https://issues.apache.org/jira/browse/HIVE-23145
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-23145.1.patch, HIVE-23145.2.patch
>
>
> The expression is not parsable in most cases. The current API 
> *get_partitions_by_expr* anticipates this and provides a fallback mechanism: 
> it deserializes the provided expression, fetches all partition names for the 
> table, prunes the partition names using the expression, and then uses the 
> surviving names to fetch the required partition data.
>  Note that this expects a serialized expression instead of a string.
> This needs to be done for both the Direct SQL and JDO paths.
> e.g. the following error is thrown for TPC-DS query 55, which provides an 
> *IS NOT NULL filter* expression:
> *ERROR*
> {code:java}
> MetaException(message:Error parsing partition filter; lexer error: null; 
> exception NoViableAltException(13@[]))MetaException(message:Error parsing 
> partition filter; lexer error: null; exception NoViableAltException(13@[])) 
> at 
> org.apache.hadoop.hive.metastore.PartFilterExprUtil.getFilterParser(PartFilterExprUtil.java:154)
>  at 
> org.apache.hadoop.hive.metastore.ObjectStore$15.initExpressionTree(ObjectStore.java:4339)
>  at 
> org.apache.hadoop.hive.metastore.ObjectStore$15.canUseDirectSql(ObjectStore.java:4319)
>  at 
> org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.start(ObjectStore.java:4021)
>  at 
> org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:3985)
>  at 
> org.apache.hadoop.hive.metastore.ObjectStore.getPartitionSpecsByFilterAndProjection(ObjectStore.java:4395)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) 
> at com.sun.proxy.$Proxy26.getPartitionSpecsByFilterAndProjection(Unknown 
> Source) at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_with_specs(HiveMetaStore.java:5356)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
>  at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
>  at com.sun.proxy.$Proxy27.get_partitions_with_specs(Unknown Source) at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partitions_with_specs.getResult(ThriftHiveMetastore.java:21620)
>  at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partitions_with_specs.getResult(ThriftHiveMetastore.java:21604)
>  at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at 
> org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:643)
>  at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:638)
>  at java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
>  at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:638)
>  at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23145) get_partitions_with_specs fails if filter expression is not parsable

2020-04-09 Thread Vineet Garg (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-23145:
---
Status: Open  (was: Patch Available)

> get_partitions_with_specs fails if filter expression is not parsable
> 
>
> Key: HIVE-23145
> URL: https://issues.apache.org/jira/browse/HIVE-23145
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-23145.1.patch, HIVE-23145.2.patch
>
>
> The expression is not parsable in most cases. The current API 
> *get_partitions_by_expr* anticipates this and provides a fallback mechanism: 
> it deserializes the provided expression, fetches all partition names for the 
> table, prunes the partition names using the expression, and then uses the 
> surviving names to fetch the required partition data.
>  Note that this expects a serialized expression instead of a string.
> This needs to be done for both the Direct SQL and JDO paths.
> e.g. the following error is thrown for TPC-DS query 55, which provides an 
> *IS NOT NULL filter* expression:
> *ERROR*
> {code:java}
> MetaException(message:Error parsing partition filter; lexer error: null; 
> exception NoViableAltException(13@[]))MetaException(message:Error parsing 
> partition filter; lexer error: null; exception NoViableAltException(13@[])) 
> at 
> org.apache.hadoop.hive.metastore.PartFilterExprUtil.getFilterParser(PartFilterExprUtil.java:154)
>  at 
> org.apache.hadoop.hive.metastore.ObjectStore$15.initExpressionTree(ObjectStore.java:4339)
>  at 
> org.apache.hadoop.hive.metastore.ObjectStore$15.canUseDirectSql(ObjectStore.java:4319)
>  at 
> org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.start(ObjectStore.java:4021)
>  at 
> org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:3985)
>  at 
> org.apache.hadoop.hive.metastore.ObjectStore.getPartitionSpecsByFilterAndProjection(ObjectStore.java:4395)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) 
> at com.sun.proxy.$Proxy26.getPartitionSpecsByFilterAndProjection(Unknown 
> Source) at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_with_specs(HiveMetaStore.java:5356)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
>  at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
>  at com.sun.proxy.$Proxy27.get_partitions_with_specs(Unknown Source) at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partitions_with_specs.getResult(ThriftHiveMetastore.java:21620)
>  at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partitions_with_specs.getResult(ThriftHiveMetastore.java:21604)
>  at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at 
> org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:643)
>  at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:638)
>  at java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
>  at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:638)
>  at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23154) Fix race condition in Utilities::mvFileToFinalPath

2020-04-09 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17080111#comment-17080111
 ] 

Hive QA commented on HIVE-23154:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12999401/HIVE-23154.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 18207 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_bucket_sort_reducers_power_two]
 (batchId=15)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21538/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21538/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21538/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12999401 - PreCommit-HIVE-Build

> Fix race condition in Utilities::mvFileToFinalPath
> --
>
> Key: HIVE-23154
> URL: https://issues.apache.org/jira/browse/HIVE-23154
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Rajesh Balamohan
>Priority: Major
> Attachments: HIVE-23154.1.patch, HIVE-23154.3.patch
>
>
> Utilities::mvFileToFinalPath is used for moving files from the "/_tmp.-ext" 
> folder to the "/-ext" folder. Tasks write data to "_tmp". Before the data 
> reaches its final destination, the files are moved to the "-ext" folder; as 
> part of this, checks ensure that runaway task outputs are not copied to 
> "-ext".
> Currently, there is a race condition between computing the snapshot of files 
> to be copied and the rename operation. The same issue persists in the 
> "insert into" case as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
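The race described in HIVE-23154 above sits between listing the staging files (the "snapshot") and renaming them one by one. One common shape of fix is to rename the whole staging directory in a single atomic move so no window exists between the listing and the publish. The sketch below illustrates that shape on a local filesystem only; it is not Hive's actual code, and HDFS rename semantics differ from java.nio's.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

/** Illustrative sketch: publish a staging directory with one atomic rename. */
public class AtomicDirMove {

  static Path publish(Path tmpDir, Path finalDir) throws IOException {
    // A per-file copy loop would race with writers appearing between the
    // directory listing (the "snapshot") and the individual renames.
    // A single directory rename leaves no such window on the same filesystem.
    return Files.move(tmpDir, finalDir, StandardCopyOption.ATOMIC_MOVE);
  }

  public static void main(String[] args) throws IOException {
    Path base = Files.createTempDirectory("hive-mv-demo");
    Path tmp = Files.createDirectory(base.resolve("_tmp.-ext-10000"));
    Files.writeString(tmp.resolve("000000_0"), "rows");
    Path dest = publish(tmp, base.resolve("-ext-10000"));
    System.out.println(Files.exists(dest.resolve("000000_0")));
  }
}
```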


[jira] [Commented] (HIVE-22458) Add more constraints on showing partitions

2020-04-09 Thread Zhihua Deng (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17080094#comment-17080094
 ] 

Zhihua Deng commented on HIVE-22458:


[~mgergely] [~jcamachorodriguez] [~ashutoshc] thoughts?

> Add more constraints on showing partitions
> --
>
> Key: HIVE-22458
> URL: https://issues.apache.org/jira/browse/HIVE-22458
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zhihua Deng
>Priority: Major
> Attachments: HIVE-22458.2.patch, HIVE-22458.3.patch, 
> HIVE-22458.branch-1.02.patch, HIVE-22458.branch-1.patch, HIVE-22458.patch
>
>
> When showing partitions of a table with thousands of partitions, all the 
> partitions are returned, and it is not easy to pick the desired one out of 
> them; this makes showing partitions hard to use. We can add where/limit/order 
> by constraints to show partitions, like:
>  show partitions table_name [partition_specs] where partition_key >= value 
> order by partition_key desc limit n;
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
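The constraints proposed in HIVE-22458 above amount to applying a filter, a sort, and a limit over the partition-name list before returning it. A rough standalone sketch of that semantics (illustrative only; not the patch's implementation, and the predicate stands in for the parsed WHERE clause):

```java
import java.util.Comparator;
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

/** Illustrative semantics of SHOW PARTITIONS ... WHERE ... ORDER BY ... LIMIT n. */
public class ShowPartitionsFilter {

  static List<String> show(List<String> partitionNames,
                           Predicate<String> where, boolean desc, int limit) {
    Comparator<String> order = desc ? Comparator.<String>reverseOrder()
                                    : Comparator.<String>naturalOrder();
    return partitionNames.stream()
        .filter(where)      // WHERE partition_key >= value
        .sorted(order)      // ORDER BY partition_key [DESC]
        .limit(limit)       // LIMIT n
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<String> parts = List.of("ds=2019-12-31", "ds=2020-01-01", "ds=2020-01-02");
    System.out.println(
        show(parts, p -> p.compareTo("ds=2020-01-01") >= 0, true, 2));
    // → [ds=2020-01-02, ds=2020-01-01]
  }
}
```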


[jira] [Work logged] (HIVE-22821) Add necessary endpoints for proactive cache eviction

2020-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22821?focusedWorklogId=419885&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-419885
 ]

ASF GitHub Bot logged work on HIVE-22821:
-

Author: ASF GitHub Bot
Created on: 09/Apr/20 23:37
Start Date: 09/Apr/20 23:37
Worklog Time Spent: 10m 
  Work Description: b-slim commented on pull request #909: HIVE-22821
URL: https://github.com/apache/hive/pull/909#discussion_r401751572
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/llap/ProactiveEviction.java
 ##
 @@ -0,0 +1,327 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.llap;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.LinkedHashMap;
+import java.util.LinkedHashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.TimeUnit;
+import javax.net.SocketFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hive.common.io.CacheTag;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.llap.daemon.rpc.LlapDaemonProtocolProtos;
+import org.apache.hadoop.hive.llap.impl.LlapManagementProtocolClientImpl;
+import org.apache.hadoop.hive.llap.registry.LlapServiceInstance;
+import org.apache.hadoop.hive.llap.registry.impl.LlapRegistryService;
+import org.apache.hadoop.io.retry.RetryPolicies;
+import org.apache.hadoop.io.retry.RetryPolicy;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hive.common.util.ShutdownHookManager;
+
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Through this class the caller (typically HS2) can request eviction of 
buffers from LLAP cache by specifying a DB,
+ * table or partition name/(value). Request sending is implemented here.
+ */
+public final class ProactiveEviction {
+
+  static {
+ShutdownHookManager.addShutdownHook(new Runnable() {
+  @Override
+  public void run() {
+if (EXECUTOR != null) {
+  EXECUTOR.shutdownNow();
+}
+  }
+});
+  }
+
+  private static final ExecutorService EXECUTOR = 
Executors.newSingleThreadExecutor(
+  new 
ThreadFactoryBuilder().setNameFormat("Proactive-Eviction-Requester").setDaemon(true).build());
+
+  private ProactiveEviction() {
+// Not to be used;
+  }
+
+  /**
+   * Trigger LLAP cache eviction of buffers related to entities residing in 
request parameter.
+   * @param conf
+   * @param request
+   */
+  public static void evict(Configuration conf, Request request) {
+if (!HiveConf.getBoolVar(conf, 
HiveConf.ConfVars.LLAP_IO_PROACTIVE_EVICTION_ENABLED)) {
+  return;
+}
+
+try {
+  LlapRegistryService llapRegistryService = 
LlapRegistryService.getClient(conf);
+  Collection<LlapServiceInstance> instances = 
llapRegistryService.getInstances().getAll();
+  if (instances.size() == 0) {
+// Not in LLAP mode.
+return;
+  }
+  for (LlapServiceInstance instance : instances) {
+Task task = new Task(conf, instance, request);
+EXECUTOR.execute(task);
+  }
+
+} catch (IOException e) {
+  throw new RuntimeException(e);
+}
+  }
+
+  /**
+   * The executable task to carry out request sending.
+   */
+  public static class Task implements Runnable {
 
 Review comment:
   There are at least 2 classes named Task in the Hive exec module; can you 
rename this to something more specific?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 419885)
Time Spent: 6h 10m  (was: 6h)

> Add necessary endpoints f

[jira] [Work logged] (HIVE-22821) Add necessary endpoints for proactive cache eviction

2020-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22821?focusedWorklogId=419881&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-419881
 ]

ASF GitHub Bot logged work on HIVE-22821:
-

Author: ASF GitHub Bot
Created on: 09/Apr/20 23:37
Start Date: 09/Apr/20 23:37
Worklog Time Spent: 10m 
  Work Description: b-slim commented on pull request #909: HIVE-22821
URL: https://github.com/apache/hive/pull/909#discussion_r396825148
 
 

 ##
 File path: 
llap-server/src/java/org/apache/hadoop/hive/llap/io/api/impl/LlapIoImpl.java
 ##
 @@ -221,6 +226,7 @@ public void debugDumpShort(StringBuilder sb) {
 metadataCache, dataCache, bufferManagerOrc, conf, cacheMetrics, 
ioMetrics, tracePool);
 this.genericCvp = isEncodeEnabled ? new GenericColumnVectorProducer(
 serdeCache, bufferManagerGeneric, conf, cacheMetrics, ioMetrics, 
tracePool) : null;
+
 
 Review comment:
   not needed
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 419881)
Time Spent: 5.5h  (was: 5h 20m)

> Add necessary endpoints for proactive cache eviction
> 
>
> Key: HIVE-22821
> URL: https://issues.apache.org/jira/browse/HIVE-22821
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Ádám Szita
>Assignee: Ádám Szita
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22821.0.patch, HIVE-22821.1.patch, 
> HIVE-22821.2.patch, HIVE-22821.3.patch, HIVE-22821.4.patch
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> Implement the parts required for iHS2 -> LLAP daemons communication:
>  * protobuf message schema and endpoints
>  * Hive configuration
>  * for use cases:
>  ** dropping db
>  ** dropping table
>  ** dropping partition from a table



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22821) Add necessary endpoints for proactive cache eviction

2020-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22821?focusedWorklogId=419883&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-419883
 ]

ASF GitHub Bot logged work on HIVE-22821:
-

Author: ASF GitHub Bot
Created on: 09/Apr/20 23:37
Start Date: 09/Apr/20 23:37
Worklog Time Spent: 10m 
  Work Description: b-slim commented on pull request #909: HIVE-22821
URL: https://github.com/apache/hive/pull/909#discussion_r401741332
 
 

 ##
 File path: 
llap-server/src/java/org/apache/hadoop/hive/llap/cache/LowLevelLrfuCachePolicy.java
 ##
 @@ -240,6 +241,12 @@ public void setEvictionListener(EvictionListener 
listener) {
 this.evictionListener = listener;
   }
 
+  @Override
+  public long evictEntity(Predicate predicate) {
+// TODO
 
 Review comment:
   Same here: please add a link to the TODO jira.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 419883)
Time Spent: 5h 50m  (was: 5h 40m)

> Add necessary endpoints for proactive cache eviction
> 
>
> Key: HIVE-22821
> URL: https://issues.apache.org/jira/browse/HIVE-22821
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Ádám Szita
>Assignee: Ádám Szita
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22821.0.patch, HIVE-22821.1.patch, 
> HIVE-22821.2.patch, HIVE-22821.3.patch, HIVE-22821.4.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Implement the parts required for iHS2 -> LLAP daemons communication:
>  * protobuf message schema and endpoints
>  * Hive configuration
>  * for use cases:
>  ** dropping db
>  ** dropping table
>  ** dropping partition from a table



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22821) Add necessary endpoints for proactive cache eviction

2020-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22821?focusedWorklogId=419886&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-419886
 ]

ASF GitHub Bot logged work on HIVE-22821:
-

Author: ASF GitHub Bot
Created on: 09/Apr/20 23:37
Start Date: 09/Apr/20 23:37
Worklog Time Spent: 10m 
  Work Description: b-slim commented on pull request #909: HIVE-22821
URL: https://github.com/apache/hive/pull/909#discussion_r401746158
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/llap/ProactiveEviction.java
 ##
 @@ -0,0 +1,327 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.llap;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.LinkedHashMap;
+import java.util.LinkedHashSet;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.TimeUnit;
+import javax.net.SocketFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hive.common.io.CacheTag;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.llap.daemon.rpc.LlapDaemonProtocolProtos;
+import org.apache.hadoop.hive.llap.impl.LlapManagementProtocolClientImpl;
+import org.apache.hadoop.hive.llap.registry.LlapServiceInstance;
+import org.apache.hadoop.hive.llap.registry.impl.LlapRegistryService;
+import org.apache.hadoop.io.retry.RetryPolicies;
+import org.apache.hadoop.io.retry.RetryPolicy;
+import org.apache.hadoop.net.NetUtils;
+import org.apache.hive.common.util.ShutdownHookManager;
+
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Through this class the caller (typically HS2) can request eviction of 
buffers from LLAP cache by specifying a DB,
+ * table or partition name/(value). Request sending is implemented here.
+ */
+public final class ProactiveEviction {
+
+  static {
+ShutdownHookManager.addShutdownHook(new Runnable() {
+  @Override
+  public void run() {
+if (EXECUTOR != null) {
+  EXECUTOR.shutdownNow();
+}
+  }
+});
+  }
+
+  private static final ExecutorService EXECUTOR = 
Executors.newSingleThreadExecutor(
+  new 
ThreadFactoryBuilder().setNameFormat("Proactive-Eviction-Requester").setDaemon(true).build());
+
+  private ProactiveEviction() {
+// Not to be used;
+  }
+
+  /**
+   * Trigger LLAP cache eviction of buffers related to entities residing in 
request parameter.
+   * @param conf
+   * @param request
+   */
+  public static void evict(Configuration conf, Request request) {
+    if (!HiveConf.getBoolVar(conf, HiveConf.ConfVars.LLAP_IO_PROACTIVE_EVICTION_ENABLED)) {
+      return;
+    }
+
+    try {
+      LlapRegistryService llapRegistryService = LlapRegistryService.getClient(conf);
+      Collection<LlapServiceInstance> instances = llapRegistryService.getInstances().getAll();
+      if (instances.size() == 0) {
+        // Not in LLAP mode.
+        return;
+      }
+      for (LlapServiceInstance instance : instances) {
+        Task task = new Task(conf, instance, request);
+        EXECUTOR.execute(task);
+      }
+
+    } catch (IOException e) {
+      throw new RuntimeException(e);
+    }
+  }
+
+  /**
+   * The executable task to carry out request sending.
+   */
+  public static class Task implements Runnable {
+private static final Logger LOG = LoggerFactory.getLogger(Task.class);
+private final Request request;
+private Configuration conf;
+private LlapServiceInstance instance;
+private SocketFactory socketFactory;
+private RetryPolicy retryPolicy;
+
+    Task(Configuration conf, LlapServiceInstance llapServiceInstance, Request request) {
+      this.conf = conf;
+      this.instance = llapServiceInstance;
+      this.socketFactory = NetUtils.getDefaultSocketFactory(conf);
+      // not making this configurable, best effort
+      this.retryPolicy = RetryPolicies.retr
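The pattern in the patch excerpt above — a single-threaded executor whose worker is a named daemon thread, torn down from a shutdown hook — can be sketched in plain JDK terms (no Guava ThreadFactoryBuilder; the class and thread names here are illustrative only, not part of the Hive patch):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

public class DaemonExecutorSketch {
    // Single-threaded executor whose worker is a named daemon thread,
    // so it cannot keep the JVM alive on its own.
    static final ExecutorService EXECUTOR = Executors.newSingleThreadExecutor(
        new ThreadFactory() {
            @Override
            public Thread newThread(Runnable r) {
                Thread t = new Thread(r, "Proactive-Eviction-Requester");
                t.setDaemon(true);
                return t;
            }
        });

    static {
        // Best-effort cleanup, mirroring the patch's ShutdownHookManager hook.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> EXECUTOR.shutdownNow()));
    }

    public static void main(String[] args) throws Exception {
        // Submit one fire-and-forget request and wait for it to complete.
        EXECUTOR.submit(() -> System.out.println("request sent")).get();
        EXECUTOR.shutdown();
    }
}
```

Making the worker a daemon matters because a non-daemon pool thread would block JVM exit — the very problem discussed for HIVE-23164 further below in this digest.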

[jira] [Work logged] (HIVE-22821) Add necessary endpoints for proactive cache eviction

2020-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22821?focusedWorklogId=419880&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-419880
 ]

ASF GitHub Bot logged work on HIVE-22821:
-

Author: ASF GitHub Bot
Created on: 09/Apr/20 23:37
Start Date: 09/Apr/20 23:37
Worklog Time Spent: 10m 
  Work Description: b-slim commented on pull request #909: HIVE-22821
URL: https://github.com/apache/hive/pull/909#discussion_r396825072
 
 

 ##
 File path: 
llap-server/src/java/org/apache/hadoop/hive/llap/io/api/impl/LlapIoImpl.java
 ##
 @@ -23,12 +23,16 @@
 import java.util.Arrays;
 import java.util.List;
 import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
 
 Review comment:
   this is unused
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 419880)
Time Spent: 5h 20m  (was: 5h 10m)

> Add necessary endpoints for proactive cache eviction
> 
>
> Key: HIVE-22821
> URL: https://issues.apache.org/jira/browse/HIVE-22821
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Ádám Szita
>Assignee: Ádám Szita
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22821.0.patch, HIVE-22821.1.patch, 
> HIVE-22821.2.patch, HIVE-22821.3.patch, HIVE-22821.4.patch
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> Implement the parts required for HS2 -> LLAP daemons communication:
>  * protobuf message schema and endpoints
>  * Hive configuration
>  * for use cases:
>  ** dropping db
>  ** dropping table
>  ** dropping partition from a table



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22821) Add necessary endpoints for proactive cache eviction

2020-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22821?focusedWorklogId=419882&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-419882
 ]

ASF GitHub Bot logged work on HIVE-22821:
-

Author: ASF GitHub Bot
Created on: 09/Apr/20 23:37
Start Date: 09/Apr/20 23:37
Worklog Time Spent: 10m 
  Work Description: b-slim commented on pull request #909: HIVE-22821
URL: https://github.com/apache/hive/pull/909#discussion_r401741044
 
 

 ##
 File path: 
llap-server/src/java/org/apache/hadoop/hive/llap/cache/LowLevelFifoCachePolicy.java
 ##
 @@ -71,6 +72,11 @@ public long purge() {
 return evicted;
   }
 
+  @Override
+  public long evictEntity(Predicate predicate) {
+return 0;
 
 Review comment:
   can you please file the jira and add the todo link to make this formal that 
this is a WIP.
 



Issue Time Tracking
---

Worklog Id: (was: 419882)
Time Spent: 5h 40m  (was: 5.5h)

> Add necessary endpoints for proactive cache eviction
> 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22821) Add necessary endpoints for proactive cache eviction

2020-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22821?focusedWorklogId=419884&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-419884
 ]

ASF GitHub Bot logged work on HIVE-22821:
-

Author: ASF GitHub Bot
Created on: 09/Apr/20 23:37
Start Date: 09/Apr/20 23:37
Worklog Time Spent: 10m 
  Work Description: b-slim commented on pull request #909: HIVE-22821
URL: https://github.com/apache/hive/pull/909#discussion_r401747089
 
 

 ##
 File path: service/src/java/org/apache/hive/service/server/HiveServer2.java
 ##
 @@ -61,6 +61,7 @@
 import org.apache.hadoop.hive.common.ZooKeeperHiveHelper;
 import org.apache.hadoop.hive.conf.HiveConf;
 import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.llap.ProactiveEviction;
 
 Review comment:
   is this needed ?
 



Issue Time Tracking
---

Worklog Id: (was: 419884)
Time Spent: 6h  (was: 5h 50m)

> Add necessary endpoints for proactive cache eviction
> 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23154) Fix race condition in Utilities::mvFileToFinalPath

2020-04-09 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17080088#comment-17080088
 ] 

Hive QA commented on HIVE-23154:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
37s{color} | {color:blue} ql in master has 1527 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
40s{color} | {color:red} ql: The patch generated 1 new + 107 unchanged - 1 
fixed = 108 total (was 108) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21538/dev-support/hive-personality.sh
 |
| git revision | master / 796a9c5 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21538/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21538/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Fix race condition in Utilities::mvFileToFinalPath
> --
>
> Key: HIVE-23154
> URL: https://issues.apache.org/jira/browse/HIVE-23154
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Rajesh Balamohan
>Priority: Major
> Attachments: HIVE-23154.1.patch, HIVE-23154.3.patch
>
>
> Utilities::mvFileToFinalPath is used for moving files from the "/_tmp.-ext" to 
> the "/-ext" folder. Tasks write data to "_tmp". Before being written to the final 
> destination, files are moved to the "-ext" folder. As part of this, it has checks to 
> ensure that run-away task outputs are not copied to the "-ext" folder.
> Currently, there is a race condition between computing the snapshot of files 
> to be copied and the rename operation. The same issue persists in the "insert into" 
> case as well.
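The snapshot-then-rename race can be sketched as follows (illustrative only — this is not Hive's actual implementation, and all names here are made up): any file that appears between the directory listing and the moves is silently left behind, and two concurrent callers can compute overlapping snapshots and collide on the same rename target.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.ArrayList;
import java.util.List;

public class SnapshotRenameSketch {
    // Take a point-in-time snapshot of tmpDir, then move each snapshotted
    // file to finalDir. The window between the listing and the moves is
    // where the race lives: late-arriving files are missed, and a second
    // concurrent caller may try to move the same files.
    static List<Path> moveSnapshot(Path tmpDir, Path finalDir) throws IOException {
        List<Path> snapshot = new ArrayList<>();
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(tmpDir)) {
            for (Path p : ds) {
                snapshot.add(p);   // point-in-time view of tmpDir
            }
        }
        List<Path> moved = new ArrayList<>();
        for (Path p : snapshot) {  // race window: tmpDir may have changed by now
            Path target = finalDir.resolve(p.getFileName());
            Files.move(p, target, StandardCopyOption.ATOMIC_MOVE);
            moved.add(target);
        }
        return moved;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("_tmp.-ext");
        Path fin = Files.createTempDirectory("-ext");
        Files.createFile(tmp.resolve("000000_0"));
        System.out.println(moveSnapshot(tmp, fin).size()); // prints 1
    }
}
```

Each individual `Files.move` is atomic, but the sequence as a whole is not — which is why the fix has to reconcile the snapshot with what is actually moved, rather than trusting the listing.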





[jira] [Updated] (HIVE-23164) server is not properly terminated because of non-daemon threads

2020-04-09 Thread Eugene Chung (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Chung updated HIVE-23164:

Attachment: HIVE-23164.02.patch
Status: Patch Available  (was: Open)

[^HIVE-23164.02.patch] uses a ThreadFactory with setDaemon(true) and a distinct name 
format for each pool.

> server is not properly terminated because of non-daemon threads
> ---
>
> Key: HIVE-23164
> URL: https://issues.apache.org/jira/browse/HIVE-23164
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Eugene Chung
>Assignee: Eugene Chung
>Priority: Major
> Attachments: HIVE-23164.01.patch, HIVE-23164.02.patch, 
> thread_dump_hiveserver2_is_not_terminated.txt
>
>
> A HiveServer2 instance that receives the deregister command first prepares for 
> shutdown. If there is no remaining session, HiveServer2.stop() is called to 
> shut down. But I found a case where the HiveServer2 JVM does not terminate 
> even though HiveServer2.stop() has been called and processed. The case always 
> occurs when the local (embedded) metastore is used.
> I've attached the full thread dump describing the situation.
> [^thread_dump_hiveserver2_is_not_terminated.txt]
> In this thread dump, you can see a bunch of 'daemon' threads, NO main 
> thread, and some 'non-daemon' (user) threads. As explained at 
> [https://www.baeldung.com/java-daemon-thread], if at least one user 
> thread exists, the JVM does not terminate. (Note that the DestroyJavaVM thread is 
> non-daemon, but it's special.)
>  
> {code:java}
> "pool-8-thread-1" #24 prio=5 os_prio=0 tid=0x7f52ad1fc000 nid=0x821c 
> waiting on condition [0x7f525c50]
>  java.lang.Thread.State: TIMED_WAITING (parking)
>  at sun.misc.Unsafe.park(Native Method)
>  - parking to wait for <0x0003cfa057c0> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>  at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>  at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
>  at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at java.lang.Thread.run(Thread.java:748)
> Locked ownable synchronizers:
>  - None
> {code}
> The thread above is created by ScheduledThreadPoolExecutor(int coreSize) with 
> the default ThreadFactory, which always creates non-daemon threads. If such a thread 
> pool is not destroyed with the ScheduledThreadPoolExecutor.shutdown() method, the JVM 
> cannot terminate! The only way to kill it is a TERM signal. If the JVM receives a TERM 
> signal, it ignores non-daemon threads and terminates.
> So I dug into the modules that create a ScheduledThreadPoolExecutor with 
> non-daemon threads and found it. As you may guess, it's the local (embedded) 
> metastore. The ScheduledThreadPoolExecutor is created by 
> org.apache.hadoop.hive.metastore.HiveMetaStore.HMSHandler#startAlwaysTaskThreads()
>  and ScheduledThreadPoolExecutor.shutdown() is never called.
> Plus, I found another place that creates such a ScheduledThreadPoolExecutor and 
> never calls its shutdown. 
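The remedy described above — handing the pool a ThreadFactory that marks every worker as a daemon — can be sketched in plain JDK terms (the attached patch reportedly uses Guava's ThreadFactoryBuilder; the class and name format below are illustrative only):

```java
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class DaemonScheduledPool {
    // A ThreadFactory that marks every pool thread as a daemon and gives it
    // a recognizable name, so the pool cannot keep the JVM alive by itself.
    static ThreadFactory daemonFactory(String nameFormat) {
        AtomicInteger count = new AtomicInteger();
        return r -> {
            Thread t = new Thread(r, String.format(nameFormat, count.incrementAndGet()));
            t.setDaemon(true);
            return t;
        };
    }

    public static void main(String[] args) {
        ScheduledThreadPoolExecutor pool =
            new ScheduledThreadPoolExecutor(1, daemonFactory("Metastore-Scheduled-%d"));
        // Threads produced by this factory will not block JVM exit.
        Thread t = pool.getThreadFactory().newThread(() -> {});
        System.out.println(t.isDaemon()); // prints true
        pool.shutdownNow();
    }
}
```

Calling shutdown() in the component's stop path remains the cleaner fix; the daemon flag is a safety net for pools whose shutdown is never invoked.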





[jira] [Updated] (HIVE-23164) server is not properly terminated because of non-daemon threads

2020-04-09 Thread Eugene Chung (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Chung updated HIVE-23164:

Status: Open  (was: Patch Available)

> server is not properly terminated because of non-daemon threads
> ---





[jira] [Commented] (HIVE-23164) server is not properly terminated because of non-daemon threads

2020-04-09 Thread Eugene Chung (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17080073#comment-17080073
 ] 

Eugene Chung commented on HIVE-23164:
-

[~kgyrtkirk] I agree. I've been thinking of using ThreadFactory.

> server is not properly terminated because of non-daemon threads
> ---





[jira] [Commented] (HIVE-23114) Insert overwrite with dynamic partitioning is not working correctly with direct insert

2020-04-09 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17080061#comment-17080061
 ] 

Hive QA commented on HIVE-23114:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12999399/HIVE-23114.3.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18211 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21537/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21537/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21537/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12999399 - PreCommit-HIVE-Build

> Insert overwrite with dynamic partitioning is not working correctly with 
> direct insert
> --
>
> Key: HIVE-23114
> URL: https://issues.apache.org/jira/browse/HIVE-23114
> Project: Hive
>  Issue Type: Bug
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Attachments: HIVE-23114.1.patch, HIVE-23114.2.patch, 
> HIVE-23114.3.patch, HIVE-23114.3.patch
>
>
> This is a follow-up Jira for the 
> [conversation|https://issues.apache.org/jira/browse/HIVE-21164?focusedCommentId=17059280&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17059280]
>  in HIVE-21164
>  Doing an insert overwrite from a multi-insert statement with dynamic 
> partitioning will give wrong results for ACID tables when 
> 'hive.acid.direct.insert.enabled' is true or for insert-only tables.
> Reproduction:
> {noformat}
> set hive.acid.direct.insert.enabled=true;
> set hive.support.concurrency=true;
> set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
> set hive.vectorized.execution.enabled=false;
> set hive.stats.autogather=false;
> create external table multiinsert_test_text (a int, b int, c int) stored as 
> textfile;
> insert into multiinsert_test_text values (, 11, ), (, 22, ), 
> (, 33, ), (, 44, NULL), (, 55, NULL);
> create table multiinsert_test_acid (a int, b int) partitioned by (c int) 
> stored as orc tblproperties('transactional'='true');
> create table multiinsert_test_mm (a int, b int) partitioned by (c int) stored 
> as orc tblproperties('transactional'='true', 
> 'transactional_properties'='insert_only');
> from multiinsert_test_text a
> insert overwrite table multiinsert_test_acid partition (c)
> select
>  a.a,
>  a.b,
>  a.c
>  where a.c is not null
> insert overwrite table multiinsert_test_acid partition (c)
> select
>  a.a,
>  a.b,
>  a.c
> where a.c is null;
> select * from multiinsert_test_acid;
> from multiinsert_test_text a
> insert overwrite table multiinsert_test_mm partition (c)
> select
>  a.a,
>  a.b,
>  a.c
>  where a.c is not null
> insert overwrite table multiinsert_test_mm partition (c)
> select
>  a.a,
>  a.b,
>  a.c
> where a.c is null;
> select * from multiinsert_test_mm;
> {noformat}
> The result of these steps can differ; it depends on the execution order 
> of the FileSinkOperators of the insert overwrite statements. It can happen 
> that an error occurs due to a manifest file collision, or that no 
> error occurs but the result is incorrect.
>  Running the same insert query with an external table or with an ACID table 
> with 'hive.acid.direct.insert.enabled=false' will give the following result:
> {noformat}
> 11  
> 22  
> 33  
> 44  NULL
> 55  NULL
> {noformat}





[jira] [Commented] (HIVE-23114) Insert overwrite with dynamic partitioning is not working correctly with direct insert

2020-04-09 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17080029#comment-17080029
 ] 

Hive QA commented on HIVE-23114:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
46s{color} | {color:blue} ql in master has 1527 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
42s{color} | {color:red} ql: The patch generated 1 new + 313 unchanged - 1 
fixed = 314 total (was 314) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
56s{color} | {color:red} ql generated 1 new + 1527 unchanged - 0 fixed = 1528 
total (was 1527) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  The field 
org.apache.hadoop.hive.ql.exec.FileSinkOperator.dynamicPartitionSpecs is 
transient but isn't set by deserialization  In FileSinkOperator.java:but isn't 
set by deserialization  In FileSinkOperator.java |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21537/dev-support/hive-personality.sh
 |
| git revision | master / 796a9c5 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21537/yetus/diff-checkstyle-ql.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21537/yetus/new-findbugs-ql.html
 |
| modules | C: ql itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21537/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Insert overwrite with dynamic partitioning is not working correctly with 
> direct insert
> --

[jira] [Updated] (HIVE-23173) User login success/failed attempts should be logged

2020-04-09 Thread Naresh P R (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naresh P R updated HIVE-23173:
--
Status: Patch Available  (was: In Progress)

> User login success/failed attempts should be logged
> ---
>
> Key: HIVE-23173
> URL: https://issues.apache.org/jira/browse/HIVE-23173
> Project: Hive
>  Issue Type: Improvement
>Reporter: Naresh P R
>Assignee: Naresh P R
>Priority: Minor
> Attachments: HIVE-23173.1.patch
>
>
> User login success & failure attempts should be logged in server logs
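
To make the proposal concrete, a minimal sketch of what such audit logging could look like. The class and message format here are hypothetical; HiveServer2 itself would log through its own SLF4J/Log4j logger rather than java.util.logging.

```java
import java.util.logging.Logger;

// Hypothetical illustration of the proposed audit logging: record every
// authentication attempt, with the user name and outcome, in the server log.
public class LoginAuditLogger {
    private static final Logger LOG = Logger.getLogger("LoginAudit");

    // Build the audit message; kept separate so it is easy to test.
    public static String format(String user, boolean success) {
        return "Login attempt for user=" + user
                + " result=" + (success ? "SUCCESS" : "FAILED");
    }

    public static void logAttempt(String user, boolean success) {
        if (success) {
            LOG.info(format(user, true));
        } else {
            // Failures at a higher level so they stand out in server logs.
            LOG.warning(format(user, false));
        }
    }

    public static void main(String[] args) {
        logAttempt("alice", true);
        logAttempt("mallory", false);
    }
}
```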



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23173) User login success/failed attempts should be logged

2020-04-09 Thread Naresh P R (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naresh P R updated HIVE-23173:
--
Attachment: HIVE-23173.1.patch

> User login success/failed attempts should be logged
> ---
>
> Key: HIVE-23173
> URL: https://issues.apache.org/jira/browse/HIVE-23173
> Project: Hive
>  Issue Type: Improvement
>Reporter: Naresh P R
>Assignee: Naresh P R
>Priority: Minor
> Attachments: HIVE-23173.1.patch
>
>
> User login success & failure attempts should be logged in server logs



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-23173) User login success/failed attempts should be logged

2020-04-09 Thread Naresh P R (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naresh P R reassigned HIVE-23173:
-

Assignee: Naresh P R

> User login success/failed attempts should be logged
> ---
>
> Key: HIVE-23173
> URL: https://issues.apache.org/jira/browse/HIVE-23173
> Project: Hive
>  Issue Type: Improvement
>Reporter: Naresh P R
>Assignee: Naresh P R
>Priority: Minor
>
> User login success & failure attempts should be logged in server logs



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work started] (HIVE-23173) User login success/failed attempts should be logged

2020-04-09 Thread Naresh P R (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-23173 started by Naresh P R.
-
> User login success/failed attempts should be logged
> ---
>
> Key: HIVE-23173
> URL: https://issues.apache.org/jira/browse/HIVE-23173
> Project: Hive
>  Issue Type: Improvement
>Reporter: Naresh P R
>Assignee: Naresh P R
>Priority: Minor
>
> User login success & failure attempts should be logged in server logs



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23133) Numeric operations can have different result across hardware archs

2020-04-09 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17080013#comment-17080013
 ] 

Hive QA commented on HIVE-23133:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12999397/HIVE-23133.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 18207 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_math_funcs] 
(batchId=23)
org.apache.hadoop.hive.cli.TestCompareCliDriver.testCliDriver[vectorized_math_funcs]
 (batchId=303)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_select]
 (batchId=178)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_decimal_udf2]
 (batchId=188)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_math_funcs]
 (batchId=171)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_select] 
(batchId=136)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vectorized_math_funcs]
 (batchId=127)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21536/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21536/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21536/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12999397 - PreCommit-HIVE-Build

> Numeric operations can have different result across hardware archs
> --
>
> Key: HIVE-23133
> URL: https://issues.apache.org/jira/browse/HIVE-23133
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zhenyu Zheng
>Assignee: Yikun Jiang
>Priority: Major
> Attachments: HIVE-23133.1.patch
>
>
> Currently, we have set up an ARM CI to test out how Hive works on ARM 
> platform:
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/]
> Among the failures, we have observed that some numeric operations can have 
> different results across hardware archs, such as:
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/25/testReport/org.apache.hadoop.hive.cli/TestCliDriver/testCliDriver_vector_decimal_udf2_/]
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/25/testReport/org.apache.hadoop.hive.cli/TestSparkCliDriver/testCliDriver_subquery_select_/]
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/25/testReport/org.apache.hadoop.hive.cli/TestSparkCliDriver/testCliDriver_vectorized_math_funcs_/]
> We can see that the calculation results of log, exp, cos, toRadians etc. are 
> slightly different from the .out file results that we are comparing against 
> (they were tested and written on x86 machines). This is because we use the 
> [Math Library|https://docs.oracle.com/javase/6/docs/api/java/lang/Math.html] 
> for these kinds of calculations.
> And according to the 
> [documentation|https://docs.oracle.com/javase/6/docs/api/java/lang/Math.html]:
> _Unlike some of the numeric methods of class StrictMath, all implementations 
> of the equivalent functions of class Math are not_
> _defined to return the bit-for-bit same results. This relaxation permits 
> better-performing implementations where strict reproducibility_
> _is not required._
> _By default many of the Math methods simply call the equivalent method in 
> StrictMath for their implementation._
> _Code generators are encouraged to use platform-specific native libraries or 
> microprocessor instructions, where available,_
> _to provide higher-performance implementations of Math methods._
> So results can differ across hardware archs.
> On the other hand, Java provides another class, 
> [StrictMath|https://docs.oracle.com/javase/6/docs/api/java/lang/StrictMath.html], 
> that does not have this kind of problem, as according to its 
> [reference|https://docs.oracle.com/javase/6/docs/api/java/lang/StrictMath.html]:
> _To help ensure portability of Java programs, the definitions of some of the 
> numeric functions in this package require that they produce_
> _the same results as certain published algorithms._
> So in order to fix the above-mentioned problem, we should consider switching 
> to StrictMath instead of Math.
>  
>  
>  
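
To make the Math vs. StrictMath distinction above concrete, a minimal sketch (illustrative only, not part of the attached patch; the class and method names are my own):

```java
// StrictMath implements the published fdlibm algorithms and is defined to
// return bit-for-bit identical results on every platform. Math is allowed
// to delegate to faster platform-specific implementations, so its last
// bits may differ across hardware archs (x86 vs. ARM).
public class StrictMathDemo {
    // Portable variant: identical output regardless of architecture.
    public static double portableLog(double x) {
        return StrictMath.log(x);
    }

    public static void main(String[] args) {
        double x = 0.7;
        // On many x86 JVMs these two agree; on other archs Math.log may
        // differ in the final ulp while portableLog stays stable, which is
        // exactly what makes .out files generated on x86 fail elsewhere.
        System.out.println("Math.log       = " + Double.toHexString(Math.log(x)));
        System.out.println("StrictMath.log = " + Double.toHexString(portableLog(x)));
    }
}
```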



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23133) Numeric operations can have different result across hardware archs

2020-04-09 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17079995#comment-17079995
 ] 

Hive QA commented on HIVE-23133:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
43s{color} | {color:blue} ql in master has 1527 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21536/dev-support/hive-personality.sh
 |
| git revision | master / 796a9c5 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: vector-code-gen ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21536/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Numeric operations can have different result across hardware archs
> --
>
> Key: HIVE-23133
> URL: https://issues.apache.org/jira/browse/HIVE-23133
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zhenyu Zheng
>Assignee: Yikun Jiang
>Priority: Major
> Attachments: HIVE-23133.1.patch
>
>
> Currently, we have set up an ARM CI to test out how Hive works on ARM 
> platform:
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/]
> Among the failures, we have observed that some numeric operations can have 
> different results across hardware archs, such as:
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/25/testReport/org.apache.hadoop.hive.cli/TestCliDriver/testCliDriver_vector_decimal_udf2_/]
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/25/testReport/org.apache.hadoop.hive.cli/TestSparkCliDriver/testCliDriver_subquery_select_/]
> [https://builds.apache.org/view/H-L/view/Hive/job/Hive-linux-ARM-trunk/25/testReport/org.apache.hadoop.hive.cli/TestSparkCliDriver/testCliDriver_vectorized_math_funcs_/]
> we can see that the calculation results of log, exp, cos, toRadians etc. are 
> slightly different from the .out file results that we are
> comparing(they are t

[jira] [Assigned] (HIVE-23172) Quoted Backtick Columns Are Not Parsing Correctly

2020-04-09 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor reassigned HIVE-23172:
-


> Quoted Backtick Columns Are Not Parsing Correctly
> -
>
> Key: HIVE-23172
> URL: https://issues.apache.org/jira/browse/HIVE-23172
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Critical
>
> I recently came across a weird behavior while examining failures of 
> {{special_character_in_tabnames_2.q}} while working on HIVE-23150. I was 
> surprised to see it fail because I couldn't see any reason why it 
> should... it runs pretty standard SQL statements just like every other 
> test, but for some reason this test behaves just a *little bit* differently 
> than most others, and that brought this issue to light.
> It turns out the parsing of table names is pretty much wrong across the 
> board.
> The statement that caught my attention was this:
> {code:sql}
> DROP TABLE IF EXISTS `s/c`;
> {code}
> And here is the relevant grammar:
> {code:none}
> fragment
> RegexComponent
> : 'a'..'z' | 'A'..'Z' | '0'..'9' | '_'
> | PLUS | STAR | QUESTION | MINUS | DOT
> | LPAREN | RPAREN | LSQUARE | RSQUARE | LCURLY | RCURLY
> | BITWISEXOR | BITWISEOR | DOLLAR | '!'
> ;
> Identifier
> :
> (Letter | Digit) (Letter | Digit | '_')*
> | {allowQuotedId()}? QuotedIdentifier  /* though at the language level we 
> allow all Identifiers to be QuotedIdentifiers; 
>   at the API level only columns 
> are allowed to be of this form */
> | '`' RegexComponent+ '`'
> ;
> fragment
> QuotedIdentifier 
> :
> '`'  ( '``' | ~('`') )* '`' { 
> setText(StringUtils.replace(getText().substring(1, getText().length() -1 ), 
> "``", "`")); }
> ;
> {code}
> The mystery for me was that, for some reason, this String {{`s/c`}} was being 
> stripped of its back-ticks. Every other test I investigated did not have this 
> behavior; the back-ticks were always preserved around the table name, and the 
> main Hive Java code base would see the back-ticks and deal with them 
> internally. For HIVE-23150, I introduced some sanity checks and they were 
> failing because they were expecting the back-ticks to be present.
> With the help of HIVE-23171 I finally figured it out. What I discovered 
> is that pretty much every table name hits the {{RegexComponent}} rule 
> and the back-ticks are carried forward. In {{`s/c`}}, however, the forward 
> slash `/` is not allowed in {{RegexComponent}}, so it matches the 
> {{QuotedIdentifier}} rule, which trims the back-ticks.
> I validated this by disabling {{QuotedIdentifier}}. When I did this, 
> {{`s/c`}} failed with an error but {{`sc`}} parsed successfully, because 
> {{`sc`}} is treated as a {{RegexComponent}}.
> So, if you have {{allowQuotedId}} disabled, table names can only use the 
> characters defined in the {{RegexComponent}} rule (otherwise it errors), and 
> the parser will *not* strip the back-ticks. If you have {{allowQuotedId}} 
> enabled and the table name has a character not specified in 
> {{RegexComponent}}, it will match {{QuotedIdentifier}} and *will* strip the 
> back-ticks; if all the characters are part of {{RegexComponent}}, it will 
> *not* strip the back-ticks.
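
The de-quoting described above happens in the {{QuotedIdentifier}} lexer action. A standalone re-implementation of that transformation (for illustration only; this is not the ANTLR-generated code):

```java
// Reproduces what the QuotedIdentifier lexer action does to the matched
// text: drop the surrounding back-ticks, then un-escape doubled back-ticks
// ("``" becomes "`"). Tokens that match RegexComponent never go through
// this, which is why their back-ticks survive.
public class QuotedIdentifierDemo {
    public static String unquote(String text) {
        // substring(1, length-1) removes the outer back-ticks.
        return text.substring(1, text.length() - 1).replace("``", "`");
    }

    public static void main(String[] args) {
        System.out.println(unquote("`s/c`"));   // s/c  -- back-ticks stripped
        System.out.println(unquote("`a``b`"));  // a`b  -- escape collapsed
    }
}
```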



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-21603) Java 11 preparation: update powermock version

2020-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21603?focusedWorklogId=419716&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-419716
 ]

ASF GitHub Bot logged work on HIVE-21603:
-

Author: ASF GitHub Bot
Created on: 09/Apr/20 20:19
Start Date: 09/Apr/20 20:19
Worklog Time Spent: 10m 
  Work Description: pgaref commented on pull request #975: HIVE-21603 
Bumping mockito to 3.3.3 and powermock to 2.0.2
URL: https://github.com/apache/hive/pull/975
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 419716)
Remaining Estimate: 0h
Time Spent: 10m

> Java 11 preparation: update powermock version
> -
>
> Key: HIVE-21603
> URL: https://issues.apache.org/jira/browse/HIVE-21603
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: László Pintér
>Assignee: Panagiotis Garefalakis
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21603.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> PowerMock 1 has no support for Java 11, therefore we need to bump its version 
> to 2.0.0



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-21603) Java 11 preparation: update powermock version

2020-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-21603:
--
Labels: pull-request-available  (was: )

> Java 11 preparation: update powermock version
> -
>
> Key: HIVE-21603
> URL: https://issues.apache.org/jira/browse/HIVE-21603
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: László Pintér
>Assignee: Panagiotis Garefalakis
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21603.01.patch
>
>
> PowerMock 1 has no support for Java 11, therefore we need to bump its version 
> to 2.0.0



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-21603) Java 11 preparation: update powermock version

2020-04-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21603?focusedWorklogId=419717&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-419717
 ]

ASF GitHub Bot logged work on HIVE-21603:
-

Author: ASF GitHub Bot
Created on: 09/Apr/20 20:19
Start Date: 09/Apr/20 20:19
Worklog Time Spent: 10m 
  Work Description: pgaref commented on pull request #975: HIVE-21603 
Bumping mockito to 3.3.3 and powermock to 2.0.2
URL: https://github.com/apache/hive/pull/975
 
 
   * Replacing deprecated Matchers with ArgumentMatchers
   * Replacing deprecated runners.MockitoJUnitRunners with 
junit.MockitoJUnitRunners
   * Some cleaning
   
   Change-Id: Icd46f695e1e473b575f0ed4115da53521266c12d
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 419717)
Time Spent: 20m  (was: 10m)

> Java 11 preparation: update powermock version
> -
>
> Key: HIVE-21603
> URL: https://issues.apache.org/jira/browse/HIVE-21603
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: László Pintér
>Assignee: Panagiotis Garefalakis
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21603.01.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> PowerMock 1 has no support for Java 11, therefore we need to bump its version 
> to 2.0.0



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work started] (HIVE-21603) Java 11 preparation: update powermock version

2020-04-09 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-21603 started by Panagiotis Garefalakis.
-
> Java 11 preparation: update powermock version
> -
>
> Key: HIVE-21603
> URL: https://issues.apache.org/jira/browse/HIVE-21603
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: László Pintér
>Assignee: Panagiotis Garefalakis
>Priority: Major
> Attachments: HIVE-21603.01.patch
>
>
> PowerMock 1 has no support for Java 11, therefore we need to bump its version 
> to 2.0.0



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-21603) Java 11 preparation: update powermock version

2020-04-09 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated HIVE-21603:
--
Attachment: HIVE-21603.01.patch

> Java 11 preparation: update powermock version
> -
>
> Key: HIVE-21603
> URL: https://issues.apache.org/jira/browse/HIVE-21603
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: László Pintér
>Assignee: Panagiotis Garefalakis
>Priority: Major
> Attachments: HIVE-21603.01.patch
>
>
> PowerMock 1 has no support for Java 11, therefore we need to bump its version 
> to 2.0.0



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-21603) Java 11 preparation: update powermock version

2020-04-09 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated HIVE-21603:
--
Status: Patch Available  (was: In Progress)

> Java 11 preparation: update powermock version
> -
>
> Key: HIVE-21603
> URL: https://issues.apache.org/jira/browse/HIVE-21603
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: László Pintér
>Assignee: Panagiotis Garefalakis
>Priority: Major
> Attachments: HIVE-21603.01.patch
>
>
> PowerMock 1 has no support for Java 11, therefore we need to bump its version 
> to 2.0.0



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22458) Add more constraints on showing partitions

2020-04-09 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17079976#comment-17079976
 ] 

Hive QA commented on HIVE-22458:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12999388/HIVE-22458.3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18208 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21535/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21535/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21535/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12999388 - PreCommit-HIVE-Build

> Add more constraints on showing partitions
> --
>
> Key: HIVE-22458
> URL: https://issues.apache.org/jira/browse/HIVE-22458
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zhihua Deng
>Priority: Major
> Attachments: HIVE-22458.2.patch, HIVE-22458.3.patch, 
> HIVE-22458.branch-1.02.patch, HIVE-22458.branch-1.patch, HIVE-22458.patch
>
>
> When we show partitions of a table with thousands of partitions, all the 
> partitions are returned and it's not easy to pick out the desired one; this 
> makes showing partitions hard to use. We can add where/limit/order 
> by constraints to show partitions, like:
>  show partitions table_name [partition_specs] where partition_key >= value 
> order by partition_key desc limit n;
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22458) Add more constraints on showing partitions

2020-04-09 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17079965#comment-17079965
 ] 

Hive QA commented on HIVE-22458:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
58s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
47s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
36s{color} | {color:blue} standalone-metastore/metastore-common in master has 
35 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
55s{color} | {color:blue} parser in master has 3 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
17s{color} | {color:blue} standalone-metastore/metastore-server in master has 
190 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
50s{color} | {color:blue} ql in master has 1527 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
41s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
15s{color} | {color:red} standalone-metastore/metastore-common: The patch 
generated 1 new + 410 unchanged - 0 fixed = 411 total (was 410) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
31s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 1 new + 1426 unchanged - 1 fixed = 1427 total (was 1427) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21535/dev-support/hive-personality.sh
 |
| git revision | master / 796a9c5 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21535/yetus/diff-checkstyle-standalone-metastore_metastore-common.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21535/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt
 |
| modules | C: standalone-metastore/metastore-common parser 
standalone-metastore/metastore-server ql itests/hcatalog-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21535/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Add more constraints on showing partitions
> --
>
> Key: HIVE-22458
> URL: https://issues.apache.or

[jira] [Updated] (HIVE-22390) Remove Dependency on JODA Time Library

2020-04-09 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated HIVE-22390:
--
Parent: HIVE-22415
Issue Type: Sub-task  (was: Improvement)

> Remove Dependency on JODA Time Library
> --
>
> Key: HIVE-22390
> URL: https://issues.apache.org/jira/browse/HIVE-22390
> Project: Hive
>  Issue Type: Sub-task
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>
> Hive uses the Joda-Time library.
> {quote}
> Joda-Time is the de facto standard date and time library for Java prior to 
> Java SE 8. Users are now asked to migrate to java.time (JSR-310).
> https://www.joda.org/joda-time/
> {quote}
> Remove this dependency from classes, POM files, and licence files.
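
A minimal sketch of the kind of Joda-Time to java.time migration involved (illustrative only; the actual Hive call sites vary, and the names below are my own):

```java
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

// Typical one-to-one replacements when dropping Joda-Time:
//   org.joda.time.DateTime.now()            -> java.time.ZonedDateTime.now()
//   DateTimeFormat.forPattern("yyyy-MM-dd") -> DateTimeFormatter.ofPattern("yyyy-MM-dd")
//   dateTime.toString(formatter)            -> formatter.format(dateTime)
public class JodaMigrationDemo {
    static final DateTimeFormatter FMT = DateTimeFormatter.ofPattern("yyyy-MM-dd");

    public static String formatDate(ZonedDateTime dt) {
        return FMT.format(dt);
    }

    public static void main(String[] args) {
        ZonedDateTime dt = ZonedDateTime.of(2020, 4, 9, 0, 0, 0, 0, ZoneId.of("UTC"));
        System.out.println(formatDate(dt)); // 2020-04-09
    }
}
```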



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-21584) Java 11 preparation: system class loader is not URLClassLoader

2020-04-09 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated HIVE-21584:
--
Parent: HIVE-22415
Issue Type: Sub-task  (was: Task)

> Java 11 preparation: system class loader is not URLClassLoader
> --
>
> Key: HIVE-21584
> URL: https://issues.apache.org/jira/browse/HIVE-21584
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Zoltan Matyus
>Assignee: Zoltan Matyus
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21584.01.patch, HIVE-21584.02.patch, 
> HIVE-21584.03.patch, HIVE-21584.04.patch, HIVE-21584.05.patch, 
> HIVE-21584.06.patch, HIVE-21584.07.patch, HIVE-21584.08.patch, 
> HIVE-21584.09.patch, HIVE-21584.10.patch
>
>
> Currently, Hive assumes that the system class loader is an instance of 
> {{URLClassLoader}}. In Java 11 this is not the case. There are a few 
> (unresolved) JIRAs about specific occurrences of {{URLClassLoader}} (e.g. 
> [HIVE-21237|https://issues.apache.org/jira/browse/HIVE-21237], 
> [HIVE-17909|https://issues.apache.org/jira/browse/HIVE-17909]), but none to 
> _"remove all occurrences"_. Also, I couldn't find an umbrella "Java 11 
> upgrade" JIRA.
> This ticket is to remove all unconditional casts of arbitrary class loaders 
> to {{URLClassLoader}}.
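
A minimal sketch of the defensive pattern such a fix tends to use (an assumption on my part; the actual patch touches many call sites, and the fallback strategy differs per site):

```java
import java.net.URL;
import java.net.URLClassLoader;

// Under Java 11 the system class loader is no longer a URLClassLoader, so
// an unconditional cast like ((URLClassLoader) cl).getURLs() throws
// ClassCastException. Guard the cast and provide a fallback instead.
public class ClassLoaderDemo {
    public static URL[] urlsOf(ClassLoader cl) {
        if (cl instanceof URLClassLoader) {
            return ((URLClassLoader) cl).getURLs();
        }
        // Fallback: e.g. derive entries from the java.class.path system
        // property rather than assuming a URLClassLoader. Empty here for
        // brevity.
        return new URL[0];
    }

    public static void main(String[] args) {
        // Prints 0 on Java 11+ (system loader is not a URLClassLoader);
        // prints the classpath length on Java 8.
        System.out.println(urlsOf(ClassLoader.getSystemClassLoader()).length);
    }
}
```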



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22097) Incompatible java.util.ArrayList for java 11

2020-04-09 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated HIVE-22097:
--
Parent: HIVE-22415
Issue Type: Sub-task  (was: Improvement)

> Incompatible java.util.ArrayList for java 11
> 
>
> Key: HIVE-22097
> URL: https://issues.apache.org/jira/browse/HIVE-22097
> Project: Hive
>  Issue Type: Sub-task
>  Components: Query Processor
>Affects Versions: 3.0.0, 3.1.1
>Reporter: Yuming Wang
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22097.1.branch-3.1.patch, 
> HIVE-22097.1.branch-3.patch, HIVE-22097.1.patch, JDK1.8.png, JDK11.png
>
>
> {noformat}
> export JAVA_HOME=/usr/lib/jdk-11.0.3
> export PATH=${JAVA_HOME}/bin:${PATH}
> hive> create table t(id int);
> Time taken: 0.035 seconds
> hive> insert into t values(1);
> Query ID = root_20190811155400_7c0e0494-eecb-4c54-a9fd-942ab52a0794
> Total jobs = 3
> Launching Job 1 out of 3
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=
> In order to set a constant number of reducers:
>   set mapreduce.job.reduces=
> java.lang.RuntimeException: java.lang.NoSuchFieldException: parentOffset
>   at 
> org.apache.hadoop.hive.ql.exec.SerializationUtilities$ArrayListSubListSerializer.(SerializationUtilities.java:390)
>   at 
> org.apache.hadoop.hive.ql.exec.SerializationUtilities$1.create(SerializationUtilities.java:235)
>   at 
> org.apache.hive.com.esotericsoftware.kryo.pool.KryoPoolQueueImpl.borrow(KryoPoolQueueImpl.java:48)
>   at 
> org.apache.hadoop.hive.ql.exec.SerializationUtilities.borrowKryo(SerializationUtilities.java:280)
>   at 
> org.apache.hadoop.hive.ql.exec.Utilities.setBaseWork(Utilities.java:595)
>   at 
> org.apache.hadoop.hive.ql.exec.Utilities.setMapWork(Utilities.java:587)
>   at 
> org.apache.hadoop.hive.ql.exec.Utilities.setMapRedWork(Utilities.java:579)
>   at 
> org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:357)
>   at 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:159)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2317)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1969)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1636)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1396)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1390)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:162)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:223)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:242)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:189)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:408)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:838)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:777)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:696)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
> Caused by: java.lang.NoSuchFieldException: parentOffset
>   at java.base/java.lang.Class.getDeclaredField(Class.java:2412)
>   at 
> org.apache.hadoop.hive.ql.exec.SerializationUtilities$ArrayListSubListSerializer.(SerializationUtilities.java:384)
>   ... 29 more
> Job Submission failed with exception 
> 'java.lang.RuntimeException(java.lang.NoSuchFieldException: parentOffset)'
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask. java.lang.NoSuchFieldException: 
> parentOffset
> {noformat}
> The reason is Java removed {{parentOffset}}:
>  !JDK1.8.png! 
>  !JDK11.png! 
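One way a serializer can tolerate such a JDK rename is to probe several candidate field names instead of hard-coding {{parentOffset}}. A minimal sketch of that idea, not the actual patch:

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch: tolerate JDK field renames when building a reflection-based serializer.
public class FieldProbe {

    /** Returns the first declared field whose name matches one of the
     *  candidates, or null if none exists on this JDK. */
    static Field findFieldOrNull(Class<?> cls, String... candidates) {
        for (String name : candidates) {
            try {
                return cls.getDeclaredField(name);
            } catch (NoSuchFieldException ignored) {
                // Field was renamed or removed in this JDK; try the next name.
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<Integer> sub = new ArrayList<>(Arrays.asList(1, 2, 3)).subList(0, 2);
        // JDK 8's ArrayList$SubList has "parentOffset"; JDK 9+ uses "offset".
        Field f = findFieldOrNull(sub.getClass(), "parentOffset", "offset");
        System.out.println(f == null ? "no offset field" : "found: " + f.getName());
    }
}
```

Callers must still handle a {{null}} result for layouts no candidate name matches.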





[jira] [Updated] (HIVE-21603) Java 11 preparation: update powermock version

2020-04-09 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated HIVE-21603:
--
Parent: HIVE-22415
Issue Type: Sub-task  (was: Task)

> Java 11 preparation: update powermock version
> -
>
> Key: HIVE-21603
> URL: https://issues.apache.org/jira/browse/HIVE-21603
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: László Pintér
>Assignee: Panagiotis Garefalakis
>Priority: Major
>
> PowerMock 1 has no support for Java 11, so we need to bump its version to 
> 2.0.0.





[jira] [Assigned] (HIVE-21603) Java 11 preparation: update powermock version

2020-04-09 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis reassigned HIVE-21603:
-

Assignee: Panagiotis Garefalakis

> Java 11 preparation: update powermock version
> -
>
> Key: HIVE-21603
> URL: https://issues.apache.org/jira/browse/HIVE-21603
> Project: Hive
>  Issue Type: Task
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: László Pintér
>Assignee: Panagiotis Garefalakis
>Priority: Major
>
> PowerMock 1 has no support for Java 11, so we need to bump its version to 
> 2.0.0.





[jira] [Commented] (HIVE-23162) Remove swapping logic to merge joins in AST converter

2020-04-09 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17079900#comment-17079900
 ] 

Hive QA commented on HIVE-23162:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12999391/HIVE-23162.02.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 412 failed/errored test(s), 18207 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join12] (batchId=28)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join13] (batchId=94)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join20] 
(batchId=103)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join21] (batchId=94)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join28] (batchId=83)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join29] (batchId=63)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join_reordering_values]
 (batchId=6)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join_stats2] 
(batchId=101)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join_stats] 
(batchId=56)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join_without_localtask]
 (batchId=1)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_const] (batchId=20)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[filter_cond_pushdown] 
(batchId=70)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[filter_join_breaktask] 
(batchId=87)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join12] (batchId=47)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join13] (batchId=4)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join20] (batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join21] (batchId=46)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join26] (batchId=22)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join28] (batchId=98)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join32] (batchId=22)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join33] (batchId=18)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join40] (batchId=62)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_cond_pushdown_1] 
(batchId=41)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_cond_pushdown_3] 
(batchId=28)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_cond_pushdown_unqual1]
 (batchId=51)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_cond_pushdown_unqual2]
 (batchId=19)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_cond_pushdown_unqual3]
 (batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_cond_pushdown_unqual4]
 (batchId=3)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_mapjoin] 
(batchId=58)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_subquery] 
(batchId=58)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mergejoin] (batchId=69)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join2] (batchId=49)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join3] (batchId=22)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_outer_join4] 
(batchId=77)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[runtime_skewjoin_mapjoin_spark]
 (batchId=63)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smb_mapjoin_47] 
(batchId=34)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[dynamic_semijoin_user_level]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[explainuser_2] 
(batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_join21]
 (batchId=191)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_join29]
 (batchId=183)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_smb_mapjoin_14]
 (batchId=184)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[auto_sortmerge_join_9]
 (batchId=187)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[column_access_stats]
 (batchId=184)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[constraints_optimization]
 (batchId=181)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[correlationoptimizer2]
 (batchId=181)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[correlationoptimizer6]
 (batchId=181)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynamic_semijoin_reduction]
 (batchId=180)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynamic_semijoin_reduction_sw]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[estimate_pkfk_filtered_fk]
 (batchId=186)
org.apache.hadoop.hive

[jira] [Updated] (HIVE-23171) Create Tool To Visualize Hive Parser Tree

2020-04-09 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HIVE-23171:
--
Description: 
For some of the work I would like to do on [HIVE-23149], it would be nice to 
visualize the output of the statement parser.

I have created a tool that spits out the parser tree in DOT file format.  This 
allows it to be visualized using a plethora of tools.

I have attached an example of the output  !select_1.png! that I generated for a 
{{SELECT 1}} statement.

  was:For some of the work I would like to do on [HIVE-23149], it would be nice 
to visualize the output of the statement parser.


> Create Tool To Visualize Hive Parser Tree
> -
>
> Key: HIVE-23171
> URL: https://issues.apache.org/jira/browse/HIVE-23171
> Project: Hive
>  Issue Type: Sub-task
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HIVE-23171.1.patch, select_1.png
>
>
> For some of the work I would like to do on [HIVE-23149], it would be nice to 
> visualize the output of the statement parser.
> I have created a tool that spits out the parser tree in DOT file format.  
> This allows it to be visualized using a plethora of tools.
> I have attached an example of the output  !select_1.png! that I generated for 
> a {{SELECT 1}} statement.
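The general technique behind such a tool — walk the tree, emit one DOT node statement per tree node and one edge per parent/child link — can be sketched as follows ({{Node}} here is a stand-in type, not Hive's {{ASTNode}}):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: serialize a parse tree to Graphviz DOT format.
public class DotTree {

    static final class Node {
        final String label;
        final List<Node> children = new ArrayList<>();
        Node(String label) { this.label = label; }
        Node add(Node child) { children.add(child); return this; }
    }

    static String toDot(Node root) {
        StringBuilder sb = new StringBuilder("digraph parse {\n");
        walk(root, sb, new int[] {0});
        return sb.append("}\n").toString();
    }

    // Assigns each node a sequential id, emits its label, then one edge
    // per parent/child pair.
    private static int walk(Node n, StringBuilder sb, int[] nextId) {
        int me = nextId[0]++;
        sb.append("  n").append(me)
          .append(" [label=\"").append(n.label).append("\"];\n");
        for (Node c : n.children) {
            int child = walk(c, sb, nextId);
            sb.append("  n").append(me).append(" -> n").append(child).append(";\n");
        }
        return me;
    }

    public static void main(String[] args) {
        Node root = new Node("TOK_QUERY")
            .add(new Node("TOK_INSERT").add(new Node("TOK_SELECT")));
        System.out.print(toDot(root));
    }
}
```

The resulting text renders with any Graphviz front end, e.g. {{dot -Tpng tree.dot -o tree.png}}.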





[jira] [Updated] (HIVE-23171) Create Tool To Visualize Hive Parser Tree

2020-04-09 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HIVE-23171:
--
Description: 
For some of the work I would like to do on HIVE-23149, it would be nice to 
visualize the output of the statement parser.

I have created a tool that spits out the parser tree in DOT file format. This 
allows it to be visualized using a plethora of tools.

I have attached an example of the output that I generated for a {{SELECT 1}} 
statement:

!select_1.png!

  was:
For some of the work I would like to do on [HIVE-23149], it would be nice to 
visualize the output of the statement parser.

I have created a tool that spits out the parser tree in DOT file format.  This 
allows it to be visualized using a plethora of tools.

I have attached an example of the output  !select_1.png! that I generated for a 
{{SELECT 1}} statement.


> Create Tool To Visualize Hive Parser Tree
> -
>
> Key: HIVE-23171
> URL: https://issues.apache.org/jira/browse/HIVE-23171
> Project: Hive
>  Issue Type: Sub-task
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HIVE-23171.1.patch, select_1.png
>
>
> For some of the work I would like to do on HIVE-23149, it would be nice to 
> visualize the output of the statement parser.
> I have created a tool that spits out the parser tree in DOT file format. This 
> allows it to be visualized using a plethora of tools.
> I have attached an example of the output that I generated for a {{SELECT 1}} 
> statement:
>  
>  
> !select_1.png!





[jira] [Updated] (HIVE-23171) Create Tool To Visualize Hive Parser Tree

2020-04-09 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HIVE-23171:
--
Description: For some of the work I would like to do on [HIVE-23149], it 
would be nice to visualize the output of the statement parser.

> Create Tool To Visualize Hive Parser Tree
> -
>
> Key: HIVE-23171
> URL: https://issues.apache.org/jira/browse/HIVE-23171
> Project: Hive
>  Issue Type: Sub-task
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HIVE-23171.1.patch, select_1.png
>
>
> For some of the work I would like to do on [HIVE-23149], it would be nice to 
> visualize the output of the statement parser.





[jira] [Updated] (HIVE-23171) Create Tool To Visualize Hive Parser Tree

2020-04-09 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HIVE-23171:
--
Attachment: HIVE-23171.1.patch

> Create Tool To Visualize Hive Parser Tree
> -
>
> Key: HIVE-23171
> URL: https://issues.apache.org/jira/browse/HIVE-23171
> Project: Hive
>  Issue Type: Sub-task
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HIVE-23171.1.patch, select_1.png
>
>






[jira] [Comment Edited] (HIVE-23149) Consistency of Parsing Object Identifiers

2020-04-09 Thread David Mollitor (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17079655#comment-17079655
 ] 

David Mollitor edited comment on HIVE-23149 at 4/9/20, 6:32 PM:


Note to self: MySQL supports the dollar sign ($) in identifiers, and Hive does 
not appear to.


was (Author: belugabehr):
Note to self.  MySQL supports dollar sign ($) in the identifier.

> Consistency of Parsing Object Identifiers
> -
>
> Key: HIVE-23149
> URL: https://issues.apache.org/jira/browse/HIVE-23149
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Critical
>
> There needs to be better consistency in the handling of object identifiers 
> (databases, tables, columns, views, functions, etc.).  I think it makes sense 
> to standardize on the same rules that MySQL/MariaDB use for their column 
> names so that Hive can be more of a drop-in replacement for them.
>  
> The two important things to keep in mind are:
>  
> 1// Permitted characters in quoted identifiers include the full Unicode Basic 
> Multilingual Plane (BMP), except U+0000.
>  
> 2// If any components of a multiple-part name require quoting, quote them 
> individually rather than quoting the name as a whole. For example, write 
> {{`my-table`.`my-column`}}, not {{`my-table.my-column`}}.  
>  
> [https://dev.mysql.com/doc/refman/8.0/en/identifiers.html]
> [https://dev.mysql.com/doc/refman/8.0/en/identifier-qualifiers.html]  
>  
> That is to say:
>  
> {code:sql}
> -- Select all rows from a table named `default.mytable`
> -- (Yes, the table name itself has a period in it. This is valid)
> SELECT * FROM `default.mytable`;
>  
> -- Select all rows from database `default`, table `mytable`
> SELECT * FROM `default`.`mytable`;  
> {code}
>  
> This plays out in a couple of ways.  There may be more, but these are the 
> ones I know about already:
>  
> 1// Hive generates incorrect syntax: [HIVE-23128]
>  
> 2// Hive throws an exception if there is a period in the table name.  This is 
> an invalid response: table names may have periods in them. More likely than 
> not, it will throw a 'table not found' exception, since the user most likely 
> used backticks incorrectly and meant to specify a db and a table separately. 
> [HIVE-16907]
> Once we have the parsing figured out and support for backticks to enclose 
> UTF-8 strings, then the backend database needs to actually support the UTF-8 
> character set.  It currently does not: [HIVE-1808]





[jira] [Updated] (HIVE-23171) Create Tool To Visualize Hive Parser Tree

2020-04-09 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HIVE-23171:
--
Status: Patch Available  (was: Open)

> Create Tool To Visualize Hive Parser Tree
> -
>
> Key: HIVE-23171
> URL: https://issues.apache.org/jira/browse/HIVE-23171
> Project: Hive
>  Issue Type: Sub-task
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HIVE-23171.1.patch, select_1.png
>
>






[jira] [Updated] (HIVE-23150) Create an Object Identifier Parser for All Components to Use

2020-04-09 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HIVE-23150:
--
Status: Open  (was: Patch Available)

> Create an Object Identifier Parser for All Components to Use
> 
>
> Key: HIVE-23150
> URL: https://issues.apache.org/jira/browse/HIVE-23150
> Project: Hive
>  Issue Type: Sub-task
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HIVE-23150.1.patch
>
>
> Create a parser for parsing (and validating) MySQL/MariaDB style object 
> identifiers.





[jira] [Work started] (HIVE-23150) Create an Object Identifier Parser for All Components to Use

2020-04-09 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-23150 started by David Mollitor.
-
> Create an Object Identifier Parser for All Components to Use
> 
>
> Key: HIVE-23150
> URL: https://issues.apache.org/jira/browse/HIVE-23150
> Project: Hive
>  Issue Type: Sub-task
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
> Attachments: HIVE-23150.1.patch
>
>
> Create a parser for parsing (and validating) MySQL/MariaDB style object 
> identifiers.





[jira] [Commented] (HIVE-23149) Consistency of Parsing Object Identifiers

2020-04-09 Thread David Mollitor (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17079655#comment-17079655
 ] 

David Mollitor commented on HIVE-23149:
---

Note to self.  MySQL supports dollar sign ($) in the identifier.

> Consistency of Parsing Object Identifiers
> -
>
> Key: HIVE-23149
> URL: https://issues.apache.org/jira/browse/HIVE-23149
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Critical
>
> There needs to be better consistency in the handling of object identifiers 
> (databases, tables, columns, views, functions, etc.).  I think it makes sense 
> to standardize on the same rules that MySQL/MariaDB use for their column 
> names so that Hive can be more of a drop-in replacement for them.
>  
> The two important things to keep in mind are:
>  
> 1// Permitted characters in quoted identifiers include the full Unicode Basic 
> Multilingual Plane (BMP), except U+0000.
>  
> 2// If any components of a multiple-part name require quoting, quote them 
> individually rather than quoting the name as a whole. For example, write 
> {{`my-table`.`my-column`}}, not {{`my-table.my-column`}}.  
>  
> [https://dev.mysql.com/doc/refman/8.0/en/identifiers.html]
> [https://dev.mysql.com/doc/refman/8.0/en/identifier-qualifiers.html]  
>  
> That is to say:
>  
> {code:sql}
> -- Select all rows from a table named `default.mytable`
> -- (Yes, the table name itself has a period in it. This is valid)
> SELECT * FROM `default.mytable`;
>  
> -- Select all rows from database `default`, table `mytable`
> SELECT * FROM `default`.`mytable`;  
> {code}
>  
> This plays out in a couple of ways.  There may be more, but these are the 
> ones I know about already:
>  
> 1// Hive generates incorrect syntax: [HIVE-23128]
>  
> 2// Hive throws an exception if there is a period in the table name.  This is 
> an invalid response: table names may have periods in them. More likely than 
> not, it will throw a 'table not found' exception, since the user most likely 
> used backticks incorrectly and meant to specify a db and a table separately. 
> [HIVE-16907]
> Once we have the parsing figured out and support for backticks to enclose 
> UTF-8 strings, then the backend database needs to actually support the UTF-8 
> character set.  It currently does not: [HIVE-1808]





[jira] [Commented] (HIVE-23149) Consistency of Parsing Object Identifiers

2020-04-09 Thread David Mollitor (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17079653#comment-17079653
 ] 

David Mollitor commented on HIVE-23149:
---

OK.  So, as I look at this more, it's becoming clear to me that Hive needs to 
fix this starting from the top: the parser/grammar should be clearer about how 
it handles these cases.

 

[https://github.com/apache/hive/blob/master/parser/src/java/org/apache/hadoop/hive/ql/parse/FromClauseParser.g#L212]

 
{code:none}
tableName
@init { gParent.pushMsg("table name", state); }
@after { gParent.popMsg(state); }
:
db=identifier DOT tab=identifier
-> ^(TOK_TABNAME $db $tab)
|
tab=identifier
-> ^(TOK_TABNAME $tab)
;

Identifier
:
(Letter | Digit) (Letter | Digit | '_')*
| {allowQuotedId()}? QuotedIdentifier  /* though at the language level we allow
   all Identifiers to be QuotedIdentifiers;
   at the API level only columns are allowed to be of this form */
| '`' RegexComponent+ '`'
;

identifier
:
Identifier
| nonReserved -> Identifier[$nonReserved.start]
;
{code}
An Identifier can be ASCII or back-ticked UTF-8. I don't see the backtick 
represented here.

I believe this should be something like...
{code:none}
qualifiedIdentifier:
  identifier dotIdentifier? ;

dotIdentifier:
  DOT identifier ;

identifier:
  -- UnquotedIdentifier |
  -- BackTickQuotedIdentifier |
  -- DoubleQuoteQuotedIdentifier (optional) ;

tableName:
  qualifiedIdentifier
;
{code}

In this way, the code receiving the tree can check for UnquotedIdentifier | 
BackTickQuotedIdentifier | DoubleQuoteQuotedIdentifier, and it later becomes 
trivial (in the Hive code base) to know whether the name should be stripped 
based on the identifier type.
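Once identifier types are distinguished, splitting a qualified name on unquoted dots only is straightforward. A toy sketch of that rule (not Hive's parser; escaped backticks are ignored for simplicity):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: split a qualified name on unquoted dots only, so `default.mytable`
// stays one component while `default`.`mytable` splits into two.
public class QualifiedName {

    static List<String> split(String name) {
        List<String> parts = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        boolean insideQuotes = false;
        for (char c : name.toCharArray()) {
            if (c == '`') {
                insideQuotes = !insideQuotes;  // toggle; backticks are stripped
            } else if (c == '.' && !insideQuotes) {
                parts.add(current.toString()); // unquoted dot: component boundary
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        parts.add(current.toString());
        return parts;
    }

    public static void main(String[] args) {
        System.out.println(split("`default.mytable`"));   // one component
        System.out.println(split("`default`.`mytable`")); // two components
    }
}
```

A real implementation would also handle doubled backticks inside a quoted component; this sketch only shows the dot-splitting rule.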

> Consistency of Parsing Object Identifiers
> -
>
> Key: HIVE-23149
> URL: https://issues.apache.org/jira/browse/HIVE-23149
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Critical
>
> There needs to be better consistency in the handling of object identifiers 
> (databases, tables, columns, views, functions, etc.).  I think it makes sense 
> to standardize on the same rules that MySQL/MariaDB use for their column 
> names so that Hive can be more of a drop-in replacement for them.
>  
> The two important things to keep in mind are:
>  
> 1// Permitted characters in quoted identifiers include the full Unicode Basic 
> Multilingual Plane (BMP), except U+0000.
>  
> 2// If any components of a multiple-part name require quoting, quote them 
> individually rather than quoting the name as a whole. For example, write 
> {{`my-table`.`my-column`}}, not {{`my-table.my-column`}}.  
>  
> [https://dev.mysql.com/doc/refman/8.0/en/identifiers.html]
> [https://dev.mysql.com/doc/refman/8.0/en/identifier-qualifiers.html]  
>  
> That is to say:
>  
> {code:sql}
> -- Select all rows from a table named `default.mytable`
> -- (Yes, the table name itself has a period in it. This is valid)
> SELECT * FROM `default.mytable`;
>  
> -- Select all rows from database `default`, table `mytable`
> SELECT * FROM `default`.`mytable`;  
> {code}
>  
> This plays out in a couple of ways.  There may be more, but these are the 
> ones I know about already:
>  
> 1// Hive generates incorrect syntax: [HIVE-23128]
>  
> 2// Hive throws an exception if there is a period in the table name.  This is 
> an invalid response: table names may have periods in them. More likely than 
> not, it will throw a 'table not found' exception, since the user most likely 
> used backticks incorrectly and meant to specify a db and a table separately. 
> [HIVE-16907]
> Once we have the parsing figured out and support for backticks to enclose 
> UTF-8 strings, then the backend database needs to actually support the UTF-8 
> character set.  It currently does not: [HIVE-1808]





[jira] [Updated] (HIVE-23171) Create Tool To Visualize Hive Parser Tree

2020-04-09 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HIVE-23171:
--
Parent: HIVE-23149
Issue Type: Sub-task  (was: Improvement)

> Create Tool To Visualize Hive Parser Tree
> -
>
> Key: HIVE-23171
> URL: https://issues.apache.org/jira/browse/HIVE-23171
> Project: Hive
>  Issue Type: Sub-task
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: select_1.png
>
>






[jira] [Commented] (HIVE-23162) Remove swapping logic to merge joins in AST converter

2020-04-09 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17079631#comment-17079631
 ] 

Hive QA commented on HIVE-23162:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
43s{color} | {color:blue} ql in master has 1527 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} ql: The patch generated 0 new + 28 unchanged - 1 
fixed = 28 total (was 29) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21534/dev-support/hive-personality.sh
 |
| git revision | master / 796a9c5 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21534/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Remove swapping logic to merge joins in AST converter
> -
>
> Key: HIVE-23162
> URL: https://issues.apache.org/jira/browse/HIVE-23162
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-23162.01.patch, HIVE-23162.02.patch
>
>
> In ASTConverter, there is some logic to invert join inputs so the logic to 
> merge joins in SemanticAnalyzer kicks in.
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/ASTConverter.java#L407
> There is a bug because inputs are swapped but the schema is not.
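A toy illustration of why the schema must follow the inputs (plain Java, not Calcite code): a join's output schema is the concatenation of its inputs' schemas, so swapping inputs changes every column position.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class JoinSwap {

    /** A join's output schema is its left input's columns followed by the right's. */
    static List<String> joinSchema(List<String> left, List<String> right) {
        List<String> out = new ArrayList<>(left);
        out.addAll(right);
        return out;
    }

    public static void main(String[] args) {
        List<String> emp = Arrays.asList("name", "deptId");
        List<String> dept = Arrays.asList("deptName");
        // Same join, inputs swapped: column positions differ, so a plan that
        // swaps inputs but keeps the old schema resolves positional column
        // references against the wrong columns.
        System.out.println(joinSchema(emp, dept));  // [name, deptId, deptName]
        System.out.println(joinSchema(dept, emp));  // [deptName, name, deptId]
    }
}
```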



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23171) Create Tool To Visualize Hive Parser Tree

2020-04-09 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HIVE-23171:
--
Attachment: select_1.png

> Create Tool To Visualize Hive Parser Tree
> -
>
> Key: HIVE-23171
> URL: https://issues.apache.org/jira/browse/HIVE-23171
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: select_1.png
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-23171) Create Tool To Visualize Hive Parser Tree

2020-04-09 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor reassigned HIVE-23171:
-


> Create Tool To Visualize Hive Parser Tree
> -
>
> Key: HIVE-23171
> URL: https://issues.apache.org/jira/browse/HIVE-23171
> Project: Hive
>  Issue Type: Improvement
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-23170) Probe support for ORC DataConsumer

2020-04-09 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis reassigned HIVE-23170:
-

Assignee: Panagiotis Garefalakis

> Probe support for ORC DataConsumer
> --
>
> Key: HIVE-23170
> URL: https://issues.apache.org/jira/browse/HIVE-23170
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-23169) Probe runtime support for LLAP

2020-04-09 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis reassigned HIVE-23169:
-

Assignee: Panagiotis Garefalakis

> Probe runtime support for LLAP
> --
>
> Key: HIVE-23169
> URL: https://issues.apache.org/jira/browse/HIVE-23169
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-23168) Implement MJ HashTable contains key functionality

2020-04-09 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis reassigned HIVE-23168:
-

Assignee: Panagiotis Garefalakis

> Implement MJ HashTable contains key functionality
> -
>
> Key: HIVE-23168
> URL: https://issues.apache.org/jira/browse/HIVE-23168
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23153) deregister from zookeeper is not properly worked on kerberized environment

2020-04-09 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17079593#comment-17079593
 ] 

Hive QA commented on HIVE-23153:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12999374/HIVE-23153.02.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18207 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21533/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21533/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21533/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12999374 - PreCommit-HIVE-Build

> deregister from zookeeper is not properly worked on kerberized environment
> --
>
> Key: HIVE-23153
> URL: https://issues.apache.org/jira/browse/HIVE-23153
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Eugene Chung
>Assignee: Eugene Chung
>Priority: Minor
> Attachments: HIVE-23153.01.patch, HIVE-23153.02.patch, Screen Shot 
> 2020-04-08 at 5.00.40.png
>
>
> Deregistering from Zookeeper, initiated by the command 'hive --service 
> hiveserver2 -deregister ', does not work properly when HiveServer2 
> and Zookeeper are kerberized. Even though hive-site.xml has the configuration for 
> Zookeeper Kerberos login (hive.server2.authentication.kerberos.principal and 
> keytab), it isn't used. I know that running kinit with the hiveserver2 keytab would 
> make it work. But as I said, hive-site.xml already has the configuration precisely 
> so that the user doesn't need to do kinit.
>  *  When Kerberos login to Zookeeper is failed and serverUri is not actually 
> removed from hiveserver2 namespace
> {code:java}
> 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: 
> -78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-hdfs-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-client-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/tez/lib/jettison-1.3.4.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-server-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/jetty-util-9.3.22.v20171030.jar:/usr/hdp/3.1.0.0-78/tez/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/metrics-core-3.1.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/tez/lib/servlet-api-2.5.jar:/usr/hdp/3.1.0.0-78/tez/lib/slf4j-api-1.7.10.jar:/usr/hdp/3.1.0.0-78/tez/conf
> 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client 
> environment:java.library.path=:
> 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client 
> environment:java.io.tmpdir=/tmp
> 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client 
> environment:java.compiler=
> 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client 
> environment:os.name=Linux
> 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client 
> environment:os.arch=amd64
> 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client 
> environment:os.version=...
> 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client 
> environment:user.name=...
> 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client 
> environment:user.home=...
> 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: Client 
> environment:user.dir=...
> 2020-04-08 04:45:44,699 INFO [main] zookeeper.ZooKeeper: Initiating client 
> connection, connectString=... sessionTimeout=6 
> watcher=org.apache.curator.ConnectionState@706eab5d
> 2020-04-08 04:45:44,725 INFO [main-SendThread(...)] zookeeper.ClientCnxn: 
> Opening socket connection to server ...:2181. Will not attempt to 
> authenticate using SASL (unknown error)
> 2020-04-08 04:45:44,731 INFO [main-SendThread(...:2181)] 
> zookeeper.ClientCnxn: Socket connection established to ...:2181, initiating 
> session
> 2020-04-08 04:45:44,743 INFO [main-SendThread(...:2181)] 
> zookeeper.ClientCnxn: Session establishment complete on server ...:2181, 
> sessionid = 0x27148fd2ab1002e, negotiated timeout = 6
> 2020-04-08 04:45:44,751 I

[jira] [Updated] (HIVE-22731) Support probe decode with row-level filtering

2020-04-09 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated HIVE-22731:
--
Summary: Support probe decode with row-level filtering  (was: Probe MapJoin 
hashtables for row level filtering)

> Support probe decode with row-level filtering
> -
>
> Key: HIVE-22731
> URL: https://issues.apache.org/jira/browse/HIVE-22731
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive, llap
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22731.1.patch, HIVE-22731.2.patch, 
> HIVE-22731.WIP.patch, decode_time_bars.pdf
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, RecordReaders such as ORC's support filtering only at coarser-grained 
> levels, namely: file, stripe (64 to 256 MB), and row group (10k rows). 
> They can only skip sets of rows if they can guarantee that none of the rows 
> passes a filter (usually given as a searchable argument).
> However, a significant amount of time can be spent decoding rows with 
> multiple columns that are not even used in the final result. See the figure, where 
> "original" is what happens today and "LazyDecode" skips decoding rows that 
> do not match the key.
> To enable more fine-grained filtering in the particular case of a MapJoin, 
> we could use the key HashTable created from the smaller table to skip 
> deserializing row columns of the larger table that do not match any key, and 
> thus save CPU time. 
> This Jira investigates this direction. 
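The idea above can be sketched as follows (a simplified illustration, not Hive's vectorized reader code; the class and method names are hypothetical): probe the small-table key set first, and only "decode" the remaining columns for rows whose join key can actually match.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of probe-decode: rows whose key is absent from the
// build-side key set can never join, so their other columns are skipped
// entirely instead of being deserialized and then discarded.
class ProbeDecodeSketch {
    static List<long[]> decodeMatching(long[] keys, long[][] otherCols, Set<Long> buildKeys) {
        List<long[]> out = new ArrayList<>();
        for (int r = 0; r < keys.length; r++) {
            if (!buildKeys.contains(keys[r])) {
                continue; // skip decoding: this row cannot match any build-side key
            }
            // Only now materialize ("decode") the rest of the row's columns.
            long[] row = new long[otherCols.length + 1];
            row[0] = keys[r];
            for (int c = 0; c < otherCols.length; c++) {
                row[c + 1] = otherCols[c][r];
            }
            out.add(row);
        }
        return out;
    }
}
```

The CPU saving comes from the skipped column materialization, which in a real columnar reader includes decompression and decoding work far more expensive than the array copies shown here.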



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-23167) Extend compiler support for Probe static filters

2020-04-09 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis reassigned HIVE-23167:
-


> Extend compiler support for Probe static filters
> 
>
> Key: HIVE-23167
> URL: https://issues.apache.org/jira/browse/HIVE-23167
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23006) Basic compiler support for Probe MapJoin

2020-04-09 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated HIVE-23006:
--
Summary: Basic compiler support for Probe MapJoin  (was: Compiler support 
for Probe MapJoin)

> Basic compiler support for Probe MapJoin
> 
>
> Key: HIVE-23006
> URL: https://issues.apache.org/jira/browse/HIVE-23006
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23006.01.patch, HIVE-23006.02.patch, 
> HIVE-23006.03.patch
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> The decision to push down information to the record reader (potentially 
> reducing decoding time through row-level filtering) should be made at query 
> compilation time.
> This patch adds an extra optimisation step with the goal of finding TableScan 
> operators that could reduce the number of rows decoded at runtime using 
> extra available information.
> It currently looks for all the available MapJoin operators that could use the 
> smaller HashTable on the probing side (where the TS is) to filter out rows that 
> would never match. 
> To do so, the HashTable information is pushed down to the TS properties and 
> then propagated as part of MapWork.
> If a single TS is used by multiple operators (shared work), this rule cannot 
> be applied.
> This rule can be extended to support static filter expressions like:
> _select * from sales where sold_state = 'PR';_
> This optimisation mainly targets the Tez execution engine running on LLAP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23107) Remove MIN_HISTORY_LEVEL table

2020-04-09 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-23107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

László Pintér updated HIVE-23107:
-
Attachment: HIVE-23107.07.patch

> Remove MIN_HISTORY_LEVEL table
> --
>
> Key: HIVE-23107
> URL: https://issues.apache.org/jira/browse/HIVE-23107
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: László Pintér
>Assignee: László Pintér
>Priority: Major
> Attachments: HIVE-23107.01.patch, HIVE-23107.02.patch, 
> HIVE-23107.03.patch, HIVE-23107.04.patch, HIVE-23107.05.patch, 
> HIVE-23107.06.patch, HIVE-23107.07.patch
>
>
> MIN_HISTORY_LEVEL table is used in two places:
>  * Cleaner uses it to decide if the files can be removed - this could be 
> replaced by adding a new column to compaction_queue storing the next_txn_id 
> when the change was committed, and before cleaning checking the minimum open 
> transaction id in the TXNS table
>  * Initiator uses it to decide if some items from TXN_TO_WRITE_ID table can 
> be removed. This could be replaced by using the WRITE_SET.WS_COMMIT_ID.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23153) deregister from zookeeper is not properly worked on kerberized environment

2020-04-09 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17079539#comment-17079539
 ] 

Hive QA commented on HIVE-23153:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
41s{color} | {color:blue} service in master has 49 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21533/dev-support/hive-personality.sh
 |
| git revision | master / 796a9c5 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: service U: service |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21533/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> deregister from zookeeper is not properly worked on kerberized environment
> --
>
> Key: HIVE-23153
> URL: https://issues.apache.org/jira/browse/HIVE-23153
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Eugene Chung
>Assignee: Eugene Chung
>Priority: Minor
> Attachments: HIVE-23153.01.patch, HIVE-23153.02.patch, Screen Shot 
> 2020-04-08 at 5.00.40.png
>
>
> Deregistering from Zookeeper, initiated by the command 'hive --service 
> hiveserver2 -deregister ', does not work properly when HiveServer2 
> and Zookeeper are kerberized. Even though hive-site.xml has the configuration for 
> Zookeeper Kerberos login (hive.server2.authentication.kerberos.principal and 
> keytab), it isn't used. I know that running kinit with the hiveserver2 keytab would 
> make it work. But as I said, hive-site.xml already has the configuration precisely 
> so that the user doesn't need to do kinit.
>  *  When Kerberos login to Zookeeper is failed and serverUri is not actually 
> removed from hiveserver2 namespace
> {code:java}
> 2020-04-08 04:45:44,698 INFO [main] zookeeper.ZooKeeper: 
> -78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-azure-datalake-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-hdfs-client-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-common-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-mapreduce-client-core-3.1.1.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/tez/lib/hadoop-yarn-server-timeline-pluginstorage-3.1.1.3.1.0.0-78.ja

[jira] [Commented] (HIVE-23166) Guard VGB from flushing too often

2020-04-09 Thread Ashutosh Chauhan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17079531#comment-17079531
 ] 

Ashutosh Chauhan commented on HIVE-23166:
-

+1

> Guard VGB from flushing too often
> -
>
> Key: HIVE-23166
> URL: https://issues.apache.org/jira/browse/HIVE-23166
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Affects Versions: 4.0.0
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Major
> Attachments: HIVE-23166.01.patch
>
>
> The existing flush logic in our VectorGroupByOperator is completely static.
>  It depends on the number of hash-table entries 
> (*hive.vectorized.groupby.maxentries*) and the MAX memory threshold (by 
> default 90% of available memory).
>  
> Assuming that we are not memory constrained, the periodicity of flushing is 
> currently dictated by the static number of entries (1M by default), which can 
> also be misconfigured to a very low value.
> I am proposing, along with maxHtEntries, to also take into account current 
> memory usage, to avoid flushing too often, as that can hurt operator throughput 
> for particular workloads.
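The proposed guard can be sketched as a small predicate (a simplified illustration under assumed thresholds; the class, method, and the 10% lower bound are hypothetical, not the patch's actual values): the entry limit only triggers a flush when memory usage is also non-trivial, so a misconfigured low maxHtEntries cannot force constant flushes.

```java
// Hypothetical sketch of a memory-aware flush guard for a vectorized
// group-by hash table. Memory values are in arbitrary consistent units.
class FlushGuardSketch {
    static boolean shouldFlush(long numEntries, long maxHtEntries,
                               long usedMemory, long maxMemory) {
        // Hard limit: always flush near the memory ceiling (90% as described).
        boolean overMemory = usedMemory > 0.9 * maxMemory;
        // Guarded entry limit: the static entry count alone is not enough;
        // memory usage must also be non-trivial (10% here is an assumption).
        boolean overEntries = numEntries >= maxHtEntries
                && usedMemory > 0.1 * maxMemory;
        return overMemory || overEntries;
    }
}
```

With this shape, a workload that hits a low maxHtEntries while using almost no memory keeps aggregating instead of flushing on every few thousand rows.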



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23158) Optimize S3A recordReader policy for Random IO formats

2020-04-09 Thread Ashutosh Chauhan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17079529#comment-17079529
 ] 

Ashutosh Chauhan commented on HIVE-23158:
-

+1

> Optimize S3A recordReader policy for Random IO formats
> --
>
> Key: HIVE-23158
> URL: https://issues.apache.org/jira/browse/HIVE-23158
> Project: Hive
>  Issue Type: Bug
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Trivial
>  Labels: pull-request-available
> Attachments: HIVE-23158.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The S3A filesystem client (inherited from Hadoop) supports the notion of input 
> policies.
>  These policies tune the behaviour of the HTTP requests used for reading 
> different file types such as TEXT or ORC.
> For formats such as ORC and Parquet that do a lot of seek operations, there 
> is an optimized RANDOM mode that reads files only partially instead of fully 
> (the default).
> I am suggesting adding some extra logic as part of HiveInputFormat to make 
> sure we optimize RecordReader requests for random IO when data is stored on 
> S3A using formats such as ORC or Parquet.
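The format-to-policy decision can be sketched as a tiny helper (the class and the extension-based mapping are illustrative assumptions, not the patch's actual logic; `fs.s3a.experimental.input.fadvise` is S3A's real input-policy option, with values such as `sequential` and `random`):

```java
// Hypothetical sketch: pick an S3A input policy by file format.
// Seek-heavy columnar formats benefit from partial (random) reads;
// text-like formats are scanned sequentially end to end.
class S3APolicyChooser {
    static String inputPolicyFor(String fileName) {
        String lower = fileName.toLowerCase();
        if (lower.endsWith(".orc") || lower.endsWith(".parquet")) {
            return "random";     // value for fs.s3a.experimental.input.fadvise
        }
        return "sequential";
    }
}
```

The chosen value would then be applied to the job configuration before the record reader opens the file, so the S3A client issues ranged GETs instead of reading the whole object.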



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22750) Consolidate LockType naming

2020-04-09 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17079522#comment-17079522
 ] 

Hive QA commented on HIVE-22750:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12999373/HIVE-22750.12.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 18207 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21532/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21532/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21532/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12999373 - PreCommit-HIVE-Build

> Consolidate LockType naming
> ---
>
> Key: HIVE-22750
> URL: https://issues.apache.org/jira/browse/HIVE-22750
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Reporter: Zoltan Chovan
>Assignee: Marton Bod
>Priority: Minor
> Attachments: HIVE-22750.1.patch, HIVE-22750.10.patch, 
> HIVE-22750.11.patch, HIVE-22750.12.patch, HIVE-22750.12.patch, 
> HIVE-22750.2.patch, HIVE-22750.3.patch, HIVE-22750.4.patch, 
> HIVE-22750.5.patch, HIVE-22750.5.patch, HIVE-22750.6.patch, 
> HIVE-22750.7.patch, HIVE-22750.8.patch, HIVE-22750.9.patch, 
> HIVE-22750.9.patch, HIVE-22750.9.patch, HIVE-22750.9.patch
>
>
> Extend enum with string literal to remove unnecessary `id` to `char` casting 
> for the LockType:
> {code:java}
> switch (lockType) {
> case EXCLUSIVE:
>   lockChar = LOCK_EXCLUSIVE;
>   break;
> case SHARED_READ:
>   lockChar = LOCK_SHARED;
>   break;
> case SHARED_WRITE:
>   lockChar = LOCK_SEMI_SHARED;
>   break;
>   }
> {code}
> Consolidate LockType naming in code and schema upgrade scripts:
> {code:java}
> CASE WHEN HL.`HL_LOCK_TYPE` = 'e' THEN 'exclusive' WHEN HL.`HL_LOCK_TYPE` = 
> 'r' THEN 'shared' WHEN HL.`HL_LOCK_TYPE` = 'w' THEN *'semi-shared'* END AS 
> LOCK_TYPE,
> {code}
> +*Lock types:*+
> EXCLUSIVE
>  EXCL_WRITE
>  SHARED_WRITE
>  SHARED_READ
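A minimal sketch of the refactor described above (the accessor names are illustrative, not Hive's actual API): attaching the character literal to the enum constant itself removes the switch-based `id` to `char` casting entirely.

```java
// Hypothetical sketch: each LockType carries its own schema character,
// so callers ask the enum instead of switching on it.
enum LockType {
    EXCLUSIVE('e'),
    SHARED_READ('r'),
    SHARED_WRITE('w');

    private final char sqlChar;

    LockType(char sqlChar) { this.sqlChar = sqlChar; }

    char getSqlChar() { return sqlChar; }

    // Reverse lookup for reading the HL_LOCK_TYPE column back.
    static LockType fromSqlChar(char c) {
        for (LockType t : values()) {
            if (t.sqlChar == c) return t;
        }
        throw new IllegalArgumentException("Unknown lock type char: " + c);
    }
}
```

The same pattern could carry the human-readable name ('exclusive', 'shared', ...) as a second field, which would also consolidate the string literals used in the schema upgrade scripts.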



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22750) Consolidate LockType naming

2020-04-09 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17079500#comment-17079500
 ] 

Hive QA commented on HIVE-22750:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
51s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
32s{color} | {color:blue} standalone-metastore/metastore-common in master has 
35 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
10s{color} | {color:blue} standalone-metastore/metastore-server in master has 
190 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
46s{color} | {color:blue} ql in master has 1527 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
27s{color} | {color:blue} hcatalog/streaming in master has 11 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
27s{color} | {color:blue} streaming in master has 2 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
41s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
21s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 45 new + 525 unchanged - 44 fixed = 570 total (was 569) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21532/dev-support/hive-personality.sh
 |
| git revision | master / 796a9c5 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21532/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21532/yetus/patch-asflicense-problems.txt
 |
| modules | C: standalone-metastore/metastore-common metastore 
standalone-metastore/metastore-server ql hcatalog/streaming streaming U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21532/yetus.txt |
| Powered by | Apache Yetushttp://yetus.apache.org |


This message was automatically generated.



> Consolidate LockType naming
> ---
>
> Key: HIVE-22750
> URL: https://issues.apache.org/jira/browse/HIVE-22750
> Project: Hive
>  Issue Type: Improvement
>  Components:

[jira] [Commented] (HIVE-23166) Guard VGB from flushing too often

2020-04-09 Thread Panagiotis Garefalakis (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17079491#comment-17079491
 ] 

Panagiotis Garefalakis commented on HIVE-23166:
---

[~rajesh.balamohan] [~gopalv] [~ashutoshc] can you please take a look?

Thanks!

> Guard VGB from flushing too often
> -
>
> Key: HIVE-23166
> URL: https://issues.apache.org/jira/browse/HIVE-23166
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Affects Versions: 4.0.0
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Major
> Attachments: HIVE-23166.01.patch
>
>
> The existing flush logic in our VectorGroupByOperator is completely static.
>  It depends on the number of hash-table entries 
> (*hive.vectorized.groupby.maxentries*) and the MAX memory threshold (by 
> default 90% of available memory).
>  
> Assuming that we are not memory constrained, the periodicity of flushing is 
> currently dictated by the static number of entries (1M by default), which can 
> also be misconfigured to a very low value.
> I am proposing, along with maxHtEntries, to also take into account current 
> memory usage, to avoid flushing too often, as that can hurt operator throughput 
> for particular workloads.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work started] (HIVE-23166) Guard VGB from flushing too often

2020-04-09 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-23166 started by Panagiotis Garefalakis.
-
> Guard VGB from flushing too often
> -
>
> Key: HIVE-23166
> URL: https://issues.apache.org/jira/browse/HIVE-23166
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Affects Versions: 4.0.0
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Major
> Attachments: HIVE-23166.01.patch
>
>
> The existing flush logic in our VectorGroupByOperator is completely static.
>  It depends on the number of hash-table entries 
> (*hive.vectorized.groupby.maxentries*) and the MAX memory threshold (by 
> default 90% of available memory).
>  
> Assuming that we are not memory constrained, the periodicity of flushing is 
> currently dictated by the static number of entries (1M by default), which can 
> also be misconfigured to a very low value.
> I am proposing, along with maxHtEntries, to also take into account current 
> memory usage, to avoid flushing too often, as that can hurt operator throughput 
> for particular workloads.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23166) Guard VGB from flushing too often

2020-04-09 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated HIVE-23166:
--
Attachment: HIVE-23166.01.patch

> Guard VGB from flushing too often
> -
>
> Key: HIVE-23166
> URL: https://issues.apache.org/jira/browse/HIVE-23166
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Affects Versions: 4.0.0
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Major
> Attachments: HIVE-23166.01.patch
>
>
> The existing flush logic in our VectorGroupByOperator is completely static.
>  It depends on the number of HtEntries 
> (*hive.vectorized.groupby.maxentries*) and the MAX memory threshold (by 
> default 90% of available memory).
>  
> Assuming that we are not memory constrained, the periodicity of flushing is 
> currently dictated by the static number of entries (1M by default), which can 
> also be misconfigured to a very low value.
> I am proposing, along with maxHtEntries, to also take into account current 
> memory usage, to avoid flushing too often, as it can hurt operator throughput 
> for particular workloads.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23166) Guard VGB from flushing too often

2020-04-09 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated HIVE-23166:
--
Status: Patch Available  (was: In Progress)

> Guard VGB from flushing too often
> -
>
> Key: HIVE-23166
> URL: https://issues.apache.org/jira/browse/HIVE-23166
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Affects Versions: 4.0.0
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Major
> Attachments: HIVE-23166.01.patch
>
>
> The existing flush logic in our VectorGroupByOperator is completely static.
>  It depends on the number of HtEntries 
> (*hive.vectorized.groupby.maxentries*) and the MAX memory threshold (by 
> default 90% of available memory).
>  
> Assuming that we are not memory constrained, the periodicity of flushing is 
> currently dictated by the static number of entries (1M by default), which can 
> also be misconfigured to a very low value.
> I am proposing, along with maxHtEntries, to also take into account current 
> memory usage, to avoid flushing too often, as it can hurt operator throughput 
> for particular workloads.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23166) Guard VGB from flushing too often

2020-04-09 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated HIVE-23166:
--
Summary: Guard VGB from flushing too often  (was: Protect VGB from flushing 
too often)

> Guard VGB from flushing too often
> -
>
> Key: HIVE-23166
> URL: https://issues.apache.org/jira/browse/HIVE-23166
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Affects Versions: 4.0.0
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Major
>
> The existing flush logic in our VectorGroupByOperator is completely static.
>  It depends on the number of HtEntries 
> (*hive.vectorized.groupby.maxentries*) and the MAX memory threshold (by 
> default 90% of available memory).
>  
> Assuming that we are not memory constrained, the periodicity of flushing is 
> currently dictated by the static number of entries (1M by default), which can 
> also be misconfigured to a very low value.
> I am proposing, along with maxHtEntries, to also take into account current 
> memory usage, to avoid flushing too often, as it can hurt operator throughput 
> for particular workloads.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

