[jira] [Commented] (HIVE-23295) Possible NPE when on getting predicate literal list when dynamic values are not available

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17092059#comment-17092059
 ] 

Hive QA commented on HIVE-23295:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13001073/HIVE-23295.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17141 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21931/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21931/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21931/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13001073 - PreCommit-HIVE-Build

> Possible NPE when on getting predicate literal list when dynamic values are 
> not available
> -
>
> Key: HIVE-23295
> URL: https://issues.apache.org/jira/browse/HIVE-23295
> Project: Hive
>  Issue Type: Bug
>  Components: storage-api
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23295.1.patch
>
>
> getLiteralList() in SearchArgumentImpl$PredicateLeafImpl returns null if 
> dynamic values are not available.
> {code:java}
> @Override
> public List getLiteralList() {
>   if (literalList != null && literalList.size() > 0 && literalList.get(0) 
> instanceof LiteralDelegate) {
> List newLiteraList = new ArrayList();
> try {
>   for (Object litertalObj : literalList) {
> Object literal = ((LiteralDelegate) litertalObj).getLiteral();
> if (literal != null) {
>   newLiteraList.add(literal);
> }
>   }
> } catch (NoDynamicValuesException err) {
>   LOG.debug("Error while retrieving literalList, returning null", err);
>   return null;
> }
> return newLiteraList;
>   }
>   return literalList;
> } {code}
>  
> There are multiple call sites where the return value is used without a null 
> check, e.g. leaf.getLiteralList().stream().
>  
> The null return was added as part of HIVE-18827 to avoid an unimportant 
> warning message when dynamic values have not been delivered yet.
>  
> [~sershe], [~jdere], I propose returning an empty list instead of null in 
> cases like this. What do you think?
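
For illustration, here is a minimal self-contained sketch of the behaviour proposed above (returning an empty list instead of null when dynamic values are unavailable). It is not the committed patch; the resolveLiterals helper is a stand-in for the logic inside getLiteralList().
{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class LiteralListSketch {
  // Stand-in for SearchArgumentImpl$PredicateLeafImpl.getLiteralList(): when the
  // literals cannot be resolved yet, return an immutable empty list so callers
  // like leaf.getLiteralList().stream() never hit an NPE.
  static List<Object> resolveLiterals(List<Object> literalList) {
    if (literalList == null) {
      return Collections.emptyList();
    }
    try {
      // ... resolving LiteralDelegate entries here may throw while dynamic
      // values have not been delivered yet ...
      return new ArrayList<>(literalList);
    } catch (RuntimeException err) {
      // previously this path returned null
      return Collections.emptyList();
    }
  }

  public static void main(String[] args) {
    System.out.println(resolveLiterals(null).size()); // prints 0 instead of throwing
  }
}
{code}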



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23295) Possible NPE when on getting predicate literal list when dynamic values are not available

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17092050#comment-17092050
 ] 

Hive QA commented on HIVE-23295:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
23s{color} | {color:blue} storage-api in master has 51 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21931/dev-support/hive-personality.sh
 |
| git revision | master / 88053b2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: storage-api U: storage-api |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21931/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Possible NPE when on getting predicate literal list when dynamic values are 
> not available
> -
>
> Key: HIVE-23295
> URL: https://issues.apache.org/jira/browse/HIVE-23295
> Project: Hive
>  Issue Type: Bug
>  Components: storage-api
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23295.1.patch
>
>
> getLiteralList() in SearchArgumentImpl$PredicateLeafImpl returns null if 
> dynamic values are not available.
> {code:java}
> @Override
> public List getLiteralList() {
>   if (literalList != null && literalList.size() > 0 && literalList.get(0) 
> instanceof LiteralDelegate) {
> List newLiteraList = new ArrayList();
> try {
>   for (Object litertalObj : literalList) {
> Object literal = ((LiteralDelegate) litertalObj).getLiteral();
> if (literal != null) {
>   newLiteraList.add(literal);
> }
>   }
> } catch (NoDynamicValuesException err) {
>   LOG.debug("Error while retrieving literalList, returning null", err);
>   return null;
> }
> return newLiteraList;
>   }
>   return literalList;
> } {code}
>  
> There are multiple call sites where the return value is used without a null 
> check, e.g. leaf.getLiteralList().stream().
>  
> The null return was added as part of HIVE-18827 to avoid an unimportant 
> warning message when dynamic values have not been delivered yet.
>  
> [~sershe], [~jdere], I propose return an empty list 

[jira] [Commented] (HIVE-23296) Setting Tez caller ID with the actual Hive user

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17092044#comment-17092044
 ] 

Hive QA commented on HIVE-23296:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13001066/HIVE-23296.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 13 failed/errored test(s), 17142 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriverMethods.testProcessSelectDatabase 
(batchId=133)
org.apache.hadoop.hive.cli.TestCliDriverMethods.testprocessInitFiles 
(batchId=133)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schq_ingest]
 (batchId=102)
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskCommunicator.testFinishableStateUpdateFailure
 (batchId=240)
org.apache.hadoop.hive.metastore.TestMetastoreVersion.testMetastoreVersion 
(batchId=175)
org.apache.hadoop.hive.metastore.TestMetastoreVersion.testVersionMatching 
(batchId=175)
org.apache.hadoop.hive.metastore.TestMetastoreVersion.testVersionMisMatch 
(batchId=175)
org.apache.hadoop.hive.ql.hooks.TestHooks.org.apache.hadoop.hive.ql.hooks.TestHooks
 (batchId=296)
org.apache.hive.service.cli.session.TestQueryDisplay.testQueryDisplay 
(batchId=212)
org.apache.hive.service.cli.session.TestSessionManagerMetrics.testAbandonedSessionMetrics
 (batchId=168)
org.apache.hive.service.cli.session.TestSessionManagerMetrics.testActiveSessionMetrics
 (batchId=168)
org.apache.hive.service.cli.session.TestSessionManagerMetrics.testActiveSessionTimeMetrics
 (batchId=168)
org.apache.hive.service.cli.session.TestSessionManagerMetrics.testOpenSessionTimeMetrics
 (batchId=168)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21930/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21930/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21930/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 13 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13001066 - PreCommit-HIVE-Build

> Setting Tez caller ID with the actual Hive user
> ---
>
> Key: HIVE-23296
> URL: https://issues.apache.org/jira/browse/HIVE-23296
> Project: Hive
>  Issue Type: Improvement
>  Components: Tez
>Reporter: Eugene Chung
>Assignee: Eugene Chung
>Priority: Major
> Attachments: HIVE-23296.01.patch, Screen Shot 2020-04-24 at 
> 17.20.34.png
>
>
> In a kerberized Hadoop environment, the submitter of a YARN job is the name 
> part of the Hive server principal, and the caller ID of the job is built from 
> the OS user of the Hive server process.
> The view and modify ACLs of the Hive server for all Tez tasks are set by 
> org.apache.hadoop.hive.ql.exec.tez.TezTask#setAccessControlsForCurrentUser(), 
> so the admin who holds the Hive server principal can see all tasks in the Tez 
> UI. But the admin can hardly tell who executed each query.
> I suggest changing the caller ID to include the actual Hive user. If the user 
> is not known, the OS user of the Hive server process is used, as it is today.
> The attached picture shows that 'Caller ID' includes 'user1', which is the 
> Kerberos user name of the actual Hive user.
> !Screen Shot 2020-04-24 at 17.20.34.png|width=683,height=29!
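
As a rough illustration of the proposal, here is a hedged sketch of how such a caller ID could be assembled. The class, method and ID format are assumptions for illustration, not the actual patch.
{code:java}
public class CallerIdSketch {
  // Prefer the authenticated Hive session user (e.g. the Kerberos short name
  // "user1"); fall back to the OS user of the HiveServer2 process, as today.
  static String buildCallerId(String hiveSessionUser, String queryId) {
    String user = (hiveSessionUser != null && !hiveSessionUser.isEmpty())
        ? hiveSessionUser
        : System.getProperty("user.name");
    return "HIVE_" + user + "_" + queryId;
  }

  public static void main(String[] args) {
    System.out.println(buildCallerId("user1", "hive_20200424_0001")); // known Hive user
    System.out.println(buildCallerId(null, "hive_20200424_0002"));    // falls back to OS user
  }
}
{code}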



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23296) Setting Tez caller ID with the actual Hive user

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17092038#comment-17092038
 ] 

Hive QA commented on HIVE-23296:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
56s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} ql: The patch generated 0 new + 98 unchanged - 1 
fixed = 98 total (was 99) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21930/dev-support/hive-personality.sh
 |
| git revision | master / 88053b2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21930/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Setting Tez caller ID with the actual Hive user
> ---
>
> Key: HIVE-23296
> URL: https://issues.apache.org/jira/browse/HIVE-23296
> Project: Hive
>  Issue Type: Improvement
>  Components: Tez
>Reporter: Eugene Chung
>Assignee: Eugene Chung
>Priority: Major
> Attachments: HIVE-23296.01.patch, Screen Shot 2020-04-24 at 
> 17.20.34.png
>
>
> In a kerberized Hadoop environment, the submitter of a YARN job is the name 
> part of the Hive server principal, and the caller ID of the job is built from 
> the OS user of the Hive server process.
> The view and modify ACLs of the Hive server for all Tez tasks are set by 
> org.apache.hadoop.hive.ql.exec.tez.TezTask#setAccessControlsForCurrentUser(), 
> so the admin who holds the Hive server principal can see all tasks in the Tez 
> UI. But the admin can hardly tell who executed each query.
> I suggest changing the caller ID to include the actual Hive user. If the user 
> is not known, the OS user of the Hive server process is used, as it is today.
> The attached picture shows that 'Caller ID' includes 'user1', which is the 
> Kerberos user name of the actual Hive user.
> !Screen Shot 2020-04-24 at 17.20.34.png|width=683,height=29!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23201) Improve logging in locking

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17092035#comment-17092035
 ] 

Hive QA commented on HIVE-23201:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13001080/HIVE-23201.10.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17141 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21929/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21929/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21929/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13001080 - PreCommit-HIVE-Build

> Improve logging in locking
> --
>
> Key: HIVE-23201
> URL: https://issues.apache.org/jira/browse/HIVE-23201
> Project: Hive
>  Issue Type: Improvement
>Reporter: Marton Bod
>Assignee: Marton Bod
>Priority: Major
> Attachments: HIVE-23201.1.patch, HIVE-23201.1.patch, 
> HIVE-23201.10.patch, HIVE-23201.2.patch, HIVE-23201.2.patch, 
> HIVE-23201.3.patch, HIVE-23201.4.patch, HIVE-23201.5.patch, 
> HIVE-23201.5.patch, HIVE-23201.5.patch, HIVE-23201.5.patch, 
> HIVE-23201.6.patch, HIVE-23201.6.patch, HIVE-23201.7.patch, 
> HIVE-23201.8.patch, HIVE-23201.8.patch, HIVE-23201.9.patch
>
>
> Currently it can be quite difficult to troubleshoot issues related to 
> locking. To understand why a particular txn gave up on acquiring a lock after 
> a while, you have to connect directly to the backend DB, because we currently 
> do not log which exact locks the txn is waiting for.
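
To make the intent concrete, here is a small self-contained sketch of the kind of log line such a change could add; LockInfo and its fields are illustrative stand-ins, not Hive's actual classes.
{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class LockWaitLoggingSketch {
  static final class LockInfo {
    final long lockId; final String db; final String table;
    LockInfo(long lockId, String db, String table) {
      this.lockId = lockId; this.db = db; this.table = table;
    }
    @Override public String toString() { return "lockid:" + lockId + " " + db + "." + table; }
  }

  // Builds the message that would go to the HiveServer2/metastore log whenever a
  // transaction has to wait, instead of forcing a manual look at the backend DB.
  static String describeWait(long txnId, List<LockInfo> blockers) {
    return "Txn " + txnId + " is waiting for locks: "
        + blockers.stream().map(LockInfo::toString).collect(Collectors.joining(", "));
  }

  public static void main(String[] args) {
    List<LockInfo> blockers =
        Arrays.asList(new LockInfo(101, "db1", "t1"), new LockInfo(102, "db1", "t2"));
    System.out.println(describeWait(42, blockers));
  }
}
{code}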



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23201) Improve logging in locking

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17092033#comment-17092033
 ] 

Hive QA commented on HIVE-23201:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
56s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
18s{color} | {color:blue} standalone-metastore/metastore-server in master has 
189 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
1s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
24s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 1 new + 501 unchanged - 14 fixed = 502 total (was 515) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
27s{color} | {color:red} standalone-metastore/metastore-server generated 1 new 
+ 188 unchanged - 1 fixed = 189 total (was 189) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:standalone-metastore/metastore-server |
|  |  org.apache.hadoop.hive.metastore.txn.TxnHandler.timeOutLocks(Connection) 
may fail to clean up java.sql.ResultSet; obligation to clean up resource 
created at TxnHandler.java:[line 4731] is not discharged |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21929/dev-support/hive-personality.sh
 |
| git revision | master / 88053b2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21929/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21929/yetus/new-findbugs-standalone-metastore_metastore-server.html
 |
| modules | C: standalone-metastore/metastore-server ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21929/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Improve logging in locking
> --
>
> Key: HIVE-23201
> URL: https://issues.apache.org/jira/browse/HIVE-23201
> Project: Hive
>  Issue Type: Improvement
>Reporter: Marton Bod
>Assignee: Marton Bod
>Priority: Major
> Attachments: HIVE-23201.1.

[jira] [Updated] (HIVE-23230) "get_splits" udf ignores limit constraint while creating splits

2020-04-24 Thread Adesh Kumar Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adesh Kumar Rao updated HIVE-23230:
---
Attachment: HIVE-23230.4.patch

> "get_splits" udf ignores limit constraint while creating splits
> ---
>
> Key: HIVE-23230
> URL: https://issues.apache.org/jira/browse/HIVE-23230
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.1.0
>Reporter: Adesh Kumar Rao
>Assignee: Adesh Kumar Rao
>Priority: Major
> Attachments: HIVE-23230.1.patch, HIVE-23230.2.patch, 
> HIVE-23230.3.patch, HIVE-23230.4.patch, HIVE-23230.patch
>
>
> Issue: Running the query {noformat}select * from <table> limit n{noformat} 
> from Spark via the Hive Warehouse Connector may return more rows than "n".
> This happens because the "get_splits" udf creates splits while ignoring the 
> limit constraint. These splits, when submitted to multiple LLAP daemons, will 
> each return "n" rows.
> How to reproduce: needs spark-shell, the hive-warehouse-connector and Hive on 
> LLAP with more than one LLAP daemon running.
> Run the commands below via beeline to create and populate the table:
>  
> {noformat}
> create table test (id int);
> insert into table test values (1);
> insert into table test values (2);
> insert into table test values (3);
> insert into table test values (4);
> insert into table test values (5);
> insert into table test values (6);
> insert into table test values (7);
> delete from test where id = 7;{noformat}
> Now, running the query below via spark-shell
> {noformat}
> import com.hortonworks.hwc.HiveWarehouseSession 
> val hive = HiveWarehouseSession.session(spark).build() 
> hive.executeQuery("select * from test limit 1").show()
> {noformat}
> will return more than one row.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23230) "get_splits" udf ignores limit constraint while creating splits

2020-04-24 Thread Adesh Kumar Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adesh Kumar Rao updated HIVE-23230:
---
Status: Patch Available  (was: In Progress)

> "get_splits" udf ignores limit constraint while creating splits
> ---
>
> Key: HIVE-23230
> URL: https://issues.apache.org/jira/browse/HIVE-23230
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.1.0
>Reporter: Adesh Kumar Rao
>Assignee: Adesh Kumar Rao
>Priority: Major
> Attachments: HIVE-23230.1.patch, HIVE-23230.2.patch, 
> HIVE-23230.3.patch, HIVE-23230.4.patch, HIVE-23230.patch
>
>
> Issue: Running the query {noformat}select * from <table> limit n{noformat} 
> from Spark via the Hive Warehouse Connector may return more rows than "n".
> This happens because the "get_splits" udf creates splits while ignoring the 
> limit constraint. These splits, when submitted to multiple LLAP daemons, will 
> each return "n" rows.
> How to reproduce: needs spark-shell, the hive-warehouse-connector and Hive on 
> LLAP with more than one LLAP daemon running.
> Run the commands below via beeline to create and populate the table:
>  
> {noformat}
> create table test (id int);
> insert into table test values (1);
> insert into table test values (2);
> insert into table test values (3);
> insert into table test values (4);
> insert into table test values (5);
> insert into table test values (6);
> insert into table test values (7);
> delete from test where id = 7;{noformat}
> Now, running the query below via spark-shell
> {noformat}
> import com.hortonworks.hwc.HiveWarehouseSession 
> val hive = HiveWarehouseSession.session(spark).build() 
> hive.executeQuery("select * from test limit 1").show()
> {noformat}
> will return more than one row.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23230) "get_splits" udf ignores limit constraint while creating splits

2020-04-24 Thread Adesh Kumar Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adesh Kumar Rao updated HIVE-23230:
---
Status: In Progress  (was: Patch Available)

> "get_splits" udf ignores limit constraint while creating splits
> ---
>
> Key: HIVE-23230
> URL: https://issues.apache.org/jira/browse/HIVE-23230
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.1.0
>Reporter: Adesh Kumar Rao
>Assignee: Adesh Kumar Rao
>Priority: Major
> Attachments: HIVE-23230.1.patch, HIVE-23230.2.patch, 
> HIVE-23230.3.patch, HIVE-23230.patch
>
>
> Issue: Running the query {noformat}select * from <table> limit n{noformat} 
> from Spark via the Hive Warehouse Connector may return more rows than "n".
> This happens because the "get_splits" udf creates splits while ignoring the 
> limit constraint. These splits, when submitted to multiple LLAP daemons, will 
> each return "n" rows.
> How to reproduce: needs spark-shell, the hive-warehouse-connector and Hive on 
> LLAP with more than one LLAP daemon running.
> Run the commands below via beeline to create and populate the table:
>  
> {noformat}
> create table test (id int);
> insert into table test values (1);
> insert into table test values (2);
> insert into table test values (3);
> insert into table test values (4);
> insert into table test values (5);
> insert into table test values (6);
> insert into table test values (7);
> delete from test where id = 7;{noformat}
> Now, running the query below via spark-shell
> {noformat}
> import com.hortonworks.hwc.HiveWarehouseSession 
> val hive = HiveWarehouseSession.session(spark).build() 
> hive.executeQuery("select * from test limit 1").show()
> {noformat}
> will return more than one row.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23283) Generate random temp ID for lock enqueue and commitTxn

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17092027#comment-17092027
 ] 

Hive QA commented on HIVE-23283:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13001063/HIVE-23283.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17141 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21928/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21928/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21928/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13001063 - PreCommit-HIVE-Build

> Generate random temp ID for lock enqueue and commitTxn
> --
>
> Key: HIVE-23283
> URL: https://issues.apache.org/jira/browse/HIVE-23283
> Project: Hive
>  Issue Type: Improvement
>Reporter: Marton Bod
>Assignee: Marton Bod
>Priority: Major
> Attachments: HIVE-23283.1.patch
>
>
> In order to optimize the S4U scope of enqueue lock and commitTxn, a hardcoded 
> constant (-1) is currently used to first insert all the lock and ws entries 
> with a temporary lockID/commitID. However, in a concurrent environment this 
> seems to cause some performance degradation (and deadlock issues with some 
> RDBMSs), as multiple concurrent transactions try to insert rows with the same 
> primary key (e.g. (-1, 1), (-1, 2), (-1, 3), etc. for (extID/intID) in 
> HIVE_LOCKS). The proposed solution is to replace the constant with a randomly 
> generated negative number, which seems to resolve the issue.
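
A minimal sketch of the idea, assuming the only requirement is a per-transaction negative value that cannot collide with real lock IDs; this is an illustration, not the actual patch.
{code:java}
import java.util.concurrent.ThreadLocalRandom;

public class TempLockIdSketch {
  // Instead of every transaction inserting rows keyed on the shared constant -1,
  // each one draws its own random negative temporary ID, so concurrent inserts
  // no longer contend on the same (temp extID, intID) primary keys.
  static long randomTempId() {
    return -ThreadLocalRandom.current().nextLong(1, Long.MAX_VALUE);
  }

  public static void main(String[] args) {
    System.out.println("temporary lock id: " + randomTempId());
  }
}
{code}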



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23283) Generate random temp ID for lock enqueue and commitTxn

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17092021#comment-17092021
 ] 

Hive QA commented on HIVE-23283:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
20s{color} | {color:blue} standalone-metastore/metastore-server in master has 
189 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
24s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 3 new + 491 unchanged - 3 fixed = 494 total (was 494) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21928/dev-support/hive-personality.sh
 |
| git revision | master / 88053b2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21928/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt
 |
| modules | C: standalone-metastore/metastore-server U: 
standalone-metastore/metastore-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21928/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Generate random temp ID for lock enqueue and commitTxn
> --
>
> Key: HIVE-23283
> URL: https://issues.apache.org/jira/browse/HIVE-23283
> Project: Hive
>  Issue Type: Improvement
>Reporter: Marton Bod
>Assignee: Marton Bod
>Priority: Major
> Attachments: HIVE-23283.1.patch
>
>
> In order to optimize the S4U scope of enqueue lock and commitTxn, a hardcoded 
> constant (-1) is currently used to first insert all the lock and ws entries 
> with a temporary lockID/commitID. However, in a concurrent environment this 
> seems to cause some performance degradation (and deadlock issues with some 
> RDBMSs), as multiple concurrent transactions try to insert rows with the same 
> primary key (e.g. (-1, 1), (-1, 2), (-1, 3), etc. for (extID/intID) in 
> HIVE_LOCKS). The proposed solution is to replace the constant with a randomly 
> generated negative number, which seems to resolve the issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23294) Remove sync bottleneck in TezConfigurationFactory

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17092017#comment-17092017
 ] 

Hive QA commented on HIVE-23294:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13001057/HIVE-23294.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 17141 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.TestTxnCommandsWithSplitUpdateAndVectorization.testMergeOnTezEdges
 (batchId=275)
org.apache.hadoop.hive.ql.parse.TestScheduledReplicationScenarios.testAcidTablesReplLoadBootstrapIncr
 (batchId=206)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21927/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21927/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21927/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13001057 - PreCommit-HIVE-Build

> Remove sync bottleneck in TezConfigurationFactory
> -
>
> Key: HIVE-23294
> URL: https://issues.apache.org/jira/browse/HIVE-23294
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Major
> Attachments: HIVE-23294.1.patch, Screenshot 2020-04-24 at 1.53.20 
> PM.png
>
>
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezConfigurationFactory.java#L53]
> [https://github.com/apache/hadoop/blob/master/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L1628]
> It ends up taking a lock when accessing property names in the config. For 
> short-running queries executed concurrently, this becomes a bottleneck.
>  
> !Screenshot 2020-04-24 at 1.53.20 PM.png|width=1086,height=459!
>  
>  
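
As a generic illustration of the contention pattern (not the actual TezConfigurationFactory change): repeatedly reading from a synchronized store serializes concurrent callers, whereas taking a single snapshot per request keeps the rest of the work lock-free. The Hashtable below merely stands in for the shared configuration properties.
{code:java}
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class ConfigSnapshotSketch {
  // Shared, synchronized store standing in for the global configuration properties.
  private static final Hashtable<String, String> SHARED = new Hashtable<>();
  static {
    SHARED.put("tez.queue.name", "default");
    SHARED.put("hive.execution.engine", "tez");
  }

  // One synchronized copy per request, then lock-free reads on the local snapshot.
  static Map<String, String> snapshot() {
    return new HashMap<>(SHARED);
  }

  public static void main(String[] args) {
    Map<String, String> local = snapshot();
    System.out.println(local.get("tez.queue.name"));
  }
}
{code}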



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23207) Create integration tests for TxnManager for different rdbms metastores

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17092015#comment-17092015
 ] 

Hive QA commented on HIVE-23207:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
47s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
20s{color} | {color:blue} standalone-metastore/metastore-server in master has 
189 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
3s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
27s{color} | {color:blue} contrib in master has 11 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
22s{color} | {color:blue} itests/qtest-druid in master has 7 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
52s{color} | {color:blue} itests/util in master has 53 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 10m 
34s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} standalone-metastore/metastore-server: The patch 
generated 0 new + 505 unchanged - 12 fixed = 505 total (was 517) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} The patch ql passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} The patch contrib passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
24s{color} | {color:green} root: The patch generated 0 new + 650 unchanged - 12 
fixed = 650 total (was 662) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} The patch hive-blobstore passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} The patch qtest-accumulo passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} The patch qtest-druid passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} The patch qtest-kudu passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} The patch util passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
7s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
17s{color} | {color:red} patch/standalone-metastore/metastore-server cannot run 
setBugDatabaseInfo from findbugs {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  6m 
21s{color} | {color:red} patch/ql cannot run setBugDatabaseInfo from findbugs 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
43s{color} | {col

[jira] [Commented] (HIVE-23294) Remove sync bottleneck in TezConfigurationFactory

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17092013#comment-17092013
 ] 

Hive QA commented on HIVE-23294:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
10s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  5m 
44s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
15s{color} | {color:red} ql: The patch generated 2 new + 4 unchanged - 0 fixed 
= 6 total (was 4) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21927/dev-support/hive-personality.sh
 |
| git revision | master / 88053b2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21927/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21927/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Remove sync bottleneck in TezConfigurationFactory
> -
>
> Key: HIVE-23294
> URL: https://issues.apache.org/jira/browse/HIVE-23294
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Major
> Attachments: HIVE-23294.1.patch, Screenshot 2020-04-24 at 1.53.20 
> PM.png
>
>
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezConfigurationFactory.java#L53]
> [https://github.com/apache/hadoop/blob/master/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java#L1628]
> It ends up taking a lock when accessing property names in the config. For 
> short-running queries executed concurrently, this becomes a bottleneck.
>  
> !Screenshot 2020-04-24 at 1.53.20 PM.png|width=1086,height=459!
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23207) Create integration tests for TxnManager for different rdbms metastores

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091992#comment-17091992
 ] 

Hive QA commented on HIVE-23207:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13001047/HIVE-23207.9.patch

{color:green}SUCCESS:{color} +1 due to 9 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17141 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schq_ingest]
 (batchId=102)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21926/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21926/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21926/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13001047 - PreCommit-HIVE-Build

> Create integration tests for TxnManager for different rdbms metastores
> --
>
> Key: HIVE-23207
> URL: https://issues.apache.org/jira/browse/HIVE-23207
> Project: Hive
>  Issue Type: Improvement
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Minor
> Attachments: HIVE-23207.1.patch, HIVE-23207.2.patch, 
> HIVE-23207.3.patch, HIVE-23207.4.patch, HIVE-23207.5.patch, 
> HIVE-23207.6.patch, HIVE-23207.7.patch, HIVE-23207.8.patch, HIVE-23207.9.patch
>
>
> Create an integration test suite that runs tests for TxnManager with the 
> metastore configured to use different kinds of RDBMSs. Use the different 
> DatabaseRule-s defined in the standalone-metastore for docker environments, 
> and use the real init schema for every database type instead of the hardwired 
> TxnDbUtil.prepDb.
> This test suite will be useful for easy manual validation of schema changes.
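
A hedged sketch of the overall test-suite shape (one parameterized JUnit class run against several backends); the backend list and the commented startDockerizedMetastoreDb() helper are illustrative placeholders rather than the actual DatabaseRule API.
{code:java}
import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class TestTxnManagerOnRdbmsSketch {
  @Parameters(name = "{0}")
  public static Collection<Object[]> backends() {
    return Arrays.asList(new Object[][] {{"postgres"}, {"mysql"}, {"oracle"}, {"mssql"}});
  }

  private final String backend;

  public TestTxnManagerOnRdbmsSketch(String backend) {
    this.backend = backend;
  }

  @Test
  public void lockAcquisitionWorks() {
    // startDockerizedMetastoreDb(backend);  // hypothetical: start the container and
    //                                       // apply the real init schema for 'backend'
    // ... open a txn, acquire locks via TxnManager, assert the expected lock states ...
  }
}
{code}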



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23286) The clean-up in case of an aborted FileSinkOperator is not correct for ACID direct insert

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091978#comment-17091978
 ] 

Hive QA commented on HIVE-23286:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13001046/HIVE-23286.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17141 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21925/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21925/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21925/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13001046 - PreCommit-HIVE-Build

> The clean-up in case of an aborted FileSinkOperator is not correct for ACID 
> direct insert
> -
>
> Key: HIVE-23286
> URL: https://issues.apache.org/jira/browse/HIVE-23286
> Project: Hive
>  Issue Type: Bug
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23286.1.patch, HIVE-23286.1.patch
>
>
> In FileSinkOperator there is a code path when the operator is aborted:
> {noformat}
> } else {
>   // Will come here if an Exception was thrown in map() or reduce().
>   // Hadoop always call close() even if an Exception was thrown in map() 
> or
>   // reduce().
>   for (FSPaths fsp : valToPaths.values()) {
> fsp.abortWritersAndUpdaters(fs, abort, !autoDelete && isNativeTable() 
> && !conf.isMmTable());
>   }
> {noformat}
> In this part, the fsp.abortWritersAndUpdaters method call should consider the 
> conf.isDirectInsert parameter as well. Since this parameter is missing, the 
> method can delete the content of the table if an insert failure aborts the 
> FileSinkOperator while ACID direct insert is turned on.
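
One plausible shape of the decision described above, reduced to a self-contained helper (illustrative only, not necessarily the committed patch): the delete-on-abort flag should also be false for direct-insert tables, mirroring how MM tables are already excluded.
{code:java}
public class AbortCleanupSketch {
  // Returns whether the abort path may delete files for this FileSink target.
  static boolean deleteOnAbort(boolean autoDelete, boolean nativeTable,
                               boolean mmTable, boolean directInsert) {
    return !autoDelete && nativeTable && !mmTable && !directInsert;
  }

  public static void main(String[] args) {
    // With ACID direct insert turned on, the abort path must not delete table contents.
    System.out.println(deleteOnAbort(false, true, false, true));  // false
    System.out.println(deleteOnAbort(false, true, false, false)); // true
  }
}
{code}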



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23286) The clean-up in case of an aborted FileSinkOperator is not correct for ACID direct insert

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091968#comment-17091968
 ] 

Hive QA commented on HIVE-23286:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
59s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21925/dev-support/hive-personality.sh
 |
| git revision | master / 88053b2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21925/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> The clean-up in case of an aborted FileSinkOperator is not correct for ACID 
> direct insert
> -
>
> Key: HIVE-23286
> URL: https://issues.apache.org/jira/browse/HIVE-23286
> Project: Hive
>  Issue Type: Bug
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23286.1.patch, HIVE-23286.1.patch
>
>
> In FileSinkOperator there is a code path when the operator is aborted:
> {noformat}
> } else {
>   // Will come here if an Exception was thrown in map() or reduce().
>   // Hadoop always call close() even if an Exception was thrown in map() 
> or
>   // reduce().
>   for (FSPaths fsp : valToPaths.values()) {
> fsp.abortWritersAndUpdaters(fs, abort, !autoDelete && isNativeTable() 
> && !conf.isMmTable());
>   }
> {noformat}
> In this part, the fsp.abortWritersAndUpdaters method call should consider the 
> conf.isDirectInsert parameter as well. Since this parameter is missing, this 
> method can delete the content of the table if an insert failure aborts the 
> FileSinkOperator and ACID direct insert is turned on.
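>
> A minimal sketch of one possible shape of the fix, not the actual patch: the exact
> abort condition and the conf.isDirectInsert() accessor are assumptions based on the
> description above.
> {code:java}
> // Only let abortWritersAndUpdaters delete files itself when neither the MM table
> // path nor the ACID direct-insert path is active; with direct insert the files are
> // written under the final table/partition location, so deleting them on abort
> // could remove data that belongs to the table.
> for (FSPaths fsp : valToPaths.values()) {
>   boolean deleteFiles = !autoDelete && isNativeTable()
>       && !conf.isMmTable() && !conf.isDirectInsert();
>   fsp.abortWritersAndUpdaters(fs, abort, deleteFiles);
> }
> {code}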



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23240) loadDynamicPartition complains about static partitions even when they are provided in the description

2020-04-24 Thread Reza Safi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reza Safi updated HIVE-23240:
-
Attachment: HIVE-23240.2.patch

> loadDynamicPartition complains about static partitions even when they are 
> provided in the description 
> --
>
> Key: HIVE-23240
> URL: https://issues.apache.org/jira/browse/HIVE-23240
> Project: Hive
>  Issue Type: Bug
>Reporter: Reza Safi
>Priority: Minor
> Attachments: HIVE-23240.2.patch, HIVE-23240.patch
>
>
> Hive is computing valid dynamic partitions here:
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java#L2853
> However, it later uses the specification provided by the client here:
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java#L2879
> (partSpec is exactly what the client has provided, and partSpec.keySet() contains 
> both static and dynamic partition keys)
> As a result, the makeSpecFromName method here will expect both static and dynamic 
> partition keys in requiredKeys:
> https://github.com/apache/hive/blob/master/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/Warehouse.java#L580
> However, the curPath passed to that method looks like 
> "somePath/dynamicPart=value", which is missing the static partitions. As a result, 
> the method ignores the static partition keys and logs a warning that they are 
> missing. It then returns false to Hive.java, which issues a log warning that 
> "dynamicPart=value" is an invalid partition, even though the dynamic partition 
> has already been validated before:
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java#L2880
>  
> This can cause silent data corruption in some clients. For example, Spark 
> suffers from this when working with the Hive metastore on the master branch.
> It seems that if the goal was just to warn the client, there is no need to 
> ignore the valid dynamic partition.
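>
> One way to visualize the intended distinction, as a hypothetical helper rather than
> code from any patch; it assumes partSpec maps dynamic partition keys to null or
> empty values, as in the loadDynamicPartition call path:
> {code:java}
> import java.util.LinkedHashMap;
> import java.util.Map;
>
> public class DynamicPartSpecFilter {
>   /**
>    * Keep only the dynamic partition keys (those without a static value), so that a
>    * path like "somePath/dynamicPart=value" is matched against exactly the keys it
>    * contains instead of also requiring the static partition keys.
>    */
>   public static Map<String, String> dynamicOnly(Map<String, String> partSpec) {
>     Map<String, String> dynamic = new LinkedHashMap<>();
>     for (Map.Entry<String, String> e : partSpec.entrySet()) {
>       if (e.getValue() == null || e.getValue().isEmpty()) {
>         dynamic.put(e.getKey(), e.getValue());
>       }
>     }
>     return dynamic;
>   }
> }
> {code}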



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-19064) Add mode to support delimited identifiers enclosed within double quotation

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-19064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091961#comment-17091961
 ] 

Hive QA commented on HIVE-19064:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
12s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
49s{color} | {color:blue} standalone-metastore/metastore-common in master has 
35 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
35s{color} | {color:blue} common in master has 63 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m  
2s{color} | {color:blue} parser in master has 3 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
22s{color} | {color:blue} standalone-metastore/metastore-server in master has 
189 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
59s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
54s{color} | {color:blue} itests/util in master has 53 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
28s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} standalone-metastore/metastore-common: The patch 
generated 0 new + 109 unchanged - 2 fixed = 109 total (was 111) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} common: The patch generated 0 new + 377 unchanged - 
1 fixed = 377 total (was 378) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} The patch parser passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} The patch metastore-server passed checkstyle {color} 
|
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
46s{color} | {color:red} ql: The patch generated 1 new + 429 unchanged - 10 
fixed = 430 total (was 439) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} The patch util passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
56s{color} | {color:green} metastore-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
19s{color} | {color:green} parser in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
28s{color} | {color:green} metastore-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
16s{color} | {color:green} 

[jira] [Commented] (HIVE-19064) Add mode to support delimited identifiers enclosed within double quotation

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-19064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091955#comment-17091955
 ] 

Hive QA commented on HIVE-19064:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13001043/HIVE-19064.7.patch

{color:green}SUCCESS:{color} +1 due to 11 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 17151 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[quotedid_basic_standard] 
(batchId=15)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schq_materialized]
 (batchId=99)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[special_character_in_tabnames_quotes_1]
 (batchId=70)
org.apache.hadoop.hive.ql.parse.TestScheduledReplicationScenarios.testAcidTablesReplLoadBootstrapIncr
 (batchId=206)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21924/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21924/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21924/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13001043 - PreCommit-HIVE-Build

> Add mode to support delimited identifiers enclosed within double quotation
> --
>
> Key: HIVE-19064
> URL: https://issues.apache.org/jira/browse/HIVE-19064
> Project: Hive
>  Issue Type: Improvement
>  Components: Parser, SQL
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Krisztian Kasa
>Priority: Major
> Attachments: HIVE-19064.01.patch, HIVE-19064.02.patch, 
> HIVE-19064.03.patch, HIVE-19064.4.patch, HIVE-19064.5.patch, 
> HIVE-19064.6.patch, HIVE-19064.7.patch, HIVE-19064.7.patch
>
>
> As per the SQL standard. Hive currently uses `` (backticks). The default will 
> continue to be backticks, but we will support identifiers within double 
> quotation marks via a configuration parameter.
> This issue will also extend support for arbitrary char sequences, e.g., 
> containing {{~ ! @ # $ % ^ & * () , < >}}, in database and table names. 
> Currently, special characters are only supported for column names.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23216) Add new api as replacement of get_partitions_by_expr to return PartitionSpec instead of Partitions

2020-04-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23216?focusedWorklogId=427119&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-427119
 ]

ASF GitHub Bot logged work on HIVE-23216:
-

Author: ASF GitHub Bot
Created on: 24/Apr/20 22:37
Start Date: 24/Apr/20 22:37
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on a change in pull request #990:
URL: https://github.com/apache/hive/pull/990#discussion_r414900828



##
File path: 
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
##
@@ -1941,6 +1941,45 @@ public boolean listPartitionsByExpr(String catName, 
String db_name, String tbl_n
 return !r.isSetHasUnknownPartitions() || r.isHasUnknownPartitions(); // 
Assume the worst.
   }
 
+  @Override
+  public boolean listPartitionsSpecByExpr(String dbName, String tblName,
+  byte[] expr, String defaultPartName, short maxParts, List<PartitionSpec> result)
+  throws TException {
+return listPartitionsSpecByExpr(getDefaultCatalog(conf), dbName, tblName, 
expr, defaultPartName,
+maxParts, result);
+  }
+
+  @Override
+  public boolean listPartitionsSpecByExpr(String catName, String dbName, 
String tblName, byte[] expr,
+  String defaultPartitionName, short maxParts, List<PartitionSpec> result)
+  throws TException {
+assert result != null;
+PartitionsByExprRequest req = new PartitionsByExprRequest(
+dbName, tblName, ByteBuffer.wrap(expr));
+if (defaultPartitionName != null) {
+  req.setDefaultPartitionName(defaultPartitionName);
+}
+if (maxParts >= 0) {
+  req.setMaxParts(maxParts);
+}
+PartitionsSpecByExprResult r;
+try {
+  r = client.get_partitions_spec_by_expr(req);
+} catch (TApplicationException te) {
+  if (te.getType() != TApplicationException.UNKNOWN_METHOD
+  && te.getType() != TApplicationException.WRONG_METHOD_NAME) {
+throw te;
+  }
+  throw new IncompatibleMetastoreException(
+  "Metastore doesn't support listPartitionsByExpr: " + 
te.getMessage());
+}
+
+//TODO: filtering if client side filtering isClientFilterEnabled on

Review comment:
   My understanding is that those hooks are used for authorization purposes, 
so it is important that we include a new method in `MetaStoreFilterHook` and 
that these partitions pass through that filter; indeed, the default implementation 
does not filter anything.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 427119)
Time Spent: 50m  (was: 40m)

> Add new api as replacement of get_partitions_by_expr to return PartitionSpec 
> instead of Partitions
> --
>
> Key: HIVE-23216
> URL: https://issues.apache.org/jira/browse/HIVE-23216
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-23216.1.patch, HIVE-23216.2.patch, 
> HIVE-23216.3.patch, HIVE-23216.4.patch, HIVE-23216.5.patch, 
> HIVE-23216.6.patch, HIVE-23216.7.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23216) Add new api as replacement of get_partitions_by_expr to return PartitionSpec instead of Partitions

2020-04-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23216?focusedWorklogId=427116&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-427116
 ]

ASF GitHub Bot logged work on HIVE-23216:
-

Author: ASF GitHub Bot
Created on: 24/Apr/20 22:06
Start Date: 24/Apr/20 22:06
Worklog Time Spent: 10m 
  Work Description: vineetgarg02 commented on a change in pull request #990:
URL: https://github.com/apache/hive/pull/990#discussion_r414889589



##
File path: ql/src/test/results/clientpositive/partition_wise_fileformat2.q.out
##
@@ -101,56 +101,6 @@ POSTHOOK: Input: default@partition_test_partitioned@dt=100
 POSTHOOK: Input: default@partition_test_partitioned@dt=101
 POSTHOOK: Input: default@partition_test_partitioned@dt=102
  A masked pattern was here 
-238val_238 100

Review comment:
   This is already fixed by Miklos in another jira.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 427116)
Time Spent: 40m  (was: 0.5h)

> Add new api as replacement of get_partitions_by_expr to return PartitionSpec 
> instead of Partitions
> --
>
> Key: HIVE-23216
> URL: https://issues.apache.org/jira/browse/HIVE-23216
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-23216.1.patch, HIVE-23216.2.patch, 
> HIVE-23216.3.patch, HIVE-23216.4.patch, HIVE-23216.5.patch, 
> HIVE-23216.6.patch, HIVE-23216.7.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23216) Add new api as replacement of get_partitions_by_expr to return PartitionSpec instead of Partitions

2020-04-24 Thread Vineet Garg (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-23216:
---
Attachment: HIVE-23216.7.patch

> Add new api as replacement of get_partitions_by_expr to return PartitionSpec 
> instead of Partitions
> --
>
> Key: HIVE-23216
> URL: https://issues.apache.org/jira/browse/HIVE-23216
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-23216.1.patch, HIVE-23216.2.patch, 
> HIVE-23216.3.patch, HIVE-23216.4.patch, HIVE-23216.5.patch, 
> HIVE-23216.6.patch, HIVE-23216.7.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23216) Add new api as replacement of get_partitions_by_expr to return PartitionSpec instead of Partitions

2020-04-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23216?focusedWorklogId=427115&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-427115
 ]

ASF GitHub Bot logged work on HIVE-23216:
-

Author: ASF GitHub Bot
Created on: 24/Apr/20 22:05
Start Date: 24/Apr/20 22:05
Worklog Time Spent: 10m 
  Work Description: vineetgarg02 commented on a change in pull request #990:
URL: https://github.com/apache/hive/pull/990#discussion_r414889422



##
File path: 
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
##
@@ -1941,6 +1941,45 @@ public boolean listPartitionsByExpr(String catName, 
String db_name, String tbl_n
 return !r.isSetHasUnknownPartitions() || r.isHasUnknownPartitions(); // 
Assume the worst.
   }
 
+  @Override
+  public boolean listPartitionsSpecByExpr(String dbName, String tblName,
+  byte[] expr, String defaultPartName, short maxParts, List<PartitionSpec> result)
+  throws TException {
+return listPartitionsSpecByExpr(getDefaultCatalog(conf), dbName, tblName, 
expr, defaultPartName,
+maxParts, result);
+  }
+
+  @Override
+  public boolean listPartitionsSpecByExpr(String catName, String dbName, 
String tblName, byte[] expr,
+  String defaultPartitionName, short maxParts, List<PartitionSpec> result)
+  throws TException {
+assert result != null;
+PartitionsByExprRequest req = new PartitionsByExprRequest(
+dbName, tblName, ByteBuffer.wrap(expr));
+if (defaultPartitionName != null) {
+  req.setDefaultPartitionName(defaultPartitionName);
+}
+if (maxParts >= 0) {
+  req.setMaxParts(maxParts);
+}
+PartitionsSpecByExprResult r;
+try {
+  r = client.get_partitions_spec_by_expr(req);
+} catch (TApplicationException te) {
+  if (te.getType() != TApplicationException.UNKNOWN_METHOD
+  && te.getType() != TApplicationException.WRONG_METHOD_NAME) {
+throw te;
+  }
+  throw new IncompatibleMetastoreException(
+  "Metastore doesn't support listPartitionsByExpr: " + 
te.getMessage());
+}
+
+//TODO: filtering if client side filtering isClientFilterEnabled on

Review comment:
   Yes, the metastore client provides the ability to plug in a filter hook 
(configured using metastore.filter.hook), which can be used to filter partitions 
on the client side. Currently the default implementation doesn't filter anything, 
though, so I decided to leave it as is, since it would require an API for 
PartitionSpec instead of Partition.
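
For illustration only, a pass-through filter of the kind discussed above might look like
this; the method name filterPartitionSpecs is hypothetical and is not part of the
existing MetaStoreFilterHook interface, it only shows the "filter nothing by default"
behaviour:
{code:java}
import java.util.List;

import org.apache.hadoop.hive.metastore.api.PartitionSpec;

public class NoOpPartitionSpecFilter {
  /**
   * Default behaviour sketched above: return the partition specs unchanged,
   * i.e. no client-side filtering is applied.
   */
  public List<PartitionSpec> filterPartitionSpecs(List<PartitionSpec> partitionSpecs) {
    return partitionSpecs;
  }
}
{code}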





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 427115)
Time Spent: 0.5h  (was: 20m)

> Add new api as replacement of get_partitions_by_expr to return PartitionSpec 
> instead of Partitions
> --
>
> Key: HIVE-23216
> URL: https://issues.apache.org/jira/browse/HIVE-23216
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-23216.1.patch, HIVE-23216.2.patch, 
> HIVE-23216.3.patch, HIVE-23216.4.patch, HIVE-23216.5.patch, 
> HIVE-23216.6.patch, HIVE-23216.7.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23216) Add new api as replacement of get_partitions_by_expr to return PartitionSpec instead of Partitions

2020-04-24 Thread Vineet Garg (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-23216:
---
Status: Open  (was: Patch Available)

> Add new api as replacement of get_partitions_by_expr to return PartitionSpec 
> instead of Partitions
> --
>
> Key: HIVE-23216
> URL: https://issues.apache.org/jira/browse/HIVE-23216
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-23216.1.patch, HIVE-23216.2.patch, 
> HIVE-23216.3.patch, HIVE-23216.4.patch, HIVE-23216.5.patch, 
> HIVE-23216.6.patch, HIVE-23216.7.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23216) Add new api as replacement of get_partitions_by_expr to return PartitionSpec instead of Partitions

2020-04-24 Thread Vineet Garg (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-23216:
---
Status: Patch Available  (was: Open)

> Add new api as replacement of get_partitions_by_expr to return PartitionSpec 
> instead of Partitions
> --
>
> Key: HIVE-23216
> URL: https://issues.apache.org/jira/browse/HIVE-23216
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-23216.1.patch, HIVE-23216.2.patch, 
> HIVE-23216.3.patch, HIVE-23216.4.patch, HIVE-23216.5.patch, 
> HIVE-23216.6.patch, HIVE-23216.7.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23040) Checkpointing for repl dump incremental phase

2020-04-24 Thread PRAVIN KUMAR SINHA (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PRAVIN KUMAR SINHA updated HIVE-23040:
--
Attachment: HIVE-23040.07.patch

> Checkpointing for repl dump incremental phase
> -
>
> Key: HIVE-23040
> URL: https://issues.apache.org/jira/browse/HIVE-23040
> Project: Hive
>  Issue Type: Improvement
>Reporter: Aasha Medhi
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23040.01.patch, HIVE-23040.02.patch, 
> HIVE-23040.03.patch, HIVE-23040.04.patch, HIVE-23040.05.patch, 
> HIVE-23040.06.patch, HIVE-23040.06.patch, HIVE-23040.07.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23040) Checkpointing for repl dump incremental phase

2020-04-24 Thread PRAVIN KUMAR SINHA (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PRAVIN KUMAR SINHA updated HIVE-23040:
--
Attachment: (was: HIVE-23040.07.patch)

> Checkpointing for repl dump incremental phase
> -
>
> Key: HIVE-23040
> URL: https://issues.apache.org/jira/browse/HIVE-23040
> Project: Hive
>  Issue Type: Improvement
>Reporter: Aasha Medhi
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23040.01.patch, HIVE-23040.02.patch, 
> HIVE-23040.03.patch, HIVE-23040.04.patch, HIVE-23040.05.patch, 
> HIVE-23040.06.patch, HIVE-23040.06.patch, HIVE-23040.07.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23089) Add constraint checks to CBO plan

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091918#comment-17091918
 ] 

Hive QA commented on HIVE-23089:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13001041/HIVE-23089.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 17141 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[default_constraint]
 (batchId=86)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[enforce_constraint_notnull]
 (batchId=73)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21923/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21923/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21923/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13001041 - PreCommit-HIVE-Build

> Add constraint checks to CBO plan
> -
>
> Key: HIVE-23089
> URL: https://issues.apache.org/jira/browse/HIVE-23089
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
> Attachments: HIVE-23089.1.patch, HIVE-23089.2.patch, 
> HIVE-23089.3.patch
>
>
> {code}
> create table acid_uami(i int,
>  de decimal(5,2) constraint nn1 not null enforced,
>  vc varchar(128) constraint nn2 not null enforced) clustered 
> by (i) into 2 buckets stored as orc TBLPROPERTIES ('transactional'='true');
> explain
> update acid_uami set de=null where i=1;
> {code}
> Non-CBO path:
> {code:java}
> Map Operator Tree:
> TableScan
> alias: acid_uami
> filterExpr: ((i = 1) and enforce_constraint(vc is not null)) 
> (type: boolean)
> Statistics: Num rows: 1 Data size: 216 Basic stats: COMPLETE 
> Column stats: NONE
> Filter Operator
>   predicate: ((i = 1) and enforce_constraint(vc is not null)) 
> (type: boolean)
> {code}
> CBO path:
> {code:java}
> Map Reduce
>   Map Operator Tree:
>   TableScan
> alias: acid_uami
> filterExpr: (i = 1) (type: boolean)
> Statistics: Num rows: 1 Data size: 216 Basic stats: COMPLETE 
> Column stats: NONE
> Filter Operator
>   predicate: (i = 1) (type: boolean)
> ...
>   Reduce Operator Tree:
> ...
>  Filter Operator
> predicate: enforce_constraint((null is not null and _col3 is not 
> null)) (type: boolean)
> {code}
> In the CBO path, the enforce_constraint function is added to the plan after the CBO 
> plan has already been generated and optimized.
> {code}
> HiveSortExchange(distribution=[any], collation=[[0]])
>   HiveProject(row__id=[$5], i=[CAST(1):INTEGER], _o__c2=[null:NULL], vc=[$2])
> HiveFilter(condition=[=($0, 1)])
>   HiveTableScan(table=[[default, acid_uami]], table:alias=[acid_uami])
> {code} 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23089) Add constraint checks to CBO plan

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091903#comment-17091903
 ] 

Hive QA commented on HIVE-23089:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
59s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
49s{color} | {color:red} ql: The patch generated 4 new + 450 unchanged - 10 
fixed = 454 total (was 460) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21923/dev-support/hive-personality.sh
 |
| git revision | master / 88053b2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21923/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21923/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Add constraint checks to CBO plan
> -
>
> Key: HIVE-23089
> URL: https://issues.apache.org/jira/browse/HIVE-23089
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
> Attachments: HIVE-23089.1.patch, HIVE-23089.2.patch, 
> HIVE-23089.3.patch
>
>
> {code}
> create table acid_uami(i int,
>  de decimal(5,2) constraint nn1 not null enforced,
>  vc varchar(128) constraint nn2 not null enforced) clustered 
> by (i) into 2 buckets stored as orc TBLPROPERTIES ('transactional'='true');
> explain
> update acid_uami set de=null where i=1;
> {code}
> Non-CBO path:
> {code:java}
> Map Operator Tree:
> TableScan
> alias: acid_uami
> filterExpr: ((i = 1) and enforce_constraint(vc is not null)) 
> (type: boolean)
> Statistics: Num rows: 1 Data size: 216 Basic stats: COMPLETE 
> Column stats: NONE
> Filter Operator
>   predicate: ((i = 1) and enforce_constraint(vc is not null)) 
> (type: boolean)
> {code}
> CBO path:
> {code:java}
> Map Reduce
>   Map Operator Tree:
>   TableScan
> alias: acid_uami
> filterExpr: (i = 1) (type: boolean)
> Statistics: Num rows: 1 Data size: 216 Basic stats: COMPLETE 
> Column stats: NONE
> Filter Operator
>   predicate: (i = 1) (typ

[jira] [Assigned] (HIVE-23299) Ranger authorization of managed location

2020-04-24 Thread Sam An (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sam An reassigned HIVE-23299:
-


> Ranger authorization of managed location
> 
>
> Key: HIVE-23299
> URL: https://issues.apache.org/jira/browse/HIVE-23299
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Sam An
>Assignee: Sam An
>Priority: Minor
>
> With the feature introduced in HIVE-22995, we have added a new location for 
> databases. This location is meant to be the designated location for all 
> managed tables in the database. Ranger should also check whether the user has 
> access to this location before the database can be created.
> This location can be retrieved via Database.getManagedLocationUri().
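>
> A minimal sketch of where such a check could hook in; the LocationAuthorizer interface
> below is a placeholder for the actual Ranger/HiveAuthorizer call, and only
> Database.getManagedLocationUri() comes from the description above.
> {code:java}
> import org.apache.hadoop.hive.metastore.api.Database;
>
> public class ManagedLocationCheck {
>   /** Functional stand-in for the Ranger URI-permission check. */
>   public interface LocationAuthorizer {
>     boolean canWrite(String user, String locationUri);
>   }
>
>   /**
>    * Before the database is created, also verify that the user can write to its
>    * managed location, if one has been set.
>    */
>   public static void checkManagedLocation(Database db, String user, LocationAuthorizer auth) {
>     String managedLocation = db.getManagedLocationUri();
>     if (managedLocation != null && !auth.canWrite(user, managedLocation)) {
>       throw new SecurityException("User " + user + " cannot write to " + managedLocation);
>     }
>   }
> }
> {code}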



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23040) Checkpointing for repl dump incremental phase

2020-04-24 Thread PRAVIN KUMAR SINHA (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PRAVIN KUMAR SINHA updated HIVE-23040:
--
Attachment: HIVE-23040.07.patch

> Checkpointing for repl dump incremental phase
> -
>
> Key: HIVE-23040
> URL: https://issues.apache.org/jira/browse/HIVE-23040
> Project: Hive
>  Issue Type: Improvement
>Reporter: Aasha Medhi
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23040.01.patch, HIVE-23040.02.patch, 
> HIVE-23040.03.patch, HIVE-23040.04.patch, HIVE-23040.05.patch, 
> HIVE-23040.06.patch, HIVE-23040.06.patch, HIVE-23040.07.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23040) Checkpointing for repl dump incremental phase

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091881#comment-17091881
 ] 

Hive QA commented on HIVE-23040:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13001053/HIVE-23040.06.patch

{color:green}SUCCESS:{color} +1 due to 6 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17156 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testLocksInSubquery 
(batchId=300)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21922/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21922/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21922/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13001053 - PreCommit-HIVE-Build

> Checkpointing for repl dump incremental phase
> -
>
> Key: HIVE-23040
> URL: https://issues.apache.org/jira/browse/HIVE-23040
> Project: Hive
>  Issue Type: Improvement
>Reporter: Aasha Medhi
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23040.01.patch, HIVE-23040.02.patch, 
> HIVE-23040.03.patch, HIVE-23040.04.patch, HIVE-23040.05.patch, 
> HIVE-23040.06.patch, HIVE-23040.06.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23040) Checkpointing for repl dump incremental phase

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091873#comment-17091873
 ] 

Hive QA commented on HIVE-23040:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
59s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
48s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
13s{color} | {color:green} ql generated 0 new + 1528 unchanged - 2 fixed = 1528 
total (was 1530) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} hive-unit in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21922/dev-support/hive-personality.sh
 |
| git revision | master / 88053b2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21922/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Checkpointing for repl dump incremental phase
> -
>
> Key: HIVE-23040
> URL: https://issues.apache.org/jira/browse/HIVE-23040
> Project: Hive
>  Issue Type: Improvement
>Reporter: Aasha Medhi
>Assignee: PRAVIN KUMAR SINHA
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23040.01.patch, HIVE-23040.02.patch, 
> HIVE-23040.03.patch, HIVE-23040.04.patch, HIVE-23040.05.patch, 
> HIVE-23040.06.patch, HIVE-23040.06.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HIVE-19064) Add mode to support delimited identifiers enclosed within double quotation

2020-04-24 Thread David Mollitor (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-19064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091846#comment-17091846
 ] 

David Mollitor edited comment on HIVE-19064 at 4/24/20, 7:37 PM:
-

[~jcamachorodriguez] and I spoke directly on this, and I now better understand the 
context of this patch. Here are my concerns with the current implementation; they are 
just something to consider, so you can move forward with this patch if you 
want, but I'll document the discussion as I understand it:

How I understand this is that "standard" mode will implement the SQL Standard 
(better support for PostgreSQL applications). If users want to support MySQL 
workloads, they can continue to use the "column" mode.

"column" mode = MySQL = Identifiers are surrounded by back ticks
 "standard" mode = PostgreSQL = Identifiers are surrounded by double quotes

The thing I am struggling with still is that this same objective can be met 
with a feature like ANSI_QUOTES and it would be less confusing for users and easier 
to implement.
{code:sql|title=Ansi Quotes}
set hive.support.quoted.identifiers=column;
-- This works
CREATE `db`.`table` (...);

set ansi_quotes=true;
-- This works
CREATE "db"."table" (...);

-- This still works
CREATE `db`.`table` (...);
{code}
{code:sql|title=Standard Mode}
set hive.support.quoted.identifiers=standard;
-- This works
CREATE "db"."table" (...);

-- This no longer works
CREATE `db`.`table` (...);
{code}
OK, so you can add support for back ticks on top of double quotes later 
perhaps, but "standard" mode isn't backwards compatible with "column" mode so, 
to do this well, every time you output an identifier, in SHOW CREATE TABLE, in 
a log message, in an error message, you need to always track the current mode, 
and then, if the user changes modes, the output has to change to reflect that:
{code:sql}
-- Column + ansi_quotes
set hive.support.quoted.identifiers=column;

CREATE TABLE `db`.`table` (...);

CREATE TABLE `db`.`table` (...);
> Table `db`.`table` already exists.
select * from `db`.`table`;

-- This just works when flipping back-and-forth between quotes/back ticks 
seamlessly 
set ansi_quotes=true;
CREATE TABLE "db"."table" (...);
> Table `db`.`table` already exists.
select * from `db`.`table`;
set ansi_quotes=false;
select * from `db`.`table`;


-- Standard + Back Ticks

set hive.support.quoted.identifiers=standard;
CREATE TABLE "db"."table" (...);
> Table "db"."table" already exists.
select * from "db"."table";

set hive.support.quoted.identifiers=column;
CREATE TABLE `db`.`table` (...);
> Table `db`.`table` already exists.
select * from "db"."table";
> Syntax error
{code}


was (Author: belugabehr):
[~jcamachorodriguez] and I spoke directly on this. I better understand the 
context of this patch. Here are my concerns with the current implementation, 
but just something to consider, you can move forward with this patch if you 
want, but I'll document the discussion as I understand it:

How I understand this is that "standard" mode will implement the SQL Standard 
(better support for PostgreSQL applications). If users wants to support MySQL 
workloads, they can continue to use the "column" mode.

"column" mode = MySQL = Identifiers are surrounded by back ticks
 "standard" mode = PostgreSQL = Identifiers are surrounded by double quotes

The thing I am struggling with still is that this same objective can be met 
with a feature like ANSI_QUOTES and it will less confusing for users and easier 
to implement.
{code:sql|title=Ansi Quotes}
set hive.support.quoted.identifiers=column;
-- This works
CREATE `db`.`table` (...);

set ansi_quotes=true;
-- This works
CREATE "db"."table" (...);

-- This still works
CREATE `db`.`table` (...);
{code}
{code:sql|title=Standard Mode}
set hive.support.quoted.identifiers=standard;
-- This works
CREATE "db"."table" (...);

-- This no longer works works
CREATE `db`.`table` (...);
{code}
OK, so you can add support for back ticks on top of double quotes later 
perhaps, but "standard" mode isn't backwards compatible with "column" mode so, 
to do this well, every time you output an identifier, in SHOW CREATE TABLE, in 
a log message, in an error message, you need to always track the current mode, 
and then, if the user changes modes, the output has to change to reflect that:
{code:sql}
-- Column + ansi_quotes
set hive.support.quoted.identifiers=column;

CREATE TABLE `db`.`table` (...);

CREATE TABLE `db`.`table` (...);
> Table `db`.`table` already exists.
select * from `db`.`table`;

-- This just works when flipping back-and-forth between quotes/back ticks 
seamlessly 
set ansi_quotes=true;
CREATE TABLE "db"."table" (...);
> Table `db`.`table` already exists.
select * from `db`.`table`;
set ansi_quotes=false;
select * from `db`.`table`;


-- Standard + Back Ticks

set hive.support.quoted.identifiers=standard;
CREATE TABLE "db"."table" (...);
> Table "db"."table" al

[jira] [Comment Edited] (HIVE-19064) Add mode to support delimited identifiers enclosed within double quotation

2020-04-24 Thread David Mollitor (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-19064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091846#comment-17091846
 ] 

David Mollitor edited comment on HIVE-19064 at 4/24/20, 7:32 PM:
-

[~jcamachorodriguez] and I spoke directly on this, and I now better understand the 
context of this patch. Here are my concerns with the current implementation; they are 
just something to consider, so you can move forward with this patch if you 
want, but I'll document the discussion as I understand it:

How I understand this is that "standard" mode will implement the SQL Standard 
(better support for PostgreSQL applications). If users want to support MySQL 
workloads, they can continue to use the "column" mode.

"column" mode = MySQL = Identifiers are surrounded by back ticks
 "standard" mode = PostgreSQL = Identifiers are surrounded by double quotes

The thing I am struggling with still is that this same objective can be met 
with a feature like ANSI_QUOTES and it would be less confusing for users and easier 
to implement.
{code:sql|title=Ansi Quotes}
set hive.support.quoted.identifiers=column;
-- This works
CREATE `db`.`table` (...);

set ansi_quotes=true;
-- This works
CREATE "db"."table" (...);

-- This still works
CREATE `db`.`table` (...);
{code}
{code:sql|title=Standard Mode}
set hive.support.quoted.identifiers=standard;
-- This works
CREATE "db"."table" (...);

-- This no longer works
CREATE `db`.`table` (...);
{code}
OK, so you can add support for back ticks on top of double quotes later 
perhaps, but "standard" mode isn't backwards compatible with "column" mode so, 
to do this well, every time you output an identifier, in SHOW CREATE TABLE, in 
a log message, in an error message, you need to always track the current mode, 
and then, if the user changes modes, the output has to change to reflect that:
{code:sql}
-- Column + ansi_quotes
set hive.support.quoted.identifiers=column;

CREATE TABLE `db`.`table` (...);

CREATE TABLE `db`.`table` (...);
> Table `db`.`table` already exists.
select * from `db`.`table`;

-- This just works when flipping back-and-forth between quotes/back ticks 
seamlessly 
set ansi_quotes=true;
CREATE TABLE "db"."table" (...);
> Table `db`.`table` already exists.
select * from `db`.`table`;
set ansi_quotes=false;
select * from `db`.`table`;


-- Standard + Back Ticks

set hive.support.quoted.identifiers=standard;
CREATE TABLE "db"."table" (...);
> Table "db"."table" already exists.
select * from "db"."table";

set hive.support.quoted.identifiers=column;
select * from "db"."table";
> Syntax error
{code}


was (Author: belugabehr):
[~jcamachorodriguez] and I spoke directly on this. I better understand the 
context of this patch. Here are my concerns with the current implementation, 
but just something to consider, you can move forward with this patch if you 
want, but I'll document the discussion as I understand it:

How I understand this is that "standard" mode will implement the SQL Standard 
(better support for PostgreSQL applications). If users wants to support MySQL 
workloads, they can continue to use the "column" mode.

"column" mode = MySQL = Identifiers are surrounded by back ticks
 "standard" mode = PostgreSQL = Identifiers are surrounded by double quotes

The thing I am struggling with still is that this same objective can be met 
with a feature like ANSI_QUOTES and it will less confusing for users and easier 
to implement.
{code:sql|title=Ansi Quotes}
set hive.support.quoted.identifiers=column;
-- This works
CREATE `db`.`table` (...);

set ansi_quotes=true;
-- This works
CREATE "db"."table" (...);

-- This still works
CREATE `db`.`table` (...);
{code}
{code:sql|title=Standard Mode}
set hive.support.quoted.identifiers=standard;
-- This works
CREATE "db"."table" (...);

-- This no longer works works
CREATE `db`.`table` (...);
{code}
OK, so you can add support for back ticks on top of double quotes later 
perhaps, but "standard" mode isn't backwards compatible with "column" mode so, 
to do this well, every time you output an identifier, you need to always track 
the current mode, and then, if the user changes modes, the user has to also 
change their output:
{code:sql}
-- Column + ansi_quotes
set hive.support.quoted.identifiers=column;

CREATE TABLE `db`.`table` (...);

CREATE TABLE `db`.`table` (...);
> Table `db`.`table` already exists.
select * from `db`.`table`;

-- This just works when flipping back-and-forth between quotes/back ticks 
seamlessly 
set ansi_quotes=true;
CREATE TABLE "db"."table" (...);
> Table `db`.`table` already exists.
select * from `db`.`table`;
set ansi_quotes=false;
select * from `db`.`table`;


-- Standard + Back Ticks

set hive.support.quoted.identifiers=standard;
CREATE TABLE "db"."table" (...);
> Table "db"."table" already exists.
select * from "db"."table";

set hive.support.quoted.identifiers=column;
select * from "db"."table";
> Syntax error
{

[jira] [Commented] (HIVE-19064) Add mode to support delimited identifiers enclosed within double quotation

2020-04-24 Thread David Mollitor (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-19064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091846#comment-17091846
 ] 

David Mollitor commented on HIVE-19064:
---

[~jcamachorodriguez] and I spoke directly on this, and I now better understand the 
context of this patch. Here are my concerns with the current implementation; they are 
just something to consider, so you can move forward with this patch if you 
want, but I'll document the discussion as I understand it:

How I understand this is that "standard" mode will implement the SQL Standard 
(better support for PostgreSQL applications). If users want to support MySQL 
workloads, they can continue to use the "column" mode.

"column" mode = MySQL = Identifiers are surrounded by back ticks
 "standard" mode = PostgreSQL = Identifiers are surrounded by double quotes

The thing I am struggling with still is that this same objective can be met 
with a feature like ANSI_QUOTES and it would be less confusing for users and easier 
to implement.
{code:sql|title=Ansi Quotes}
set hive.support.quoted.identifiers=column;
-- This works
CREATE `db`.`table` (...);

set ansi_quotes=true;
-- This works
CREATE "db"."table" (...);

-- This still works
CREATE `db`.`table` (...);
{code}
{code:sql|title=Standard Mode}
set hive.support.quoted.identifiers=standard;
-- This works
CREATE "db"."table" (...);

-- This no longer works
CREATE `db`.`table` (...);
{code}
OK, so you can add support for back ticks on top of double quotes later 
perhaps, but "standard" mode isn't backwards compatible with "column" mode so, 
to do this well, every time you output an identifier, you need to always track 
the current mode, and then, if the user changes modes, the user has to also 
change their output:
{code:sql}
-- Column + ansi_quotes
set hive.support.quoted.identifiers=column;

CREATE TABLE `db`.`table` (...);

CREATE TABLE `db`.`table` (...);
> Table `db`.`table` already exists.
select * from `db`.`table`;

-- This just works when flipping back-and-forth between quotes/back ticks 
seamlessly 
set ansi_quotes=true;
CREATE TABLE "db"."table" (...);
> Table `db`.`table` already exists.
select * from `db`.`table`;
set ansi_quotes=false;
select * from `db`.`table`;


-- Standard + Back Ticks

set hive.support.quoted.identifiers=standard;
CREATE TABLE "db"."table" (...);
> Table "db"."table" already exists.
select * from "db"."table";

set hive.support.quoted.identifiers=column;
select * from "db"."table";
> Syntax error
{code}

> Add mode to support delimited identifiers enclosed within double quotation
> --
>
> Key: HIVE-19064
> URL: https://issues.apache.org/jira/browse/HIVE-19064
> Project: Hive
>  Issue Type: Improvement
>  Components: Parser, SQL
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Krisztian Kasa
>Priority: Major
> Attachments: HIVE-19064.01.patch, HIVE-19064.02.patch, 
> HIVE-19064.03.patch, HIVE-19064.4.patch, HIVE-19064.5.patch, 
> HIVE-19064.6.patch, HIVE-19064.7.patch, HIVE-19064.7.patch
>
>
> As per the SQL standard. Hive currently uses `` (backticks). The default will 
> continue to be backticks, but we will support identifiers within double 
> quotation marks via a configuration parameter.
> This issue will also extend support for arbitrary char sequences, e.g., 
> containing {{~ ! @ # $ % ^ & * () , < >}}, in database and table names. 
> Currently, special characters are only supported for column names.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23230) "get_splits" udf ignores limit constraint while creating splits

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091842#comment-17091842
 ] 

Hive QA commented on HIVE-23230:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13001040/HIVE-23230.3.patch

{color:green}SUCCESS:{color} +1 due to 9 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 17137 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[multi_insert_partitioned]
 (batchId=98)
org.apache.hadoop.hive.metastore.TestGetPartitionsUsingProjectionAndFilterSpecs.testGetPartitionsUsingValuesWithJDO
 (batchId=157)
org.apache.hive.jdbc.TestJdbcWithLocalClusterSpark.org.apache.hive.jdbc.TestJdbcWithLocalClusterSpark
 (batchId=219)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21921/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21921/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21921/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13001040 - PreCommit-HIVE-Build

> "get_splits" udf ignores limit constraint while creating splits
> ---
>
> Key: HIVE-23230
> URL: https://issues.apache.org/jira/browse/HIVE-23230
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.1.0
>Reporter: Adesh Kumar Rao
>Assignee: Adesh Kumar Rao
>Priority: Major
> Attachments: HIVE-23230.1.patch, HIVE-23230.2.patch, 
> HIVE-23230.3.patch, HIVE-23230.patch
>
>
> Issue: Running the query {noformat}select * from  limit n{noformat} 
> from Spark via the Hive Warehouse Connector may return more rows than "n".
> This happens because the "get_splits" UDF creates splits while ignoring the limit 
> constraint. These splits, when submitted to multiple LLAP daemons, will each return 
> "n" rows.
> How to reproduce: Needs spark-shell, the hive-warehouse-connector, and Hive on 
> LLAP with more than one LLAP daemon running.
> run below commands via beeline to create and populate the table
>  
> {noformat}
> create table test (id int);
> insert into table test values (1);
> insert into table test values (2);
> insert into table test values (3);
> insert into table test values (4);
> insert into table test values (5);
> insert into table test values (6);
> insert into table test values (7);
> delete from test where id = 7;{noformat}
> now running below query via spark-shell
> {noformat}
> import com.hortonworks.hwc.HiveWarehouseSession 
> val hive = HiveWarehouseSession.session(spark).build() 
> hive.executeQuery("select * from test limit 1").show()
> {noformat}
> will return more than one row.
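
A purely illustrative sketch of the kind of guard the fix implies; the class and 
method names below are made up and this is not the actual patch, which may instead 
propagate the limit into the split payload. The naive version simply stops handing 
out extra splits once a LIMIT is known, so that each daemon cannot independently 
satisfy the limit on its own:

{code:java}
import java.util.ArrayList;
import java.util.List;

/** Hypothetical illustration only; not Hive's actual split-generation code. */
public class LimitAwareSplitter {

  /**
   * Keeps only the first split when the query carries a LIMIT. This is a naive
   * illustration: it assumes the remaining split can produce at least "limit" rows,
   * whereas a real fix might propagate the limit into every split instead.
   */
  public static <T> List<T> applyLimit(List<T> splits, long limit) {
    if (limit < 0 || splits.size() <= 1) {
      return splits;                        // no LIMIT, or nothing to trim
    }
    return new ArrayList<>(splits.subList(0, 1));
  }
}
{code}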



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23230) "get_splits" udf ignores limit constraint while creating splits

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091832#comment-17091832
 ] 

Hive QA commented on HIVE-23230:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
57s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
44s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
20s{color} | {color:red} itests/hive-unit: The patch generated 1 new + 18 
unchanged - 0 fixed = 19 total (was 18) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21921/dev-support/hive-personality.sh
 |
| git revision | master / 88053b2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21921/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| modules | C: ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21921/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> "get_splits" udf ignores limit constraint while creating splits
> ---
>
> Key: HIVE-23230
> URL: https://issues.apache.org/jira/browse/HIVE-23230
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.1.0
>Reporter: Adesh Kumar Rao
>Assignee: Adesh Kumar Rao
>Priority: Major
> Attachments: HIVE-23230.1.patch, HIVE-23230.2.patch, 
> HIVE-23230.3.patch, HIVE-23230.patch
>
>
> Issue: Running the query {noformat}select * from <table> limit n{noformat} 
> from spark via the hive warehouse connector may return more rows than "n".
> This happens because the "get_splits" udf creates splits ignoring the limit 
> constraint. When these splits are submitted to multiple llap daemons, each one 
> returns "n" rows.
> How to reproduce: Needs spark-shell, hive-warehouse-connector and hive on 
> llap with more than one llap daemon running.
> run below commands via beeline to c

[jira] [Commented] (HIVE-23280) Trigger compaction with old aborted txns

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091813#comment-17091813
 ] 

Hive QA commented on HIVE-23280:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13001039/HIVE-23280.01.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17142 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[multi_insert_partitioned]
 (batchId=98)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21920/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21920/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21920/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13001039 - PreCommit-HIVE-Build

> Trigger compaction with old aborted txns
> 
>
> Key: HIVE-23280
> URL: https://issues.apache.org/jira/browse/HIVE-23280
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-23280.01.patch, HIVE-23280.01.patch, 
> HIVE-23280.01.patch
>
>
> When a txn is aborted and the compaction threshold for the number of aborted 
> txns is not reached, the aborted transaction can remain in the RDBMS forever. 
> This could result in several serious performance degradations:
>  - getOpenTxns has to list this aborted txn forever
>  - the TXN_TO_WRITE_ID table is not cleaned
> We should add a time threshold, so that after a given time the compaction is 
> started anyway.
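
A minimal sketch of the proposed check; every name and parameter below is invented 
for illustration and does not reference the actual patch or any real configuration 
key:

{code:java}
/** Illustration only; method, parameter and threshold names are invented. */
public class AbortedTxnCompactionCheck {

  /**
   * Existing behaviour: initiate compaction once enough transactions have been aborted.
   * Proposed addition: also initiate it when the oldest aborted transaction has been
   * sitting in the metastore longer than a configured time threshold.
   */
  public static boolean shouldInitiate(int abortedTxnCount, long oldestAbortedAgeMs,
                                       int countThreshold, long timeThresholdMs) {
    if (abortedTxnCount >= countThreshold) {
      return true;
    }
    return timeThresholdMs > 0 && oldestAbortedAgeMs >= timeThresholdMs;
  }
}
{code}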



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23280) Trigger compaction with old aborted txns

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091809#comment-17091809
 ] 

Hive QA commented on HIVE-23280:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
42s{color} | {color:blue} standalone-metastore/metastore-common in master has 
35 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
35s{color} | {color:blue} common in master has 63 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
19s{color} | {color:blue} standalone-metastore/metastore-server in master has 
189 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
0s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
37s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
21s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 2 new + 106 unchanged - 3 fixed = 108 total (was 109) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
44s{color} | {color:red} ql: The patch generated 1 new + 29 unchanged - 0 fixed 
= 30 total (was 29) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
26s{color} | {color:red} standalone-metastore/metastore-server generated 1 new 
+ 188 unchanged - 1 fixed = 189 total (was 189) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:standalone-metastore/metastore-server |
|  |  
org.apache.hadoop.hive.metastore.txn.CompactionTxnHandler.findPotentialCompactions(int,
 long, long) passes a nonconstant String to an execute or addBatch method on an 
SQL statement  At CompactionTxnHandler.java:nonconstant String to an execute or 
addBatch method on an SQL statement  At CompactionTxnHandler.java:[line 96] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21920/dev-support/hive-personality.sh
 |
| git revision | master / 88053b2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21920/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21920/yetus/diff-checkstyle-ql.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21920

[jira] [Commented] (HIVE-22934) Hive server interactive log counters to error stream

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091756#comment-17091756
 ] 

Hive QA commented on HIVE-22934:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13001036/HIVE-22934.02.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 944 failed/errored test(s), 17141 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBlobstoreNegativeCliDriver.testCliDriver[select_dropped_table]
 (batchId=237)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=34)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join_without_localtask]
 (batchId=1)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avrotblsjoin] 
(batchId=15)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[correlationoptimizer8] 
(batchId=7)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_bucket_sort_convert_join]
 (batchId=26)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_bucket_sort_map_operators]
 (batchId=31)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_const_type] 
(batchId=32)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=6)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] 
(batchId=8)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_3] 
(batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_limit]
 (batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_offset_limit]
 (batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_pushdown]
 (batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[subquery_multiinsert] 
(batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_offset_limit]
 (batchId=22)
org.apache.hadoop.hive.cli.TestCliDriverMethods.testProcessSelectDatabase 
(batchId=133)
org.apache.hadoop.hive.cli.TestCliDriverMethods.testRun (batchId=133)
org.apache.hadoop.hive.cli.TestCliDriverMethods.testprocessInitFiles 
(batchId=133)
org.apache.hadoop.hive.cli.TestContribNegativeCliDriver.testCliDriver[case_with_row_sequence]
 (batchId=232)
org.apache.hadoop.hive.cli.TestContribNegativeCliDriver.testCliDriver[invalid_row_sequence]
 (batchId=232)
org.apache.hadoop.hive.cli.TestContribNegativeCliDriver.testCliDriver[serde_regex]
 (batchId=232)
org.apache.hadoop.hive.cli.TestContribNegativeCliDriver.testCliDriver[udtf_explode2]
 (batchId=232)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_drop_table]
 (batchId=124)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=123)
org.apache.hadoop.hive.cli.TestHBaseNegativeCliDriver.testCliDriver[cascade_dbdrop]
 (batchId=233)
org.apache.hadoop.hive.cli.TestHBaseNegativeCliDriver.testCliDriver[generatehfiles_require_family_path]
 (batchId=233)
org.apache.hadoop.hive.cli.TestHBaseNegativeCliDriver.testCliDriver[hbase_ddl] 
(batchId=233)
org.apache.hadoop.hive.cli.TestKuduNegativeCliDriver.testCliDriver[kudu_config] 
(batchId=222)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_merge10] 
(batchId=58)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_bloom_filter_orc_file_dump]
 (batchId=117)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[authorization_wm]
 (batchId=105)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[column_access_stats]
 (batchId=98)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[compare_double_bigint_2]
 (batchId=78)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[filter_join_breaktask2]
 (batchId=111)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hook_order] 
(batchId=63)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[join_literals]
 (batchId=119)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[lineage2] 
(batchId=97)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[lineage3] 
(batchId=90)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[multiMapJoin1]
 (batchId=91)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[multiMapJoin2]
 (batchId=110)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[multi_sahooks]
 (batchId=89)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_file_dump]
 (batchId=98)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_merge11]
 (batchId=87)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[overridden_confs]
 (batchId=92)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver

[jira] [Commented] (HIVE-23298) Disable RS deduplication step in Optimizer if it is run in TezCompiler

2020-04-24 Thread Jesus Camacho Rodriguez (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091750#comment-17091750
 ] 

Jesus Camacho Rodriguez commented on HIVE-23298:


Uploaded a patch to check whether this may cause any regressions.

> Disable RS deduplication step in Optimizer if it is run in TezCompiler
> --
>
> Key: HIVE-23298
> URL: https://issues.apache.org/jira/browse/HIVE-23298
> Project: Hive
>  Issue Type: Improvement
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-23298.patch
>
>
> HIVE-20703 introduced an additional RS deduplication step in TezCompiler. We 
> could possibly try to disable the one that runs in {{Optimizer}} if we are 
> using Tez so we do not run the optimization twice.
> This issue is to explore that possibility.
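
A minimal sketch of the kind of guard this suggests, assuming the decision can be 
keyed off hive.execution.engine; the actual change may hook into the optimizer 
differently:

{code:java}
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.conf.HiveConf.ConfVars;

/** Illustration only; not the actual patch. */
public class RsDedupGuard {

  /** Run the Optimizer-level ReduceSink deduplication only if TezCompiler will not run its own pass. */
  public static boolean runDedupInOptimizer(HiveConf conf) {
    String engine = HiveConf.getVar(conf, ConfVars.HIVE_EXECUTION_ENGINE);
    return !"tez".equalsIgnoreCase(engine);
  }
}
{code}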



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work started] (HIVE-23298) Disable RS deduplication step in Optimizer if it is run in TezCompiler

2020-04-24 Thread Jesus Camacho Rodriguez (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-23298 started by Jesus Camacho Rodriguez.
--
> Disable RS deduplication step in Optimizer if it is run in TezCompiler
> --
>
> Key: HIVE-23298
> URL: https://issues.apache.org/jira/browse/HIVE-23298
> Project: Hive
>  Issue Type: Improvement
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-23298.patch
>
>
> HIVE-20703 introduced an additional RS deduplication step in TezCompiler. We 
> could possibly try to disable the one that runs in {{Optimizer}} if we are 
> using Tez so we do not run the optimization twice.
> This issue is to explore that possibility.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23284) Remove dependency on mariadb-java-client

2020-04-24 Thread Karen Coppage (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-23284:
-
Status: Open  (was: Patch Available)

> Remove dependency on mariadb-java-client
> 
>
> Key: HIVE-23284
> URL: https://issues.apache.org/jira/browse/HIVE-23284
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-23284.01.patch, HIVE-23284.01.patch
>
>
> It has GNU Lesser General Public License which is [Category 
> X|https://www.apache.org/legal/resolved.html#category-x].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23298) Disable RS deduplication step in Optimizer if it is run in TezCompiler

2020-04-24 Thread Jesus Camacho Rodriguez (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-23298:
---
Status: Patch Available  (was: In Progress)

> Disable RS deduplication step in Optimizer if it is run in TezCompiler
> --
>
> Key: HIVE-23298
> URL: https://issues.apache.org/jira/browse/HIVE-23298
> Project: Hive
>  Issue Type: Improvement
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-23298.patch
>
>
> HIVE-20703 introduced an additional RS deduplication step in TezCompiler. We 
> could possibly try to disable the one that runs in {{Optimizer}} if we are 
> using Tez so we do not run the optimization twice.
> This issue is to explore that possibility.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23298) Disable RS deduplication step in Optimizer if it is run in TezCompiler

2020-04-24 Thread Jesus Camacho Rodriguez (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-23298:
---
Attachment: HIVE-23298.patch

> Disable RS deduplication step in Optimizer if it is run in TezCompiler
> --
>
> Key: HIVE-23298
> URL: https://issues.apache.org/jira/browse/HIVE-23298
> Project: Hive
>  Issue Type: Improvement
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-23298.patch
>
>
> HIVE-20703 introduced an additional RS deduplication step in TezCompiler. We 
> could possibly try to disable the one that runs in {{Optimizer}} if we are 
> using Tez so we do not run the optimization twice.
> This issue is to explore that possibility.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23284) Remove dependency on mariadb-java-client

2020-04-24 Thread Karen Coppage (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-23284:
-
Attachment: HIVE-23284.01.patch
Status: Patch Available  (was: Open)

> Remove dependency on mariadb-java-client
> 
>
> Key: HIVE-23284
> URL: https://issues.apache.org/jira/browse/HIVE-23284
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-23284.01.patch, HIVE-23284.01.patch
>
>
> It has GNU Lesser General Public License which is [Category 
> X|https://www.apache.org/legal/resolved.html#category-x].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-23298) Disable RS deduplication step in Optimizer if it is run in TezCompiler

2020-04-24 Thread Jesus Camacho Rodriguez (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez reassigned HIVE-23298:
--


> Disable RS deduplication step in Optimizer if it is run in TezCompiler
> --
>
> Key: HIVE-23298
> URL: https://issues.apache.org/jira/browse/HIVE-23298
> Project: Hive
>  Issue Type: Improvement
>  Components: Physical Optimizer
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>
> HIVE-20703 introduced an additional RS deduplication step in TezCompiler. We 
> could possibly try to disable the one that runs in {{Optimizer}} if we are 
> using Tez so we do not run the optimization twice.
> This issue is to explore that possibility.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22934) Hive server interactive log counters to error stream

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091734#comment-17091734
 ] 

Hive QA commented on HIVE-22934:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
38s{color} | {color:blue} common in master has 63 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
3s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21919/dev-support/hive-personality.sh
 |
| git revision | master / 88053b2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21919/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Hive server interactive log counters to error stream
> 
>
> Key: HIVE-22934
> URL: https://issues.apache.org/jira/browse/HIVE-22934
> Project: Hive
>  Issue Type: Bug
>Reporter: Slim Bouguerra
>Assignee: Antal Sinkovits
>Priority: Major
> Attachments: HIVE-22934.01.patch, HIVE-22934.02.patch, 
> HIVE-22934.patch
>
>
> Hive server is logging the console output to the system error stream.
> This needs to be fixed because:
> First, we do not roll the file.
> Second, writing to such a file is done sequentially and can lead to 
> throttling/poor performance.
> {code}
> -rw-r--r--  1 hive hadoop 9.5G Feb 26 17:22 hive-server2-interactive.err
> {code}
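
As a hedged illustration of the direction this implies (the logger name below is 
made up, and the rolling policy itself would live in the log4j2 configuration 
rather than in code), routing such output through a logger instead of System.err 
lets the logging framework handle rotation:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Illustration only; not the actual fix. */
public class ConsoleCounterReporter {

  // Dedicated logger that would be backed by a rolling appender in the log4j2 configuration.
  private static final Logger CONSOLE = LoggerFactory.getLogger("HiveInteractiveConsole");

  public static void report(String countersText) {
    // Previously the equivalent of: System.err.println(countersText);
    CONSOLE.info(countersText);
  }
}
{code}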



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23206) Project not defined correctly after reordering a join

2020-04-24 Thread Krisztian Kasa (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091718#comment-17091718
 ] 

Krisztian Kasa commented on HIVE-23206:
---

HiveJoinProjectTransposeRule.RIGHT_PROJECT_BTW_JOIN pulls up the left Project; 
however, this rule should pull up the right one. 
HiveJoinProjectTransposeRule.LEFT_PROJECT_BTW_JOIN should deal with the left 
one.
 The reason for this is that HiveJoinProjectTransposeRule extends 
JoinProjectTransposeRule from Calcite, which generally pulls the Project from 
the side where it exists. To determine whether there is a Project or not, it 
uses the *hasLeftChild(call)* and *hasRightChild(call)* methods. In the case of 
the query mentioned above, the Join has Projects on both sides.

Patch [^HIVE-23206.2.patch] restricts these rules to pull the Project from the 
proper side only.
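
As a generic illustration of the restriction described above (this is not Hive's 
actual rule code; the class below is invented), a Calcite transpose-style rule is 
typically declared in two variants whose operand trees pin the Project to one 
specific join input, so each variant can only match, and therefore only pull up, 
the Project on its own side:

{code:java}
import org.apache.calcite.plan.RelOptRule;
import org.apache.calcite.plan.RelOptRuleCall;
import org.apache.calcite.plan.RelOptRuleOperand;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.core.Join;
import org.apache.calcite.rel.core.Project;

/** Invented example rule; not HiveJoinProjectTransposeRule itself. */
public class ExampleJoinProjectTransposeRule extends RelOptRule {

  // The Project must be the LEFT input of the Join.
  public static final ExampleJoinProjectTransposeRule LEFT_VARIANT =
      new ExampleJoinProjectTransposeRule(
          operand(Join.class,
              operand(Project.class, any()),
              operand(RelNode.class, any())));

  // The Project must be the RIGHT input of the Join.
  public static final ExampleJoinProjectTransposeRule RIGHT_VARIANT =
      new ExampleJoinProjectTransposeRule(
          operand(Join.class,
              operand(RelNode.class, any()),
              operand(Project.class, any())));

  private ExampleJoinProjectTransposeRule(RelOptRuleOperand operand) {
    super(operand);
  }

  @Override
  public void onMatch(RelOptRuleCall call) {
    // Pull up only the Project matched by the operand tree above,
    // i.e. the one on the side this variant was declared for.
  }
}
{code}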

> Project not defined correctly after reordering a join
> -
>
> Key: HIVE-23206
> URL: https://issues.apache.org/jira/browse/HIVE-23206
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Reporter: Steve Carlin
>Assignee: Krisztian Kasa
>Priority: Major
> Attachments: HIVE-23206.1.patch, HIVE-23206.2.patch
>
>
> The following highlighted line seems to be incorrect in the test suite:
> [https://github.com/apache/hive/blob/master/ql/src/test/results/clientpositive/perf/tez/cbo_query95.q.out#L89]
> Note that the project takes all the columns from the table scan, yet it only 
> needs a couple of them.
> I did some very small debugging on this.  When I removed the 
> applyJoinOrderingTransform here: 
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L1897]
> ... the problem goes away.  So presumably one of the rules in there is 
> causing the problem.
> Here is a slightly simplified version of the query which has the same problem 
> (using the same tpc-ds database):
> explain cbo with ws_wh as
> (select ws1.ws_order_number
> from web_sales ws1,web_returns wr2 
> where ws1.ws_order_number = wr2.wr_order_number)
> select 
>    ws_order_number
> from
>    web_sales ws1 
> where
> ws1.ws_order_number in (select wr_order_number
>                             from web_returns,ws_wh
>                             where wr_order_number = ws_wh.ws_order_number)
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23206) Project not defined correctly after reordering a join

2020-04-24 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-23206:
--
Status: Open  (was: Patch Available)

> Project not defined correctly after reordering a join
> -
>
> Key: HIVE-23206
> URL: https://issues.apache.org/jira/browse/HIVE-23206
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Reporter: Steve Carlin
>Assignee: Krisztian Kasa
>Priority: Major
> Attachments: HIVE-23206.1.patch, HIVE-23206.2.patch
>
>
> The following highlighted line seems to be incorrect in the test suite:
> [https://github.com/apache/hive/blob/master/ql/src/test/results/clientpositive/perf/tez/cbo_query95.q.out#L89]
> Note that the project takes all the columns from the table scan, yet it only 
> needs a couple of them.
> I did some very small debugging on this.  When I removed the 
> applyJoinOrderingTransform here: 
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L1897]
> ... the problem goes away.  So presumably one of the rules in there is 
> causing the problem.
> Here is a slightly simplified version of the query which has the same problem 
> (using the same tpc-ds database):
> explain cbo with ws_wh as
> (select ws1.ws_order_number
> from web_sales ws1,web_returns wr2 
> where ws1.ws_order_number = wr2.wr_order_number)
> select 
>    ws_order_number
> from
>    web_sales ws1 
> where
> ws1.ws_order_number in (select wr_order_number
>                             from web_returns,ws_wh
>                             where wr_order_number = ws_wh.ws_order_number)
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23206) Project not defined correctly after reordering a join

2020-04-24 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-23206:
--
Status: Patch Available  (was: Open)

> Project not defined correctly after reordering a join
> -
>
> Key: HIVE-23206
> URL: https://issues.apache.org/jira/browse/HIVE-23206
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Reporter: Steve Carlin
>Assignee: Krisztian Kasa
>Priority: Major
> Attachments: HIVE-23206.1.patch, HIVE-23206.2.patch
>
>
> The following highlighted line seems to be incorrect in the test suite:
> [https://github.com/apache/hive/blob/master/ql/src/test/results/clientpositive/perf/tez/cbo_query95.q.out#L89]
> Note that the project takes all the columns from the table scan, yet it only 
> needs a couple of them.
> I did some very small debugging on this.  When I removed the 
> applyJoinOrderingTransform here: 
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L1897]
> ... the problem goes away.  So presumably one of the rules in there is 
> causing the problem.
> Here is a slightly simplified version of the query which has the same problem 
> (using the same tpc-ds database):
> explain cbo with ws_wh as
> (select ws1.ws_order_number
> from web_sales ws1,web_returns wr2 
> where ws1.ws_order_number = wr2.wr_order_number)
> select 
>    ws_order_number
> from
>    web_sales ws1 
> where
> ws1.ws_order_number in (select wr_order_number
>                             from web_returns,ws_wh
>                             where wr_order_number = ws_wh.ws_order_number)
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23206) Project not defined correctly after reordering a join

2020-04-24 Thread Krisztian Kasa (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa updated HIVE-23206:
--
Attachment: HIVE-23206.2.patch

> Project not defined correctly after reordering a join
> -
>
> Key: HIVE-23206
> URL: https://issues.apache.org/jira/browse/HIVE-23206
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Reporter: Steve Carlin
>Assignee: Krisztian Kasa
>Priority: Major
> Attachments: HIVE-23206.1.patch, HIVE-23206.2.patch
>
>
> The following highlighted line seems to be incorrect in the test suite:
> [https://github.com/apache/hive/blob/master/ql/src/test/results/clientpositive/perf/tez/cbo_query95.q.out#L89]
> Note that the project takes all the columns from the table scan, yet it only 
> needs a couple of them.
> I did some very small debugging on this.  When I removed the 
> applyJoinOrderingTransform here: 
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L1897]
> ... the problem goes away.  So presumably one of the rules in there is 
> causing the problem.
> Here is a slightly simplified version of the query which has the same problem 
> (using the same tpc-ds database):
> explain cbo with ws_wh as
> (select ws1.ws_order_number
> from web_sales ws1,web_returns wr2 
> where ws1.ws_order_number = wr2.wr_order_number)
> select 
>    ws_order_number
> from
>    web_sales ws1 
> where
> ws1.ws_order_number in (select wr_order_number
>                             from web_returns,ws_wh
>                             where wr_order_number = ws_wh.ws_order_number)
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21304) Make bucketing version usage more robust

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091694#comment-17091694
 ] 

Hive QA commented on HIVE-21304:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13001028/HIVE-21304.32.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17142 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21918/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21918/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21918/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13001028 - PreCommit-HIVE-Build

> Make bucketing version usage more robust
> 
>
> Key: HIVE-21304
> URL: https://issues.apache.org/jira/browse/HIVE-21304
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21304.01.patch, HIVE-21304.02.patch, 
> HIVE-21304.03.patch, HIVE-21304.04.patch, HIVE-21304.05.patch, 
> HIVE-21304.06.patch, HIVE-21304.07.patch, HIVE-21304.08.patch, 
> HIVE-21304.09.patch, HIVE-21304.10.patch, HIVE-21304.11.patch, 
> HIVE-21304.12.patch, HIVE-21304.13.patch, HIVE-21304.14.patch, 
> HIVE-21304.15.patch, HIVE-21304.16.patch, HIVE-21304.17.patch, 
> HIVE-21304.18.patch, HIVE-21304.19.patch, HIVE-21304.20.patch, 
> HIVE-21304.21.patch, HIVE-21304.22.patch, HIVE-21304.23.patch, 
> HIVE-21304.24.patch, HIVE-21304.25.patch, HIVE-21304.26.patch, 
> HIVE-21304.27.patch, HIVE-21304.28.patch, HIVE-21304.29.patch, 
> HIVE-21304.30.patch, HIVE-21304.31.patch, HIVE-21304.32.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> * Show Bucketing version for ReduceSinkOp in explain extended plan - this 
> helps identify what hashing algorithm is being used by ReduceSinkOp.
> * move the actually selected version to the "conf" so that it doesn't get lost
> * replace trait related logic with a separate optimizer rule
> * do version selection based on a group of operators - this is more reliable
> * skip bucketing version selection for tables with 1 bucket
> * prefer to use version 2 if possible
> * fix operator creations which didn't set a new conf
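
The pre-commit run later in this thread flags suspicious comparisons of Integer 
references in the version-merging code of BucketVersionPopulator. As a standalone 
illustration of that pitfall (plain Java, not Hive code): boxed Integers compared 
with == only agree with equals() inside the small-value cache, so a merge step 
that compares bucketing versions should compare values, not references.

{code:java}
/** Standalone demo of the Integer-reference pitfall; not Hive code. */
public class IntegerCompareDemo {
  public static void main(String[] args) {
    Integer a = 1000;
    Integer b = 1000;
    System.out.println(a == b);      // false: compares references outside the -128..127 cache
    System.out.println(a.equals(b)); // true: compares values, which is what a merge step needs
  }
}
{code}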



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23244) Extract Create View analyzer from SemanticAnalyzer

2020-04-24 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-23244:
--
Attachment: HIVE-23244.03.patch

> Extract Create View analyzer from SemanticAnalyzer
> --
>
> Key: HIVE-23244
> URL: https://issues.apache.org/jira/browse/HIVE-23244
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23244.01.patch, HIVE-23244.02.patch, 
> HIVE-23244.03.patch
>
>
> Create View commands are not queries, but commands which have queries as a 
> part of them. Therefore a separate CreateViewAnalyzer is needed which uses 
> SemanticAnalyzer to analyze its query.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23244) Extract Create View analyzer from SemanticAnalyzer

2020-04-24 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-23244:
--
Attachment: (was: HIVE-23244.03.patch)

> Extract Create View analyzer from SemanticAnalyzer
> --
>
> Key: HIVE-23244
> URL: https://issues.apache.org/jira/browse/HIVE-23244
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23244.01.patch, HIVE-23244.02.patch, 
> HIVE-23244.03.patch
>
>
> Create View commands are not queries, but commands which have queries as a 
> part of them. Therefore a separate CreateViewAnalyzer is needed which uses 
> SemanticAnalyzer to analyze its query.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21304) Make bucketing version usage more robust

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091681#comment-17091681
 ] 

Hive QA commented on HIVE-21304:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
54s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
58s{color} | {color:red} ql: The patch generated 9 new + 1324 unchanged - 10 
fixed = 1333 total (was 1334) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m  
0s{color} | {color:red} ql generated 3 new + 1530 unchanged - 0 fixed = 1533 
total (was 1530) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 53s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Suspicious comparison of Integer references in 
org.apache.hadoop.hive.ql.optimizer.BucketVersionPopulator$BucketingVersionResult.merge(BucketVersionPopulator$BucketingVersionResult)
  At BucketVersionPopulator.java:in 
org.apache.hadoop.hive.ql.optimizer.BucketVersionPopulator$BucketingVersionResult.merge(BucketVersionPopulator$BucketingVersionResult)
  At BucketVersionPopulator.java:[line 65] |
|  |  Suspicious comparison of Integer references in 
org.apache.hadoop.hive.ql.optimizer.BucketVersionPopulator$BucketingVersionResult.merge2(BucketVersionPopulator$BucketingVersionResult)
  At BucketVersionPopulator.java:in 
org.apache.hadoop.hive.ql.optimizer.BucketVersionPopulator$BucketingVersionResult.merge2(BucketVersionPopulator$BucketingVersionResult)
  At BucketVersionPopulator.java:[line 75] |
|  |  Nullcheck of table_desc at line 8232 of value previously dereferenced in 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.createFileSinkDesc(String, 
TableDesc, Partition, Path, int, boolean, boolean, boolean, Path, 
SemanticAnalyzer$SortBucketRSCtx, DynamicPartitionCtx, ListBucketingCtx, 
RowSchema, boolean, Table, Long, boolean, Integer, QB, boolean)  At 
SemanticAnalyzer.java:8232 of value previously dereferenced in 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.createFileSinkDesc(String, 
TableDesc, Partition, Path, int, boolean, boolean, boolean, Path, 
SemanticAnalyzer$SortBucketRSCtx, DynamicPartitionCtx, ListBucketingCtx, 
RowSchema, boolean, Table, Long, boolean, Integer, QB, boolean)  At 
SemanticAnalyzer.java:[line 8225] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21918/dev-support/hive-personality.sh
 |

[jira] [Updated] (HIVE-23048) Use sequences for TXN_ID generation

2020-04-24 Thread Peter Varga (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Varga updated HIVE-23048:
---
Status: Patch Available  (was: In Progress)

> Use sequences for TXN_ID generation
> ---
>
> Key: HIVE-23048
> URL: https://issues.apache.org/jira/browse/HIVE-23048
> Project: Hive
>  Issue Type: Bug
>Reporter: Peter Vary
>Assignee: Peter Varga
>Priority: Major
> Attachments: HIVE-23048.1.patch, HIVE-23048.10.patch, 
> HIVE-23048.11.patch, HIVE-23048.12.patch, HIVE-23048.13.patch, 
> HIVE-23048.2.patch, HIVE-23048.3.patch, HIVE-23048.4.patch, 
> HIVE-23048.5.patch, HIVE-23048.6.patch, HIVE-23048.7.patch, 
> HIVE-23048.8.patch, HIVE-23048.9.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23048) Use sequences for TXN_ID generation

2020-04-24 Thread Peter Varga (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Varga updated HIVE-23048:
---
Attachment: HIVE-23048.13.patch

> Use sequences for TXN_ID generation
> ---
>
> Key: HIVE-23048
> URL: https://issues.apache.org/jira/browse/HIVE-23048
> Project: Hive
>  Issue Type: Bug
>Reporter: Peter Vary
>Assignee: Peter Varga
>Priority: Major
> Attachments: HIVE-23048.1.patch, HIVE-23048.10.patch, 
> HIVE-23048.11.patch, HIVE-23048.12.patch, HIVE-23048.13.patch, 
> HIVE-23048.2.patch, HIVE-23048.3.patch, HIVE-23048.4.patch, 
> HIVE-23048.5.patch, HIVE-23048.6.patch, HIVE-23048.7.patch, 
> HIVE-23048.8.patch, HIVE-23048.9.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23048) Use sequences for TXN_ID generation

2020-04-24 Thread Peter Varga (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Varga updated HIVE-23048:
---
Status: In Progress  (was: Patch Available)

> Use sequences for TXN_ID generation
> ---
>
> Key: HIVE-23048
> URL: https://issues.apache.org/jira/browse/HIVE-23048
> Project: Hive
>  Issue Type: Bug
>Reporter: Peter Vary
>Assignee: Peter Varga
>Priority: Major
> Attachments: HIVE-23048.1.patch, HIVE-23048.10.patch, 
> HIVE-23048.11.patch, HIVE-23048.12.patch, HIVE-23048.2.patch, 
> HIVE-23048.3.patch, HIVE-23048.4.patch, HIVE-23048.5.patch, 
> HIVE-23048.6.patch, HIVE-23048.7.patch, HIVE-23048.8.patch, HIVE-23048.9.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23031) Add option to enable transparent rewrite of count(distinct) into sketch functions

2020-04-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23031?focusedWorklogId=427007&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-427007
 ]

ASF GitHub Bot logged work on HIVE-23031:
-

Author: ASF GitHub Bot
Created on: 24/Apr/20 15:34
Start Date: 24/Apr/20 15:34
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on a change in pull request #988:
URL: https://github.com/apache/hive/pull/988#discussion_r414669324



##
File path: ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
##
@@ -1967,6 +1968,13 @@ private RelNode applyPreJoinOrderingTransforms(RelNode 
basePlan, RelMetadataProv
   generatePartialProgram(program, false, HepMatchOrder.DEPTH_FIRST,
   HiveExceptRewriteRule.INSTANCE);
 
+  // ?

Review comment:
   Good point.
   I am not sure the decorrelation logic or any rule executed before this one 
would introduce a `count distinct` or any of the other functions that we will 
be targeting.
   However, if that were to become the case at some point, we probably would not 
want to be meddling with them in our new rules.
   Thus, executing it as the very first step, even before decorrelation, would 
make sense.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 427007)
Time Spent: 1h 20m  (was: 1h 10m)

> Add option to enable transparent rewrite of count(distinct) into sketch 
> functions
> -
>
> Key: HIVE-23031
> URL: https://issues.apache.org/jira/browse/HIVE-23031
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-23031.01.patch, HIVE-23031.02.patch, 
> HIVE-23031.03.patch, HIVE-23031.03.patch, HIVE-23031.03.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23031) Add option to enable transparent rewrite of count(distinct) into sketch functions

2020-04-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23031?focusedWorklogId=427005&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-427005
 ]

ASF GitHub Bot logged work on HIVE-23031:
-

Author: ASF GitHub Bot
Created on: 24/Apr/20 15:25
Start Date: 24/Apr/20 15:25
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on a change in pull request #988:
URL: https://github.com/apache/hive/pull/988#discussion_r414662832



##
File path: common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
##
@@ -2465,6 +2465,12 @@ private static void 
populateLlapDaemonVarsSet(Set llapDaemonVarsSetLocal
 "If the number of references to a CTE clause exceeds this threshold, 
Hive will materialize it\n" +
 "before executing the main query block. -1 will disable this 
feature."),
 
+
HIVE_OPTIMIZE_REWRITE_COUNTDISTINCT_ENABLED("hive.optimize.sketches.rewrite.countdistintct.enabled",
 false,
+"Enables to rewrite COUNT(DISTINCT(X)) queries to be rewritten to use 
sketch functions."),
+
+
HIVE_OPTIMIZE_REWRITE_COUNT_DISTINCT_SKETCHCLASS("hive.optimize.sketches.rewrite.countdistintct.sketchclass",
 "hll",

Review comment:
   What about simply `sketch`? Or `sketch type` I guess?
   Family may be confusing because in their documentation they associate 
families with how they are commonly used, so it seems wider indeed.
   http://datasketches.apache.org/docs/Architecture/SketchFeaturesMatrix.html.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 427005)
Time Spent: 1h 10m  (was: 1h)

> Add option to enable transparent rewrite of count(distinct) into sketch 
> functions
> -
>
> Key: HIVE-23031
> URL: https://issues.apache.org/jira/browse/HIVE-23031
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-23031.01.patch, HIVE-23031.02.patch, 
> HIVE-23031.03.patch, HIVE-23031.03.patch, HIVE-23031.03.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23031) Add option to enable transparent rewrite of count(distinct) into sketch functions

2020-04-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23031?focusedWorklogId=427002&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-427002
 ]

ASF GitHub Bot logged work on HIVE-23031:
-

Author: ASF GitHub Bot
Created on: 24/Apr/20 15:20
Start Date: 24/Apr/20 15:20
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on a change in pull request #988:
URL: https://github.com/apache/hive/pull/988#discussion_r414658473



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveRewriteCountDistinctToDataSketches.java
##
@@ -0,0 +1,175 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hive.ql.optimizer.calcite.rules;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import org.apache.calcite.plan.RelOptRule;
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.RelCollation;
+import org.apache.calcite.rel.RelNode;
+import org.apache.calcite.rel.core.Aggregate;
+import org.apache.calcite.rel.core.AggregateCall;
+import org.apache.calcite.rel.core.RelFactories.AggregateFactory;
+import org.apache.calcite.rel.core.RelFactories.ProjectFactory;
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.sql.SqlAggFunction;
+import org.apache.calcite.sql.SqlOperator;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
+import org.apache.hadoop.hive.ql.exec.DataSketchesFunctions;
+import org.apache.hadoop.hive.ql.optimizer.calcite.HiveRelFactories;
+import org.apache.hadoop.hive.ql.optimizer.calcite.reloperators.HiveAggregate;
+import org.apache.hive.plugin.api.HiveUDFPlugin.UDFDescriptor;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.collect.ImmutableList;
+
+/**
+ * This rule could rewrite {@code count(distinct(x))} calls to be calculated 
using sketch based functions.
+ */
+public final class HiveRewriteCountDistinctToDataSketches extends RelOptRule {
+
+  protected static final Logger LOG = 
LoggerFactory.getLogger(HiveRewriteCountDistinctToDataSketches.class);
+  private String sketchClass;
+
+  public HiveRewriteCountDistinctToDataSketches(HiveConf conf) {
+super(operand(HiveAggregate.class, any()));
+sketchClass = 
conf.getVar(ConfVars.HIVE_OPTIMIZE_REWRITE_COUNT_DISTINCT_SKETCHCLASS);
+  }
+
+  @Override
+  public void onMatch(RelOptRuleCall call) {
+final Aggregate aggregate = call.rel(0);
+
+if (aggregate.getGroupSets().size() != 1) {
+  // not yet supported
+  return;
+}
+
+List<AggregateCall> newAggCalls = new ArrayList<AggregateCall>();
+
+AggregateFactory f = HiveRelFactories.HIVE_AGGREGATE_FACTORY;

Review comment:
   About `HiveConf`, I suggested this because it makes rules easier to 
   instantiate by passing a well-defined parameter.
   
   About the factories, one could make a case that 1) if the rule is a static 
   final instance, you can pass the `HIVE_REL_BUILDER` to the RelOptRule 
   constructor, since you will not incur any additional instantiation cost per 
   query, and thus use call.builder(), and 2) if the rule is parameterized (as 
   is the case for this one), you have to instantiate it for every query 
   compilation, so your initial implementation using the factory directly would 
   work better from a performance point of view (instantiating a builder based 
   on `HIVE_REL_BUILDER` adds some overhead). If factories are not needed at 
   all, I guess `copy` may be an option too.
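
   A hedged sketch of alternative 1) above, using only standard Calcite APIs 
   (a RelOptRule constructor that takes a RelBuilderFactory, plus call.builder()); 
   the class name, rule body, and the use of RelFactories.LOGICAL_BUILDER as the 
   shared factory constant are placeholders, not the actual Hive implementation.

{code:java}
import org.apache.calcite.plan.RelOptRule;
import org.apache.calcite.plan.RelOptRuleCall;
import org.apache.calcite.rel.core.Aggregate;
import org.apache.calcite.rel.core.RelFactories;
import org.apache.calcite.tools.RelBuilder;

// Illustration only: a stateless rule kept as a static final instance can take the
// shared builder factory once, in its constructor, and then reuse call.builder()
// inside onMatch() without any per-query builder construction.
public final class StaticAggregateRewriteRule extends RelOptRule {

  public static final StaticAggregateRewriteRule INSTANCE = new StaticAggregateRewriteRule();

  private StaticAggregateRewriteRule() {
    // RelFactories.LOGICAL_BUILDER stands in for the shared factory constant
    // mentioned in the comment above.
    super(operand(Aggregate.class, any()), RelFactories.LOGICAL_BUILDER,
        "StaticAggregateRewriteRule");
  }

  @Override
  public void onMatch(RelOptRuleCall call) {
    RelBuilder builder = call.builder(); // comes from the factory passed to the constructor
    // ... build the rewritten plan with the builder and call.transformTo(...) ...
  }
}
{code}

   A parameterized rule like the one in this patch is instantiated per query 
   compilation, so calling the aggregate/project factories directly, as the 
   initial version did, avoids that per-instantiation builder setup.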





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 427002)
Time Spent: 1h  (was: 50m)

> Add option to enable transparent rewrite of count(distinct) into sketch 
> functions
> -

[jira] [Commented] (HIVE-23031) Add option to enable transparent rewrite of count(distinct) into sketch functions

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091645#comment-17091645
 ] 

Hive QA commented on HIVE-23031:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13001029/HIVE-23031.03.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17142 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21917/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21917/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21917/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13001029 - PreCommit-HIVE-Build

> Add option to enable transparent rewrite of count(distinct) into sketch 
> functions
> -
>
> Key: HIVE-23031
> URL: https://issues.apache.org/jira/browse/HIVE-23031
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-23031.01.patch, HIVE-23031.02.patch, 
> HIVE-23031.03.patch, HIVE-23031.03.patch, HIVE-23031.03.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23275) Represent UNBOUNDED in window functions in CBO correctly

2020-04-24 Thread Jesus Camacho Rodriguez (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091639#comment-17091639
 ] 

Jesus Camacho Rodriguez commented on HIVE-23275:


[~vgarg], [~kgyrtkirk], can you take a look? Thanks
https://github.com/apache/hive/pull/993

> Represent UNBOUNDED in window functions in CBO correctly
> 
>
> Key: HIVE-23275
> URL: https://issues.apache.org/jira/browse/HIVE-23275
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-23275.01.patch, HIVE-23275.01.patch, 
> HIVE-23275.patch, HIVE-23275.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently we use a bounded representation with bound set to 
> Integer.MAX_VALUE, which works correctly since that is the Hive 
> implementation. However, Calcite has a specific boundary class 
> {{RexWindowBoundUnbounded}} that we should be using instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23275) Represent UNBOUNDED in window functions in CBO correctly

2020-04-24 Thread Jesus Camacho Rodriguez (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-23275:
---
Attachment: HIVE-23275.01.patch

> Represent UNBOUNDED in window functions in CBO correctly
> 
>
> Key: HIVE-23275
> URL: https://issues.apache.org/jira/browse/HIVE-23275
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-23275.01.patch, HIVE-23275.01.patch, 
> HIVE-23275.patch, HIVE-23275.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently we use a bounded representation with bound set to 
> Integer.MAX_VALUE, which works correctly since that is the Hive 
> implementation. However, Calcite has a specific boundary class 
> {{RexWindowBoundUnbounded}} that we should be using instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23291) Add Hive to DatabaseType in JDBC storage handler

2020-04-24 Thread Jesus Camacho Rodriguez (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091638#comment-17091638
 ] 

Jesus Camacho Rodriguez commented on HIVE-23291:


[~kgyrtkirk], [~vgarg], can you take a look? Thanks

> Add Hive to DatabaseType in JDBC storage handler
> 
>
> Key: HIVE-23291
> URL: https://issues.apache.org/jira/browse/HIVE-23291
> Project: Hive
>  Issue Type: Improvement
>  Components: StorageHandler
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-23291.patch
>
>
> Inception.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-19064) Add mode to support delimited identifiers enclosed within double quotation

2020-04-24 Thread Jesus Camacho Rodriguez (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-19064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091635#comment-17091635
 ] 

Jesus Camacho Rodriguez commented on HIVE-19064:


[~belugabehr], we are aiming at the SQL standard and a drop-in replacement for a 
large variety of RDBMSs (not only MySQL).

The change that you are proposing is an extension to the SQL standard mode 
proposed here. In fact, the ANSI_QUOTES mode in MySQL does not allow double 
quotes for the literals, similar to what we are doing here:
{quote}
The ANSI_QUOTES mode causes the server to interpret double-quoted strings as 
identifiers. Consequently, when this mode is enabled, string literals must be 
enclosed within single quotation marks. 
{quote}
That is why it is behind an option in MySQL too, because it is not backwards 
compatible. Queries like {{select "col1" from table}} become ambiguous in a 
combined mode (in legacy behavior, that should be a literal; in SQL standard, 
that should be an identifier).

If it is widely requested that we support backticks for identifiers in the SQL 
standard mode, we could consider adding that in a follow-up JIRA (fwiw, after 
this JIRA is committed, implementation-wise it should be straightforward). 
However, I am not sure why having a 'standard' mode that adheres to the SQL 
standard should be a cause of any concern.

> Add mode to support delimited identifiers enclosed within double quotation
> --
>
> Key: HIVE-19064
> URL: https://issues.apache.org/jira/browse/HIVE-19064
> Project: Hive
>  Issue Type: Improvement
>  Components: Parser, SQL
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Krisztian Kasa
>Priority: Major
> Attachments: HIVE-19064.01.patch, HIVE-19064.02.patch, 
> HIVE-19064.03.patch, HIVE-19064.4.patch, HIVE-19064.5.patch, 
> HIVE-19064.6.patch, HIVE-19064.7.patch, HIVE-19064.7.patch
>
>
> As per SQL standard. Hive currently uses `` (backticks). Default will 
> continue being backticks, but we will support identifiers within double 
> quotation via configuration parameter.
> This issue will also extend support for arbitrary char sequences, e.g., 
> containing {{~ ! @ # $ % ^ & * () , < >}}, in database and table names. 
> Currently, special characters are only supported for column names.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23031) Add option to enable transparent rewrite of count(distinct) into sketch functions

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091627#comment-17091627
 ] 

Hive QA commented on HIVE-23031:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
35s{color} | {color:blue} common in master has 63 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
52s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
44s{color} | {color:red} ql: The patch generated 3 new + 103 unchanged - 0 
fixed = 106 total (was 103) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m  
1s{color} | {color:red} ql generated 3 new + 1530 unchanged - 0 fixed = 1533 
total (was 1530) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Format string "%s" needs argument 2 but only 1 are provided in 
org.apache.hadoop.hive.ql.exec.DataSketchesFunctions.getSketchFunction(String, 
String)  At DataSketchesFunctions.java:2 but only 1 are provided in 
org.apache.hadoop.hive.ql.exec.DataSketchesFunctions.getSketchFunction(String, 
String)  At DataSketchesFunctions.java:[line 106] |
|  |  Dead store to f in 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRewriteCountDistinctToDataSketches.onMatch(RelOptRuleCall)
  At 
HiveRewriteCountDistinctToDataSketches.java:org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRewriteCountDistinctToDataSketches.onMatch(RelOptRuleCall)
  At HiveRewriteCountDistinctToDataSketches.java:[line 70] |
|  |  Dead store to newAggCalls in 
org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRewriteCountDistinctToDataSketches.onMatch(RelOptRuleCall)
  At 
HiveRewriteCountDistinctToDataSketches.java:org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRewriteCountDistinctToDataSketches.onMatch(RelOptRuleCall)
  At HiveRewriteCountDistinctToDataSketches.java:[line 68] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21917/dev-support/hive-personality.sh
 |
| git revision | master / 88053b2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21917/yetus/diff-checkstyle-ql.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreC

[jira] [Updated] (HIVE-23244) Extract Create View analyzer from SemanticAnalyzer

2020-04-24 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-23244:
--
Attachment: HIVE-23244.03.patch

> Extract Create View analyzer from SemanticAnalyzer
> --
>
> Key: HIVE-23244
> URL: https://issues.apache.org/jira/browse/HIVE-23244
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23244.01.patch, HIVE-23244.02.patch, 
> HIVE-23244.03.patch
>
>
> Create View commands are not queries, but commands which have queries as a 
> part of them. Therefore a separate CreateViewAnalyzer is needed, which uses 
> SemanticAnalyzer to analyze its query.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23272) Fix and reenable timestamptz_2.q

2020-04-24 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-23272:
--
Attachment: HIVE-23272.03.patch

> Fix and reenable timestamptz_2.q
> 
>
> Key: HIVE-23272
> URL: https://issues.apache.org/jira/browse/HIVE-23272
> Project: Hive
>  Issue Type: Test
>Reporter: Peter Vary
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23272.01.patch, HIVE-23272.02.patch, 
> HIVE-23272.03.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23272) Fix and reenable timestamptz_2.q

2020-04-24 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-23272:
--
Attachment: (was: HIVE-23272.03.patch)

> Fix and reenable timestamptz_2.q
> 
>
> Key: HIVE-23272
> URL: https://issues.apache.org/jira/browse/HIVE-23272
> Project: Hive
>  Issue Type: Test
>Reporter: Peter Vary
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23272.01.patch, HIVE-23272.02.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23244) Extract Create View analyzer from SemanticAnalyzer

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091602#comment-17091602
 ] 

Hive QA commented on HIVE-23244:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13001023/HIVE-23244.02.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 17141 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_bucket_sort_reducers_power_two]
 (batchId=7)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[lineage3] 
(batchId=90)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21916/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21916/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21916/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13001023 - PreCommit-HIVE-Build

> Extract Create View analyzer from SemanticAnalyzer
> --
>
> Key: HIVE-23244
> URL: https://issues.apache.org/jira/browse/HIVE-23244
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23244.01.patch, HIVE-23244.02.patch
>
>
> Create View commands are not queries, but commands which have queries as a 
> part of them. Therefore a separate CreateViewAnalyzer is needed, which uses 
> SemanticAnalyzer to analyze its query.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23244) Extract Create View analyzer from SemanticAnalyzer

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091589#comment-17091589
 ] 

Hive QA commented on HIVE-23244:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
38s{color} | {color:blue} common in master has 63 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
56s{color} | {color:blue} parser in master has 3 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
58s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
47s{color} | {color:red} ql: The patch generated 2 new + 590 unchanged - 8 
fixed = 592 total (was 598) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m  
7s{color} | {color:red} ql generated 1 new + 1529 unchanged - 1 fixed = 1530 
total (was 1530) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  The field org.apache.hadoop.hive.ql.plan.LoadFileDesc.createViewDesc is 
transient but isn't set by deserialization  In LoadFileDesc.java:but isn't set 
by deserialization  In LoadFileDesc.java |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21916/dev-support/hive-personality.sh
 |
| git revision | master / 88053b2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21916/yetus/diff-checkstyle-ql.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21916/yetus/whitespace-eol.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21916/yetus/new-findbugs-ql.html
 |
| modules | C: common parser ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21916/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Extract Create View analyzer from SemanticAnalyzer
> --
>
> Key: HIVE-23244
> URL: https://issues.apache.org/jira/browse/HIVE-23244
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
> 

[jira] [Updated] (HIVE-23201) Improve logging in locking

2020-04-24 Thread Marton Bod (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Bod updated HIVE-23201:
--
Attachment: HIVE-23201.10.patch

> Improve logging in locking
> --
>
> Key: HIVE-23201
> URL: https://issues.apache.org/jira/browse/HIVE-23201
> Project: Hive
>  Issue Type: Improvement
>Reporter: Marton Bod
>Assignee: Marton Bod
>Priority: Major
> Attachments: HIVE-23201.1.patch, HIVE-23201.1.patch, 
> HIVE-23201.10.patch, HIVE-23201.2.patch, HIVE-23201.2.patch, 
> HIVE-23201.3.patch, HIVE-23201.4.patch, HIVE-23201.5.patch, 
> HIVE-23201.5.patch, HIVE-23201.5.patch, HIVE-23201.5.patch, 
> HIVE-23201.6.patch, HIVE-23201.6.patch, HIVE-23201.7.patch, 
> HIVE-23201.8.patch, HIVE-23201.8.patch, HIVE-23201.9.patch
>
>
> Currently it can be quite difficult to troubleshoot issues related to 
> locking. To understand why a particular txn gave up after a while on 
> acquiring a lock, you have to connect directly to the backend DB, since we 
> are not logging right now which exact locks the txn is waiting for.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-19064) Add mode to support delimited identifiers enclosed within double quotation

2020-04-24 Thread David Mollitor (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-19064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091564#comment-17091564
 ] 

David Mollitor commented on HIVE-19064:
---

I'll also say that these "modes" add a lot more internal complications.  If the 
mode is {{standard}}, what about {{SHOW CREATE TABLE}}?  Will it now be altered 
to display the CREATE TABLE statement in a format that is valid in each mode?  
What about error messages that may currently encapsulate the error information 
in backticks?  And if I want to use different 3rd party tools that use different 
syntax (some may use backticks and some may use double quotes), there is no mode 
that allows me to interoperate with all of these tools at the same time.

Again, it would be much better to have a way to accept literal values, 
backticked values, AND double quotes on top of it.

> Add mode to support delimited identifiers enclosed within double quotation
> --
>
> Key: HIVE-19064
> URL: https://issues.apache.org/jira/browse/HIVE-19064
> Project: Hive
>  Issue Type: Improvement
>  Components: Parser, SQL
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Krisztian Kasa
>Priority: Major
> Attachments: HIVE-19064.01.patch, HIVE-19064.02.patch, 
> HIVE-19064.03.patch, HIVE-19064.4.patch, HIVE-19064.5.patch, 
> HIVE-19064.6.patch, HIVE-19064.7.patch, HIVE-19064.7.patch
>
>
> As per SQL standard. Hive currently uses `` (backticks). Default will 
> continue being backticks, but we will support identifiers within double 
> quotation via configuration parameter.
> This issue will also extend support for arbitrary char sequences, e.g., 
> containing {{~ ! @ # $ % ^ & * () , < >}}, in database and table names. 
> Currently, special characters are only supported for column names.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23274) Move q tests to TestMiniLlapLocal from TestCliDriver where the output is different, batch 1

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091545#comment-17091545
 ] 

Hive QA commented on HIVE-23274:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13001016/HIVE-23274.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 18 failed/errored test(s), 17114 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[alter_change_db_location]
 (batchId=57)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[alter_db_owner]
 (batchId=85)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[alter_partition_coltype]
 (batchId=70)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[annotate_stats_groupby]
 (batchId=88)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[annotate_stats_join_pkfk]
 (batchId=61)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[authorization_9]
 (batchId=93)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[authorization_owner_actions_db]
 (batchId=83)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[avrotblsjoin]
 (batchId=74)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[char_serde] 
(batchId=92)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[database_location]
 (batchId=118)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[database_properties]
 (batchId=99)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[db_ddl_explain]
 (batchId=118)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[decimal_precision2]
 (batchId=90)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[describe_database]
 (batchId=78)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[distinct_windowing]
 (batchId=59)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[distinct_windowing_no_cbo]
 (batchId=100)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[gby_star] 
(batchId=63)
org.apache.hadoop.hive.ql.parse.TestReplicationScenariosAcrossInstances.org.apache.hadoop.hive.ql.parse.TestReplicationScenariosAcrossInstances
 (batchId=195)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21915/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21915/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21915/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 18 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13001016 - PreCommit-HIVE-Build

> Move q tests to TestMiniLlapLocal from TestCliDriver where the output is 
> different, batch 1
> ---
>
> Key: HIVE-23274
> URL: https://issues.apache.org/jira/browse/HIVE-23274
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23274.01.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23274) Move q tests to TestMiniLlapLocal from TestCliDriver where the output is different, batch 1

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091507#comment-17091507
 ] 

Hive QA commented on HIVE-23274:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  3m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21915/dev-support/hive-personality.sh
 |
| git revision | master / 88053b2 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21915/yetus/whitespace-eol.txt
 |
| modules | C: ql itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21915/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Move q tests to TestMiniLlapLocal from TestCliDriver where the output is 
> different, batch 1
> ---
>
> Key: HIVE-23274
> URL: https://issues.apache.org/jira/browse/HIVE-23274
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23274.01.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23216) Add new api as replacement of get_partitions_by_expr to return PartitionSpec instead of Partitions

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091501#comment-17091501
 ] 

Hive QA commented on HIVE-23216:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13001015/HIVE-23216.6.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17141 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[multi_insert_partitioned]
 (batchId=98)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21914/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21914/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21914/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13001015 - PreCommit-HIVE-Build

> Add new api as replacement of get_partitions_by_expr to return PartitionSpec 
> instead of Partitions
> --
>
> Key: HIVE-23216
> URL: https://issues.apache.org/jira/browse/HIVE-23216
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 4.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-23216.1.patch, HIVE-23216.2.patch, 
> HIVE-23216.3.patch, HIVE-23216.4.patch, HIVE-23216.5.patch, HIVE-23216.6.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23216) Add new api as replacement of get_partitions_by_expr to return PartitionSpec instead of Partitions

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091495#comment-17091495
 ] 

Hive QA commented on HIVE-23216:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
44s{color} | {color:blue} standalone-metastore/metastore-common in master has 
35 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
24s{color} | {color:blue} standalone-metastore/metastore-server in master has 
189 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
51s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
19s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
18s{color} | {color:red} standalone-metastore/metastore-common: The patch 
generated 3 new + 499 unchanged - 0 fixed = 502 total (was 499) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
26s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 1 new + 663 unchanged - 0 fixed = 664 total (was 663) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
25s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m  
4s{color} | {color:red} standalone-metastore_metastore-common generated 4 new + 
61 unchanged - 0 fixed = 65 total (was 61) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21914/dev-support/hive-personality.sh
 |
| git revision | master / 88053b2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21914/yetus/diff-checkstyle-standalone-metastore_metastore-common.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21914/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt
 |
| javadoc | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21914/yetus/diff-javadoc-javadoc-standalone-metastore_metastore-common.txt
 |
| modules | C: standalone-metastore/metastore-common 
standalone-metastore/metastore-server ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21914/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Add new api as replacement of get_partitions_by_expr to return PartitionSpec 
> instead of Partitions
> ---

[jira] [Updated] (HIVE-23295) Possible NPE when on getting predicate literal list when dynamic values are not available

2020-04-24 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-23295:
-
Status: Patch Available  (was: Open)

> Possible NPE when on getting predicate literal list when dynamic values are 
> not available
> -
>
> Key: HIVE-23295
> URL: https://issues.apache.org/jira/browse/HIVE-23295
> Project: Hive
>  Issue Type: Bug
>  Components: storage-api
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23295.1.patch
>
>
> getLiteralList() in SearchArgumentImpl$PredicateLeafImpl returns null if 
> dynamic values are not available.
> {code:java}
> @Override
> public List getLiteralList() {
>   if (literalList != null && literalList.size() > 0 && literalList.get(0) 
> instanceof LiteralDelegate) {
> List newLiteraList = new ArrayList();
> try {
>   for (Object litertalObj : literalList) {
> Object literal = ((LiteralDelegate) litertalObj).getLiteral();
> if (literal != null) {
>   newLiteraList.add(literal);
> }
>   }
> } catch (NoDynamicValuesException err) {
>   LOG.debug("Error while retrieving literalList, returning null", err);
>   return null;
> }
> return newLiteraList;
>   }
>   return literalList;
> } {code}
>  
> There are multiple call sites where the return value is used without a null 
> check. E.g:  leaf.getLiteralList().stream(). 
>  
> The return null was added as part of HIVE-18827 to avoid having an 
> unimportant warning message when dynamic values have not been delivered yet.
>  
> [~sershe], [~jdere], I propose return an empty list instead of null in a case 
> like this. What do you think?
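
A minimal standalone sketch of the proposal above (hypothetical names, not the 
actual SearchArgumentImpl code): when resolving the delegated literals fails 
because dynamic values are not yet available, return an immutable empty list 
instead of null, so call sites such as leaf.getLiteralList().stream() keep 
working without null checks.

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.function.Function;

public final class LiteralListSketch {

  // getLiteral stands in for LiteralDelegate.getLiteral(); the caught
  // RuntimeException stands in for NoDynamicValuesException.
  public static List<Object> resolve(List<Object> delegates, Function<Object, Object> getLiteral) {
    if (delegates == null || delegates.isEmpty()) {
      return delegates;
    }
    List<Object> resolved = new ArrayList<>();
    try {
      for (Object delegate : delegates) {
        Object literal = getLiteral.apply(delegate);
        if (literal != null) {
          resolved.add(literal);
        }
      }
    } catch (RuntimeException valuesNotAvailableYet) {
      return Collections.emptyList(); // proposed: empty list rather than null
    }
    return resolved;
  }
}
{code}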



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23295) Possible NPE when on getting predicate literal list when dynamic values are not available

2020-04-24 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-23295:
-
Attachment: HIVE-23295.1.patch

> Possible NPE when on getting predicate literal list when dynamic values are 
> not available
> -
>
> Key: HIVE-23295
> URL: https://issues.apache.org/jira/browse/HIVE-23295
> Project: Hive
>  Issue Type: Bug
>  Components: storage-api
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23295.1.patch
>
>
> getLiteralList() in SearchArgumentImpl$PredicateLeafImpl returns null if 
> dynamic values are not available.
> {code:java}
> @Override
> public List getLiteralList() {
>   if (literalList != null && literalList.size() > 0 && literalList.get(0) 
> instanceof LiteralDelegate) {
> List newLiteraList = new ArrayList();
> try {
>   for (Object litertalObj : literalList) {
> Object literal = ((LiteralDelegate) litertalObj).getLiteral();
> if (literal != null) {
>   newLiteraList.add(literal);
> }
>   }
> } catch (NoDynamicValuesException err) {
>   LOG.debug("Error while retrieving literalList, returning null", err);
>   return null;
> }
> return newLiteraList;
>   }
>   return literalList;
> } {code}
>  
> There are multiple call sites where the return value is used without a null 
> check. E.g:  leaf.getLiteralList().stream(). 
>  
> The return null was added as part of HIVE-18827 to avoid having an 
> unimportant warning message when dynamic values have not been delivered yet.
>  
> [~sershe], [~jdere], I propose return an empty list instead of null in a case 
> like this. What do you think?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-21000) Upgrade thrift to at least 0.10.0

2020-04-24 Thread Ivan Suller (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Suller reassigned HIVE-21000:
--

Assignee: (was: Ivan Suller)

> Upgrade thrift to at least 0.10.0
> -
>
> Key: HIVE-21000
> URL: https://issues.apache.org/jira/browse/HIVE-21000
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21000.01.patch, HIVE-21000.02.patch, 
> HIVE-21000.03.patch, HIVE-21000.04.patch, HIVE-21000.05.patch, 
> HIVE-21000.06.patch, HIVE-21000.07.patch, HIVE-21000.08.patch, 
> sampler_before.png
>
>
> I was looking into some compile profiles for tables with lots of columns; and 
> it turned out that [thrift 0.9.3 is allocating a 
> List|https://github.com/apache/hive/blob/8e30b5e029570407d8a1db67d322a95db705750e/standalone-metastore/metastore-common/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/FieldSchema.java#L348]
>  during every hashcode calculation; but luckily THRIFT-2877 is improving on 
> that - so I propose to upgrade to at least 0.10.0 
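
For illustration only, and not the actual Thrift-generated code: the kind of 
allocation pattern the description points at, where hashCode() builds a 
temporary List on every call, contrasted with a plain accumulator that 
allocates nothing (equals() is omitted for brevity).

{code:java}
import java.util.ArrayList;
import java.util.List;

final class FieldSchemaLike {
  private final String name;
  private final String type;

  FieldSchemaLike(String name, String type) {
    this.name = name;
    this.type = type;
  }

  // Old-style pattern: a fresh ArrayList is allocated on every hashCode() call.
  int hashCodeWithList() {
    List<Object> list = new ArrayList<Object>();
    list.add(name);
    list.add(type);
    return list.hashCode();
  }

  // Allocation-free alternative: accumulate the hash directly.
  @Override
  public int hashCode() {
    int h = 17;
    h = 31 * h + (name == null ? 0 : name.hashCode());
    h = 31 * h + (type == null ? 0 : type.hashCode());
    return h;
  }
}
{code}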



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-22220) Upgrade Accumulo version to the latest 2.0 release

2020-04-24 Thread Ivan Suller (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Suller reassigned HIVE-22220:
--

Assignee: (was: Ivan Suller)

> Upgrade Accumulo version to the latest 2.0 release
> --
>
> Key: HIVE-22220
> URL: https://issues.apache.org/jira/browse/HIVE-22220
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ivan Suller
>Priority: Major
>
> To make the Thrift upgrade (HIVE-21000) possible, we have to upgrade Accumulo 
> to at least 2.0



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23275) Represent UNBOUNDED in window functions in CBO correctly

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091464#comment-17091464
 ] 

Hive QA commented on HIVE-23275:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13001012/HIVE-23275.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17141 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.service.auth.TestImproperTrustDomainAuthenticationBinary.org.apache.hive.service.auth.TestImproperTrustDomainAuthenticationBinary
 (batchId=210)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21913/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21913/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21913/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13001012 - PreCommit-HIVE-Build

> Represent UNBOUNDED in window functions in CBO correctly
> 
>
> Key: HIVE-23275
> URL: https://issues.apache.org/jira/browse/HIVE-23275
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-23275.01.patch, HIVE-23275.patch, HIVE-23275.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently we use a bounded representation with bound set to 
> Integer.MAX_VALUE, which works correctly since that is the Hive 
> implementation. However, Calcite has a specific boundary class 
> {{RexWindowBoundUnbounded}} that we should be using instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-23297) Precompile statements where needed across TxnHandler

2020-04-24 Thread Marton Bod (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Bod reassigned HIVE-23297:
-


> Precompile statements where needed across TxnHandler
> 
>
> Key: HIVE-23297
> URL: https://issues.apache.org/jira/browse/HIVE-23297
> Project: Hive
>  Issue Type: Improvement
>Reporter: Marton Bod
>Assignee: Marton Bod
>Priority: Major
>
> There are multiple places in TxnHandler that could benefit from pre-compiling 
> SQL queries using prepared statements. Some queries are complex in structure 
> (e.g. checkLock), and we currently rebuild them with string concatenation every 
> time the query runs, re-parsing the query structure each time as well.
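
A hedged sketch of the idea using plain JDBC; the table and column names 
(HIVE_LOCKS / HL_TXNID) are assumptions for illustration and this is not 
TxnHandler code. The query text stays constant, so the driver can prepare it 
once, and the parameter is bound instead of concatenated into the string.

{code:java}
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public final class LockCountSketch {

  // Instead of rebuilding "SELECT ... WHERE HL_TXNID = " + txnId on every call,
  // the statement is parameterized and can be prepared (and cached) up front.
  public static long countLocksForTxn(Connection conn, long txnId) throws SQLException {
    final String sql = "SELECT COUNT(*) FROM HIVE_LOCKS WHERE HL_TXNID = ?";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
      ps.setLong(1, txnId);
      try (ResultSet rs = ps.executeQuery()) {
        rs.next();
        return rs.getLong(1);
      }
    }
  }
}
{code}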



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23283) Generate random temp ID for lock enqueue and commitTxn

2020-04-24 Thread Denys Kuzmenko (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091453#comment-17091453
 ] 

Denys Kuzmenko commented on HIVE-23283:
---

+1, pending tests

> Generate random temp ID for lock enqueue and commitTxn
> --
>
> Key: HIVE-23283
> URL: https://issues.apache.org/jira/browse/HIVE-23283
> Project: Hive
>  Issue Type: Improvement
>Reporter: Marton Bod
>Assignee: Marton Bod
>Priority: Major
> Attachments: HIVE-23283.1.patch
>
>
> In order to optimize the S4U scope of enqueue lock and commitTxn, currently a 
> hardcoded constant (-1) is used to first insert all the lock and ws entries 
> with a temporary lockID/commitID. However, in a concurrent environment this 
> seems to cause some performance degradation (and deadlock issues with some 
> rdbms) as multiple concurrent transactions are trying to insert rows with the 
> same primary key (e.g. (-1, 1), (-1, 2), (-1, 3), .. etc. for (extID/intID) 
> in HIVE_LOCKS). The proposed solution is to replace the constant with a 
> randomly generated negative number, which seems to resolve this issue.
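
A minimal sketch of that idea (a hypothetical helper, not the actual TxnHandler 
change): draw a random negative temporary ID instead of sharing the constant 
-1, so concurrent transactions inserting their placeholder rows do not contend 
on the same key.

{code:java}
import java.util.concurrent.ThreadLocalRandom;

public final class TempLockIdSketch {

  // Returns a value in [-Long.MAX_VALUE, -1]; always negative, so it cannot
  // clash with the real, positive IDs assigned later.
  public static long nextTempId() {
    return -(ThreadLocalRandom.current().nextLong(Long.MAX_VALUE) + 1);
  }

  public static void main(String[] args) {
    System.out.println("temporary id: " + nextTempId());
  }
}
{code}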



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23273) Add fix order to cbo_limit.q queries + improve readability

2020-04-24 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-23273:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add fix order to cbo_limit.q queries + improve readability
> --
>
> Key: HIVE-23273
> URL: https://issues.apache.org/jira/browse/HIVE-23273
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23273.01.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23296) Setting Tez caller ID with the actual Hive user

2020-04-24 Thread Eugene Chung (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Chung updated HIVE-23296:

Description: 
In a kerberized Hadoop environment, the submitter of a YARN job is the name 
part of the Hive server principal. A caller ID of the job is made of the OS 
user of the Hive server process.

The view and modify ACLs of the Hive server for all Tez tasks are set by 
org.apache.hadoop.hive.ql.exec.tez.TezTask#setAccessControlsForCurrentUser() so 
that the admin who has the Hive server principal can see all tasks from tez-ui. 
But the admin hardly knows who executed each query.

I suggest to change the caller ID to include the actual Hive user. If the user 
is not known, the OS user of the Hive server process is included as is.

The attached picture shows that 'Caller ID' includes 'user1' which is the 
Kerberos user name of the actual Hive user.

!Screen Shot 2020-04-24 at 17.20.34.png|width=683,height=29!

  was:
In a kerberized Hadoop environment, the submitter of a YARN job is the name 
part of the Hive server principal. A caller ID of the job is made of the OS 
user of the Hive server process.

The view and modify ACLs of the Hive server admin for all Tez tasks are set by 
org.apache.hadoop.hive.ql.exec.tez.TezTask#setAccessControlsForCurrentUser() so 
that the admin can see all tasks from tez-ui. But the admin hardly knows who 
executed each query.

I suggest to change the caller ID to include the actual Hive user. If the user 
is not known, the OS user of the Hive server process is included as is.

The attached picture shows that 'Caller ID' includes 'user1' which is the 
Kerberos user name of the actual Hive user.

!Screen Shot 2020-04-24 at 17.20.34.png|width=683,height=29!


> Setting Tez caller ID with the actual Hive user
> ---
>
> Key: HIVE-23296
> URL: https://issues.apache.org/jira/browse/HIVE-23296
> Project: Hive
>  Issue Type: Improvement
>  Components: Tez
>Reporter: Eugene Chung
>Assignee: Eugene Chung
>Priority: Major
> Attachments: HIVE-23296.01.patch, Screen Shot 2020-04-24 at 
> 17.20.34.png
>
>
> In a kerberized Hadoop environment, the submitter of a YARN job is the name 
> part of the Hive server principal. A caller ID of the job is made of the OS 
> user of the Hive server process.
> The view and modify ACLs of the Hive server for all Tez tasks are set by 
> org.apache.hadoop.hive.ql.exec.tez.TezTask#setAccessControlsForCurrentUser() 
> so that the admin who has the Hive server principal can see all tasks from 
> tez-ui. But the admin hardly knows who executed each query.
> I suggest to change the caller ID to include the actual Hive user. If the 
> user is not known, the OS user of the Hive server process is included as is.
> The attached picture shows that 'Caller ID' includes 'user1' which is the 
> Kerberos user name of the actual Hive user.
> !Screen Shot 2020-04-24 at 17.20.34.png|width=683,height=29!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23296) Setting Tez caller ID with the actual Hive user

2020-04-24 Thread Eugene Chung (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Chung updated HIVE-23296:

Summary: Setting Tez caller ID with the actual Hive user  (was: Setting Tez 
caller ID with the Hive session user)

> Setting Tez caller ID with the actual Hive user
> ---
>
> Key: HIVE-23296
> URL: https://issues.apache.org/jira/browse/HIVE-23296
> Project: Hive
>  Issue Type: Improvement
>  Components: Tez
>Reporter: Eugene Chung
>Assignee: Eugene Chung
>Priority: Major
> Attachments: HIVE-23296.01.patch, Screen Shot 2020-04-24 at 
> 17.20.34.png
>
>
> In a kerberized Hadoop environment, the submitter of a YARN job is the name 
> part of the Hive server principal. A caller ID of the job is made of the OS 
> user of the Hive server process.
> The view and modify ACLs of the Hive server admin for all Tez tasks are set 
> by 
> org.apache.hadoop.hive.ql.exec.tez.TezTask#setAccessControlsForCurrentUser() 
> so that the admin can see all tasks from tez-ui. But the admin hardly knows 
> who executed each query.
> I suggest to change the caller ID to include the actual Hive user. If the 
> user is not known, the OS user of the Hive server process is included as is.
> The attached picture shows that 'Caller ID' includes 'user1' which is the 
> Kerberos user name of the actual Hive user.
> !Screen Shot 2020-04-24 at 17.20.34.png|width=683,height=29!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23296) Setting Tez caller ID with the Hive session user

2020-04-24 Thread Eugene Chung (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Chung updated HIVE-23296:

Attachment: HIVE-23296.01.patch
Status: Patch Available  (was: Open)

> Setting Tez caller ID with the Hive session user
> 
>
> Key: HIVE-23296
> URL: https://issues.apache.org/jira/browse/HIVE-23296
> Project: Hive
>  Issue Type: Improvement
>  Components: Tez
>Reporter: Eugene Chung
>Assignee: Eugene Chung
>Priority: Major
> Attachments: HIVE-23296.01.patch, Screen Shot 2020-04-24 at 
> 17.20.34.png
>
>
> In a kerberized Hadoop environment, the submitter of a YARN job is the name
> part of the Hive server principal, and the caller ID of the job is built from
> the OS user of the Hive server process.
> The view and modify ACLs of the Hive server admin for all Tez tasks are set
> by org.apache.hadoop.hive.ql.exec.tez.TezTask#setAccessControlsForCurrentUser()
> so that the admin can see all tasks from tez-ui. However, the admin cannot
> easily tell who executed each query.
> I suggest changing the caller ID to include the actual Hive user. If the
> user is not known, the OS user of the Hive server process is included as is.
> The attached picture shows that 'Caller ID' includes 'user1', which is the
> Kerberos user name of the actual Hive user.
> !Screen Shot 2020-04-24 at 17.20.34.png|width=683,height=29!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23275) Represent UNBOUNDED in window functions in CBO correctly

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091447#comment-17091447
 ] 

Hive QA commented on HIVE-23275:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
58s{color} | {color:blue} ql in master has 1530 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-21913/dev-support/hive-personality.sh
 |
| git revision | master / e9aa220 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-21913/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Represent UNBOUNDED in window functions in CBO correctly
> 
>
> Key: HIVE-23275
> URL: https://issues.apache.org/jira/browse/HIVE-23275
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-23275.01.patch, HIVE-23275.patch, HIVE-23275.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently we use a bounded representation with the bound set to
> Integer.MAX_VALUE, which works correctly since that matches the Hive
> implementation. However, Calcite has a specific boundary class,
> {{RexWindowBoundUnbounded}}, that we should be using instead.
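
As a hedged illustration of the Calcite side (assumed API usage, not the
actual patch), building the boundary from an UNBOUNDED PRECEDING/FOLLOWING
SqlNode via RexWindowBound.create yields the dedicated unbounded
representation instead of a bounded one carrying an Integer.MAX_VALUE offset:

{code:java}
import org.apache.calcite.rex.RexWindowBound;
import org.apache.calcite.sql.SqlWindow;
import org.apache.calcite.sql.parser.SqlParserPos;

public final class UnboundedBoundSketch {
  // Returns Calcite's unbounded lower boundary (a RexWindowBoundUnbounded
  // underneath) rather than a bounded boundary with an Integer.MAX_VALUE offset.
  public static RexWindowBound unboundedPreceding() {
    return RexWindowBound.create(SqlWindow.createUnboundedPreceding(SqlParserPos.ZERO), null);
  }

  public static RexWindowBound unboundedFollowing() {
    return RexWindowBound.create(SqlWindow.createUnboundedFollowing(SqlParserPos.ZERO), null);
  }
}
{code}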



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23296) Setting Tez caller ID with the Hive session user

2020-04-24 Thread Eugene Chung (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Chung updated HIVE-23296:

Description: 
In a kerberized Hadoop environment, the submitter of a YARN job is the name
part of the Hive server principal, and the caller ID of the job is built from
the OS user of the Hive server process.

The view and modify ACLs of the Hive server admin for all Tez tasks are set by
org.apache.hadoop.hive.ql.exec.tez.TezTask#setAccessControlsForCurrentUser() so
that the admin can see all tasks from tez-ui. However, the admin cannot easily
tell who executed each query.

I suggest changing the caller ID to include the actual Hive user. If the user
is not known, the OS user of the Hive server process is included as is.

The attached picture shows that 'Caller ID' includes 'user1', which is the
Kerberos user name of the actual Hive user.

!Screen Shot 2020-04-24 at 17.20.34.png|width=683,height=29!

  was:
In a kerberized Hadoop environment, the submitter of a YARN job is the name
part of the Hive server principal, and the caller ID of the job is built from
the OS user of the Hive server process.

The view and modify ACLs of the Hive server admin for all Tez tasks are set by
org.apache.hadoop.hive.ql.exec.tez.TezTask#setAccessControlsForCurrentUser() so
that the admin can see all tasks from tez-ui. However, the admin cannot easily
tell who executed each query.

I suggest changing the caller ID to include the actual Hive user. If the user
is not known, the OS user of the Hive server process is included as is.

!Screen Shot 2020-04-24 at 17.20.34.png|width=683,height=29!


> Setting Tez caller ID with the Hive session user
> 
>
> Key: HIVE-23296
> URL: https://issues.apache.org/jira/browse/HIVE-23296
> Project: Hive
>  Issue Type: Improvement
>  Components: Tez
>Reporter: Eugene Chung
>Assignee: Eugene Chung
>Priority: Major
> Attachments: Screen Shot 2020-04-24 at 17.20.34.png
>
>
> In a kerberized Hadoop environment, the submitter of a YARN job is the name
> part of the Hive server principal, and the caller ID of the job is built from
> the OS user of the Hive server process.
> The view and modify ACLs of the Hive server admin for all Tez tasks are set
> by org.apache.hadoop.hive.ql.exec.tez.TezTask#setAccessControlsForCurrentUser()
> so that the admin can see all tasks from tez-ui. However, the admin cannot
> easily tell who executed each query.
> I suggest changing the caller ID to include the actual Hive user. If the
> user is not known, the OS user of the Hive server process is included as is.
> The attached picture shows that 'Caller ID' includes 'user1', which is the
> Kerberos user name of the actual Hive user.
> !Screen Shot 2020-04-24 at 17.20.34.png|width=683,height=29!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23201) Improve logging in locking

2020-04-24 Thread Marton Bod (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Bod updated HIVE-23201:
--
Attachment: HIVE-23201.9.patch

> Improve logging in locking
> --
>
> Key: HIVE-23201
> URL: https://issues.apache.org/jira/browse/HIVE-23201
> Project: Hive
>  Issue Type: Improvement
>Reporter: Marton Bod
>Assignee: Marton Bod
>Priority: Major
> Attachments: HIVE-23201.1.patch, HIVE-23201.1.patch, 
> HIVE-23201.2.patch, HIVE-23201.2.patch, HIVE-23201.3.patch, 
> HIVE-23201.4.patch, HIVE-23201.5.patch, HIVE-23201.5.patch, 
> HIVE-23201.5.patch, HIVE-23201.5.patch, HIVE-23201.6.patch, 
> HIVE-23201.6.patch, HIVE-23201.7.patch, HIVE-23201.8.patch, 
> HIVE-23201.8.patch, HIVE-23201.9.patch
>
>
> Currently it can be quite difficult to troubleshoot issues related to
> locking. To understand why a particular txn eventually gave up on acquiring
> a lock, you have to connect directly to the backend DB, because we currently
> do not log which exact locks the txn is waiting for.
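
A minimal sketch of the kind of logging being asked for (class, method and
message format are assumptions for illustration, not the actual patch):

{code:java}
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class LockWaitLoggingSketch {
  private static final Logger LOG = LoggerFactory.getLogger(LockWaitLoggingSketch.class);

  // Logs which existing locks block a lock request, so the information ends up
  // in the server log instead of being visible only in the backend DB.
  static void logWaiting(long txnId, long extLockId, List<String> blockingLocks) {
    LOG.info("Lock request {} of txn {} could not be acquired yet; blocked by: {}",
        extLockId, txnId, blockingLocks);
  }
}
{code}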



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-23296) Setting Tez caller ID with the Hive session user

2020-04-24 Thread Eugene Chung (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Chung reassigned HIVE-23296:
---


> Setting Tez caller ID with the Hive session user
> 
>
> Key: HIVE-23296
> URL: https://issues.apache.org/jira/browse/HIVE-23296
> Project: Hive
>  Issue Type: Improvement
>  Components: Tez
>Reporter: Eugene Chung
>Assignee: Eugene Chung
>Priority: Major
> Attachments: Screen Shot 2020-04-24 at 17.20.34.png
>
>
> In a kerberized Hadoop environment, the submitter of a YARN job is the name
> part of the Hive server principal, and the caller ID of the job is built from
> the OS user of the Hive server process.
> The view and modify ACLs of the Hive server admin for all Tez tasks are set
> by org.apache.hadoop.hive.ql.exec.tez.TezTask#setAccessControlsForCurrentUser()
> so that the admin can see all tasks from tez-ui. However, the admin cannot
> easily tell who executed each query.
> I suggest changing the caller ID to include the actual Hive user. If the
> user is not known, the OS user of the Hive server process is included as is.
> !Screen Shot 2020-04-24 at 17.20.34.png|width=683,height=29!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23283) Generate random temp ID for lock enqueue and commitTxn

2020-04-24 Thread Marton Bod (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Bod updated HIVE-23283:
--
Attachment: HIVE-23283.1.patch
Status: Patch Available  (was: Open)

> Generate random temp ID for lock enqueue and commitTxn
> --
>
> Key: HIVE-23283
> URL: https://issues.apache.org/jira/browse/HIVE-23283
> Project: Hive
>  Issue Type: Improvement
>Reporter: Marton Bod
>Assignee: Marton Bod
>Priority: Major
> Attachments: HIVE-23283.1.patch
>
>
> In order to optimize the select-for-update (S4U) scope of the lock enqueue
> and commitTxn operations, a hardcoded constant (-1) is currently used to
> first insert all the lock and write-set entries with a temporary
> lockID/commitID. However, in a concurrent environment this seems to cause
> some performance degradation (and deadlock issues with some RDBMSs), as
> multiple concurrent transactions try to insert rows with the same primary
> key (e.g. (-1, 1), (-1, 2), (-1, 3), etc. for (extID, intID) in HIVE_LOCKS).
> The proposed solution is to replace the constant with a randomly generated
> negative number, which seems to resolve the issue.
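
A minimal sketch of how such a random negative temporary ID could be generated
(illustrative only; the actual patch may do this differently):

{code:java}
import java.util.concurrent.ThreadLocalRandom;

final class TempIdSketch {
  // Returns a value in [Long.MIN_VALUE + 1, -1], so concurrent transactions
  // insert their provisional rows under (almost certainly) different keys
  // instead of all colliding on the hardcoded -1.
  static long randomNegativeTempId() {
    return -(ThreadLocalRandom.current().nextLong(Long.MAX_VALUE) + 1);
  }
}
{code}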



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23295) Possible NPE when on getting predicate literal list when dynamic values are not available

2020-04-24 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-23295:
-
Description: 
getLiteralList() in SearchArgumentImpl$PredicateLeafImpl returns null if 
dynamic values are not available.
{code:java}
@Override
public List<Object> getLiteralList() {
  if (literalList != null && literalList.size() > 0 && literalList.get(0) 
instanceof LiteralDelegate) {
List<Object> newLiteraList = new ArrayList<>();
try {
  for (Object litertalObj : literalList) {
Object literal = ((LiteralDelegate) litertalObj).getLiteral();
if (literal != null) {
  newLiteraList.add(literal);
}
  }
} catch (NoDynamicValuesException err) {
  LOG.debug("Error while retrieving literalList, returning null", err);
  return null;
}
return newLiteraList;
  }
  return literalList;
} {code}
 

There are multiple call sites where the return value is used without a null
check, e.g. leaf.getLiteralList().stream().

Returning null was added as part of HIVE-18827 to avoid an unimportant warning
message when dynamic values have not been delivered yet.

[~sershe], [~jdere], I propose returning an empty list instead of null in
cases like this. What do you think?
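
A minimal sketch of the proposed behaviour (stand-in types, not the actual
SearchArgumentImpl code): the catch block returns Collections.emptyList(), so
call sites such as leaf.getLiteralList().stream() never see null.

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

final class EmptyListInsteadOfNullSketch {
  /** Stand-in for Hive's NoDynamicValuesException. */
  static class NoDynamicValuesException extends RuntimeException { }

  /** Stand-in for LiteralDelegate; getLiteral() may throw before values are delivered. */
  interface LiteralDelegate { Object getLiteral(); }

  static List<Object> resolveLiterals(List<LiteralDelegate> delegates) {
    List<Object> resolved = new ArrayList<>();
    try {
      for (LiteralDelegate delegate : delegates) {
        Object literal = delegate.getLiteral();
        if (literal != null) {
          resolved.add(literal);
        }
      }
    } catch (NoDynamicValuesException err) {
      // Proposed change: an empty list instead of null, so callers need no null check.
      return Collections.emptyList();
    }
    return resolved;
  }
}
{code}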

  was:
getLiteralList() in SearchArgumentImpl$PredicateLeafImpl returns null if
dynamic values are not available. There are multiple call sites where the
return value is used without a null check, e.g. leaf.getLiteralList().stream().

Returning null was added as part of HIVE-18827 to avoid an unimportant warning
message when dynamic values have not been delivered yet.

[~sershe], [~jdere], I propose returning an empty list instead of null in
cases like this.


> Possible NPE when on getting predicate literal list when dynamic values are 
> not available
> -
>
> Key: HIVE-23295
> URL: https://issues.apache.org/jira/browse/HIVE-23295
> Project: Hive
>  Issue Type: Bug
>  Components: storage-api
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
>
> getLiteralList() in SearchArgumentImpl$PredicateLeafImpl returns null if 
> dynamic values are not available.
> {code:java}
> @Override
> public List<Object> getLiteralList() {
>   if (literalList != null && literalList.size() > 0 && literalList.get(0) 
> instanceof LiteralDelegate) {
> List<Object> newLiteraList = new ArrayList<>();
> try {
>   for (Object litertalObj : literalList) {
> Object literal = ((LiteralDelegate) litertalObj).getLiteral();
> if (literal != null) {
>   newLiteraList.add(literal);
> }
>   }
> } catch (NoDynamicValuesException err) {
>   LOG.debug("Error while retrieving literalList, returning null", err);
>   return null;
> }
> return newLiteraList;
>   }
>   return literalList;
> } {code}
>  
> There are multiple call sites where the return value is used without a null
> check, e.g. leaf.getLiteralList().stream().
>  
> Returning null was added as part of HIVE-18827 to avoid an unimportant
> warning message when dynamic values have not been delivered yet.
>  
> [~sershe], [~jdere], I propose returning an empty list instead of null in
> cases like this. What do you think?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23295) Possible NPE when on getting predicate literal list when dynamic values are not available

2020-04-24 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-23295:
-
Description: 
getLiteralList() in SearchArgumentImpl$PredicateLeafImpl returns null if
dynamic values are not available. There are multiple call sites where the
return value is used without a null check, e.g. leaf.getLiteralList().stream().

Returning null was added as part of HIVE-18827 to avoid an unimportant warning
message when dynamic values have not been delivered yet.

[~sershe], [~jdere], I propose returning an empty list instead of null in
cases like this.

  was:getLiteralList() in SearchArgumentImpl$PredicateLeafImpl returns null if
dynamic values are not available. There are multiple call sites where the
return value is used without a null check, e.g. leaf.getLiteralList().stream().


> Possible NPE when on getting predicate literal list when dynamic values are 
> not available
> -
>
> Key: HIVE-23295
> URL: https://issues.apache.org/jira/browse/HIVE-23295
> Project: Hive
>  Issue Type: Bug
>  Components: storage-api
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
>
> getLiteralList() in SearchArgumentImpl$PredicateLeafImpl returns null if
> dynamic values are not available. There are multiple call sites where the
> return value is used without a null check, e.g.
> leaf.getLiteralList().stream().
>  
> Returning null was added as part of HIVE-18827 to avoid an unimportant
> warning message when dynamic values have not been delivered yet.
>  
> [~sershe], [~jdere], I propose returning an empty list instead of null in
> cases like this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23291) Add Hive to DatabaseType in JDBC storage handler

2020-04-24 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17091403#comment-17091403
 ] 

Hive QA commented on HIVE-23291:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13001009/HIVE-23291.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17137 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/21912/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/21912/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-21912/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13001009 - PreCommit-HIVE-Build

> Add Hive to DatabaseType in JDBC storage handler
> 
>
> Key: HIVE-23291
> URL: https://issues.apache.org/jira/browse/HIVE-23291
> Project: Hive
>  Issue Type: Improvement
>  Components: StorageHandler
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-23291.patch
>
>
> Inception.
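
For illustration only, the change amounts to adding a HIVE constant to the
storage handler's DatabaseType enum. The existing constants listed below are
assumptions, not the actual Hive source:

{code:java}
public enum DatabaseType {
  MYSQL,
  POSTGRES,
  ORACLE,
  MSSQL,
  DERBY,
  METASTORE,
  HIVE   // newly added so the JDBC storage handler can target another Hive instance
}
{code}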



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

