[jira] [Commented] (HIVE-21624) LLAP: Cpu metrics at thread level is broken

2020-05-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120106#comment-17120106
 ] 

Hive QA commented on HIVE-21624:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13004395/HIVE-21624.4.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17219 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/22693/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/22693/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-22693/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13004395 - PreCommit-HIVE-Build

> LLAP: Cpu metrics at thread level is broken
> ---
>
> Key: HIVE-21624
> URL: https://issues.apache.org/jira/browse/HIVE-21624
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Nita Dembla
>Assignee: Prasanth Jayachandran
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21624.1.patch, HIVE-21624.2.patch, 
> HIVE-21624.3.patch, HIVE-21624.4.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> ExecutorThreadCPUTime and ExecutorThreadUserTime rely on thread MX bean CPU 
> metrics when available. At some point, the thread name that the metrics 
> publisher looks for changed, causing no metrics to be published for these 
> counters.
> These counters look for threads whose names start with 
> "ContainerExecutor", but the LLAP task executor threads were renamed to 
> "Task-Executor".
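For context, the name-prefix matching that the counters above perform can be sketched with the standard ThreadMXBean API. This is an illustrative sketch, not Hive's actual metrics publisher; the class and method names here are hypothetical.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCpuProbe {
    // Sums CPU time (ns) of all live threads whose name starts with the
    // given prefix; returns -1 if CPU time measurement is unsupported.
    static long cpuTimeForPrefix(String prefix) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        if (!mx.isThreadCpuTimeSupported()) {
            return -1L;
        }
        long total = 0;
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            if (t.getName().startsWith(prefix)) {
                long cpu = mx.getThreadCpuTime(t.getId());
                if (cpu > 0) {   // -1 means the thread died or CPU time is disabled
                    total += cpu;
                }
            }
        }
        return total;
    }

    public static void main(String[] args) {
        // With a stale prefix nothing matches once threads are renamed,
        // so the counter silently stays at zero -- the failure mode the
        // issue describes.
        System.out.println(cpuTimeForPrefix("ContainerExecutor"));
        System.out.println(cpuTimeForPrefix("Task-Executor"));
    }
}
```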



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-19926) Remove deprecated hcatalog streaming

2020-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-19926?focusedWorklogId=439015&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-439015
 ]

ASF GitHub Bot logged work on HIVE-19926:
-

Author: ASF GitHub Bot
Created on: 30/May/20 03:45
Start Date: 30/May/20 03:45
Worklog Time Spent: 10m 
  Work Description: ashutoshc commented on pull request #1042:
URL: https://github.com/apache/hive/pull/1042#issuecomment-636270357


   @kgyrtkirk will unit tests trigger automatically? or do i need to do 
something?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 439015)
Time Spent: 20m  (was: 10m)

> Remove deprecated hcatalog streaming
> 
>
> Key: HIVE-19926
> URL: https://issues.apache.org/jira/browse/HIVE-19926
> Project: Hive
>  Issue Type: Improvement
>  Components: Streaming
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-19926.1.patch, HIVE-19926.2.patch, 
> HIVE-19926.3.patch, HIVE-19926.4.patch, HIVE-19926.5.patch, HIVE-19926.6.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> hcatalog streaming is deprecated in 3.0.0. We should remove it in 4.0.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-19926) Remove deprecated hcatalog streaming

2020-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-19926?focusedWorklogId=439013&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-439013
 ]

ASF GitHub Bot logged work on HIVE-19926:
-

Author: ASF GitHub Bot
Created on: 30/May/20 03:43
Start Date: 30/May/20 03:43
Worklog Time Spent: 10m 
  Work Description: ashutoshc opened a new pull request #1042:
URL: https://github.com/apache/hive/pull/1042


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 439013)
Remaining Estimate: 0h
Time Spent: 10m

> Remove deprecated hcatalog streaming
> 
>
> Key: HIVE-19926
> URL: https://issues.apache.org/jira/browse/HIVE-19926
> Project: Hive
>  Issue Type: Improvement
>  Components: Streaming
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-19926.1.patch, HIVE-19926.2.patch, 
> HIVE-19926.3.patch, HIVE-19926.4.patch, HIVE-19926.5.patch, HIVE-19926.6.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> hcatalog streaming is deprecated in 3.0.0. We should remove it in 4.0.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-19926) Remove deprecated hcatalog streaming

2020-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-19926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-19926:
--
Labels: pull-request-available  (was: )

> Remove deprecated hcatalog streaming
> 
>
> Key: HIVE-19926
> URL: https://issues.apache.org/jira/browse/HIVE-19926
> Project: Hive
>  Issue Type: Improvement
>  Components: Streaming
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-19926.1.patch, HIVE-19926.2.patch, 
> HIVE-19926.3.patch, HIVE-19926.4.patch, HIVE-19926.5.patch, HIVE-19926.6.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> hcatalog streaming is deprecated in 3.0.0. We should remove it in 4.0.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21624) LLAP: Cpu metrics at thread level is broken

2020-05-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120090#comment-17120090
 ] 

Hive QA commented on HIVE-21624:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
42s{color} | {color:blue} llap-server in master has 88 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
13s{color} | {color:red} llap-server: The patch generated 1 new + 38 unchanged 
- 0 fixed = 39 total (was 38) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-22693/dev-support/hive-personality.sh
 |
| git revision | master / 60ff1fc |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-22693/yetus/diff-checkstyle-llap-server.txt
 |
| modules | C: llap-server U: llap-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-22693/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> LLAP: Cpu metrics at thread level is broken
> ---
>
> Key: HIVE-21624
> URL: https://issues.apache.org/jira/browse/HIVE-21624
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Nita Dembla
>Assignee: Prasanth Jayachandran
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21624.1.patch, HIVE-21624.2.patch, 
> HIVE-21624.3.patch, HIVE-21624.4.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> ExecutorThreadCPUTime and ExecutorThreadUserTime rely on thread MX bean CPU 
> metrics when available. At some point, the thread name that the metrics 
> publisher looks for changed, causing no metrics to be published for these 
> counters.
> These counters look for threads whose names start with 
> "ContainerExecutor", but the LLAP task executor threads were renamed to 
> "Task-Executor".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23461) Needs to capture input/output entities in explainRewrite

2020-05-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120082#comment-17120082
 ] 

Hive QA commented on HIVE-23461:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13004393/HIVE-23461.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 17219 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.api.TestHCatClient.testBasicDDLCommands (batchId=139)
org.apache.hive.hcatalog.streaming.TestStreaming.testConcurrentTransactionBatchCommits
 (batchId=151)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/22692/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/22692/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-22692/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13004393 - PreCommit-HIVE-Build

> Needs to capture input/output entities in explainRewrite
> 
>
> Key: HIVE-23461
> URL: https://issues.apache.org/jira/browse/HIVE-23461
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Reporter: Wenchao Li
>Assignee: Naresh P R
>Priority: Major
> Attachments: HIVE-23461.1.patch, HIVE-23461.2.patch, 
> HIVE-23461.3.patch, HIVE-23461.patch
>
>
> HIVE-18778 (CVE-2018-1314) captures input/output entities in the explain 
> semantic analyzer, so when a query is disallowed by the Ranger, Sentry, or 
> SQL-standard authorizer, the corresponding explain statement is disallowed 
> as well.
> However, ExplainSQRewriteSemanticAnalyzer also uses an instance of 
> DDLSemanticAnalyzer to analyze the explain rewrite query.
> {code:java}
> SemanticAnalyzer sem = (SemanticAnalyzer)
>  SemanticAnalyzerFactory.get(queryState, input);
> sem.analyze(input, ctx);
> sem.validate();{code}
> The input/output entities for this query are never set on the 
> ExplainSQRewriteSemanticAnalyzer instance itself and are thus not propagated 
> into the HookContext in the calling Driver code. This is similar to the 
> issue in HIVE-18778.
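The fix the description implies is to propagate the inner analyzer's entity sets up to the outer analyzer after delegation. A minimal, self-contained sketch of that propagation pattern, using hypothetical stand-in classes rather than Hive's real SemanticAnalyzer API:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical stand-ins (not the real Hive classes) illustrating the
// pattern: an outer analyzer that delegates to an inner one must copy
// the inner analyzer's entity sets, otherwise authorization hooks see
// empty inputs/outputs.
class InnerAnalyzer {
    final Set<String> inputs = new HashSet<>();
    final Set<String> outputs = new HashSet<>();

    void analyze(String query) {
        inputs.add("db.src_table");   // entities the query reads
        outputs.add("db.dst_table");  // entities the query writes
    }
}

class ExplainRewriteAnalyzer {
    final Set<String> inputs = new HashSet<>();
    final Set<String> outputs = new HashSet<>();

    void analyze(String query) {
        InnerAnalyzer sem = new InnerAnalyzer();
        sem.analyze(query);
        // The missing step described above: without these two lines the
        // outer analyzer's sets stay empty, and the Driver's HookContext
        // (built from the outer analyzer) sees no entities to authorize.
        inputs.addAll(sem.inputs);
        outputs.addAll(sem.outputs);
    }
}
```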



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23461) Needs to capture input/output entities in explainRewrite

2020-05-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120068#comment-17120068
 ] 

Hive QA commented on HIVE-23461:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
44s{color} | {color:blue} ql in master has 1524 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} ql: The patch generated 0 new + 4 unchanged - 3 
fixed = 4 total (was 7) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-22692/dev-support/hive-personality.sh
 |
| git revision | master / 60ff1fc |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-22692/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Needs to capture input/output entities in explainRewrite
> 
>
> Key: HIVE-23461
> URL: https://issues.apache.org/jira/browse/HIVE-23461
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Reporter: Wenchao Li
>Assignee: Naresh P R
>Priority: Major
> Attachments: HIVE-23461.1.patch, HIVE-23461.2.patch, 
> HIVE-23461.3.patch, HIVE-23461.patch
>
>
> HIVE-18778 (CVE-2018-1314) captures input/output entities in the explain 
> semantic analyzer, so when a query is disallowed by the Ranger, Sentry, or 
> SQL-standard authorizer, the corresponding explain statement is disallowed 
> as well.
> However, ExplainSQRewriteSemanticAnalyzer also uses an instance of 
> DDLSemanticAnalyzer to analyze the explain rewrite query.
> {code:java}
> SemanticAnalyzer sem = (SemanticAnalyzer)
>  SemanticAnalyzerFactory.get(queryState, input);
> sem.analyze(input, ctx);
> sem.validate();{code}
> The input/output entities for this query are never set on the 
> ExplainSQRewriteSemanticAnalyzer instance itself and are thus not propagated 
> into the HookContext in the calling Driver code. This is similar to the 
> issue in HIVE-18778.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-21624) LLAP: Cpu metrics at thread level is broken

2020-05-29 Thread Prasanth Jayachandran (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-21624:
-
Attachment: HIVE-21624.4.patch

> LLAP: Cpu metrics at thread level is broken
> ---
>
> Key: HIVE-21624
> URL: https://issues.apache.org/jira/browse/HIVE-21624
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Nita Dembla
>Assignee: Prasanth Jayachandran
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21624.1.patch, HIVE-21624.2.patch, 
> HIVE-21624.3.patch, HIVE-21624.4.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> ExecutorThreadCPUTime and ExecutorThreadUserTime rely on thread MX bean CPU 
> metrics when available. At some point, the thread name that the metrics 
> publisher looks for changed, causing no metrics to be published for these 
> counters.
> These counters look for threads whose names start with 
> "ContainerExecutor", but the LLAP task executor threads were renamed to 
> "Task-Executor".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23461) Needs to capture input/output entities in explainRewrite

2020-05-29 Thread Naresh P R (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naresh P R updated HIVE-23461:
--
Attachment: HIVE-23461.3.patch

> Needs to capture input/output entities in explainRewrite
> 
>
> Key: HIVE-23461
> URL: https://issues.apache.org/jira/browse/HIVE-23461
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Reporter: Wenchao Li
>Assignee: Naresh P R
>Priority: Major
> Attachments: HIVE-23461.1.patch, HIVE-23461.2.patch, 
> HIVE-23461.3.patch, HIVE-23461.patch
>
>
> HIVE-18778 (CVE-2018-1314) captures input/output entities in the explain 
> semantic analyzer, so when a query is disallowed by the Ranger, Sentry, or 
> SQL-standard authorizer, the corresponding explain statement is disallowed 
> as well.
> However, ExplainSQRewriteSemanticAnalyzer also uses an instance of 
> DDLSemanticAnalyzer to analyze the explain rewrite query.
> {code:java}
> SemanticAnalyzer sem = (SemanticAnalyzer)
>  SemanticAnalyzerFactory.get(queryState, input);
> sem.analyze(input, ctx);
> sem.validate();{code}
> The input/output entities for this query are never set on the 
> ExplainSQRewriteSemanticAnalyzer instance itself and are thus not propagated 
> into the HookContext in the calling Driver code. This is similar to the 
> issue in HIVE-18778.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23582) LLAP: Make SplitLocationProvider impl pluggable

2020-05-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120057#comment-17120057
 ] 

Hive QA commented on HIVE-23582:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13004389/HIVE-23582.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17220 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/22691/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/22691/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-22691/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13004389 - PreCommit-HIVE-Build

> LLAP: Make SplitLocationProvider impl pluggable
> ---
>
> Key: HIVE-23582
> URL: https://issues.apache.org/jira/browse/HIVE-23582
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23582.1.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> LLAP uses the HostAffinitySplitLocationProvider implementation by default. 
> For non-ZooKeeper-based environments, a different split location provider 
> may be used. To facilitate that, make the SplitLocationProvider 
> implementation class pluggable.
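The "pluggable implementation class" pattern the description refers to is typically a config-driven reflective lookup with a default fallback. A hypothetical sketch of that pattern (the class names and config handling here are illustrative, not Hive's actual API):

```java
import java.util.List;

// Interface and default implementation are simplified stand-ins for
// the real SplitLocationProvider / HostAffinitySplitLocationProvider.
interface SplitLocationProvider {
    List<String> getLocations(String split);
}

class HostAffinityProvider implements SplitLocationProvider {
    public List<String> getLocations(String split) {
        // Consistent host choice per split (illustrative hashing only).
        return List.of("host-" + Math.abs(split.hashCode() % 3));
    }
}

class ProviderFactory {
    // Instantiates the provider class named in configuration, falling
    // back to the default host-affinity behavior when none is set.
    static SplitLocationProvider create(String implClassName) {
        if (implClassName == null || implClassName.isEmpty()) {
            return new HostAffinityProvider();
        }
        try {
            return (SplitLocationProvider) Class.forName(implClassName)
                    .getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException(
                "Cannot load split location provider: " + implClassName, e);
        }
    }
}
```

A deployment without ZooKeeper would then set the config value to its own implementation's class name and get it picked up without code changes.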



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23582) LLAP: Make SplitLocationProvider impl pluggable

2020-05-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120052#comment-17120052
 ] 

Hive QA commented on HIVE-23582:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
33s{color} | {color:blue} common in master has 63 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
52s{color} | {color:blue} ql in master has 1524 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
50s{color} | {color:red} ql: The patch generated 9 new + 8 unchanged - 0 fixed 
= 17 total (was 8) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-22691/dev-support/hive-personality.sh
 |
| git revision | master / 60ff1fc |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-22691/yetus/diff-checkstyle-ql.txt
 |
| modules | C: common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-22691/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> LLAP: Make SplitLocationProvider impl pluggable
> ---
>
> Key: HIVE-23582
> URL: https://issues.apache.org/jira/browse/HIVE-23582
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23582.1.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> LLAP uses the HostAffinitySplitLocationProvider implementation by default. 
> For non-ZooKeeper-based environments, a different split location provider 
> may be used. To facilitate that, make the SplitLocationProvider 
> implementation class pluggable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23582) LLAP: Make SplitLocationProvider impl pluggable

2020-05-29 Thread Prasanth Jayachandran (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120032#comment-17120032
 ] 

Prasanth Jayachandran commented on HIVE-23582:
--

[~hashutosh] [~gopalv] could you please help review this change?

> LLAP: Make SplitLocationProvider impl pluggable
> ---
>
> Key: HIVE-23582
> URL: https://issues.apache.org/jira/browse/HIVE-23582
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23582.1.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> LLAP uses the HostAffinitySplitLocationProvider implementation by default. 
> For non-ZooKeeper-based environments, a different split location provider 
> may be used. To facilitate that, make the SplitLocationProvider 
> implementation class pluggable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23582) LLAP: Make SplitLocationProvider impl pluggable

2020-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23582?focusedWorklogId=438990&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-438990
 ]

ASF GitHub Bot logged work on HIVE-23582:
-

Author: ASF GitHub Bot
Created on: 30/May/20 00:04
Start Date: 30/May/20 00:04
Worklog Time Spent: 10m 
  Work Description: prasanthj opened a new pull request #1041:
URL: https://github.com/apache/hive/pull/1041


   LLAP uses the HostAffinitySplitLocationProvider implementation by default. 
For non-ZooKeeper-based environments, a different split location provider may 
be used. To facilitate that, make the SplitLocationProvider implementation 
class pluggable.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 438990)
Remaining Estimate: 0h
Time Spent: 10m

> LLAP: Make SplitLocationProvider impl pluggable
> ---
>
> Key: HIVE-23582
> URL: https://issues.apache.org/jira/browse/HIVE-23582
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-23582.1.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> LLAP uses the HostAffinitySplitLocationProvider implementation by default. 
> For non-ZooKeeper-based environments, a different split location provider 
> may be used. To facilitate that, make the SplitLocationProvider 
> implementation class pluggable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23582) LLAP: Make SplitLocationProvider impl pluggable

2020-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-23582:
--
Labels: pull-request-available  (was: )

> LLAP: Make SplitLocationProvider impl pluggable
> ---
>
> Key: HIVE-23582
> URL: https://issues.apache.org/jira/browse/HIVE-23582
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23582.1.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> LLAP uses the HostAffinitySplitLocationProvider implementation by default. 
> For non-ZooKeeper-based environments, a different split location provider 
> may be used. To facilitate that, make the SplitLocationProvider 
> implementation class pluggable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23582) LLAP: Make SplitLocationProvider impl pluggable

2020-05-29 Thread Prasanth Jayachandran (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-23582:
-
Status: Patch Available  (was: Open)

> LLAP: Make SplitLocationProvider impl pluggable
> ---
>
> Key: HIVE-23582
> URL: https://issues.apache.org/jira/browse/HIVE-23582
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-23582.1.patch
>
>
> LLAP uses the HostAffinitySplitLocationProvider implementation by default. 
> For non-ZooKeeper-based environments, a different split location provider 
> may be used. To facilitate that, make the SplitLocationProvider 
> implementation class pluggable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23582) LLAP: Make SplitLocationProvider impl pluggable

2020-05-29 Thread Prasanth Jayachandran (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-23582:
-
Attachment: HIVE-23582.1.patch

> LLAP: Make SplitLocationProvider impl pluggable
> ---
>
> Key: HIVE-23582
> URL: https://issues.apache.org/jira/browse/HIVE-23582
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-23582.1.patch
>
>
> LLAP uses the HostAffinitySplitLocationProvider implementation by default. For
> non-ZooKeeper-based environments, a different split location provider may be
> used. To facilitate that, make the SplitLocationProvider implementation class
> pluggable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-23582) LLAP: Make SplitLocationProvider impl pluggable

2020-05-29 Thread Prasanth Jayachandran (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran reassigned HIVE-23582:



> LLAP: Make SplitLocationProvider impl pluggable
> ---
>
> Key: HIVE-23582
> URL: https://issues.apache.org/jira/browse/HIVE-23582
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
>
> LLAP uses the HostAffinitySplitLocationProvider implementation by default. For
> non-ZooKeeper-based environments, a different split location provider may be
> used. To facilitate that, make the SplitLocationProvider implementation class
> pluggable.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23068) Error when submitting fragment to LLAP via external client: IllegalStateException: Only a single registration allowed per entity

2020-05-29 Thread Jason Dere (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-23068:
--
Fix Version/s: 4.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to master.

> Error when submitting fragment to LLAP via external client: 
> IllegalStateException: Only a single registration allowed per entity
> 
>
> Key: HIVE-23068
> URL: https://issues.apache.org/jira/browse/HIVE-23068
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23068.1.patch, HIVE-23068.2.patch
>
>
> The LLAP external client (via hive-warehouse-connector) somehow seems to be
> sending duplicate submissions for the same fragment/attempt. When the 2nd
> request is sent, it results in the following error:
> {noformat}
> 2020-03-17T06:49:11,239 WARN  [IPC Server handler 2 on 15001 ()] 
> org.apache.hadoop.ipc.Server: IPC Server handler 2 on 15001, call Call#75 
> Retry#0 
> org.apache.hadoop.hive.llap.protocol.LlapProtocolBlockingPB.submitWork from 
> 19.40.252.114:33906
> java.lang.IllegalStateException: Only a single registration allowed per 
> entity. Duplicate for 
> TaskWrapper{task=attempt_1854104024183112753_6052_0_00_000128_1, 
> inWaitQueue=true, inPreemptionQueue=false, registeredForNotifications=true, 
> canFinish=true, canFinish(in queue)=true, isGuaranteed=false, 
> firstAttemptStartTime=1584442003327, dagStartTime=1584442003327, 
> withinDagPriority=0, vertexParallelism= 2132, selfAndUpstreamParallelism= 
> 2132, selfAndUpstreamComplete= 0}
> at 
> org.apache.hadoop.hive.llap.daemon.impl.QueryInfo$FinishableStateTracker.registerForUpdates(QueryInfo.java:233)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.QueryInfo.registerForFinishableStateUpdates(QueryInfo.java:205)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.QueryFragmentInfo.registerForFinishableStateUpdates(QueryFragmentInfo.java:160)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService$TaskWrapper.maybeRegisterForFinishedStateNotifications(TaskExecutorService.java:1167)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.schedule(TaskExecutorService.java:564)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.schedule(TaskExecutorService.java:93)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.ContainerRunnerImpl.submitWork(ContainerRunnerImpl.java:292)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.submitWork(LlapDaemon.java:610)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.LlapProtocolServerImpl.submitWork(LlapProtocolServerImpl.java:122)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.rpc.LlapDaemonProtocolProtos$LlapDaemonProtocol$2.callBlockingMethod(LlapDaemonProtocolProtos.java:22695)
>  ~[hive-exec-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.32-1]
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>  ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025) 
> ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) 
> ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822) 
> ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> at java.security.AccessController.doPrivileged(Native Method) 
> ~[?:1.8.0_191]
> at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_191]
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682) 
> ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> {noformat}
> I think the issue here is that this error occurred too late: based on the
> stack trace, LLAP has already accepted/registered the fragment. The
> subsequent cleanup of this fragment/attempt also affects the first request,
> which results in the LLAP crash described in HIVE-23061:
> {noformat}
> 

[jira] [Commented] (HIVE-23435) Full outer join result is missing rows

2020-05-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17120010#comment-17120010
 ] 

Hive QA commented on HIVE-23435:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13004382/HIVE-23435.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17219 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/22690/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/22690/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-22690/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13004382 - PreCommit-HIVE-Build

> Full outer join result is missing rows 
> ---
>
> Key: HIVE-23435
> URL: https://issues.apache.org/jira/browse/HIVE-23435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.1.0
>Reporter: Naveen Gangam
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23435.1.patch, HIVE-23435.1.patch, 
> HIVE-23435.patch, HIVE-23435.patch, HIVE-23435.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The full outer join result has missing rows, which appears to be a bug in the
> full outer join logic. The expected output is produced when we combine a left
> and a right outer join.
> Reproducible steps are mentioned below.
> ~~
> SUPPORT ANALYSIS
> Steps to Reproduce:
> 1. Create a table and insert data:
> create table x (z char(5), x int, y int);
> insert into x values ('one', 1, 50),
>  ('two', 2, 30),
>  ('three', 3, 30),
>  ('four', 4, 60),
>  ('five', 5, 70),
>  ('six', 6, 80);
> 2. Run the full outer join below. The result is incomplete; it is missing the
> row:
> NULL NULL NULL three 3 30.0
>  Full Outer Join:
> select x1.`z`, x1.`x`, x1.`y`, x2.`z`,
>  x2.`x`, x2.`y`
>  from `x` x1 full outer join
>  `x` x2 on (x1.`x` > 3) and (x2.`x` < 4) and (x1.`x` =
>  x2.`x`);
> Result:
> x1.z x1.x x1.y x2.z x2.x x2.y
> one 1 50 NULL NULL NULL
>  NULL NULL NULL one 1 50
>  two 2 30 NULL NULL NULL
>  NULL NULL NULL two 2 30
>  three 3 30 NULL NULL NULL
>  four 4 60 NULL NULL NULL
>  NULL NULL NULL four 4 60
>  five 5 70 NULL NULL NULL
>  NULL NULL NULL five 5 70
>  six 6 80 NULL NULL NULL
>  NULL NULL NULL six 6 80
> 3. The expected output is produced when we use left/right joins plus union:
> select x1.`z`, x1.`x`, x1.`y`, x2.`z`,
>  x2.`x`, x2.`y`
>  from `x` x1 left outer join
>  `x` x2 on (x1.`x` > 3) and (x2.`x` < 4) and (x1.`x` =
>  x2.`x`)
>  union
>  select x1.`z`, x1.`x`, x1.`y`, x2.`z`,
>  x2.`x`, x2.`y`
>  from `x` x1 right outer join
>  `x` x2 on (x1.`x` > 3) and (x2.`x` < 4) and (x1.`x` =
>  x2.`x`);
> Result:
> z x y _col3 _col4 _col5
> NULL NULL NULL five 5 70
>  NULL NULL NULL four 4 60
>  NULL NULL NULL one 1 50
>  four 4 60 NULL NULL NULL
>  one 1 50 NULL NULL NULL
>  six 6 80 NULL NULL NULL
>  three 3 30 NULL NULL NULL
>  two 2 30 NULL NULL NULL
>  NULL NULL NULL six 6 80
>  NULL NULL NULL three 3 30
>  NULL NULL NULL two 2 30
>  five 5 70 NULL NULL NULL
>  
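As a sanity check on the semantics above, here is a minimal full-outer-join evaluator in plain Python (illustrative only, not Hive code). The ON predicate `x1.x > 3 AND x2.x < 4 AND x1.x = x2.x` can never be true, so every row from each side must appear once, NULL-padded, giving the 12 rows of the union result, including the row Hive drops.

```python
# Minimal full-outer-join semantics: matched pairs are emitted joined;
# unmatched left rows are padded with NULLs on the right, and unmatched
# right rows are padded with NULLs on the left.

rows = [("one", 1, 50), ("two", 2, 30), ("three", 3, 30),
        ("four", 4, 60), ("five", 5, 70), ("six", 6, 80)]

def full_outer_join(left, right, pred):
    null = (None, None, None)
    out, matched_right = [], set()
    for l in left:
        hit = False
        for j, r in enumerate(right):
            if pred(l, r):
                out.append(l + r)
                hit = True
                matched_right.add(j)
        if not hit:
            out.append(l + null)          # left row with no match
    for j, r in enumerate(right):
        if j not in matched_right:
            out.append(null + r)          # right row with no match
    return out

# The query's ON clause: x1.x > 3 AND x2.x < 4 AND x1.x = x2.x (never true).
pred = lambda l, r: l[1] > 3 and r[1] < 4 and l[1] == r[1]
result = full_outer_join(rows, rows, pred)
assert (None, None, None, "three", 3, 30) in result  # the row Hive loses
assert len(result) == 12                             # every row, twice
```

Under these semantics the missing row `NULL NULL NULL three 3 30` must be present, which matches the left/right-join-plus-union output in step 3.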



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23435) Full outer join result is missing rows

2020-05-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17120003#comment-17120003
 ] 

Hive QA commented on HIVE-23435:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
42s{color} | {color:blue} ql in master has 1524 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} ql: The patch generated 0 new + 63 unchanged - 4 
fixed = 63 total (was 67) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-22690/dev-support/hive-personality.sh
 |
| git revision | master / 8443e50 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-22690/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Full outer join result is missing rows 
> ---
>
> Key: HIVE-23435
> URL: https://issues.apache.org/jira/browse/HIVE-23435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.1.0
>Reporter: Naveen Gangam
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23435.1.patch, HIVE-23435.1.patch, 
> HIVE-23435.patch, HIVE-23435.patch, HIVE-23435.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The full outer join result has missing rows, which appears to be a bug in the
> full outer join logic. The expected output is produced when we combine a left
> and a right outer join.
> Reproducible steps are mentioned below.
> ~~
> SUPPORT ANALYSIS
> Steps to Reproduce:
> 1. Create a table and insert data:
> create table x (z char(5), x int, y int);
> insert into x values ('one', 1, 50),
>  ('two', 2, 30),
>  ('three', 3, 30),
>  ('four', 4, 60),
>  ('five', 5, 70),
>  ('six', 6, 80);
> 2. Run the full outer join below. The result is incomplete; it is missing the
> row:
> NULL NULL NULL three 3 30.0
>  Full Outer Join:
> select x1.`z`, x1.`x`, x1.`y`, x2.`z`,
>  x2.`x`, x2.`y`
>  from `x` x1 full outer join
>  `x` x2 on (x1.`x` > 3) and (x2.`x` < 4) and (x1.`x` =
>  x2.`x`);
> Result:
> x1.z x1.x x1.y x2.z x2.x x2.y
> one 1 50 NULL NULL NULL
>  NULL NULL NULL one 1 50
>  two 2 30 NULL NULL NULL
>  NULL NULL NULL two 

[jira] [Work started] (HIVE-23569) [RawStore] RawStore changes to facilitate HMS cache consistency

2020-05-29 Thread Kishen Das (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-23569 started by Kishen Das.
-
> [RawStore] RawStore changes to facilitate HMS cache consistency 
> 
>
> Key: HIVE-23569
> URL: https://issues.apache.org/jira/browse/HIVE-23569
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Kishen Das
>Assignee: Kishen Das
>Priority: Major
>
> ObjectStore should use the additional tableId and validWriteIdList fields
> in all read methods to compare against the cached ValidWriteIdList.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-23569) [RawStore] RawStore changes to facilitate HMS cache consistency

2020-05-29 Thread Kishen Das (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishen Das reassigned HIVE-23569:
-

Assignee: Kishen Das

> [RawStore] RawStore changes to facilitate HMS cache consistency 
> 
>
> Key: HIVE-23569
> URL: https://issues.apache.org/jira/browse/HIVE-23569
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Kishen Das
>Assignee: Kishen Das
>Priority: Major
>
> ObjectStore should use the additional tableId and validWriteIdList fields
> in all read methods to compare against the cached ValidWriteIdList.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23435) Full outer join result is missing rows

2020-05-29 Thread Mustafa Iman (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mustafa Iman updated HIVE-23435:

Attachment: HIVE-23435.1.patch
Status: Patch Available  (was: In Progress)

> Full outer join result is missing rows 
> ---
>
> Key: HIVE-23435
> URL: https://issues.apache.org/jira/browse/HIVE-23435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.1.0
>Reporter: Naveen Gangam
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23435.1.patch, HIVE-23435.1.patch, 
> HIVE-23435.patch, HIVE-23435.patch, HIVE-23435.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The full outer join result has missing rows, which appears to be a bug in the
> full outer join logic. The expected output is produced when we combine a left
> and a right outer join.
> Reproducible steps are mentioned below.
> ~~
> SUPPORT ANALYSIS
> Steps to Reproduce:
> 1. Create a table and insert data:
> create table x (z char(5), x int, y int);
> insert into x values ('one', 1, 50),
>  ('two', 2, 30),
>  ('three', 3, 30),
>  ('four', 4, 60),
>  ('five', 5, 70),
>  ('six', 6, 80);
> 2. Run the full outer join below. The result is incomplete; it is missing the
> row:
> NULL NULL NULL three 3 30.0
>  Full Outer Join:
> select x1.`z`, x1.`x`, x1.`y`, x2.`z`,
>  x2.`x`, x2.`y`
>  from `x` x1 full outer join
>  `x` x2 on (x1.`x` > 3) and (x2.`x` < 4) and (x1.`x` =
>  x2.`x`);
> Result:
> x1.z x1.x x1.y x2.z x2.x x2.y
> one 1 50 NULL NULL NULL
>  NULL NULL NULL one 1 50
>  two 2 30 NULL NULL NULL
>  NULL NULL NULL two 2 30
>  three 3 30 NULL NULL NULL
>  four 4 60 NULL NULL NULL
>  NULL NULL NULL four 4 60
>  five 5 70 NULL NULL NULL
>  NULL NULL NULL five 5 70
>  six 6 80 NULL NULL NULL
>  NULL NULL NULL six 6 80
> 3. The expected output is produced when we use left/right joins plus union:
> select x1.`z`, x1.`x`, x1.`y`, x2.`z`,
>  x2.`x`, x2.`y`
>  from `x` x1 left outer join
>  `x` x2 on (x1.`x` > 3) and (x2.`x` < 4) and (x1.`x` =
>  x2.`x`)
>  union
>  select x1.`z`, x1.`x`, x1.`y`, x2.`z`,
>  x2.`x`, x2.`y`
>  from `x` x1 right outer join
>  `x` x2 on (x1.`x` > 3) and (x2.`x` < 4) and (x1.`x` =
>  x2.`x`);
> Result:
> z x y _col3 _col4 _col5
> NULL NULL NULL five 5 70
>  NULL NULL NULL four 4 60
>  NULL NULL NULL one 1 50
>  four 4 60 NULL NULL NULL
>  one 1 50 NULL NULL NULL
>  six 6 80 NULL NULL NULL
>  three 3 30 NULL NULL NULL
>  two 2 30 NULL NULL NULL
>  NULL NULL NULL six 6 80
>  NULL NULL NULL three 3 30
>  NULL NULL NULL two 2 30
>  five 5 70 NULL NULL NULL
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23435) Full outer join result is missing rows

2020-05-29 Thread Mustafa Iman (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mustafa Iman updated HIVE-23435:

Status: In Progress  (was: Patch Available)

> Full outer join result is missing rows 
> ---
>
> Key: HIVE-23435
> URL: https://issues.apache.org/jira/browse/HIVE-23435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.1.0
>Reporter: Naveen Gangam
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23435.1.patch, HIVE-23435.1.patch, 
> HIVE-23435.patch, HIVE-23435.patch, HIVE-23435.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The full outer join result has missing rows, which appears to be a bug in the
> full outer join logic. The expected output is produced when we combine a left
> and a right outer join.
> Reproducible steps are mentioned below.
> ~~
> SUPPORT ANALYSIS
> Steps to Reproduce:
> 1. Create a table and insert data:
> create table x (z char(5), x int, y int);
> insert into x values ('one', 1, 50),
>  ('two', 2, 30),
>  ('three', 3, 30),
>  ('four', 4, 60),
>  ('five', 5, 70),
>  ('six', 6, 80);
> 2. Run the full outer join below. The result is incomplete; it is missing the
> row:
> NULL NULL NULL three 3 30.0
>  Full Outer Join:
> select x1.`z`, x1.`x`, x1.`y`, x2.`z`,
>  x2.`x`, x2.`y`
>  from `x` x1 full outer join
>  `x` x2 on (x1.`x` > 3) and (x2.`x` < 4) and (x1.`x` =
>  x2.`x`);
> Result:
> x1.z x1.x x1.y x2.z x2.x x2.y
> one 1 50 NULL NULL NULL
>  NULL NULL NULL one 1 50
>  two 2 30 NULL NULL NULL
>  NULL NULL NULL two 2 30
>  three 3 30 NULL NULL NULL
>  four 4 60 NULL NULL NULL
>  NULL NULL NULL four 4 60
>  five 5 70 NULL NULL NULL
>  NULL NULL NULL five 5 70
>  six 6 80 NULL NULL NULL
>  NULL NULL NULL six 6 80
> 3. The expected output is produced when we use left/right joins plus union:
> select x1.`z`, x1.`x`, x1.`y`, x2.`z`,
>  x2.`x`, x2.`y`
>  from `x` x1 left outer join
>  `x` x2 on (x1.`x` > 3) and (x2.`x` < 4) and (x1.`x` =
>  x2.`x`)
>  union
>  select x1.`z`, x1.`x`, x1.`y`, x2.`z`,
>  x2.`x`, x2.`y`
>  from `x` x1 right outer join
>  `x` x2 on (x1.`x` > 3) and (x2.`x` < 4) and (x1.`x` =
>  x2.`x`);
> Result:
> z x y _col3 _col4 _col5
> NULL NULL NULL five 5 70
>  NULL NULL NULL four 4 60
>  NULL NULL NULL one 1 50
>  four 4 60 NULL NULL NULL
>  one 1 50 NULL NULL NULL
>  six 6 80 NULL NULL NULL
>  three 3 30 NULL NULL NULL
>  two 2 30 NULL NULL NULL
>  NULL NULL NULL six 6 80
>  NULL NULL NULL three 3 30
>  NULL NULL NULL two 2 30
>  five 5 70 NULL NULL NULL
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23574) [CachedStore] Use notification log to keep the cache up-to-date with all the changes

2020-05-29 Thread Kishen Das (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishen Das updated HIVE-23574:
--
Summary: [CachedStore] Use notification log to keep the cache up-to-date 
with all the changes   (was: [HMS] Use notification log to keep the cache 
up-to-date with all the changes )

> [CachedStore] Use notification log to keep the cache up-to-date with all the 
> changes 
> -
>
> Key: HIVE-23574
> URL: https://issues.apache.org/jira/browse/HIVE-23574
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Kishen Das
>Priority: Major
>
> If the cache is stale, HMS will serve the request from ObjectStore, and we
> need to catch the cache up with the latest changes. This can be done with the
> existing notification-log-based cache update mechanism: a thread in HMS
> constantly polls the notification log and updates the cache with its entries.
> The interesting entries in the notification log are table/partition writes
> and the corresponding commit-transaction messages. When processing a
> table/partition write, HMS puts the table/partition entry in the cache.
> However, the entry is not usable until the commit message of the
> corresponding writes is processed, which marks the write id of the
> corresponding table entry as committed.
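The catch-up mechanism described above can be sketched roughly as follows. This is an illustrative Python sketch, not the actual CachedStore API; the event shapes, `apply_event`, and `readable` are invented for the example.

```python
# Hypothetical sketch of notification-log-driven cache catch-up: a write
# event stores the entry but leaves it unusable until the matching
# commit-transaction event marks its write id committed.

cache = {}    # table name -> {"data": ..., "committed_writeids": set()}
pending = {}  # write id -> table name, for writes awaiting their commit

def apply_event(event):
    kind = event["type"]
    if kind == "TABLE_WRITE":
        t = event["table"]
        cache.setdefault(t, {"data": None, "committed_writeids": set()})
        cache[t]["data"] = event["data"]
        pending[event["writeid"]] = t
    elif kind == "COMMIT_TXN":
        # Commit makes the earlier write visible to readers.
        for wid in event["writeids"]:
            t = pending.pop(wid, None)
            if t is not None:
                cache[t]["committed_writeids"].add(wid)

def readable(table, writeid):
    entry = cache.get(table)
    return entry is not None and writeid in entry["committed_writeids"]

events = [
    {"type": "TABLE_WRITE", "table": "t1", "writeid": 7, "data": "v1"},
    {"type": "COMMIT_TXN", "writeids": [7]},
]
for e in events:  # in HMS a background thread would poll the log continuously
    apply_event(e)
print(readable("t1", 7))  # True
```

Between the write event and its commit event, `readable` returns False, which models the "not immediately usable" window the description calls out.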



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23571) [CachedStore] Add ValidWriteIdList to SharedCache.TableWrapper

2020-05-29 Thread Kishen Das (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishen Das updated HIVE-23571:
--
Summary: [CachedStore] Add ValidWriteIdList to SharedCache.TableWrapper  
(was: [SharedCache] Add ValidWriteIdList to SharedCache.TableWrapper)

> [CachedStore] Add ValidWriteIdList to SharedCache.TableWrapper
> --
>
> Key: HIVE-23571
> URL: https://issues.apache.org/jira/browse/HIVE-23571
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Kishen Das
>Priority: Major
>
> Add ValidWriteIdList to SharedCache.TableWrapper. This would be used to
> decide whether a given read request can be served from the cache or whether
> the entry has to be reloaded from the backing database.
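The decision this field enables can be sketched as a freshness check. This is an illustrative Python sketch, not the actual SharedCache/ValidWriteIdList API; `committed` and `can_serve_from_cache` are invented names, and real write-id lists also track aborted ids and ranges.

```python
# Illustrative serve-or-reload check: a cached table entry can answer a read
# only if cache and reader agree on which write ids are committed.

class ValidWriteIdList:
    def __init__(self, high_watermark, open_writeids=()):
        self.high_watermark = high_watermark     # highest allocated write id
        self.open_writeids = set(open_writeids)  # ids below it still open

def committed(v, writeid):
    return writeid <= v.high_watermark and writeid not in v.open_writeids

def can_serve_from_cache(cached, requested):
    # Serve from cache iff both sides agree on the commit state of every
    # write id the reader can observe; otherwise reload from the database.
    return all(committed(cached, w) == committed(requested, w)
               for w in range(1, requested.high_watermark + 1))

cached = ValidWriteIdList(5, open_writeids={4})
print(can_serve_from_cache(cached, ValidWriteIdList(5, open_writeids={4})))  # True
print(can_serve_from_cache(cached, ValidWriteIdList(5)))                     # False
```

In the second call the reader already sees write id 4 as committed while the cache still treats it as open, so the cached entry is stale and must be reloaded.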



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23068) Error when submitting fragment to LLAP via external client: IllegalStateException: Only a single registration allowed per entity

2020-05-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17119926#comment-17119926
 ] 

Hive QA commented on HIVE-23068:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13004372/HIVE-23068.2.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17219 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/22689/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/22689/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-22689/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13004372 - PreCommit-HIVE-Build

> Error when submitting fragment to LLAP via external client: 
> IllegalStateException: Only a single registration allowed per entity
> 
>
> Key: HIVE-23068
> URL: https://issues.apache.org/jira/browse/HIVE-23068
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-23068.1.patch, HIVE-23068.2.patch
>
>
> The LLAP external client (via hive-warehouse-connector) somehow seems to be
> sending duplicate submissions for the same fragment/attempt. When the 2nd
> request is sent, it results in the following error:
> {noformat}
> 2020-03-17T06:49:11,239 WARN  [IPC Server handler 2 on 15001 ()] 
> org.apache.hadoop.ipc.Server: IPC Server handler 2 on 15001, call Call#75 
> Retry#0 
> org.apache.hadoop.hive.llap.protocol.LlapProtocolBlockingPB.submitWork from 
> 19.40.252.114:33906
> java.lang.IllegalStateException: Only a single registration allowed per 
> entity. Duplicate for 
> TaskWrapper{task=attempt_1854104024183112753_6052_0_00_000128_1, 
> inWaitQueue=true, inPreemptionQueue=false, registeredForNotifications=true, 
> canFinish=true, canFinish(in queue)=true, isGuaranteed=false, 
> firstAttemptStartTime=1584442003327, dagStartTime=1584442003327, 
> withinDagPriority=0, vertexParallelism= 2132, selfAndUpstreamParallelism= 
> 2132, selfAndUpstreamComplete= 0}
> at 
> org.apache.hadoop.hive.llap.daemon.impl.QueryInfo$FinishableStateTracker.registerForUpdates(QueryInfo.java:233)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.QueryInfo.registerForFinishableStateUpdates(QueryInfo.java:205)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.QueryFragmentInfo.registerForFinishableStateUpdates(QueryFragmentInfo.java:160)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService$TaskWrapper.maybeRegisterForFinishedStateNotifications(TaskExecutorService.java:1167)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.schedule(TaskExecutorService.java:564)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.schedule(TaskExecutorService.java:93)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.ContainerRunnerImpl.submitWork(ContainerRunnerImpl.java:292)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.submitWork(LlapDaemon.java:610)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.LlapProtocolServerImpl.submitWork(LlapProtocolServerImpl.java:122)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.rpc.LlapDaemonProtocolProtos$LlapDaemonProtocol$2.callBlockingMethod(LlapDaemonProtocolProtos.java:22695)
>  ~[hive-exec-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.32-1]
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>  ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025) 
> ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) 
> ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822) 
> 

[jira] [Work logged] (HIVE-23462) Add option to rewrite NTILE to sketch functions

2020-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23462?focusedWorklogId=438932=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-438932
 ]

ASF GitHub Bot logged work on HIVE-23462:
-

Author: ASF GitHub Bot
Created on: 29/May/20 20:16
Start Date: 29/May/20 20:16
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on a change in pull request #1031:
URL: https://github.com/apache/hive/pull/1031#discussion_r432118212



##
File path: ql/src/java/org/apache/hadoop/hive/ql/exec/DataSketchesFunctions.java
##
@@ -235,12 +232,21 @@ public String getFunctionName() {
 return Optional.empty();
   } else {
 JavaTypeFactoryImpl typeFactory = new JavaTypeFactoryImpl(new HiveTypeSystemImpl());
+Type type = returnType;
+if (type instanceof ParameterizedType) {
+  ParameterizedType parameterizedType = (ParameterizedType) type;
+  if (parameterizedType.getRawType() == List.class) {
+final RelDataType componentRelType = typeFactory.createType(parameterizedType.getActualTypeArguments()[0]);
+return Optional.of(typeFactory.createArrayType(typeFactory.createTypeWithNullability(componentRelType, true), -1));
+  }
+}
 return Optional.of(typeFactory.createType(returnType));
   }

Review comment:
   I have a general comment about the approach we are taking in these
methods to infer the return type.
   I think we should rethink inferring the return type from the Java object
returned by `evaluate`, and possibly take a step back.
   One option could be to create the necessary `SqlReturnTypeInference` strategies
to be able to return the correct type depending on the function. If the
inference is simple, we could hardcode some of those return types. This is the
general approach taken by Calcite functions, and I think it will simplify this
code a lot.
   What do you think? Do you have any other ideas?
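For context, the patch above reflects on `java.lang.reflect.ParameterizedType` to map a `List<T>` return type to an array type. That reflection-based style of inference can be sketched in Python's `typing` module for illustration; `infer_sql_type` and the type-name mapping are invented here. The alternative the reviewer suggests would instead attach an explicit per-function return-type strategy rather than reflecting on `evaluate`'s signature.

```python
# Sketch of reflection-based return-type inference: inspect the declared
# return annotation of an evaluate-style function and map parameterized
# list types to SQL ARRAY types, primitives to scalar SQL types.

from typing import List, get_args, get_origin, get_type_hints

def infer_sql_type(py_type):
    primitives = {int: "BIGINT", float: "DOUBLE", str: "VARCHAR"}
    if get_origin(py_type) is list:   # parameterized List[T]
        (component,) = get_args(py_type)
        return f"ARRAY<{infer_sql_type(component)}>"
    return primitives[py_type]

def evaluate(values: List[float]) -> List[int]:  # stand-in for a UDF's evaluate()
    return sorted(round(v) for v in values)

ret = get_type_hints(evaluate)["return"]
print(infer_sql_type(ret))  # ARRAY<BIGINT>
```

The per-function-strategy alternative trades this generic reflection for a small table of declared return types, which is easier to audit and matches how Calcite's built-in operators declare `SqlReturnTypeInference`.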





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 438932)
Time Spent: 1.5h  (was: 1h 20m)

> Add option to rewrite NTILE to sketch functions
> ---
>
> Key: HIVE-23462
> URL: https://issues.apache.org/jira/browse/HIVE-23462
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23462.01.patch, HIVE-23462.02.patch, 
> HIVE-23462.03.patch, HIVE-23462.04.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23068) Error when submitting fragment to LLAP via external client: IllegalStateException: Only a single registration allowed per entity

2020-05-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119906#comment-17119906
 ] 

Hive QA commented on HIVE-23068:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
40s{color} | {color:blue} llap-server in master has 88 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
13s{color} | {color:red} llap-server: The patch generated 4 new + 16 unchanged 
- 0 fixed = 20 total (was 16) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-22689/dev-support/hive-personality.sh
 |
| git revision | master / 8443e50 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-22689/yetus/diff-checkstyle-llap-server.txt
 |
| modules | C: llap-server U: llap-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-22689/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Error when submitting fragment to LLAP via external client: 
> IllegalStateException: Only a single registration allowed per entity
> 
>
> Key: HIVE-23068
> URL: https://issues.apache.org/jira/browse/HIVE-23068
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-23068.1.patch, HIVE-23068.2.patch
>
>
> LLAP external client (via hive-warehouse-connector) somehow seems to be 
> sending duplicate submissions for the same fragment/attempt. When the 2nd 
> request is sent this results in the following error:
> {noformat}
> 2020-03-17T06:49:11,239 WARN  [IPC Server handler 2 on 15001 ()] 
> org.apache.hadoop.ipc.Server: IPC Server handler 2 on 15001, call Call#75 
> Retry#0 
> org.apache.hadoop.hive.llap.protocol.LlapProtocolBlockingPB.submitWork from 
> 19.40.252.114:33906
> java.lang.IllegalStateException: Only a single registration allowed per 
> entity. Duplicate for 
> TaskWrapper{task=attempt_1854104024183112753_6052_0_00_000128_1, 
> inWaitQueue=true, inPreemptionQueue=false, registeredForNotifications=true, 
> canFinish=true, canFinish(in queue)=true, isGuaranteed=false, 
> firstAttemptStartTime=1584442003327, dagStartTime=1584442003327, 
> withinDagPriority=0, vertexParallelism= 2132, 

[jira] [Updated] (HIVE-23068) Error when submitting fragment to LLAP via external client: IllegalStateException: Only a single registration allowed per entity

2020-05-29 Thread Jason Dere (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-23068:
--
Attachment: HIVE-23068.2.patch

> Error when submitting fragment to LLAP via external client: 
> IllegalStateException: Only a single registration allowed per entity
> 
>
> Key: HIVE-23068
> URL: https://issues.apache.org/jira/browse/HIVE-23068
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-23068.1.patch, HIVE-23068.2.patch
>
>
> LLAP external client (via hive-warehouse-connector) somehow seems to be 
> sending duplicate submissions for the same fragment/attempt. When the 2nd 
> request is sent this results in the following error:
> {noformat}
> 2020-03-17T06:49:11,239 WARN  [IPC Server handler 2 on 15001 ()] 
> org.apache.hadoop.ipc.Server: IPC Server handler 2 on 15001, call Call#75 
> Retry#0 
> org.apache.hadoop.hive.llap.protocol.LlapProtocolBlockingPB.submitWork from 
> 19.40.252.114:33906
> java.lang.IllegalStateException: Only a single registration allowed per 
> entity. Duplicate for 
> TaskWrapper{task=attempt_1854104024183112753_6052_0_00_000128_1, 
> inWaitQueue=true, inPreemptionQueue=false, registeredForNotifications=true, 
> canFinish=true, canFinish(in queue)=true, isGuaranteed=false, 
> firstAttemptStartTime=1584442003327, dagStartTime=1584442003327, 
> withinDagPriority=0, vertexParallelism= 2132, selfAndUpstreamParallelism= 
> 2132, selfAndUpstreamComplete= 0}
> at 
> org.apache.hadoop.hive.llap.daemon.impl.QueryInfo$FinishableStateTracker.registerForUpdates(QueryInfo.java:233)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.QueryInfo.registerForFinishableStateUpdates(QueryInfo.java:205)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.QueryFragmentInfo.registerForFinishableStateUpdates(QueryFragmentInfo.java:160)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService$TaskWrapper.maybeRegisterForFinishedStateNotifications(TaskExecutorService.java:1167)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.schedule(TaskExecutorService.java:564)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.schedule(TaskExecutorService.java:93)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.ContainerRunnerImpl.submitWork(ContainerRunnerImpl.java:292)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.submitWork(LlapDaemon.java:610)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.LlapProtocolServerImpl.submitWork(LlapProtocolServerImpl.java:122)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.rpc.LlapDaemonProtocolProtos$LlapDaemonProtocol$2.callBlockingMethod(LlapDaemonProtocolProtos.java:22695)
>  ~[hive-exec-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.32-1]
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>  ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025) 
> ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) 
> ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822) 
> ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> at java.security.AccessController.doPrivileged(Native Method) 
> ~[?:1.8.0_191]
> at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_191]
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682) 
> ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> {noformat}
> I think the issue here is that this error occurred too late - based on the 
> stack trace, LLAP has already accepted/registered the fragment. The 
> subsequent cleanup of this fragment/attempt also affects the first request. 
> Which results in the LLAP crash described in HIVE-23061:
> {noformat}
> 2020-03-17T06:49:11,304 ERROR [ExecutionCompletionThread #0 ()] 
> org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon: Thread 
> 

[jira] [Commented] (HIVE-23580) deleteOnExit set is not cleaned up, causing memory pressure

2020-05-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119853#comment-17119853
 ] 

Hive QA commented on HIVE-23580:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13004367/HIVE-23580.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17218 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/22688/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/22688/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-22688/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13004367 - PreCommit-HIVE-Build

> deleteOnExit set is not cleaned up, causing memory pressure
> ---
>
> Key: HIVE-23580
> URL: https://issues.apache.org/jira/browse/HIVE-23580
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23580.1.patch
>
>
> removeScratchDir doesn't always call cancelDeleteOnExit() on context::clear
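As a rough stdlib-only model of the leak described above (names are illustrative; Hive's real cleanup involves `FileSystem` delete-on-exit registrations, not this class), the difference between removing a scratch dir with and without cancelling its registration looks like:

```java
import java.util.LinkedHashSet;
import java.util.Set;

/** Minimal model of the leak: paths registered for delete-on-exit accumulate
 *  unless each scratch-dir removal also cancels its registration. */
public class DeleteOnExitLeak {
  static final Set<String> deleteOnExit = new LinkedHashSet<>();

  static void registerScratchDir(String path) { deleteOnExit.add(path); }

  // Buggy cleanup: deletes the directory but leaves the registration behind.
  static void removeScratchDirBuggy(String path) { /* filesystem delete only */ }

  // Fixed cleanup: also drops the entry, keeping the set bounded.
  static void removeScratchDirFixed(String path) { deleteOnExit.remove(path); }

  public static void main(String[] args) {
    for (int i = 0; i < 3; i++) {
      registerScratchDir("/tmp/scratch-" + i);
      removeScratchDirBuggy("/tmp/scratch-" + i);
    }
    System.out.println(deleteOnExit.size()); // 3: entries leak across queries
    for (int i = 0; i < 3; i++) {
      removeScratchDirFixed("/tmp/scratch-" + i);
    }
    System.out.println(deleteOnExit.size()); // 0 once registrations are cancelled
  }
}
```

In a long-lived HiveServer2 process, the un-cancelled entries grow with every query, which is the memory pressure the issue title refers to.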



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23580) deleteOnExit set is not cleaned up, causing memory pressure

2020-05-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119819#comment-17119819
 ] 

Hive QA commented on HIVE-23580:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
 6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
52s{color} | {color:blue} ql in master has 1524 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-22688/dev-support/hive-personality.sh
 |
| git revision | master / 8443e50 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-22688/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> deleteOnExit set is not cleaned up, causing memory pressure
> ---
>
> Key: HIVE-23580
> URL: https://issues.apache.org/jira/browse/HIVE-23580
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23580.1.patch
>
>
> removeScratchDir doesn't always call cancelDeleteOnExit() on context::clear



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HIVE-23562) Upgrade thrift version in hive

2020-05-29 Thread Naveen Gangam (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam resolved HIVE-23562.
--
  Assignee: Naveen Gangam
Resolution: Duplicate

> Upgrade thrift version in hive
> --
>
> Key: HIVE-23562
> URL: https://issues.apache.org/jira/browse/HIVE-23562
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
>
> Hive has been using thrift 0.9.3 for a long time. We might be able to take 
> advantage of new features, such as deprecation support, in the newer releases 
> of thrift. But this impacts interoperability between older clients and newer 
> servers. We need to assess what can break, at least for documentation 
> purposes, before we make this change.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23562) Upgrade thrift version in hive

2020-05-29 Thread Naveen Gangam (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119809#comment-17119809
 ] 

Naveen Gangam commented on HIVE-23562:
--

Thanks [~isuller]. Looks like there are a few jiras related to this. Let's see 
if we can move these along. Closing this as a dup.

> Upgrade thrift version in hive
> --
>
> Key: HIVE-23562
> URL: https://issues.apache.org/jira/browse/HIVE-23562
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Naveen Gangam
>Priority: Major
>
> Hive has been using thrift 0.9.3 for a long time. We might be able to take 
> advantage of new features, such as deprecation support, in the newer releases 
> of thrift. But this impacts interoperability between older clients and newer 
> servers. We need to assess what can break, at least for documentation 
> purposes, before we make this change.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23068) Error when submitting fragment to LLAP via external client: IllegalStateException: Only a single registration allowed per entity

2020-05-29 Thread Jason Dere (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119791#comment-17119791
 ] 

Jason Dere commented on HIVE-23068:
---

Not totally sure what the fragment ID looks like during speculative execution, 
but I think the external client, during some of its retry logic, might be 
sending the same fragment ID (which it should not do).

I'll resubmit the patch to try to get a clean precommit run.

> Error when submitting fragment to LLAP via external client: 
> IllegalStateException: Only a single registration allowed per entity
> 
>
> Key: HIVE-23068
> URL: https://issues.apache.org/jira/browse/HIVE-23068
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-23068.1.patch
>
>
> LLAP external client (via hive-warehouse-connector) somehow seems to be 
> sending duplicate submissions for the same fragment/attempt. When the 2nd 
> request is sent this results in the following error:
> {noformat}
> 2020-03-17T06:49:11,239 WARN  [IPC Server handler 2 on 15001 ()] 
> org.apache.hadoop.ipc.Server: IPC Server handler 2 on 15001, call Call#75 
> Retry#0 
> org.apache.hadoop.hive.llap.protocol.LlapProtocolBlockingPB.submitWork from 
> 19.40.252.114:33906
> java.lang.IllegalStateException: Only a single registration allowed per 
> entity. Duplicate for 
> TaskWrapper{task=attempt_1854104024183112753_6052_0_00_000128_1, 
> inWaitQueue=true, inPreemptionQueue=false, registeredForNotifications=true, 
> canFinish=true, canFinish(in queue)=true, isGuaranteed=false, 
> firstAttemptStartTime=1584442003327, dagStartTime=1584442003327, 
> withinDagPriority=0, vertexParallelism= 2132, selfAndUpstreamParallelism= 
> 2132, selfAndUpstreamComplete= 0}
> at 
> org.apache.hadoop.hive.llap.daemon.impl.QueryInfo$FinishableStateTracker.registerForUpdates(QueryInfo.java:233)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.QueryInfo.registerForFinishableStateUpdates(QueryInfo.java:205)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.QueryFragmentInfo.registerForFinishableStateUpdates(QueryFragmentInfo.java:160)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService$TaskWrapper.maybeRegisterForFinishedStateNotifications(TaskExecutorService.java:1167)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.schedule(TaskExecutorService.java:564)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.schedule(TaskExecutorService.java:93)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.ContainerRunnerImpl.submitWork(ContainerRunnerImpl.java:292)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.submitWork(LlapDaemon.java:610)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.LlapProtocolServerImpl.submitWork(LlapProtocolServerImpl.java:122)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.rpc.LlapDaemonProtocolProtos$LlapDaemonProtocol$2.callBlockingMethod(LlapDaemonProtocolProtos.java:22695)
>  ~[hive-exec-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.32-1]
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>  ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025) 
> ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) 
> ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822) 
> ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> at java.security.AccessController.doPrivileged(Native Method) 
> ~[?:1.8.0_191]
> at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_191]
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682) 
> ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> {noformat}
> I think the issue here is that this error occurred too late - based on the 
> stack trace, LLAP has already accepted/registered the fragment. The 
> subsequent cleanup of this 

[jira] [Updated] (HIVE-23580) deleteOnExit set is not cleaned up, causing memory pressure

2020-05-29 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-23580:
-
Status: Patch Available  (was: Open)

> deleteOnExit set is not cleaned up, causing memory pressure
> ---
>
> Key: HIVE-23580
> URL: https://issues.apache.org/jira/browse/HIVE-23580
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23580.1.patch
>
>
> removeScratchDir doesn't always call cancelDeleteOnExit() on context::clear



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23580) deleteOnExit set is not cleaned up, causing memory pressure

2020-05-29 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-23580:
-
Attachment: (was: HIVE-23580.1.patch)

> deleteOnExit set is not cleaned up, causing memory pressure
> ---
>
> Key: HIVE-23580
> URL: https://issues.apache.org/jira/browse/HIVE-23580
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23580.1.patch
>
>
> removeScratchDir doesn't always call cancelDeleteOnExit() on context::clear



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23580) deleteOnExit set is not cleaned up, causing memory pressure

2020-05-29 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-23580:
-
Attachment: HIVE-23580.1.patch

> deleteOnExit set is not cleaned up, causing memory pressure
> ---
>
> Key: HIVE-23580
> URL: https://issues.apache.org/jira/browse/HIVE-23580
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23580.1.patch
>
>
> removeScratchDir doesn't always call cancelDeleteOnExit() on context::clear



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23580) deleteOnExit set is not cleaned up, causing memory pressure

2020-05-29 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-23580:
-
Status: Open  (was: Patch Available)

> deleteOnExit set is not cleaned up, causing memory pressure
> ---
>
> Key: HIVE-23580
> URL: https://issues.apache.org/jira/browse/HIVE-23580
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23580.1.patch
>
>
> removeScratchDir doesn't always call cancelDeleteOnExit() on context::clear



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23580) deleteOnExit set is not cleaned up, causing memory pressure

2020-05-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119750#comment-17119750
 ] 

Hive QA commented on HIVE-23580:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13004354/HIVE-23580.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17159 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.parse.TestReplWithJsonMessageFormat.org.apache.hadoop.hive.ql.parse.TestReplWithJsonMessageFormat
 (batchId=185)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/22687/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/22687/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-22687/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13004354 - PreCommit-HIVE-Build

> deleteOnExit set is not cleaned up, causing memory pressure
> ---
>
> Key: HIVE-23580
> URL: https://issues.apache.org/jira/browse/HIVE-23580
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23580.1.patch
>
>
> removeScratchDir doesn't always call cancelDeleteOnExit() on context::clear



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23068) Error when submitting fragment to LLAP via external client: IllegalStateException: Only a single registration allowed per entity

2020-05-29 Thread Prasanth Jayachandran (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119742#comment-17119742
 ] 

Prasanth Jayachandran commented on HIVE-23068:
--

lgtm, +1.
{quote}(for example speculative execution of a query fragment).
{quote}
Can external clients with speculative execution generate the same fragment id? 
Not sure how external clients generate the full id, but I would expect it to 
have different attempt numbers at least, just so that the different attempts do 
not step on each other during speculative execution. 
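For illustration, splitting the attempt ID from the stack trace above shows where the attempt number lives. The field layout (application, DAG, vertex group, vertex, task, attempt) follows the usual Tez-style convention and is an assumption here, not taken from Hive's source:

```java
/** Probes the task attempt ID quoted in the stack trace above. */
public class AttemptIdProbe {
  public static void main(String[] args) {
    String id = "attempt_1854104024183112753_6052_0_00_000128_1";
    String[] parts = id.split("_");
    System.out.println("task index  = " + parts[5]); // 000128
    System.out.println("attempt num = " + parts[6]); // 1
    // Two submissions with identical strings collide in the daemon's registry;
    // a retry that bumped only the final component would be a distinct entity.
    String retry = String.join("_",
        parts[0], parts[1], parts[2], parts[3], parts[4], parts[5], "2");
    System.out.println(id.equals(retry)); // false
  }
}
```

This is consistent with the comment above: duplicate registrations can only happen if the client reuses the whole string, attempt number included.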

> Error when submitting fragment to LLAP via external client: 
> IllegalStateException: Only a single registration allowed per entity
> 
>
> Key: HIVE-23068
> URL: https://issues.apache.org/jira/browse/HIVE-23068
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-23068.1.patch
>
>
> LLAP external client (via hive-warehouse-connector) somehow seems to be 
> sending duplicate submissions for the same fragment/attempt. When the 2nd 
> request is sent this results in the following error:
> {noformat}
> 2020-03-17T06:49:11,239 WARN  [IPC Server handler 2 on 15001 ()] 
> org.apache.hadoop.ipc.Server: IPC Server handler 2 on 15001, call Call#75 
> Retry#0 
> org.apache.hadoop.hive.llap.protocol.LlapProtocolBlockingPB.submitWork from 
> 19.40.252.114:33906
> java.lang.IllegalStateException: Only a single registration allowed per 
> entity. Duplicate for 
> TaskWrapper{task=attempt_1854104024183112753_6052_0_00_000128_1, 
> inWaitQueue=true, inPreemptionQueue=false, registeredForNotifications=true, 
> canFinish=true, canFinish(in queue)=true, isGuaranteed=false, 
> firstAttemptStartTime=1584442003327, dagStartTime=1584442003327, 
> withinDagPriority=0, vertexParallelism= 2132, selfAndUpstreamParallelism= 
> 2132, selfAndUpstreamComplete= 0}
> at 
> org.apache.hadoop.hive.llap.daemon.impl.QueryInfo$FinishableStateTracker.registerForUpdates(QueryInfo.java:233)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.QueryInfo.registerForFinishableStateUpdates(QueryInfo.java:205)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.QueryFragmentInfo.registerForFinishableStateUpdates(QueryFragmentInfo.java:160)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService$TaskWrapper.maybeRegisterForFinishedStateNotifications(TaskExecutorService.java:1167)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.schedule(TaskExecutorService.java:564)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.schedule(TaskExecutorService.java:93)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.ContainerRunnerImpl.submitWork(ContainerRunnerImpl.java:292)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.LlapDaemon.submitWork(LlapDaemon.java:610)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.impl.LlapProtocolServerImpl.submitWork(LlapProtocolServerImpl.java:122)
>  ~[hive-llap-server-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.26-3]
> at 
> org.apache.hadoop.hive.llap.daemon.rpc.LlapDaemonProtocolProtos$LlapDaemonProtocol$2.callBlockingMethod(LlapDaemonProtocolProtos.java:22695)
>  ~[hive-exec-3.1.0.3.1.4.26-3.jar:3.1.0.3.1.4.32-1]
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>  ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025) 
> ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) 
> ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822) 
> ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> at java.security.AccessController.doPrivileged(Native Method) 
> ~[?:1.8.0_191]
> at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_191]
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682) 
> ~[hadoop-common-3.1.1.3.1.4.26-3.jar:?]
> {noformat}
> I think the issue here is that this error occurred too late 

[jira] [Updated] (HIVE-23581) On service discovery mode, the initial port of hiveserver2 to which zookeeper is applied is not changed.

2020-05-29 Thread shinsunwoo (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shinsunwoo updated HIVE-23581:
--
Description: 
When connecting to HiveServer2 through the Hive JDBC driver with 
`hive.server2.support.dynamic.service.discovery` and 
`hive.server2.limit.connections.per.user` enabled, the driver picks a random 
HiveServer2 instance registered in ZooKeeper and reads its connection 
information (host, port) from the corresponding znode.

However, if the connection to the first HiveServer2 instance obtained from 
ZooKeeper is rejected because of `hive.server2.limit.connections.per.user`, the 
port is never re-initialized for the retry, because of the following code logic.

 

* 
[https://github.com/apache/hive/blob/8443e50fdfa284531300f3ab283a7e4959dba623/jdbc/src/java/org/apache/hive/jdbc/ZooKeeperHiveClientHelper.java#L320]

 
{code:java}
if ((matcher.group(1).equals("hive.server2.thrift.http.port"))
   && !(connParams.getPort() > 0)) {
  connParams.setPort(Integer.parseInt(matcher.group(2)));
}
{code}
 

Therefore, if the next reachable HiveServer2 instance listens on a port 
different from the first one, the connection fails.

So I modified the port number to be reset to "-1" whenever the update 
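
A minimal, self-contained sketch of that fix (class and method names here are 
illustrative stand-ins, not the actual ZooKeeperHiveClientHelper code):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch only; the real logic lives in
// org.apache.hive.jdbc.ZooKeeperHiveClientHelper.
public class ZkPortResetSketch {

  private static final Pattern CONF = Pattern.compile("([^=;]+)=([^;]+)");

  // Old behavior: the port carried over from the previous server survives,
  // so the guard !(port > 0) skips the newly selected server's port.
  static int pickPortBuggy(String znodeData, int previousPort) {
    int port = previousPort;
    Matcher m = CONF.matcher(znodeData);
    while (m.find()) {
      if (m.group(1).equals("hive.server2.thrift.http.port") && !(port > 0)) {
        port = Integer.parseInt(m.group(2));
      }
    }
    return port;
  }

  // Fixed behavior: reset the port to -1 on every update, as the patch does
  // in updateConnParamsFromZooKeeper, so the new server's port is applied.
  static int pickPortFixed(String znodeData) {
    int port = -1;
    Matcher m = CONF.matcher(znodeData);
    while (m.find()) {
      if (m.group(1).equals("hive.server2.thrift.http.port") && !(port > 0)) {
        port = Integer.parseInt(m.group(2));
      }
    }
    return port;
  }
}
```

With a stale port of 10001, the buggy path keeps 10001 even when the newly 
selected server advertises 10002; the fixed path picks up 10002.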

  was:
When accessing hiveserver2 with 
`hive.server2.support.dynamic.service.discovery` and` 
hive.server2.limit.connections.per.user` applied through the hive jdbc driver, 
The jdbc driver is a method of randomly obtaining domain information (host, 
port) information of hivesever2 registered in the zookeeper.

However, if the hiveserver2 obtained from the zookeeper first fails to connect 
due to the setting value of `hive.server2.limit.connections.per.user`, the port 
will not be initialized due to the following code logic.

 

* 
https://github.com/apache/hive/blob/8443e50fdfa284531300f3ab283a7e4959dba623/jdbc/src/java/org/apache/hive/jdbc/ZooKeeperHiveClientHelper.java#L320

 
{code:java}
if ((matcher.group(1).equals("hive.server2.thrift.http.port"))
   && !(connParams.getPort() > 0)) {
  connParams.setPort(Integer.parseInt(matcher.group(2)));
}
{code}
 

Therefore, if the port of the next accessible hiveserver2 is not the first 
port, a problem occurs.

So I modified the port number to be initialized to "-1" whenever the update 
function (updateConnParamsFromZooKeeper) is executed.


> On service discovery mode, the initial port of hiveserver2 to which zookeeper 
> is applied is not changed.
> 
>
> Key: HIVE-23581
> URL: https://issues.apache.org/jira/browse/HIVE-23581
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: All Versions
>Reporter: shinsunwoo
>Assignee: shinsunwoo
>Priority: Major
> Attachments: HIVE-23581.01.patch
>
>
> When accessing hiveserver2 with 
> `hive.server2.support.dynamic.service.discovery` and ` 
> hive.server2.limit.connections.per.user` applied through the hive jdbc 
> driver, The jdbc driver is a method of randomly obtaining domain information 
> (host, port) information of hivesever2 registered in the zookeeper.
> However, if the hiveserver2 obtained from the zookeeper first fails to 
> connect due to the setting value of 
> `hive.server2.limit.connections.per.user`, the port will not be initialized 
> due to the following code logic.
>  
> * 
> [https://github.com/apache/hive/blob/8443e50fdfa284531300f3ab283a7e4959dba623/jdbc/src/java/org/apache/hive/jdbc/ZooKeeperHiveClientHelper.java#L320]
>  
> {code:java}
> if ((matcher.group(1).equals("hive.server2.thrift.http.port"))
>&& !(connParams.getPort() > 0)) {
>   connParams.setPort(Integer.parseInt(matcher.group(2)));
> }
> {code}
>  
> Therefore, if the port of the next accessible hiveserver2 is not the first 
> port, a problem occurs.
> So I modified the port number to be initialized to "-1" whenever the update 
> function (updateConnParamsFromZooKeeper) is executed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-23581) On service discovery mode, the initial port of hiveserver2 to which zookeeper is applied is not changed.

2020-05-29 Thread shinsunwoo (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shinsunwoo reassigned HIVE-23581:
-

Assignee: shinsunwoo

> On service discovery mode, the initial port of hiveserver2 to which zookeeper 
> is applied is not changed.
> 
>
> Key: HIVE-23581
> URL: https://issues.apache.org/jira/browse/HIVE-23581
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: All Versions
>Reporter: shinsunwoo
>Assignee: shinsunwoo
>Priority: Major
> Attachments: HIVE-23581.01.patch
>
>
> When accessing hiveserver2 with 
> `hive.server2.support.dynamic.service.discovery` and` 
> hive.server2.limit.connections.per.user` applied through the hive jdbc 
> driver, The jdbc driver is a method of randomly obtaining domain information 
> (host, port) information of hivesever2 registered in the zookeeper.
> However, if the hiveserver2 obtained from the zookeeper first fails to 
> connect due to the setting value of 
> `hive.server2.limit.connections.per.user`, the port will not be initialized 
> due to the following code logic.
>  
> * 
> https://github.com/apache/hive/blob/8443e50fdfa284531300f3ab283a7e4959dba623/jdbc/src/java/org/apache/hive/jdbc/ZooKeeperHiveClientHelper.java#L320
>  
> {code:java}
> if ((matcher.group(1).equals("hive.server2.thrift.http.port"))
>&& !(connParams.getPort() > 0)) {
>   connParams.setPort(Integer.parseInt(matcher.group(2)));
> }
> {code}
>  
> Therefore, if the port of the next accessible hiveserver2 is not the first 
> port, a problem occurs.
> So I modified the port number to be initialized to "-1" whenever the update 
> function (updateConnParamsFromZooKeeper) is executed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23581) On service discovery mode, the initial port of hiveserver2 to which zookeeper is applied is not changed.

2020-05-29 Thread shinsunwoo (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shinsunwoo updated HIVE-23581:
--
Attachment: HIVE-23581.01.patch

> On service discovery mode, the initial port of hiveserver2 to which zookeeper 
> is applied is not changed.
> 
>
> Key: HIVE-23581
> URL: https://issues.apache.org/jira/browse/HIVE-23581
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: All Versions
>Reporter: shinsunwoo
>Priority: Major
> Attachments: HIVE-23581.01.patch
>
>
> When accessing hiveserver2 with 
> `hive.server2.support.dynamic.service.discovery` and` 
> hive.server2.limit.connections.per.user` applied through the hive jdbc 
> driver, The jdbc driver is a method of randomly obtaining domain information 
> (host, port) information of hivesever2 registered in the zookeeper.
> However, if the hiveserver2 obtained from the zookeeper first fails to 
> connect due to the setting value of 
> `hive.server2.limit.connections.per.user`, the port will not be initialized 
> due to the following code logic.
>  
> * 
> https://github.com/apache/hive/blob/8443e50fdfa284531300f3ab283a7e4959dba623/jdbc/src/java/org/apache/hive/jdbc/ZooKeeperHiveClientHelper.java#L320
>  
> {code:java}
> if ((matcher.group(1).equals("hive.server2.thrift.http.port"))
>&& !(connParams.getPort() > 0)) {
>   connParams.setPort(Integer.parseInt(matcher.group(2)));
> }
> {code}
>  
> Therefore, if the port of the next accessible hiveserver2 is not the first 
> port, a problem occurs.
> So I modified the port number to be initialized to "-1" whenever the update 
> function (updateConnParamsFromZooKeeper) is executed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23514) Add Atlas metadata replication metrics

2020-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23514?focusedWorklogId=438800&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-438800
 ]

ASF GitHub Bot logged work on HIVE-23514:
-

Author: ASF GitHub Bot
Created on: 29/May/20 16:04
Start Date: 29/May/20 16:04
Worklog Time Spent: 10m 
  Work Description: pkumarsinha commented on a change in pull request #1040:
URL: https://github.com/apache/hive/pull/1040#discussion_r432585845



##
File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/AtlasDumpTask.java
##
@@ -73,7 +74,12 @@ public int execute() {
   String entityGuid = checkHiveEntityGuid(atlasRequestBuilder, 
atlasReplInfo.getSrcCluster(),
   atlasReplInfo.getSrcDB());
   long currentModifiedTime = getCurrentTimestamp(atlasReplInfo, 
entityGuid);
-  dumpAtlasMetaData(atlasRequestBuilder, atlasReplInfo);
+  AtlasDumpLogger replLogger = new 
AtlasDumpLogger(atlasReplInfo.getSrcDB(),
+  atlasReplInfo.getStagingDir().toString());
+  replLogger.startLog();
+  long numBytesWritten = dumpAtlasMetaData(atlasRequestBuilder, 
atlasReplInfo);
+  LOG.debug("Finished dumping atlas metadata, total:{} bytes written", 
numBytesWritten);
+  replLogger.endLog(0L);

Review comment:
   We don't need this return value. This is just a workaround for the 
mockito bug where it calls a method on a mocked object, like the one we had 
seen during the initial Atlas repl patch.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 438800)
Time Spent: 1h  (was: 50m)

> Add Atlas metadata replication metrics
> --
>
> Key: HIVE-23514
> URL: https://issues.apache.org/jira/browse/HIVE-23514
> Project: Hive
>  Issue Type: Task
>Reporter: Pravin Sinha
>Assignee: Pravin Sinha
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23514.01.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23514) Add Atlas metadata replication metrics

2020-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23514?focusedWorklogId=438795&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-438795
 ]

ASF GitHub Bot logged work on HIVE-23514:
-

Author: ASF GitHub Bot
Created on: 29/May/20 16:00
Start Date: 29/May/20 16:00
Worklog Time Spent: 10m 
  Work Description: pkumarsinha commented on a change in pull request #1040:
URL: https://github.com/apache/hive/pull/1040#discussion_r432583637



##
File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/AtlasLoadTask.java
##
@@ -66,7 +72,7 @@ public int execute() {
 }
   }
 
-  private AtlasReplInfo createAtlasReplInfo() throws SemanticException, 
MalformedURLException {
+  public AtlasReplInfo createAtlasReplInfo() throws SemanticException, 
MalformedURLException {

Review comment:
   Yes, need it public for test.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 438795)
Time Spent: 50m  (was: 40m)

> Add Atlas metadata replication metrics
> --
>
> Key: HIVE-23514
> URL: https://issues.apache.org/jira/browse/HIVE-23514
> Project: Hive
>  Issue Type: Task
>Reporter: Pravin Sinha
>Assignee: Pravin Sinha
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23514.01.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23514) Add Atlas metadata replication metrics

2020-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23514?focusedWorklogId=438793&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-438793
 ]

ASF GitHub Bot logged work on HIVE-23514:
-

Author: ASF GitHub Bot
Created on: 29/May/20 16:00
Start Date: 29/May/20 16:00
Worklog Time Spent: 10m 
  Work Description: pkumarsinha commented on a change in pull request #1040:
URL: https://github.com/apache/hive/pull/1040#discussion_r432583484



##
File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/AtlasDumpTask.java
##
@@ -73,7 +74,12 @@ public int execute() {
   String entityGuid = checkHiveEntityGuid(atlasRequestBuilder, 
atlasReplInfo.getSrcCluster(),
   atlasReplInfo.getSrcDB());
   long currentModifiedTime = getCurrentTimestamp(atlasReplInfo, 
entityGuid);
-  dumpAtlasMetaData(atlasRequestBuilder, atlasReplInfo);
+  AtlasDumpLogger replLogger = new 
AtlasDumpLogger(atlasReplInfo.getSrcDB(),
+  atlasReplInfo.getStagingDir().toString());
+  replLogger.startLog();
+  long numBytesWritten = dumpAtlasMetaData(atlasRequestBuilder, 
atlasReplInfo);
+  LOG.debug("Finished dumping atlas metadata, total:{} bytes written", 
numBytesWritten);
+  replLogger.endLog(0L);

Review comment:
   Yes, need it public for test.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 438793)
Time Spent: 40m  (was: 0.5h)

> Add Atlas metadata replication metrics
> --
>
> Key: HIVE-23514
> URL: https://issues.apache.org/jira/browse/HIVE-23514
> Project: Hive
>  Issue Type: Task
>Reporter: Pravin Sinha
>Assignee: Pravin Sinha
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23514.01.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23514) Add Atlas metadata replication metrics

2020-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23514?focusedWorklogId=438791&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-438791
 ]

ASF GitHub Bot logged work on HIVE-23514:
-

Author: ASF GitHub Bot
Created on: 29/May/20 15:59
Start Date: 29/May/20 15:59
Worklog Time Spent: 10m 
  Work Description: pkumarsinha commented on a change in pull request #1040:
URL: https://github.com/apache/hive/pull/1040#discussion_r432583117



##
File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/AtlasDumpTask.java
##
@@ -73,7 +74,12 @@ public int execute() {
   String entityGuid = checkHiveEntityGuid(atlasRequestBuilder, 
atlasReplInfo.getSrcCluster(),
   atlasReplInfo.getSrcDB());
   long currentModifiedTime = getCurrentTimestamp(atlasReplInfo, 
entityGuid);
-  dumpAtlasMetaData(atlasRequestBuilder, atlasReplInfo);
+  AtlasDumpLogger replLogger = new 
AtlasDumpLogger(atlasReplInfo.getSrcDB(),
+  atlasReplInfo.getStagingDir().toString());
+  replLogger.startLog();

Review comment:
   dumpAtlasMetaData does the actual dump; shouldn't we capture the time taken 
for the complete dump, and not just a particular step? 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 438791)
Time Spent: 0.5h  (was: 20m)

> Add Atlas metadata replication metrics
> --
>
> Key: HIVE-23514
> URL: https://issues.apache.org/jira/browse/HIVE-23514
> Project: Hive
>  Issue Type: Task
>Reporter: Pravin Sinha
>Assignee: Pravin Sinha
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23514.01.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23580) deleteOnExit set is not cleaned up, causing memory pressure

2020-05-29 Thread Attila Magyar (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119706#comment-17119706
 ] 

Attila Magyar commented on HIVE-23580:
--

[~ashutoshc], even with the cache disabled, getFileSystem is still fast:
{code:java}
2020-05-29 15:22:45,968 DEBUG org.apache.hadoop.fs.FileSystem: 
[a4f619d8-d776-4534-bb18-957c4204aa68 HiveServer2-Handler-Pool: Thread-114]: 
Bypassing cache to create filesystem 
hdfs://amagyar-2.amagyar.root.hwx.site:8020/tmp/hive/hive/a4f619d8-d776-4534-bb18-957c4204aa68/hive_2020-05-29_15-22-41_571_7827501270783748141-3
2020-05-29 15:22:45,969 INFO hive.ql.Context: 
[a4f619d8-d776-4534-bb18-957c4204aa68 HiveServer2-Handler-Pool: Thread-114]: 
FileSystem Created: 1 ms {code}

> deleteOnExit set is not cleaned up, causing memory pressure
> ---
>
> Key: HIVE-23580
> URL: https://issues.apache.org/jira/browse/HIVE-23580
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23580.1.patch
>
>
> removeScratchDir doesn't always call cancelDeleteOnExit() on Context::clear



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23580) deleteOnExit set is not cleaned up, causing memory pressure

2020-05-29 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-23580:
-
Status: Patch Available  (was: Open)

> deleteOnExit set is not cleaned up, causing memory pressure
> ---
>
> Key: HIVE-23580
> URL: https://issues.apache.org/jira/browse/HIVE-23580
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23580.1.patch
>
>
> removeScratchDir doesn't always call cancelDeleteOnExit() on Context::clear



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23580) deleteOnExit set is not cleaned up, causing memory pressure

2020-05-29 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-23580:
-
Attachment: HIVE-23580.1.patch

> deleteOnExit set is not cleaned up, causing memory pressure
> ---
>
> Key: HIVE-23580
> URL: https://issues.apache.org/jira/browse/HIVE-23580
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23580.1.patch
>
>
> removeScratchDir doesn't always call cancelDeleteOnExit() on Context::clear



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23580) deleteOnExit set is not cleaned up, causing memory pressure

2020-05-29 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-23580:
-
Status: Open  (was: Patch Available)

> deleteOnExit set is not cleaned up, causing memory pressure
> ---
>
> Key: HIVE-23580
> URL: https://issues.apache.org/jira/browse/HIVE-23580
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23580.1.patch
>
>
> removeScratchDir doesn't always call cancelDeleteOnExit() on Context::clear



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23580) deleteOnExit set is not cleaned up, causing memory pressure

2020-05-29 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-23580:
-
Attachment: (was: HIVE-23580.1.patch)

> deleteOnExit set is not cleaned up, causing memory pressure
> ---
>
> Key: HIVE-23580
> URL: https://issues.apache.org/jira/browse/HIVE-23580
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23580.1.patch
>
>
> removeScratchDir doesn't always call cancelDeleteOnExit() on Context::clear



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23580) deleteOnExit set is not cleaned up, causing memory pressure

2020-05-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119690#comment-17119690
 ] 

Hive QA commented on HIVE-23580:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13004345/HIVE-23580.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 17213 tests 
executed
*Failed tests:*
{noformat}
TestDanglingQOuts - did not produce a TEST-*.xml file (likely timed out) 
(batchId=172)
TestQOutProcessor - did not produce a TEST-*.xml file (likely timed out) 
(batchId=172)
TestSplitSupport - did not produce a TEST-*.xml file (likely timed out) 
(batchId=172)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/22686/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/22686/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-22686/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13004345 - PreCommit-HIVE-Build

> deleteOnExit set is not cleaned up, causing memory pressure
> ---
>
> Key: HIVE-23580
> URL: https://issues.apache.org/jira/browse/HIVE-23580
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23580.1.patch
>
>
> removeScratchDir doesn't always call cancelDeleteOnExit() on Context::clear



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23510) TestMiniLlapLocalCliDriver should be the default driver for q tests

2020-05-29 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-23510:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> TestMiniLlapLocalCliDriver should be the default driver for q tests
> ---
>
> Key: HIVE-23510
> URL: https://issues.apache.org/jira/browse/HIVE-23510
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-23510.01.patch, HIVE-23510.02.patch, 
> HIVE-23510.03.patch, HIVE-23510.04.patch, HIVE-23510.05.patch
>
>
> Set TestMiniLlapLocalCliDriver as the default driver. For now, the few tests 
> still processed by TestCliDriver should be marked in 
> testconfiguration.properties, until that driver is completely eliminated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23580) deleteOnExit set is not cleaned up, causing memory pressure

2020-05-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119661#comment-17119661
 ] 

Hive QA commented on HIVE-23580:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
49s{color} | {color:blue} ql in master has 1524 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-22686/dev-support/hive-personality.sh
 |
| git revision | master / 8443e50 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-22686/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> deleteOnExit set is not cleaned up, causing memory pressure
> ---
>
> Key: HIVE-23580
> URL: https://issues.apache.org/jira/browse/HIVE-23580
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23580.1.patch
>
>
> removeScratchDir doesn't always call cancelDeleteOnExit() on Context::clear



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23580) deleteOnExit set is not cleaned up, causing memory pressure

2020-05-29 Thread Attila Magyar (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119654#comment-17119654
 ] 

Attila Magyar commented on HIVE-23580:
--

[~ashutoshc], if the file system cache is enabled 
(fs.<scheme>.impl.disable.cache=false) then getFileSystem() should return fast.

We don't call it twice in the loop: if we enter the first _if_ we _continue_ 
and don't call the second one; if we enter the second _if_, we don't call the 
first one.

> deleteOnExit set is not cleaned up, causing memory pressure
> ---
>
> Key: HIVE-23580
> URL: https://issues.apache.org/jira/browse/HIVE-23580
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23580.1.patch
>
>
> removeScratchDir doesn't always call cancelDeleteOnExit() on Context::clear



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HIVE-23580) deleteOnExit set is not cleaned up, causing memory pressure

2020-05-29 Thread Attila Magyar (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119654#comment-17119654
 ] 

Attila Magyar edited comment on HIVE-23580 at 5/29/20, 2:20 PM:


[~ashutoshc], if the file system cache is enabled 
(fs.<scheme>.impl.disable.cache=false) then getFileSystem() should return fast.

We don't call it twice in the loop: if we enter the first _if_ we _continue_ 
and don't call the second one; if we enter the second _if_, we don't call the 
first one.


was (Author: amagyar):
[~ashutoshc], if file system cache is enabled 
(fs..impl.disable.cache=true) then getFileSystem() should return fast.

We don't call it twice in the loop, if we enter the first _if_ we _continue_ 
and don't call the 2nd one. If we enter the second _if_ we don't call the first 
one.

> deleteOnExit set is not cleaned up, causing memory pressure
> ---
>
> Key: HIVE-23580
> URL: https://issues.apache.org/jira/browse/HIVE-23580
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23580.1.patch
>
>
> removeScratchDir doesn't always call cancelDeleteOnExit() on Context::clear



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23580) deleteOnExit set is not cleaned up, causing memory pressure

2020-05-29 Thread Ashutosh Chauhan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119623#comment-17119623
 ] 

Ashutosh Chauhan commented on HIVE-23580:
-

Also, we should benchmark this, because if it's slow we may have to consider 
the suggestion on HIVE-23196 of doing deletes on a separate thread, so that we 
are not blocking query completion.
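
The separate-thread idea could be sketched as below. This is a hypothetical 
shape, not Hive's actual API: the class name, deleteAsync, and the deleteDir 
callback (standing in for the expensive FileSystem.delete call) are all 
invented for illustration.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Hypothetical sketch of the HIVE-23196 suggestion; not Hive's actual code.
public class AsyncScratchDirCleaner {

  private final ExecutorService deleter = Executors.newSingleThreadExecutor(r -> {
    Thread t = new Thread(r, "scratch-dir-cleaner");
    t.setDaemon(true); // cleanup must not keep the JVM alive
    return t;
  });

  // deleteDir stands in for the expensive FileSystem.delete() call.
  public void deleteAsync(List<String> scratchDirs, Consumer<String> deleteDir) {
    for (String dir : scratchDirs) {
      deleter.submit(() -> deleteDir.accept(dir)); // query thread is not blocked
    }
  }

  public boolean shutdownAndWait(long seconds) throws InterruptedException {
    deleter.shutdown();
    return deleter.awaitTermination(seconds, TimeUnit.SECONDS);
  }
}
```

The trade-off is that failures become asynchronous too, so a real 
implementation would need logging or retries inside the submitted task.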

> deleteOnExit set is not cleaned up, causing memory pressure
> ---
>
> Key: HIVE-23580
> URL: https://issues.apache.org/jira/browse/HIVE-23580
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23580.1.patch
>
>
> removeScratchDir doesn't always call cancelDeleteOnExit() on context::clear





[jira] [Commented] (HIVE-23580) deleteOnExit set is not cleaned up, causing memory pressure

2020-05-29 Thread Ashutosh Chauhan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17119619#comment-17119619
 ] 

Ashutosh Chauhan commented on HIVE-23580:
-

FileSystem fs = p.getFileSystem(conf); is an expensive call, and it is 
currently made twice in that loop. Can you move it above the if() so that we 
avoid making two calls?
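The hoisting being asked for can be sketched as follows. This is a simplified stand-in, not the actual removeScratchDir loop: getFileSystem here just counts invocations so the effect is visible, and the endsWith() branch conditions are invented.

```java
import java.util.Arrays;
import java.util.List;

public class HoistExpensiveCall {
    static int calls = 0;

    // Stand-in for p.getFileSystem(conf): expensive on a cache miss.
    static String getFileSystem(String path) {
        calls++;
        return "fs-for:" + path;
    }

    public static void main(String[] args) {
        List<String> paths = Arrays.asList("/tmp/a", "/tmp/b", "/tmp/c");
        for (String p : paths) {
            // Hoisted above both branches: exactly one lookup per iteration,
            // instead of one lookup inside each if-branch.
            String fs = getFileSystem(p);
            if (p.endsWith("a")) {
                continue;   // first branch: skip the rest of the iteration
            }
            if (p.endsWith("b")) {
                System.out.println("removing " + p + " via " + fs);
            }
        }
        System.out.println("calls: " + calls);
    }
}
```

Since at most one branch runs per iteration anyway (as the reply on this thread points out), hoisting doesn't change the call count at runtime; it just makes the single-call property obvious in the code.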

> deleteOnExit set is not cleaned up, causing memory pressure
> ---
>
> Key: HIVE-23580
> URL: https://issues.apache.org/jira/browse/HIVE-23580
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23580.1.patch
>
>
> removeScratchDir doesn't always call cancelDeleteOnExit() on context::clear





[jira] [Updated] (HIVE-23580) deleteOnExit set is not cleaned up, causing memory pressure

2020-05-29 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-23580:
-
Attachment: HIVE-23580.1.patch

> deleteOnExit set is not cleaned up, causing memory pressure
> ---
>
> Key: HIVE-23580
> URL: https://issues.apache.org/jira/browse/HIVE-23580
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23580.1.patch
>
>
> removeScratchDir doesn't always call cancelDeleteOnExit() on context::clear





[jira] [Updated] (HIVE-23580) deleteOnExit set is not cleaned up, causing memory pressure

2020-05-29 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-23580:
-
Status: Patch Available  (was: Open)

> deleteOnExit set is not cleaned up, causing memory pressure
> ---
>
> Key: HIVE-23580
> URL: https://issues.apache.org/jira/browse/HIVE-23580
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23580.1.patch
>
>
> removeScratchDir doesn't always call cancelDeleteOnExit() on context::clear





[jira] [Assigned] (HIVE-23580) deleteOnExit set is not cleaned up, causing memory pressure

2020-05-29 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar reassigned HIVE-23580:



> deleteOnExit set is not cleaned up, causing memory pressure
> ---
>
> Key: HIVE-23580
> URL: https://issues.apache.org/jira/browse/HIVE-23580
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
>
> removeScratchDir doesn't always call cancelDeleteOnExit() on context::clear





[jira] [Work logged] (HIVE-23462) Add option to rewrite NTILE to sketch functions

2020-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23462?focusedWorklogId=438737=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-438737
 ]

ASF GitHub Bot logged work on HIVE-23462:
-

Author: ASF GitHub Bot
Created on: 29/May/20 13:25
Start Date: 29/May/20 13:25
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk commented on a change in pull request #1031:
URL: https://github.com/apache/hive/pull/1031#discussion_r432480302



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveRewriteToDataSketchesRules.java
##
@@ -368,4 +389,216 @@ void rewrite(AggregateCall aggCall) {
   }
 }
   }
+
+  /**
+   * Generic support for rewriting Windowing expression into a different form 
usually using joins.
+   */
+  private static abstract class WindowingToProjectAggregateJoinProject extends 
RelOptRule {
+
+protected final String sketchType;
+
+public WindowingToProjectAggregateJoinProject(String sketchType) {
+  super(operand(HiveProject.class, any()));
+  this.sketchType = sketchType;
+}
+
+@Override
+public void onMatch(RelOptRuleCall call) {
+
+  final Project project = call.rel(0);
+
+  VbuilderPAP vb = buildProcessor(call);
+  RelNode newProject = vb.processProject(project);
+
+  if (newProject instanceof Project && ((Project) 
newProject).getChildExps().equals(project.getChildExps())) {

Review comment:
   done it a bit differently, but the end result is the same





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 438737)
Time Spent: 1h 20m  (was: 1h 10m)

> Add option to rewrite NTILE to sketch functions
> ---
>
> Key: HIVE-23462
> URL: https://issues.apache.org/jira/browse/HIVE-23462
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23462.01.patch, HIVE-23462.02.patch, 
> HIVE-23462.03.patch, HIVE-23462.04.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>






[jira] [Work logged] (HIVE-23462) Add option to rewrite NTILE to sketch functions

2020-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23462?focusedWorklogId=438736=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-438736
 ]

ASF GitHub Bot logged work on HIVE-23462:
-

Author: ASF GitHub Bot
Created on: 29/May/20 13:23
Start Date: 29/May/20 13:23
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk commented on a change in pull request #1031:
URL: https://github.com/apache/hive/pull/1031#discussion_r432479017



##
File path: ql/src/java/org/apache/hadoop/hive/ql/exec/DataSketchesFunctions.java
##
@@ -235,12 +232,21 @@ public String getFunctionName() {
 return Optional.empty();
   } else {
 JavaTypeFactoryImpl typeFactory = new JavaTypeFactoryImpl(new 
HiveTypeSystemImpl());
+Type type = returnType;
+if (type instanceof ParameterizedType) {
+  ParameterizedType parameterizedType = (ParameterizedType) type;
+  if (parameterizedType.getRawType() == List.class) {
+  final RelDataType componentRelType = 
typeFactory.createType(parameterizedType.getActualTypeArguments()[0]);
+  return Optional
+  
.of(typeFactory.createArrayType(typeFactory.createTypeWithNullability(componentRelType,
 true), -1));
+  }
+}
 return Optional.of(typeFactory.createType(returnType));
   }

Review comment:
   yes; I also wanted to use this only temporarily, because this approach 
would not work for GenericUDF. Instead of hardcoding stuff, I think the best 
would be to create a `SqlReturnTypeInference` which could translate the op 
bindings into things that could be processed by `GenericUDF#initialize`.
   opened: HIVE-23579
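The raw-type check in the patch above relies on standard java.lang.reflect behavior, which can be exercised in isolation. This sketch is independent of Calcite's JavaTypeFactoryImpl; the class and method names are invented, and only the ParameterizedType/List.class detection mirrors the patch.

```java
import java.lang.reflect.Method;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.List;

public class ReturnTypeInspector {
    // Example method whose generic return type we inspect,
    // standing in for a sketch function that returns List<Float>.
    static List<Float> quantiles() {
        return List.of();
    }

    public static void main(String[] args) throws Exception {
        Method m = ReturnTypeInspector.class.getDeclaredMethod("quantiles");
        Type type = m.getGenericReturnType();
        if (type instanceof ParameterizedType) {
            ParameterizedType pt = (ParameterizedType) type;
            if (pt.getRawType() == List.class) {
                // The component type is what the patch feeds into
                // createArrayType() to build the array RelDataType.
                Type component = pt.getActualTypeArguments()[0];
                System.out.println("list of " + component.getTypeName());
            }
        }
    }
}
```

For a method declared to return a raw or non-generic type, getGenericReturnType() yields a plain Class rather than a ParameterizedType, which is why the patch falls through to the existing createType(returnType) path.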







Issue Time Tracking
---

Worklog Id: (was: 438736)
Time Spent: 1h 10m  (was: 1h)

> Add option to rewrite NTILE to sketch functions
> ---
>
> Key: HIVE-23462
> URL: https://issues.apache.org/jira/browse/HIVE-23462
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23462.01.patch, HIVE-23462.02.patch, 
> HIVE-23462.03.patch, HIVE-23462.04.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>






[jira] [Work logged] (HIVE-23514) Add Atlas metadata replication metrics

2020-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23514?focusedWorklogId=438735=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-438735
 ]

ASF GitHub Bot logged work on HIVE-23514:
-

Author: ASF GitHub Bot
Created on: 29/May/20 13:22
Start Date: 29/May/20 13:22
Worklog Time Spent: 10m 
  Work Description: aasha commented on a change in pull request #1040:
URL: https://github.com/apache/hive/pull/1040#discussion_r432476084



##
File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/AtlasDumpTask.java
##
@@ -73,7 +74,12 @@ public int execute() {
   String entityGuid = checkHiveEntityGuid(atlasRequestBuilder, 
atlasReplInfo.getSrcCluster(),
   atlasReplInfo.getSrcDB());
   long currentModifiedTime = getCurrentTimestamp(atlasReplInfo, 
entityGuid);
-  dumpAtlasMetaData(atlasRequestBuilder, atlasReplInfo);
+  AtlasDumpLogger replLogger = new 
AtlasDumpLogger(atlasReplInfo.getSrcDB(),
+  atlasReplInfo.getStagingDir().toString());
+  replLogger.startLog();

Review comment:
   start should be called when the Atlas dump task starts; that will give 
the complete estimate of the time taken.

##
File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/AtlasLoadTask.java
##
@@ -66,7 +72,7 @@ public int execute() {
 }
   }
 
-  private AtlasReplInfo createAtlasReplInfo() throws SemanticException, 
MalformedURLException {
+  public AtlasReplInfo createAtlasReplInfo() throws SemanticException, 
MalformedURLException {

Review comment:
   Do you need this to be public?

##
File path: ql/src/java/org/apache/hadoop/hive/ql/exec/repl/AtlasDumpTask.java
##
@@ -73,7 +74,12 @@ public int execute() {
   String entityGuid = checkHiveEntityGuid(atlasRequestBuilder, 
atlasReplInfo.getSrcCluster(),
   atlasReplInfo.getSrcDB());
   long currentModifiedTime = getCurrentTimestamp(atlasReplInfo, 
entityGuid);
-  dumpAtlasMetaData(atlasRequestBuilder, atlasReplInfo);
+  AtlasDumpLogger replLogger = new 
AtlasDumpLogger(atlasReplInfo.getSrcDB(),
+  atlasReplInfo.getStagingDir().toString());
+  replLogger.startLog();
+  long numBytesWritten = dumpAtlasMetaData(atlasRequestBuilder, 
atlasReplInfo);
+  LOG.debug("Finished dumping atlas metadata, total:{} bytes written", 
numBytesWritten);
+  replLogger.endLog(0L);

Review comment:
   Do you want to send 0 or the bytes written? Not sure how useful that is, 
though.







Issue Time Tracking
---

Worklog Id: (was: 438735)
Time Spent: 20m  (was: 10m)

> Add Atlas metadata replication metrics
> --
>
> Key: HIVE-23514
> URL: https://issues.apache.org/jira/browse/HIVE-23514
> Project: Hive
>  Issue Type: Task
>Reporter: Pravin Sinha
>Assignee: Pravin Sinha
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23514.01.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>






[jira] [Commented] (HIVE-23578) Collect ignored tests

2020-05-29 Thread Zoltan Haindrich (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17119553#comment-17119553
 ] 

Zoltan Haindrich commented on HIVE-23578:
-

{code}
'http://34.66.156.144:8080/job/hive-precommit/view/change-requests/job/PR-948/35/testReport/api/xml?tree=suites[name,duration,cases[className,name,duration,status,skippedMessage]]=//case[status="SKIPPED"]=ignored=true'
{code}

> Collect ignored tests
> -
>
> Key: HIVE-23578
> URL: https://issues.apache.org/jira/browse/HIVE-23578
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Priority: Major
>






[jira] [Commented] (HIVE-23514) Add Atlas metadata replication metrics

2020-05-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17119549#comment-17119549
 ] 

Hive QA commented on HIVE-23514:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13004330/HIVE-23514.01.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17220 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/22685/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/22685/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-22685/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13004330 - PreCommit-HIVE-Build

> Add Atlas metadata replication metrics
> --
>
> Key: HIVE-23514
> URL: https://issues.apache.org/jira/browse/HIVE-23514
> Project: Hive
>  Issue Type: Task
>Reporter: Pravin Sinha
>Assignee: Pravin Sinha
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23514.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Commented] (HIVE-23514) Add Atlas metadata replication metrics

2020-05-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17119530#comment-17119530
 ] 

Hive QA commented on HIVE-23514:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
52s{color} | {color:blue} ql in master has 1524 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-22685/dev-support/hive-personality.sh
 |
| git revision | master / 8443e50 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-22685/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Add Atlas metadata replication metrics
> --
>
> Key: HIVE-23514
> URL: https://issues.apache.org/jira/browse/HIVE-23514
> Project: Hive
>  Issue Type: Task
>Reporter: Pravin Sinha
>Assignee: Pravin Sinha
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23514.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Commented] (HIVE-22017) [ Interface changes ] Keep HMS interfaces backward compatible with changes for HIVE-21637

2020-05-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17119507#comment-17119507
 ] 

Hive QA commented on HIVE-22017:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13004327/HIVE-22017.1.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 15 failed/errored test(s), 17318 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.client.TestGetPartitions.testGetPartitionReq[Embedded]
 (batchId=156)
org.apache.hadoop.hive.metastore.client.TestGetPartitions.testGetPartitionReq[Remote]
 (batchId=156)
org.apache.hadoop.hive.metastore.client.TestListPartitions.testGetPartitionsRequestWithArgs[Embedded]
 (batchId=154)
org.apache.hadoop.hive.metastore.client.TestListPartitions.testGetPartitionsRequestWithArgs[Remote]
 (batchId=154)
org.apache.hadoop.hive.ql.metadata.TestSessionHiveMetastoreClientGetPartitionsTempTable.testGetPartitionReq[Embedded]
 (batchId=270)
org.apache.hadoop.hive.ql.metadata.TestSessionHiveMetastoreClientGetPartitionsTempTable.testGetPartitionReq[Remote]
 (batchId=270)
org.apache.hadoop.hive.ql.metadata.TestSessionHiveMetastoreClientListPartitionsTempTable.testGetPartitionsRequestWithArgs[Embedded]
 (batchId=270)
org.apache.hadoop.hive.ql.metadata.TestSessionHiveMetastoreClientListPartitionsTempTable.testGetPartitionsRequestWithArgs[Remote]
 (batchId=270)
org.apache.hadoop.hive.ql.metadata.TestSessionHiveMetastoreClientListPartitionsTempTable.testGetPartitionsRequest[Embedded]
 (batchId=270)
org.apache.hadoop.hive.ql.metadata.TestSessionHiveMetastoreClientListPartitionsTempTable.testGetPartitionsRequest[Remote]
 (batchId=270)
org.apache.hadoop.hive.ql.metadata.TestSessionHiveMetastoreClientListPartitionsTempTable.testListPartitionNamesRequestByValues[Embedded]
 (batchId=270)
org.apache.hadoop.hive.ql.metadata.TestSessionHiveMetastoreClientListPartitionsTempTable.testListPartitionNamesRequestByValues[Remote]
 (batchId=270)
org.apache.hadoop.hive.ql.metadata.TestSessionHiveMetastoreClientListPartitionsTempTable.testListPartitionsWithAuthRequestByValues[Embedded]
 (batchId=270)
org.apache.hadoop.hive.ql.metadata.TestSessionHiveMetastoreClientListPartitionsTempTable.testListPartitionsWithAuthRequestByValues[Remote]
 (batchId=270)
org.apache.hadoop.hive.ql.stats.TestStatsUpdaterThread.testTxnTable 
(batchId=258)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/22684/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/22684/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-22684/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 15 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13004327 - PreCommit-HIVE-Build

> [ Interface changes ] Keep HMS interfaces backward compatible with changes 
> for HIVE-21637
> -
>
> Key: HIVE-22017
> URL: https://issues.apache.org/jira/browse/HIVE-22017
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 2.3.7
>Reporter: Daniel Dai
>Assignee: Kishen Das
>Priority: Major
> Attachments: HIVE-216371.1.patch, HIVE-216371.2.patch, 
> HIVE-22017.1.patch
>
>
> As part of HIVE-21637 we would have to introduce ValidWriteIdList in several 
> methods. Also, in the long term, we should deprecate and remove all the 
> methods that take direct arguments, as the service definition keeps changing 
> whenever we add/remove arguments, making it hard to maintain backward 
> compatibility. So, instead of adding writeId  in bunch of get_xxx calls that 
> take direct arguments, we will create new set of methods that take Request 
> object and return Response object. We shall mark those deprecated and remove 
> in future version.





[jira] [Work logged] (HIVE-22942) Replace PTest with an alternative

2020-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22942?focusedWorklogId=438678=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-438678
 ]

ASF GitHub Bot logged work on HIVE-22942:
-

Author: ASF GitHub Bot
Created on: 29/May/20 11:22
Start Date: 29/May/20 11:22
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk closed pull request #948:
URL: https://github.com/apache/hive/pull/948


   





Issue Time Tracking
---

Worklog Id: (was: 438678)
Time Spent: 1.5h  (was: 1h 20m)

> Replace PTest with an alternative
> -
>
> Key: HIVE-22942
> URL: https://issues.apache.org/jira/browse/HIVE-22942
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22942.01.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> I never opened a jira about this... but it might actually help collect ideas 
> and actually start going somewhere sooner rather than later :D
> Right now we maintain the ptest2 project inside Hive to be able to run Hive 
> tests in a distributed fashion... the drawback of this solution is that we are 
> putting much effort into maintaining a distributed test execution framework.
> I think it would be better if we could find an off-the-shelf solution for the 
> task and migrate to that instead of putting more effort into the ptest 
> framework.
> Some info about how it compares to the existing one:
> https://docs.google.com/document/d/1dhL5B-eBvYNKEsNV3kE6RrkV5w-LtDgw5CtHV5pdoX4/edit#heading=h.e51vlxui3e6n





[jira] [Updated] (HIVE-22942) Replace PTest with an alternative

2020-05-29 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-22942:

Fix Version/s: 4.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

pushed to master. Thank you Jesus for the feedback

> Replace PTest with an alternative
> -
>
> Key: HIVE-22942
> URL: https://issues.apache.org/jira/browse/HIVE-22942
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-22942.01.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> I never opened a jira about this... but it might actually help collect ideas 
> and actually start going somewhere sooner rather than later :D
> Right now we maintain the ptest2 project inside Hive to be able to run Hive 
> tests in a distributed fashion... the drawback of this solution is that we are 
> putting much effort into maintaining a distributed test execution framework.
> I think it would be better if we could find an off-the-shelf solution for the 
> task and migrate to that instead of putting more effort into the ptest 
> framework.
> Some info about how it compares to the existing one:
> https://docs.google.com/document/d/1dhL5B-eBvYNKEsNV3kE6RrkV5w-LtDgw5CtHV5pdoX4/edit#heading=h.e51vlxui3e6n





[jira] [Commented] (HIVE-19548) Enable TestSSL#testSSLFetchHttp

2020-05-29 Thread David Lavati (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-19548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17119501#comment-17119501
 ] 

David Lavati commented on HIVE-19548:
-

I ran into this in a much older version as well; my findings might help 
someone fix this:

Error was:
{code:java}
 testSSLFetchHttp(org.apache.hive.jdbc.TestSSL)  Time elapsed: 12.568 sec  <<< 
ERROR!
org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: 
FAILED: SemanticException [Error 10072]: Database does not exist: default
at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:279)
at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:265)
at 
org.apache.hive.jdbc.HiveStatement.runAsyncOnServer(HiveStatement.java:304)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:245)
at org.apache.hive.jdbc.TestSSL.setupTestTableWithData(TestSSL.java:492)
at org.apache.hive.jdbc.TestSSL.testSSLFetchHttp(TestSSL.java:369)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:367)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:274)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:161)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:290)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:242)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:121)
Caused by: org.apache.hive.service.cli.HiveSQLException: Error while compiling 
statement: FAILED: SemanticException [Error 10072]: Database does not exist: 
default{code}
 

This happens during the create table command in the data setup:
{code:java}
Statement stmt = hs2Conn.createStatement();
stmt.execute("set hive.support.concurrency = false");

stmt.execute("drop table if exists " + tableName);
stmt.execute("create table " + tableName
+ " (under_col int comment 'the under column', value string)"); {code}
This is probably some kind of concurrency/timing issue.

If I debug this and check {{show databases}} before the create command, I do 
get back the default database. Also, the test passed when it was run alone.

> Enable TestSSL#testSSLFetchHttp
> ---
>
> Key: HIVE-19548
> URL: https://issues.apache.org/jira/browse/HIVE-19548
> Project: Hive
>  Issue Type: Test
>  Components: Test
>Affects Versions: 3.1.0
>Reporter: Jesus Camacho Rodriguez
>Priority: Critical
>






[jira] [Commented] (HIVE-22017) [ Interface changes ] Keep HMS interfaces backward compatible with changes for HIVE-21637

2020-05-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17119487#comment-17119487
 ] 

Hive QA commented on HIVE-22017:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
14s{color} | {color:red} metastore-common in master failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
15s{color} | {color:red} metastore-server in master failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
16s{color} | {color:red} standalone-metastore/metastore-common: The patch 
generated 12 new + 409 unchanged - 0 fixed = 421 total (was 409) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
28s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 22 new + 652 unchanged - 1 fixed = 674 total (was 653) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 2 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
14s{color} | {color:red} metastore-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
14s{color} | {color:red} metastore-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-22684/dev-support/hive-personality.sh
 |
| git revision | master / 1c1336f |
| Default Java | 1.8.0_111 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-22684/yetus/branch-findbugs-standalone-metastore_metastore-common.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-22684/yetus/branch-findbugs-standalone-metastore_metastore-server.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-22684/yetus/diff-checkstyle-standalone-metastore_metastore-common.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-22684/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-22684/yetus/whitespace-tabs.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-22684/yetus/patch-findbugs-standalone-metastore_metastore-common.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-22684/yetus/patch-findbugs-standalone-metastore_metastore-server.txt
 |
| modules | C: standalone-metastore/metastore-common 
standalone-metastore/metastore-server U: standalone-metastore |
| Console output | 

[jira] [Work logged] (HIVE-23514) Add Atlas metadata replication metrics

2020-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23514?focusedWorklogId=438657&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-438657
 ]

ASF GitHub Bot logged work on HIVE-23514:
-

Author: ASF GitHub Bot
Created on: 29/May/20 10:33
Start Date: 29/May/20 10:33
Worklog Time Spent: 10m 
  Work Description: pkumarsinha opened a new pull request #1040:
URL: https://github.com/apache/hive/pull/1040


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 438657)
Remaining Estimate: 0h
Time Spent: 10m

> Add Atlas metadata replication metrics
> --
>
> Key: HIVE-23514
> URL: https://issues.apache.org/jira/browse/HIVE-23514
> Project: Hive
>  Issue Type: Task
>Reporter: Pravin Sinha
>Assignee: Pravin Sinha
>Priority: Major
> Attachments: HIVE-23514.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23514) Add Atlas metadata replication metrics

2020-05-29 Thread Pravin Sinha (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pravin Sinha updated HIVE-23514:

Status: Patch Available  (was: In Progress)

> Add Atlas metadata replication metrics
> --
>
> Key: HIVE-23514
> URL: https://issues.apache.org/jira/browse/HIVE-23514
> Project: Hive
>  Issue Type: Task
>Reporter: Pravin Sinha
>Assignee: Pravin Sinha
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23514.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23514) Add Atlas metadata replication metrics

2020-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-23514:
--
Labels: pull-request-available  (was: )

> Add Atlas metadata replication metrics
> --
>
> Key: HIVE-23514
> URL: https://issues.apache.org/jira/browse/HIVE-23514
> Project: Hive
>  Issue Type: Task
>Reporter: Pravin Sinha
>Assignee: Pravin Sinha
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23514.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23514) Add Atlas metadata replication metrics

2020-05-29 Thread Pravin Sinha (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pravin Sinha updated HIVE-23514:

Attachment: (was: HIVE-23514.WIP.patch)

> Add Atlas metadata replication metrics
> --
>
> Key: HIVE-23514
> URL: https://issues.apache.org/jira/browse/HIVE-23514
> Project: Hive
>  Issue Type: Task
>Reporter: Pravin Sinha
>Assignee: Pravin Sinha
>Priority: Major
> Attachments: HIVE-23514.01.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23514) Add Atlas metadata replication metrics

2020-05-29 Thread Pravin Sinha (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pravin Sinha updated HIVE-23514:

Attachment: HIVE-23514.01.patch

> Add Atlas metadata replication metrics
> --
>
> Key: HIVE-23514
> URL: https://issues.apache.org/jira/browse/HIVE-23514
> Project: Hive
>  Issue Type: Task
>Reporter: Pravin Sinha
>Assignee: Pravin Sinha
>Priority: Major
> Attachments: HIVE-23514.01.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23460) Add qoption to disable qtests

2020-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23460?focusedWorklogId=438652&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-438652
 ]

ASF GitHub Bot logged work on HIVE-23460:
-

Author: ASF GitHub Bot
Created on: 29/May/20 10:24
Start Date: 29/May/20 10:24
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk closed pull request #1018:
URL: https://github.com/apache/hive/pull/1018


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 438652)
Time Spent: 40m  (was: 0.5h)

> Add qoption to disable qtests
> -
>
> Key: HIVE-23460
> URL: https://issues.apache.org/jira/browse/HIVE-23460
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-23460.01.patch, HIVE-23460.02.patch, 
> HIVE-23460.02.patch, HIVE-23460.02.patch, HIVE-23460.02.patch, 
> HIVE-23460.02.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> instead other ways to exclude them... (testconfiguration.properties; 
> CliConfig#excludeQuery)
> {code}
> --! qt:disabled:reason
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23368) MV rebuild should produce the same view as the one configured at creation time

2020-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23368?focusedWorklogId=438653&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-438653
 ]

ASF GitHub Bot logged work on HIVE-23368:
-

Author: ASF GitHub Bot
Created on: 29/May/20 10:24
Start Date: 29/May/20 10:24
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk closed pull request #1009:
URL: https://github.com/apache/hive/pull/1009


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 438653)
Remaining Estimate: 0h
Time Spent: 10m

> MV rebuild should produce the same view as the one configured at creation time
> --
>
> Key: HIVE-23368
> URL: https://issues.apache.org/jira/browse/HIVE-23368
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-23368.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There might be some configurations which might affect the rel-tree of the 
> materialized views.
> In case rewrites to use datasketches for count(distinct) is enabled; the view 
> should store sketches instead of distinct values



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23368) MV rebuild should produce the same view as the one configured at creation time

2020-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-23368:
--
Labels: pull-request-available  (was: )

> MV rebuild should produce the same view as the one configured at creation time
> --
>
> Key: HIVE-23368
> URL: https://issues.apache.org/jira/browse/HIVE-23368
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-23368.01.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There might be some configurations which might affect the rel-tree of the 
> materialized views.
> In case rewrites to use datasketches for count(distinct) is enabled; the view 
> should store sketches instead of distinct values



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22942) Replace PTest with an alternative

2020-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22942?focusedWorklogId=438650&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-438650
 ]

ASF GitHub Bot logged work on HIVE-22942:
-

Author: ASF GitHub Bot
Created on: 29/May/20 10:14
Start Date: 29/May/20 10:14
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk commented on a change in pull request #948:
URL: https://github.com/apache/hive/pull/948#discussion_r432389475



##
File path: Jenkinsfile
##
@@ -0,0 +1,199 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+properties([
+// max 5 build/branch/day

Review comment:
   opened https://issues.apache.org/jira/browse/HIVE-23563 to not forget 
this





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 438650)
Time Spent: 1h 20m  (was: 1h 10m)

> Replace PTest with an alternative
> -
>
> Key: HIVE-22942
> URL: https://issues.apache.org/jira/browse/HIVE-22942
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22942.01.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> I never opened a jira about this...but it might actually help collect ideas 
> and actually start going somewhere sooner than later :D
> Right now we maintain the ptest2 project inside Hive to be able to run Hive 
> tests in a distributed fashion...the drawback of this solution is that we are 
> putting much effort into maintaining a distributed test execution framework...
> I think it would be better if we could find an off the shelf solution for the 
> task and migrate to that instead of putting more efforts into the ptest 
> framework
> some info/etc about how it compares to existing one:
> https://docs.google.com/document/d/1dhL5B-eBvYNKEsNV3kE6RrkV5w-LtDgw5CtHV5pdoX4/edit#heading=h.e51vlxui3e6n



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22942) Replace PTest with an alternative

2020-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22942?focusedWorklogId=438648&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-438648
 ]

ASF GitHub Bot logged work on HIVE-22942:
-

Author: ASF GitHub Bot
Created on: 29/May/20 10:13
Start Date: 29/May/20 10:13
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk commented on a change in pull request #948:
URL: https://github.com/apache/hive/pull/948#discussion_r432389105



##
File path: 
ql/src/test/queries/clientpositive/schema_evol_par_vec_table_dictionary_encoding.q
##
@@ -1,3 +1,5 @@
+--! qt:disabled:falky?!

Review comment:
   added jira refs to which already had one - and opened a few new jiras to 
have that





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 438648)
Time Spent: 1h 10m  (was: 1h)

> Replace PTest with an alternative
> -
>
> Key: HIVE-22942
> URL: https://issues.apache.org/jira/browse/HIVE-22942
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22942.01.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> I never opened a jira about this...but it might actually help collect ideas 
> and actually start going somewhere sooner than later :D
> Right now we maintain the ptest2 project inside Hive to be able to run Hive 
> tests in a distributed fashion...the drawback of this solution is that we are 
> putting much effort into maintaining a distributed test execution framework...
> I think it would be better if we could find an off the shelf solution for the 
> task and migrate to that instead of putting more efforts into the ptest 
> framework
> some info/etc about how it compares to existing one:
> https://docs.google.com/document/d/1dhL5B-eBvYNKEsNV3kE6RrkV5w-LtDgw5CtHV5pdoX4/edit#heading=h.e51vlxui3e6n



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23353) Atlas metadata replication scheduling

2020-05-29 Thread Anishek Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anishek Agarwal updated HIVE-23353:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

+1, committed to master. Thanks for the patch [~pkumarsinha] and the review 
[~aasha]

> Atlas metadata replication scheduling
> -
>
> Key: HIVE-23353
> URL: https://issues.apache.org/jira/browse/HIVE-23353
> Project: Hive
>  Issue Type: Task
>Reporter: Pravin Sinha
>Assignee: Pravin Sinha
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23353.01.patch, HIVE-23353.02.patch, 
> HIVE-23353.03.patch, HIVE-23353.04.patch, HIVE-23353.05.patch, 
> HIVE-23353.06.patch, HIVE-23353.07.patch, HIVE-23353.08.patch, 
> HIVE-23353.08.patch, HIVE-23353.08.patch, HIVE-23353.08.patch, 
> HIVE-23353.09.patch, HIVE-23353.10.patch, HIVE-23353.10.patch
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23242) Fix flaky tests testHouseKeepingThreadExistence in TestMetastoreHousekeepingLeaderEmptyConfig and TestMetastoreHousekeepingLeader

2020-05-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119461#comment-17119461
 ] 

Hive QA commented on HIVE-23242:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13004304/HIVE-23242.4.patch

{color:green}SUCCESS:{color} +1 due to 8 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 17289 tests 
executed
*Failed tests:*
{noformat}
TestStatsReplicationScenariosACID - did not produce a TEST-*.xml file (likely 
timed out) (batchId=186)
org.apache.hadoop.hive.metastore.TestPartitionManagement.testPartitionDiscoveryTablePattern
 (batchId=154)
org.apache.hadoop.hive.metastore.security.TestZookeeperTokenStoreSSLEnabled.org.apache.hadoop.hive.metastore.security.TestZookeeperTokenStoreSSLEnabled
 (batchId=177)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/22683/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/22683/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-22683/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13004304 - PreCommit-HIVE-Build

> Fix flaky tests testHouseKeepingThreadExistence in 
> TestMetastoreHousekeepingLeaderEmptyConfig and TestMetastoreHousekeepingLeader
> -
>
> Key: HIVE-23242
> URL: https://issues.apache.org/jira/browse/HIVE-23242
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Peter Varga
>Priority: Major
> Attachments: HIVE-23242.1.patch, HIVE-23242.2.patch, 
> HIVE-23242.3.patch, HIVE-23242.4.patch
>
>
> Tests were ignored, see https://issues.apache.org/jira/browse/HIVE-23221 for 
> details.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22017) [ Interface changes ] Keep HMS interfaces backward compatible with changes for HIVE-21637

2020-05-29 Thread Kishen Das (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kishen Das updated HIVE-22017:
--
Attachment: HIVE-22017.1.patch

> [ Interface changes ] Keep HMS interfaces backward compatible with changes 
> for HIVE-21637
> -
>
> Key: HIVE-22017
> URL: https://issues.apache.org/jira/browse/HIVE-22017
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 2.3.7
>Reporter: Daniel Dai
>Assignee: Kishen Das
>Priority: Major
> Attachments: HIVE-216371.1.patch, HIVE-216371.2.patch, 
> HIVE-22017.1.patch
>
>
> As part of HIVE-21637 we would have to introduce ValidWriteIdList in several 
> methods. Also, in the long term, we should deprecate and remove all the 
> methods that take direct arguments, as the service definition keeps changing 
> whenever we add/remove arguments, making it hard to maintain backward 
> compatibility. So, instead of adding writeId in a bunch of get_xxx calls that 
> take direct arguments, we will create a new set of methods that take a Request 
> object and return a Response object. We shall mark the old ones deprecated and 
> remove them in a future version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23242) Fix flaky tests testHouseKeepingThreadExistence in TestMetastoreHousekeepingLeaderEmptyConfig and TestMetastoreHousekeepingLeader

2020-05-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119428#comment-17119428
 ] 

Hive QA commented on HIVE-23242:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
2s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
15s{color} | {color:red} metastore-server in master failed. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
33s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} standalone-metastore/metastore-server: The patch 
generated 0 new + 427 unchanged - 2 fixed = 427 total (was 429) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} The patch hive-unit passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
14s{color} | {color:red} metastore-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-22683/dev-support/hive-personality.sh
 |
| git revision | master / 6c8d478 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-22683/yetus/branch-findbugs-standalone-metastore_metastore-server.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-22683/yetus/patch-findbugs-standalone-metastore_metastore-server.txt
 |
| modules | C: standalone-metastore/metastore-server itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-22683/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Fix flaky tests testHouseKeepingThreadExistence in 
> TestMetastoreHousekeepingLeaderEmptyConfig and TestMetastoreHousekeepingLeader
> -
>
> Key: HIVE-23242
> URL: https://issues.apache.org/jira/browse/HIVE-23242
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Peter Varga
>Priority: Major
> Attachments: HIVE-23242.1.patch, HIVE-23242.2.patch, 
> HIVE-23242.3.patch, HIVE-23242.4.patch
>
>
> Tests were ignored, 

[jira] [Commented] (HIVE-23340) TxnHandler cleanup

2020-05-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119401#comment-17119401
 ] 

Hive QA commented on HIVE-23340:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13004305/HIVE-23340.6.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17294 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.parse.TestScheduledReplicationScenarios.testAcidTablesReplLoadBootstrapIncr
 (batchId=207)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/22682/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/22682/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-22682/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13004305 - PreCommit-HIVE-Build

> TxnHandler cleanup
> --
>
> Key: HIVE-23340
> URL: https://issues.apache.org/jira/browse/HIVE-23340
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Minor
> Attachments: HIVE-23340.1.patch, HIVE-23340.2.patch, 
> HIVE-23340.3.patch, HIVE-23340.4.patch, HIVE-23340.5.patch, HIVE-23340.6.patch
>
>
> * Merge getOpenTxns and getOpenTxnInfo to avoid code duplication
>  * Remove TxnStatus character constants and use the enum values



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23462) Add option to rewrite NTILE to sketch functions

2020-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23462?focusedWorklogId=438608&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-438608
 ]

ASF GitHub Bot logged work on HIVE-23462:
-

Author: ASF GitHub Bot
Created on: 29/May/20 08:08
Start Date: 29/May/20 08:08
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk commented on a change in pull request #1031:
URL: https://github.com/apache/hive/pull/1031#discussion_r432323344



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveRelBuilder.java
##
@@ -157,6 +159,12 @@ public static SqlAggFunction getRollup(SqlAggFunction 
aggregation) {
 return null;
   }
 
+  @Override
+  public AggCall aggregateCall(SqlAggFunction aggFunction, boolean distinct, boolean approximate,
+      boolean ignoreNulls, RexNode filter, ImmutableList<RexNode> orderKeys, String alias,
+      ImmutableList<RexNode> operands) {
+    return super.aggregateCall(aggFunction, distinct, approximate, ignoreNulls, filter, orderKeys, alias, operands);

Review comment:
   actually 
[AggregateCall#isApproximate](https://github.com/apache/calcite/blob/abe772018a6adb5007429e0c1c83b6e7d83a1c71/core/src/main/java/org/apache/calcite/rel/core/AggregateCall.java#L218)
 is only used at one place - to decide to merge or not aggregates (in 
[AggregateMergeRule](https://github.com/apache/calcite/blob/abe772018a6adb5007429e0c1c83b6e7d83a1c71/core/src/main/java/org/apache/calcite/rel/rules/AggregateMergeRule.java#L70));
 I now think that I've incorrectly set it to true - although the sketch will 
collect some features of the dataset; the approximation will be done by the 
`sketch_estimate` function
   
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 438608)
Time Spent: 1h  (was: 50m)

> Add option to rewrite NTILE to sketch functions
> ---
>
> Key: HIVE-23462
> URL: https://issues.apache.org/jira/browse/HIVE-23462
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23462.01.patch, HIVE-23462.02.patch, 
> HIVE-23462.03.patch, HIVE-23462.04.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23340) TxnHandler cleanup

2020-05-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17119383#comment-17119383
 ] 

Hive QA commented on HIVE-23340:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
17s{color} | {color:red} metastore-server in master failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} standalone-metastore/metastore-server: The patch 
generated 0 new + 496 unchanged - 32 fixed = 496 total (was 528) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
15s{color} | {color:red} metastore-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-22682/dev-support/hive-personality.sh
 |
| git revision | master / 6c8d478 |
| Default Java | 1.8.0_111 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-22682/yetus/branch-findbugs-standalone-metastore_metastore-server.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-22682/yetus/patch-findbugs-standalone-metastore_metastore-server.txt
 |
| modules | C: standalone-metastore/metastore-server U: 
standalone-metastore/metastore-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-22682/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> TxnHandler cleanup
> --
>
> Key: HIVE-23340
> URL: https://issues.apache.org/jira/browse/HIVE-23340
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Peter Varga
>Assignee: Peter Varga
>Priority: Minor
> Attachments: HIVE-23340.1.patch, HIVE-23340.2.patch, 
> HIVE-23340.3.patch, HIVE-23340.4.patch, HIVE-23340.5.patch, HIVE-23340.6.patch
>
>
> * Merge getOpenTxns and getOpenTxnInfo to avoid code duplication
>  * Remove TxnStatus character constants and use the enum values



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-23577) Utility to generate/manage delegation token for Hive Metastore.

2020-05-29 Thread Deepashree Gandhi (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17119379#comment-17119379
 ] 

Deepashree Gandhi commented on HIVE-23577:
--

This JIRA is to implement a utility that generates a delegation token for the 
Hive Metastore, similar to Hadoop's DelegationTokenFetcher. A 
Kerberos-authenticated client may retrieve a binary token from the server, 
renew it, and print it. It should also be able to write the token to a file in 
a consumable format.

> Utility to generate/manage delegation token for Hive Metastore.
> ---
>
> Key: HIVE-23577
> URL: https://issues.apache.org/jira/browse/HIVE-23577
> Project: Hive
>  Issue Type: Bug
>  Components: Authentication, Metastore, Security, Standalone Metastore
>Affects Versions: 3.0.0
> Environment: Secure(Kerberos enabled) environment.
>Reporter: Dharmesh Jain
>Assignee: Deepashree Gandhi
>Priority: Major
>
> Create a utility to generate/manage delegation token for Hivemetastore on the 
> same line of DelegationTokenFetcher for HDFS.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23560) Optimize bootstrap dump to abort only write Transactions

2020-05-29 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HIVE-23560:
---
Description: 
Currently before doing a bootstrap dump, we abort all open transactions after 
waiting for a configured time. We are proposing to abort only write 
transactions for the db under replication and leave the read and repl created 
transactions as is.
This doc attached talks about it in detail

  was:
Currently before doing a bootstrap dump, we abort all open transactions after 
waiting for a configured time. We are proposing to abort only write 
transactions and leave the read and repl created transactions as is.
This doc attached talks about it in detail


> Optimize bootstrap dump to abort only write Transactions
> 
>
> Key: HIVE-23560
> URL: https://issues.apache.org/jira/browse/HIVE-23560
> Project: Hive
>  Issue Type: Task
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
> Attachments: Optimize bootstrap dump to avoid aborting all 
> transactions.pdf
>
>
> Currently before doing a bootstrap dump, we abort all open transactions after 
> waiting for a configured time. We are proposing to abort only write 
> transactions for the db under replication and leave the read and repl created 
> transactions as is.
> This doc attached talks about it in detail



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23560) Optimize bootstrap dump to abort only write Transactions

2020-05-29 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HIVE-23560:
---
Description: 
Currently before doing a bootstrap dump, we abort all open transactions after 
waiting for a configured time. We are proposing to abort only write 
transactions and leave the read and repl created transactions as is.
This doc attached talks about it in detail

  was:
Currently before doing a bootstrap dump, we abort all open transactions after 
waiting for a configured time. We are proposing to abort only write 
transactions and leave the read and repl created transactions as is.
This doc talks about it in detail
https://docs.google.com/document/d/1tLTg_K2EHRUaBaBfoto6l4x7726uohht3dPCGqyI_LI/edit#


> Optimize bootstrap dump to abort only write Transactions
> 
>
> Key: HIVE-23560
> URL: https://issues.apache.org/jira/browse/HIVE-23560
> Project: Hive
>  Issue Type: Task
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
> Attachments: Optimize bootstrap dump to avoid aborting all 
> transactions.pdf
>
>
> Currently before doing a bootstrap dump, we abort all open transactions after 
> waiting for a configured time. We are proposing to abort only write 
> transactions and leave the read and repl created transactions as is.
> This doc attached talks about it in detail



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23560) Optimize bootstrap dump to abort only write Transactions

2020-05-29 Thread Aasha Medhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aasha Medhi updated HIVE-23560:
---
Attachment: Optimize bootstrap dump to avoid aborting all transactions.pdf

> Optimize bootstrap dump to abort only write Transactions
> 
>
> Key: HIVE-23560
> URL: https://issues.apache.org/jira/browse/HIVE-23560
> Project: Hive
>  Issue Type: Task
>Reporter: Aasha Medhi
>Assignee: Aasha Medhi
>Priority: Major
> Attachments: Optimize bootstrap dump to avoid aborting all 
> transactions.pdf
>
>
> Currently before doing a bootstrap dump, we abort all open transactions after 
> waiting for a configured time. We are proposing to abort only write 
> transactions and leave the read and repl created transactions as is.
> This doc attached talks about it in detail



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23462) Add option to rewrite NTILE to sketch functions

2020-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23462?focusedWorklogId=438604=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-438604
 ]

ASF GitHub Bot logged work on HIVE-23462:
-

Author: ASF GitHub Bot
Created on: 29/May/20 07:52
Start Date: 29/May/20 07:52
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk commented on a change in pull request #1031:
URL: https://github.com/apache/hive/pull/1031#discussion_r432315486



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveRewriteToDataSketchesRules.java
##
@@ -368,4 +389,216 @@ void rewrite(AggregateCall aggCall) {
   }
 }
   }
+
+  /**
+   * Generic support for rewriting Windowing expression into a different form 
usually using joins.
+   */
+  private static abstract class WindowingToProjectAggregateJoinProject extends 
RelOptRule {
+
+protected final String sketchType;
+
+public WindowingToProjectAggregateJoinProject(String sketchType) {
+  super(operand(HiveProject.class, any()));
+  this.sketchType = sketchType;
+}
+
+@Override
+public void onMatch(RelOptRuleCall call) {
+
+  final Project project = call.rel(0);
+
+  VbuilderPAP vb = buildProcessor(call);
+  RelNode newProject = vb.processProject(project);
+
+  if (newProject instanceof Project && ((Project) 
newProject).getChildExps().equals(project.getChildExps())) {
+return;
+  } else {
+call.transformTo(newProject);
+  }
+}
+
+protected abstract VbuilderPAP buildProcessor(RelOptRuleCall call);
+
+
+protected static abstract class VbuilderPAP {
+  private final String sketchClass;
+  protected final RelBuilder relBuilder;
+  protected final RexBuilder rexBuilder;
+
+  protected VbuilderPAP(String sketchClass, RelBuilder relBuilder) {
+this.sketchClass = sketchClass;
+this.relBuilder = relBuilder;
+rexBuilder = relBuilder.getRexBuilder();
+  }
+
+  final class ProcessShuttle extends RexShuttle {
+public RexNode visitOver(RexOver over) {
+  return processCall(over);
+}
+  };
+
+  protected RelNode processProject(Project project) {
+relBuilder.push(project.getInput());
+RexShuttle shuttle = new ProcessShuttle();
+List<RexNode> newProjects = new ArrayList<RexNode>();
+for (RexNode expr : project.getChildExps()) {
+newProjects.add(expr.accept(shuttle));
+}
+relBuilder.project(newProjects);
+return relBuilder.build();
+  }
+
+  private final RexNode processCall(RexNode expr) {
+if (expr instanceof RexOver) {
+  RexOver over = (RexOver) expr;
+  if (isApplicable(over)) {
+return rewrite(over);
+  }
+}
+return expr;
+  }
+
+  protected final SqlOperator getSqlOperator(String fnName) {
+UDFDescriptor fn = 
DataSketchesFunctions.INSTANCE.getSketchFunction(sketchClass, fnName);
+if (!fn.getCalciteFunction().isPresent()) {
+  throw new RuntimeException(fn.toString() + " doesn't have a Calcite 
function associated with it");
+}
+return fn.getCalciteFunction().get();
+  }
+
+  abstract RexNode rewrite(RexOver expr);
+
+  abstract boolean isApplicable(RexOver expr);
+
+}
+
+  }
+
+  public static class CumeDistRewrite extends 
WindowingToProjectAggregateJoinProject {
+
+public CumeDistRewrite(String sketchType) {
+  super(sketchType);
+}
+
+@Override
+protected VbuilderPAP buildProcessor(RelOptRuleCall call) {
+  return new VB(sketchType, call.builder());
+}
+
+private static class VB extends VbuilderPAP {
+
+  protected VB(String sketchClass, RelBuilder relBuilder) {
+super(sketchClass, relBuilder);
+  }
+
+  @Override
+  boolean isApplicable(RexOver over) {
+SqlAggFunction aggOp = over.getAggOperator();
+RexWindow window = over.getWindow();
+if (aggOp.getName().equalsIgnoreCase("cume_dist") && 
window.orderKeys.size() == 1
+&& window.getLowerBound().isUnbounded() && 
window.getUpperBound().isUnbounded()) {
+  return true;
+}
+return false;
+  }
+
+  @Override
+  RexNode rewrite(RexOver over) {
+
+over.getOperands();
+RexWindow w = over.getWindow();
+
+RexFieldCollation orderKey = w.orderKeys.get(0);
+// we don't really support nulls in aggregate/etc...they are actually 
ignored
+// so some hack will be needed for NULLs anyway..
+ImmutableList<RexNode> partitionKeys = w.partitionKeys;
+
+relBuilder.push(relBuilder.peek());
+// the CDF function utilizes the '<' operator;
+// negating the input will mirror the values on the x axis
+// by using 1-CDF(-x) we could get a <= operator
+RexNode key = 

[jira] [Assigned] (HIVE-23577) Utility to generate/manage delegation token for Hive Metastore.

2020-05-29 Thread Deepashree Gandhi (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepashree Gandhi reassigned HIVE-23577:


Assignee: Deepashree Gandhi

> Utility to generate/manage delegation token for Hive Metastore.
> ---
>
> Key: HIVE-23577
> URL: https://issues.apache.org/jira/browse/HIVE-23577
> Project: Hive
>  Issue Type: Bug
>  Components: Authentication, Metastore, Security, Standalone Metastore
>Affects Versions: 3.0.0
> Environment: Secure(Kerberos enabled) environment.
>Reporter: Dharmesh Jain
>Assignee: Deepashree Gandhi
>Priority: Major
>
> Create a utility to generate/manage delegation token for Hivemetastore on the 
> same line of DelegationTokenFetcher for HDFS.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23462) Add option to rewrite NTILE to sketch functions

2020-05-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23462?focusedWorklogId=438603=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-438603
 ]

ASF GitHub Bot logged work on HIVE-23462:
-

Author: ASF GitHub Bot
Created on: 29/May/20 07:43
Start Date: 29/May/20 07:43
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk commented on a change in pull request #1031:
URL: https://github.com/apache/hive/pull/1031#discussion_r432310866



##
File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveRewriteToDataSketchesRules.java
##
@@ -368,4 +389,216 @@ void rewrite(AggregateCall aggCall) {
   }
 }
   }
+
+  /**
+   * Generic support for rewriting Windowing expression into a different form 
usually using joins.
+   */
+  private static abstract class WindowingToProjectAggregateJoinProject extends 
RelOptRule {
+
+protected final String sketchType;
+
+public WindowingToProjectAggregateJoinProject(String sketchType) {
+  super(operand(HiveProject.class, any()));
+  this.sketchType = sketchType;
+}
+
+@Override
+public void onMatch(RelOptRuleCall call) {
+
+  final Project project = call.rel(0);
+
+  VbuilderPAP vb = buildProcessor(call);
+  RelNode newProject = vb.processProject(project);
+
+  if (newProject instanceof Project && ((Project) 
newProject).getChildExps().equals(project.getChildExps())) {
+return;
+  } else {
+call.transformTo(newProject);
+  }
+}
+
+protected abstract VbuilderPAP buildProcessor(RelOptRuleCall call);
+
+
+protected static abstract class VbuilderPAP {
+  private final String sketchClass;
+  protected final RelBuilder relBuilder;
+  protected final RexBuilder rexBuilder;
+
+  protected VbuilderPAP(String sketchClass, RelBuilder relBuilder) {
+this.sketchClass = sketchClass;
+this.relBuilder = relBuilder;
+rexBuilder = relBuilder.getRexBuilder();
+  }
+
+  final class ProcessShuttle extends RexShuttle {
+public RexNode visitOver(RexOver over) {
+  return processCall(over);
+}
+  };
+
+  protected RelNode processProject(Project project) {
+relBuilder.push(project.getInput());
+RexShuttle shuttle = new ProcessShuttle();
+List<RexNode> newProjects = new ArrayList<RexNode>();
+for (RexNode expr : project.getChildExps()) {
+newProjects.add(expr.accept(shuttle));
+}
+relBuilder.project(newProjects);
+return relBuilder.build();
+  }
+
+  private final RexNode processCall(RexNode expr) {
+if (expr instanceof RexOver) {
+  RexOver over = (RexOver) expr;
+  if (isApplicable(over)) {
+return rewrite(over);
+  }
+}
+return expr;
+  }
+
+  protected final SqlOperator getSqlOperator(String fnName) {
+UDFDescriptor fn = 
DataSketchesFunctions.INSTANCE.getSketchFunction(sketchClass, fnName);
+if (!fn.getCalciteFunction().isPresent()) {
+  throw new RuntimeException(fn.toString() + " doesn't have a Calcite 
function associated with it");
+}
+return fn.getCalciteFunction().get();
+  }
+
+  abstract RexNode rewrite(RexOver expr);
+
+  abstract boolean isApplicable(RexOver expr);
+
+}
+
+  }
+
+  public static class CumeDistRewrite extends 
WindowingToProjectAggregateJoinProject {
+
+public CumeDistRewrite(String sketchType) {
+  super(sketchType);
+}
+
+@Override
+protected VbuilderPAP buildProcessor(RelOptRuleCall call) {
+  return new VB(sketchType, call.builder());
+}
+
+private static class VB extends VbuilderPAP {
+
+  protected VB(String sketchClass, RelBuilder relBuilder) {
+super(sketchClass, relBuilder);
+  }
+
+  @Override
+  boolean isApplicable(RexOver over) {
+SqlAggFunction aggOp = over.getAggOperator();
+RexWindow window = over.getWindow();
+if (aggOp.getName().equalsIgnoreCase("cume_dist") && 
window.orderKeys.size() == 1
+&& window.getLowerBound().isUnbounded() && 
window.getUpperBound().isUnbounded()) {
+  return true;
+}
+return false;
+  }
+
+  @Override
+  RexNode rewrite(RexOver over) {
+
+over.getOperands();
+RexWindow w = over.getWindow();
+
+RexFieldCollation orderKey = w.orderKeys.get(0);
+// we don't really support nulls in aggregate/etc...they are actually 
ignored
+// so some hack will be needed for NULLs anyway..
+ImmutableList<RexNode> partitionKeys = w.partitionKeys;
+
+relBuilder.push(relBuilder.peek());

Review comment:
   yes; this is the point where the other side of the join is getting 
started to being built
   added some explanation to the #rewrite method about this





[jira] [Commented] (HIVE-23435) Full outer join result is missing rows

2020-05-29 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17119363#comment-17119363
 ] 

Hive QA commented on HIVE-23435:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/13004303/HIVE-23435.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17290 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[special_character_in_tabnames_1]
 (batchId=76)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/22681/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/22681/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-22681/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 13004303 - PreCommit-HIVE-Build

> Full outer join result is missing rows 
> ---
>
> Key: HIVE-23435
> URL: https://issues.apache.org/jira/browse/HIVE-23435
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.1.0
>Reporter: Naveen Gangam
>Assignee: Mustafa Iman
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-23435.1.patch, HIVE-23435.patch, HIVE-23435.patch, 
> HIVE-23435.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Full outer join result has missing rows. This appears to be a bug in the full 
> outer join logic. The expected output is produced when we use left and right 
> outer joins instead.
> Reproducible steps are mentioned below.
> ~~
> SUPPORT ANALYSIS
> Steps to Reproduce:
> 1. Create a table and insert data:
> create table x (z char(5), x int, y int);
> insert into x values ('one', 1, 50),
>  ('two', 2, 30),
>  ('three', 3, 30),
>  ('four', 4, 60),
>  ('five', 5, 70),
>  ('six', 6, 80);
> 2. Try full outer with the below command. The result is incomplete, it is 
> missing the row:
> NULL NULL NULL three 3 30.0
>  Full Outer Join:
> select x1.`z`, x1.`x`, x1.`y`, x2.`z`,
>  x2.`x`, x2.`y`
>  from `x` x1 full outer join
>  `x` x2 on (x1.`x` > 3) and (x2.`x` < 4) and (x1.`x` =
>  x2.`x`);
> Result:
> +-------+------+------+-------+------+------+
> | x1.z  | x1.x | x1.y | x2.z  | x2.x | x2.y |
> +-------+------+------+-------+------+------+
> | one   | 1    | 50   | NULL  | NULL | NULL |
> | NULL  | NULL | NULL | one   | 1    | 50   |
> | two   | 2    | 30   | NULL  | NULL | NULL |
> | NULL  | NULL | NULL | two   | 2    | 30   |
> | three | 3    | 30   | NULL  | NULL | NULL |
> | four  | 4    | 60   | NULL  | NULL | NULL |
> | NULL  | NULL | NULL | four  | 4    | 60   |
> | five  | 5    | 70   | NULL  | NULL | NULL |
> | NULL  | NULL | NULL | five  | 5    | 70   |
> | six   | 6    | 80   | NULL  | NULL | NULL |
> | NULL  | NULL | NULL | six   | 6    | 80   |
> +-------+------+------+-------+------+------+
> 3. The expected output is produced when we use left/right join + union:
> select x1.`z`, x1.`x`, x1.`y`, x2.`z`,
>  x2.`x`, x2.`y`
>  from `x` x1 left outer join
>  `x` x2 on (x1.`x` > 3) and (x2.`x` < 4) and (x1.`x` =
>  x2.`x`)
>  union
>  select x1.`z`, x1.`x`, x1.`y`, x2.`z`,
>  x2.`x`, x2.`y`
>  from `x` x1 right outer join
>  `x` x2 on (x1.`x` > 3) and (x2.`x` < 4) and (x1.`x` =
>  x2.`x`);
> Result:
> +-------+------+------+-------+-------+-------+
> | z     | x    | y    | _col3 | _col4 | _col5 |
> +-------+------+------+-------+-------+-------+
> | NULL  | NULL | NULL | five  | 5     | 70    |
> | NULL  | NULL | NULL | four  | 4     | 60    |
> | NULL  | NULL | NULL | one   | 1     | 50    |
> | four  | 4    | 60   | NULL  | NULL  | NULL  |
> | one   | 1    | 50   | NULL  | NULL  | NULL  |
> | six   | 6    | 80   | NULL  | NULL  | NULL  |
> | three | 3    | 30   | NULL  | NULL  | NULL  |
> | two   | 2    | 30   | NULL  | NULL  | NULL  |
> | NULL  | NULL | NULL | six   | 6     | 80    |
> | NULL  | NULL | NULL | three | 3     | 30    |
> | NULL  | NULL | NULL | two   | 2     | 30    |
> | five  | 5    | 70   | NULL  | NULL  | NULL  |
> +-------+------+------+-------+-------+-------+
>  
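
The expected result quoted above can be sanity-checked outside Hive with a naive nested-loop full outer join (plain Python; `full_outer_join` is an illustrative helper, not Hive code). Since no row pair satisfies the join predicate, every row of each side must appear exactly once, null-padded, including the `three` row that Hive drops:

```python
def full_outer_join(left, right, pred):
    # Naive nested-loop full outer join: emit matched pairs, then null-pad
    # unmatched rows from both sides.
    out, matched_right = [], set()
    for l in left:
        hit = False
        for j, r in enumerate(right):
            if pred(l, r):
                out.append(l + r)
                matched_right.add(j)
                hit = True
        if not hit:
            out.append(l + (None,) * len(right[0]))
    for j, r in enumerate(right):
        if j not in matched_right:
            out.append((None,) * len(left[0]) + r)
    return out

rows = [('one', 1, 50), ('two', 2, 30), ('three', 3, 30),
        ('four', 4, 60), ('five', 5, 70), ('six', 6, 80)]
# ON (x1.x > 3) AND (x2.x < 4) AND (x1.x = x2.x) is unsatisfiable, so the
# correct result is all 6 left rows plus all 6 right rows, null-padded.
pred = lambda l, r: l[1] > 3 and r[1] < 4 and l[1] == r[1]
result = full_outer_join(rows, rows, pred)
assert len(result) == 12
assert (None, None, None, 'three', 3, 30) in result
```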



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-23576) Getting partition of type int from metastore sometimes fail on cast error

2020-05-29 Thread Lev Katzav (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lev Katzav updated HIVE-23576:
--
Description: 
+given the following situation:+

there are 2 tables (in db "intpartitionbugtest"), each with a few rows:
 # *test_table_int_1* partitioned by *y* of type *int*
 # *test_table_string_1* partitioned by *x* of type *string*

here is the output of the following query on the metastore db:
{code:sql}
select
"PARTITIONS"."PART_ID",
"TBLS"."TBL_NAME",
"FILTER0"."PART_KEY_VAL",
"PART_NAME"
from
"PARTITIONS"
inner join "TBLS" on
"PARTITIONS"."TBL_ID" = "TBLS"."TBL_ID"
inner join "DBS" on
"TBLS"."DB_ID" = "DBS"."DB_ID"
inner join "PARTITION_KEY_VALS" "FILTER0" on
"FILTER0"."PART_ID" = "PARTITIONS"."PART_ID"
{code}
 

!image-2020-05-29-14-16-29-356.png!

+the problem+

when running a Hive query on the table *test_table_int_1* that filters on *y=1*,
the following exception sometimes occurs in the metastore:

 
{code:java}
javax.jdo.JDODataStoreException: Error executing SQL query "select 
"PARTITIONS"."PART_ID" from "PARTITIONS"  inner join "TBLS" on 
"PARTITIONS"."TBL_ID" = "TBLS"."TBL_ID" and "TBLS"."TBL_NAME" = ?   inner 
join "DBS" on "TBLS"."DB_ID" = "DBS"."DB_ID"  and "DBS"."NAME" = ? inner 
join "PARTITION_KEY_VALS" "FILTER0" on "FILTER0"."PART_ID" = 
"PARTITIONS"."PART_ID" and "FILTER0"."INTEGER_IDX" = 0 where "DBS"."CTLG_NAME" 
= ?  and (((case when "FILTER0"."PART_KEY_VAL" <> ? then 
cast("FILTER0"."PART_KEY_VAL" as decimal(21,0)) else null end) = ?))".
at 
org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543)
 ~[datanucleus-api-jdo-4.2.4.jar:?]
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:391) 
~[datanucleus-api-jdo-4.2.4.jar:?]
at org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:267) 
~[datanucleus-api-jdo-4.2.4.jar:?]
at 
org.apache.hadoop.hive.metastore.MetaStoreDirectSql.executeWithArray(MetaStoreDirectSql.java:2003)
 [hive-exec-3.1.2.jar:3.1.2]
at 
org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPartitionsViaSqlFilterInternal(MetaStoreDirectSql.java:593)
 [hive-exec-3.1.2.jar:3.1.2]
at 
org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPartitionsViaSqlFilter(MetaStoreDirectSql.java:481)
 [hive-exec-3.1.2.jar:3.1.2]
at 
org.apache.hadoop.hive.metastore.ObjectStore$11.getSqlResult(ObjectStore.java:3853)
 [hive-exec-3.1.2.jar:3.1.2]
at 
org.apache.hadoop.hive.metastore.ObjectStore$11.getSqlResult(ObjectStore.java:3843)
 [hive-exec-3.1.2.jar:3.1.2]
at 
org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:3577)
 [hive-exec-3.1.2.jar:3.1.2]
at 
org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilterInternal(ObjectStore.java:3861)
 [hive-exec-3.1.2.jar:3.1.2]
at 
org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilter(ObjectStore.java:3516)
 [hive-exec-3.1.2.jar:3.1.2]
at sun.reflect.GeneratedMethodAccessor70.invoke(Unknown Source) ~[?:?]
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:1.8.0_112]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
at 
org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) 
[hive-exec-3.1.2.jar:3.1.2]
at com.sun.proxy.$Proxy28.getPartitionsByFilter(Unknown Source) [?:?]
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_filter(HiveMetaStore.java:5883)
 [hive-exec-3.1.2.jar:3.1.2]
at sun.reflect.GeneratedMethodAccessor69.invoke(Unknown Source) ~[?:?]
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:1.8.0_112]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
at 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
 [hive-exec-3.1.2.jar:3.1.2]
at 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
 [hive-exec-3.1.2.jar:3.1.2]
at com.sun.proxy.$Proxy30.get_partitions_by_filter(Unknown Source) [?:?]
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partitions_by_filter.getResult(ThriftHiveMetastore.java:16234)
 [hive-exec-3.1.2.jar:3.1.2]
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partitions_by_filter.getResult(ThriftHiveMetastore.java:16218)
 [hive-exec-3.1.2.jar:3.1.2]
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) 
[hive-exec-3.1.2.jar:3.1.2]
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) 
[hive-exec-3.1.2.jar:3.1.2]
at 

  1   2   >