[ https://issues.apache.org/jira/browse/HIVE-16546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15992718#comment-15992718 ]

Hive QA commented on HIVE-16546:
--------------------------------



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12865911/HIVE-16546.5.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 10632 tests executed
*Failed tests:*
{noformat}
TestHs2Hooks - did not produce a TEST-*.xml file (likely timed out) (batchId=214)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_index] (batchId=225)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4991/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4991/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4991/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12865911 - PreCommit-HIVE-Build

> LLAP: Fail map join tasks if hash table memory exceeds threshold
> ----------------------------------------------------------------
>
>                 Key: HIVE-16546
>                 URL: https://issues.apache.org/jira/browse/HIVE-16546
>             Project: Hive
>          Issue Type: Bug
>          Components: llap
>    Affects Versions: 3.0.0
>            Reporter: Prasanth Jayachandran
>            Assignee: Prasanth Jayachandran
>         Attachments: HIVE-16546.1.patch, HIVE-16546.2.patch, 
> HIVE-16546.3.patch, HIVE-16546.4.patch, HIVE-16546.5.patch, 
> HIVE-16546.WIP.patch
>
>
> When a map join task runs in LLAP, it can potentially use far more memory 
> than its limit, which could be the memory per executor or the 
> noconditionaltask size. If it exceeds that limit, it can adversely affect 
> the performance of other queries, or even bring down the daemon. In such 
> cases, it is better to fail the query than to bring down the daemon. 
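The failure mode described above (fail the offending task instead of letting the hash table exhaust daemon memory) can be sketched roughly as below. This is an illustrative sketch, not the actual patch: the class and method names (HashTableMemoryMonitor, onRowLoaded) and the periodic-check strategy are assumptions, not Hive's real implementation.

```java
// Hypothetical sketch: while loading the map-join hash table, periodically
// compare its estimated size against a configured threshold and throw to
// fail the task, rather than growing until the LLAP daemon falls over.
public class HashTableMemoryMonitor {
    private final long maxMemoryBytes;  // e.g. per-executor limit or noconditionaltask size
    private final long checkInterval;   // check every N rows to keep overhead low
    private long rowsLoaded = 0;

    public HashTableMemoryMonitor(long maxMemoryBytes, long checkInterval) {
        this.maxMemoryBytes = maxMemoryBytes;
        this.checkInterval = checkInterval;
    }

    /** Called once per row added to the hash table; throws to fail the task. */
    public void onRowLoaded(long estimatedTableSizeBytes) {
        if (++rowsLoaded % checkInterval != 0) {
            return; // only sample periodically, not on every row
        }
        if (estimatedTableSizeBytes > maxMemoryBytes) {
            throw new IllegalStateException("Hash table size "
                + estimatedTableSizeBytes + " exceeded limit "
                + maxMemoryBytes + "; failing map join task");
        }
    }
}
```

Failing fast here trades one query for daemon stability, which is the point of the issue: an unchecked hash table in a shared LLAP daemon hurts every query it is co-hosted with.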



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
