[jira] [Commented] (HIVE-20349) Implement Retry Logic in HiveDruidSplit for Scan Queries

2018-09-18 Thread Nishant Bangarwa (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620125#comment-16620125
 ] 

Nishant Bangarwa commented on HIVE-20349:
-

[~ashutoshc] please merge. This is good to go in. 

> Implement Retry Logic in HiveDruidSplit for Scan Queries
> 
>
> Key: HIVE-20349
> URL: https://issues.apache.org/jira/browse/HIVE-20349
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20349.1.patch, HIVE-20349.2.patch, 
> HIVE-20349.3.patch, HIVE-20349.4.patch, HIVE-20349.5.patch, HIVE-20349.patch
>
>
> While distributing a Druid scan query, we check where the segments are loaded 
> and then each HiveDruidSplit directly queries the historical node. 
> There are a few cases where we need to retry and refetch the segments: 
> # The segment is loaded on multiple historical nodes and one of them went 
> down. In this case, when we do not get a response from one of them, we query 
> the next replica. 
> # The segment was loaded onto a realtime task and has been handed over; by the 
> time we query, the realtime task has already finished. In this case there is no 
> replica. The split needs to query the broker again for the location of the 
> segment and then send the query to the correct historical node. 
> This is also the root cause of the druidkafkamini_basic.q test failure, where 
> the segment handover happens before the scan query is executed.
> Note: This is not a problem when we query Druid brokers directly, as the 
> broker handles the retry logic. 
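
For illustration, below is a minimal sketch of the retry flow described above, assuming a hypothetical node/broker client interface; the class and method names (DruidClient, fetchFromNode, fetchLocationsFromBroker) are illustrative only, not the actual HiveDruidSplit or druid-handler API. The idea is to try each replica recorded in the split first, and only when all of them fail to re-query the broker for fresh segment locations and retry.

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.util.List;

public class DruidScanRetrySketch {

  /** Thrown by the hypothetical client when a data node does not respond. */
  static class NodeUnavailableException extends IOException {
    NodeUnavailableException(String message) { super(message); }
  }

  /** Hypothetical client interface; method names are illustrative only. */
  interface DruidClient {
    // Sends the scan query JSON to a single data node and returns the result stream.
    InputStream fetchFromNode(String hostAndPort, String scanQueryJson) throws NodeUnavailableException;

    // Asks the broker which nodes currently serve the segment (used after a handover).
    List<String> fetchLocationsFromBroker(String brokerAddress, String segmentId) throws IOException;
  }

  private final DruidClient client;

  DruidScanRetrySketch(DruidClient client) {
    this.client = client;
  }

  /**
   * Case 1: one replica went down -> fall through to the next known replica.
   * Case 2: no known replica answers (e.g. the segment was handed over after the
   * split was planned) -> refresh the locations from the broker and retry once.
   */
  InputStream runScanWithRetry(String scanQueryJson, String segmentId,
      List<String> knownLocations, String brokerAddress) throws IOException {
    IOException lastFailure = null;

    // Case 1: try every replica recorded in the split.
    for (String node : knownLocations) {
      try {
        return client.fetchFromNode(node, scanQueryJson);
      } catch (NodeUnavailableException e) {
        lastFailure = e;
      }
    }

    // Case 2: all recorded replicas failed; ask the broker where the segment lives now.
    for (String node : client.fetchLocationsFromBroker(brokerAddress, segmentId)) {
      try {
        return client.fetchFromNode(node, scanQueryJson);
      } catch (NodeUnavailableException e) {
        lastFailure = e;
      }
    }

    throw new IOException("Segment " + segmentId + " could not be fetched from any location", lastFailure);
  }
}
{code}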



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20349) Implement Retry Logic in HiveDruidSplit for Scan Queries

2018-09-18 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620108#comment-16620108
 ] 

Hive QA commented on HIVE-20349:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12940253/HIVE-20349.5.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14979 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13895/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13895/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13895/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12940253 - PreCommit-HIVE-Build

> Implement Retry Logic in HiveDruidSplit for Scan Queries
> 
>
> Key: HIVE-20349
> URL: https://issues.apache.org/jira/browse/HIVE-20349
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20349.1.patch, HIVE-20349.2.patch, 
> HIVE-20349.3.patch, HIVE-20349.4.patch, HIVE-20349.5.patch, HIVE-20349.patch
>
>
> While distributing a Druid scan query, we check where the segments are loaded 
> and then each HiveDruidSplit directly queries the historical node. 
> There are a few cases where we need to retry and refetch the segments: 
> # The segment is loaded on multiple historical nodes and one of them went 
> down. In this case, when we do not get a response from one of them, we query 
> the next replica. 
> # The segment was loaded onto a realtime task and has been handed over; by the 
> time we query, the realtime task has already finished. In this case there is no 
> replica. The split needs to query the broker again for the location of the 
> segment and then send the query to the correct historical node. 
> This is also the root cause of the druidkafkamini_basic.q test failure, where 
> the segment handover happens before the scan query is executed.
> Note: This is not a problem when we query Druid brokers directly, as the 
> broker handles the retry logic. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20349) Implement Retry Logic in HiveDruidSplit for Scan Queries

2018-09-18 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16620082#comment-16620082
 ] 

Hive QA commented on HIVE-20349:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
26s{color} | {color:blue} druid-handler in master has 13 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
5s{color} | {color:blue} ql in master has 2326 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} druid-handler: The patch generated 0 new + 30 
unchanged - 2 fixed = 30 total (was 32) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} The patch ql passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13895/dev-support/hive-personality.sh
 |
| git revision | master / 9c90776 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: druid-handler ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13895/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Implement Retry Logic in HiveDruidSplit for Scan Queries
> 
>
> Key: HIVE-20349
> URL: https://issues.apache.org/jira/browse/HIVE-20349
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20349.1.patch, HIVE-20349.2.patch, 
> HIVE-20349.3.patch, HIVE-20349.4.patch, HIVE-20349.5.patch, HIVE-20349.patch
>
>
> While distributing a Druid scan query, we check where the segments are loaded 
> and then each HiveDruidSplit directly queries the historical node. 
> There are a few cases where we need to retry and refetch the segments: 
> # The segment is loaded on multiple historical nodes and one of them went 
> down. In this case, when we do not get a response from one of them, we query 
> the next replica. 
> # The segment was loaded onto a realtime task and was 

[jira] [Commented] (HIVE-20349) Implement Retry Logic in HiveDruidSplit for Scan Queries

2018-09-16 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16617104#comment-16617104
 ] 

Hive QA commented on HIVE-20349:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12939912/HIVE-20349.4.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 14961 tests 
executed
*Failed tests:*
{noformat}
TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=193)

[druidmini_dynamic_partition.q,druidmini_test_ts.q,druidmini_expressions.q,druidmini_test_alter.q,druidmini_test_insert.q]
org.apache.hive.jdbc.miniHS2.TestHs2ConnectionMetricsBinary.testOpenConnectionMetrics
 (batchId=255)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13849/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13849/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13849/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12939912 - PreCommit-HIVE-Build

> Implement Retry Logic in HiveDruidSplit for Scan Queries
> 
>
> Key: HIVE-20349
> URL: https://issues.apache.org/jira/browse/HIVE-20349
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20349.1.patch, HIVE-20349.2.patch, 
> HIVE-20349.3.patch, HIVE-20349.4.patch, HIVE-20349.patch
>
>
> While distributing a Druid scan query, we check where the segments are loaded 
> and then each HiveDruidSplit directly queries the historical node. 
> There are a few cases where we need to retry and refetch the segments: 
> # The segment is loaded on multiple historical nodes and one of them went 
> down. In this case, when we do not get a response from one of them, we query 
> the next replica. 
> # The segment was loaded onto a realtime task and has been handed over; by the 
> time we query, the realtime task has already finished. In this case there is no 
> replica. The split needs to query the broker again for the location of the 
> segment and then send the query to the correct historical node. 
> This is also the root cause of the druidkafkamini_basic.q test failure, where 
> the segment handover happens before the scan query is executed.
> Note: This is not a problem when we query Druid brokers directly, as the 
> broker handles the retry logic. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20349) Implement Retry Logic in HiveDruidSplit for Scan Queries

2018-09-16 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16617084#comment-16617084
 ] 

Hive QA commented on HIVE-20349:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
27s{color} | {color:blue} druid-handler in master has 13 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
14s{color} | {color:blue} ql in master has 2326 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} druid-handler: The patch generated 0 new + 30 
unchanged - 2 fixed = 30 total (was 32) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} The patch ql passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13849/dev-support/hive-personality.sh
 |
| git revision | master / a37827e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: druid-handler ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13849/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Implement Retry Logic in HiveDruidSplit for Scan Queries
> 
>
> Key: HIVE-20349
> URL: https://issues.apache.org/jira/browse/HIVE-20349
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20349.1.patch, HIVE-20349.2.patch, 
> HIVE-20349.3.patch, HIVE-20349.4.patch, HIVE-20349.patch
>
>
> While distributing a Druid scan query, we check where the segments are loaded 
> and then each HiveDruidSplit directly queries the historical node. 
> There are a few cases where we need to retry and refetch the segments: 
> # The segment is loaded on multiple historical nodes and one of them went 
> down. In this case, when we do not get a response from one of them, we query 
> the next replica. 
> # The segment was loaded onto a realtime task and has been handed over; by the 
> 

[jira] [Commented] (HIVE-20349) Implement Retry Logic in HiveDruidSplit for Scan Queries

2018-09-13 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16613173#comment-16613173
 ] 

Hive QA commented on HIVE-20349:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12939424/HIVE-20349.3.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 14938 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_test_alter]
 (batchId=193)
org.apache.hadoop.hive.cli.TestMiniDruidKafkaCliDriver.testCliDriver[druidkafkamini_basic]
 (batchId=264)
org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testComplexQuery (batchId=251)
org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testDataTypes (batchId=251)
org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testEscapedStrings (batchId=251)
org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testLlapInputFormatEndToEnd 
(batchId=251)
org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testNonAsciiStrings (batchId=251)
org.apache.hive.jdbc.miniHS2.TestHs2ConnectionMetricsBinary.testOpenConnectionMetrics
 (batchId=255)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13752/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13752/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13752/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12939424 - PreCommit-HIVE-Build

> Implement Retry Logic in HiveDruidSplit for Scan Queries
> 
>
> Key: HIVE-20349
> URL: https://issues.apache.org/jira/browse/HIVE-20349
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20349.1.patch, HIVE-20349.2.patch, 
> HIVE-20349.3.patch, HIVE-20349.patch
>
>
> While distributing a Druid scan query, we check where the segments are loaded 
> and then each HiveDruidSplit directly queries the historical node. 
> There are a few cases where we need to retry and refetch the segments: 
> # The segment is loaded on multiple historical nodes and one of them went 
> down. In this case, when we do not get a response from one of them, we query 
> the next replica. 
> # The segment was loaded onto a realtime task and has been handed over; by the 
> time we query, the realtime task has already finished. In this case there is no 
> replica. The split needs to query the broker again for the location of the 
> segment and then send the query to the correct historical node. 
> This is also the root cause of the druidkafkamini_basic.q test failure, where 
> the segment handover happens before the scan query is executed.
> Note: This is not a problem when we query Druid brokers directly, as the 
> broker handles the retry logic. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20349) Implement Retry Logic in HiveDruidSplit for Scan Queries

2018-09-13 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16613134#comment-16613134
 ] 

Hive QA commented on HIVE-20349:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
27s{color} | {color:blue} druid-handler in master has 13 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
5s{color} | {color:blue} ql in master has 2311 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} druid-handler: The patch generated 0 new + 30 
unchanged - 2 fixed = 30 total (was 32) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} The patch ql passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13752/dev-support/hive-personality.sh
 |
| git revision | master / a3b7a24 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: druid-handler ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13752/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Implement Retry Logic in HiveDruidSplit for Scan Queries
> 
>
> Key: HIVE-20349
> URL: https://issues.apache.org/jira/browse/HIVE-20349
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20349.1.patch, HIVE-20349.2.patch, 
> HIVE-20349.3.patch, HIVE-20349.patch
>
>
> While distributing a Druid scan query, we check where the segments are loaded 
> and then each HiveDruidSplit directly queries the historical node. 
> There are a few cases where we need to retry and refetch the segments: 
> # The segment is loaded on multiple historical nodes and one of them went 
> down. In this case, when we do not get a response from one of them, we query 
> the next replica. 
> # The segment was loaded onto a realtime task and has been handed over; by the 
> time we query, the realtime 

[jira] [Commented] (HIVE-20349) Implement Retry Logic in HiveDruidSplit for Scan Queries

2018-09-12 Thread Nishant Bangarwa (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612314#comment-16612314
 ] 

Nishant Bangarwa commented on HIVE-20349:
-

updated patch. 

> Implement Retry Logic in HiveDruidSplit for Scan Queries
> 
>
> Key: HIVE-20349
> URL: https://issues.apache.org/jira/browse/HIVE-20349
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20349.1.patch, HIVE-20349.2.patch, 
> HIVE-20349.3.patch, HIVE-20349.patch
>
>
> While distributing a Druid scan query, we check where the segments are loaded 
> and then each HiveDruidSplit directly queries the historical node. 
> There are a few cases where we need to retry and refetch the segments: 
> # The segment is loaded on multiple historical nodes and one of them went 
> down. In this case, when we do not get a response from one of them, we query 
> the next replica. 
> # The segment was loaded onto a realtime task and has been handed over; by the 
> time we query, the realtime task has already finished. In this case there is no 
> replica. The split needs to query the broker again for the location of the 
> segment and then send the query to the correct historical node. 
> This is also the root cause of the druidkafkamini_basic.q test failure, where 
> the segment handover happens before the scan query is executed.
> Note: This is not a problem when we query Druid brokers directly, as the 
> broker handles the retry logic. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20349) Implement Retry Logic in HiveDruidSplit for Scan Queries

2018-09-12 Thread Ashutosh Chauhan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612269#comment-16612269
 ] 

Ashutosh Chauhan commented on HIVE-20349:
-

[~nishantbangarwa] Can you rebase and reupload the patch?

> Implement Retry Logic in HiveDruidSplit for Scan Queries
> 
>
> Key: HIVE-20349
> URL: https://issues.apache.org/jira/browse/HIVE-20349
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20349.1.patch, HIVE-20349.2.patch, HIVE-20349.patch
>
>
> While distributing a Druid scan query, we check where the segments are loaded 
> and then each HiveDruidSplit directly queries the historical node. 
> There are a few cases where we need to retry and refetch the segments: 
> # The segment is loaded on multiple historical nodes and one of them went 
> down. In this case, when we do not get a response from one of them, we query 
> the next replica. 
> # The segment was loaded onto a realtime task and has been handed over; by the 
> time we query, the realtime task has already finished. In this case there is no 
> replica. The split needs to query the broker again for the location of the 
> segment and then send the query to the correct historical node. 
> This is also the root cause of the druidkafkamini_basic.q test failure, where 
> the segment handover happens before the scan query is executed.
> Note: This is not a problem when we query Druid brokers directly, as the 
> broker handles the retry logic. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20349) Implement Retry Logic in HiveDruidSplit for Scan Queries

2018-09-04 Thread Jesus Camacho Rodriguez (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16603218#comment-16603218
 ] 

Jesus Camacho Rodriguez commented on HIVE-20349:


[~nishantbangarwa], is the failure related? Should we rebase + push? Thanks

> Implement Retry Logic in HiveDruidSplit for Scan Queries
> 
>
> Key: HIVE-20349
> URL: https://issues.apache.org/jira/browse/HIVE-20349
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20349.1.patch, HIVE-20349.2.patch, HIVE-20349.patch
>
>
> While distributing a Druid scan query, we check where the segments are loaded 
> and then each HiveDruidSplit directly queries the historical node. 
> There are a few cases where we need to retry and refetch the segments: 
> # The segment is loaded on multiple historical nodes and one of them went 
> down. In this case, when we do not get a response from one of them, we query 
> the next replica. 
> # The segment was loaded onto a realtime task and has been handed over; by the 
> time we query, the realtime task has already finished. In this case there is no 
> replica. The split needs to query the broker again for the location of the 
> segment and then send the query to the correct historical node. 
> This is also the root cause of the druidkafkamini_basic.q test failure, where 
> the segment handover happens before the scan query is executed.
> Note: This is not a problem when we query Druid brokers directly, as the 
> broker handles the retry logic. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20349) Implement Retry Logic in HiveDruidSplit for Scan Queries

2018-08-23 Thread Ashutosh Chauhan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16589795#comment-16589795
 ] 

Ashutosh Chauhan commented on HIVE-20349:
-

+1

> Implement Retry Logic in HiveDruidSplit for Scan Queries
> 
>
> Key: HIVE-20349
> URL: https://issues.apache.org/jira/browse/HIVE-20349
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20349.1.patch, HIVE-20349.2.patch, HIVE-20349.patch
>
>
> While distributing a Druid scan query, we check where the segments are loaded 
> and then each HiveDruidSplit directly queries the historical node. 
> There are a few cases where we need to retry and refetch the segments: 
> # The segment is loaded on multiple historical nodes and one of them went 
> down. In this case, when we do not get a response from one of them, we query 
> the next replica. 
> # The segment was loaded onto a realtime task and has been handed over; by the 
> time we query, the realtime task has already finished. In this case there is no 
> replica. The split needs to query the broker again for the location of the 
> segment and then send the query to the correct historical node. 
> This is also the root cause of the druidkafkamini_basic.q test failure, where 
> the segment handover happens before the scan query is executed.
> Note: This is not a problem when we query Druid brokers directly, as the 
> broker handles the retry logic. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20349) Implement Retry Logic in HiveDruidSplit for Scan Queries

2018-08-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16586524#comment-16586524
 ] 

Hive QA commented on HIVE-20349:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12936303/HIVE-20349.2.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14885 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[windowing_range_multiorder]
 (batchId=7)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13354/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13354/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13354/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12936303 - PreCommit-HIVE-Build

> Implement Retry Logic in HiveDruidSplit for Scan Queries
> 
>
> Key: HIVE-20349
> URL: https://issues.apache.org/jira/browse/HIVE-20349
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20349.1.patch, HIVE-20349.2.patch, HIVE-20349.patch
>
>
> While distributing a Druid scan query, we check where the segments are loaded 
> and then each HiveDruidSplit directly queries the historical node. 
> There are a few cases where we need to retry and refetch the segments: 
> # The segment is loaded on multiple historical nodes and one of them went 
> down. In this case, when we do not get a response from one of them, we query 
> the next replica. 
> # The segment was loaded onto a realtime task and has been handed over; by the 
> time we query, the realtime task has already finished. In this case there is no 
> replica. The split needs to query the broker again for the location of the 
> segment and then send the query to the correct historical node. 
> This is also the root cause of the druidkafkamini_basic.q test failure, where 
> the segment handover happens before the scan query is executed.
> Note: This is not a problem when we query Druid brokers directly, as the 
> broker handles the retry logic. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20349) Implement Retry Logic in HiveDruidSplit for Scan Queries

2018-08-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16586482#comment-16586482
 ] 

Hive QA commented on HIVE-20349:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
29s{color} | {color:blue} druid-handler in master has 13 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
4s{color} | {color:blue} ql in master has 2307 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} druid-handler: The patch generated 0 new + 30 
unchanged - 2 fixed = 30 total (was 32) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} The patch ql passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13354/dev-support/hive-personality.sh
 |
| git revision | master / f280361 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: druid-handler ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13354/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Implement Retry Logic in HiveDruidSplit for Scan Queries
> 
>
> Key: HIVE-20349
> URL: https://issues.apache.org/jira/browse/HIVE-20349
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20349.1.patch, HIVE-20349.2.patch, HIVE-20349.patch
>
>
> While distributing a Druid scan query, we check where the segments are loaded 
> and then each HiveDruidSplit directly queries the historical node. 
> There are a few cases where we need to retry and refetch the segments: 
> # The segment is loaded on multiple historical nodes and one of them went 
> down. In this case, when we do not get a response from one of them, we query 
> the next replica. 
> # The segment was loaded onto a realtime task and has been handed over; by the 
> time we query, the realtime task has already 

[jira] [Commented] (HIVE-20349) Implement Retry Logic in HiveDruidSplit for Scan Queries

2018-08-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16586293#comment-16586293
 ] 

Hive QA commented on HIVE-20349:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12936285/HIVE-20349.1.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14885 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13349/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13349/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13349/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12936285 - PreCommit-HIVE-Build

> Implement Retry Logic in HiveDruidSplit for Scan Queries
> 
>
> Key: HIVE-20349
> URL: https://issues.apache.org/jira/browse/HIVE-20349
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20349.1.patch, HIVE-20349.2.patch, HIVE-20349.patch
>
>
> While distributing a Druid scan query, we check where the segments are loaded 
> and then each HiveDruidSplit directly queries the historical node. 
> There are a few cases where we need to retry and refetch the segments: 
> # The segment is loaded on multiple historical nodes and one of them went 
> down. In this case, when we do not get a response from one of them, we query 
> the next replica. 
> # The segment was loaded onto a realtime task and has been handed over; by the 
> time we query, the realtime task has already finished. In this case there is no 
> replica. The split needs to query the broker again for the location of the 
> segment and then send the query to the correct historical node. 
> This is also the root cause of the druidkafkamini_basic.q test failure, where 
> the segment handover happens before the scan query is executed.
> Note: This is not a problem when we query Druid brokers directly, as the 
> broker handles the retry logic. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20349) Implement Retry Logic in HiveDruidSplit for Scan Queries

2018-08-20 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16586275#comment-16586275
 ] 

Hive QA commented on HIVE-20349:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
46s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
47s{color} | {color:blue} druid-handler in master has 13 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  6m 
19s{color} | {color:blue} ql in master has 2307 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} druid-handler: The patch generated 2 new + 30 
unchanged - 2 fixed = 32 total (was 32) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13349/dev-support/hive-personality.sh
 |
| git revision | master / f280361 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13349/yetus/diff-checkstyle-druid-handler.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13349/yetus/patch-asflicense-problems.txt
 |
| modules | C: druid-handler ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13349/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Implement Retry Logic in HiveDruidSplit for Scan Queries
> 
>
> Key: HIVE-20349
> URL: https://issues.apache.org/jira/browse/HIVE-20349
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20349.1.patch, HIVE-20349.patch
>
>
> While distributing a Druid scan query, we check where the segments are loaded 
> and then each HiveDruidSplit directly queries the historical node. 
> There are a few cases where we need to retry and refetch the segments: 
> # The segment is loaded on multiple historical nodes and one of them went 
> down. In this case, when we do not get a response from one of them, we query 
> the next replica. 
> # The segment was loaded onto a realtime task and has been handed over; by the 
> time we query, the 

[jira] [Commented] (HIVE-20349) Implement Retry Logic in HiveDruidSplit for Scan Queries

2018-08-20 Thread Nishant Bangarwa (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16586129#comment-16586129
 ] 

Nishant Bangarwa commented on HIVE-20349:
-

added review link and updated patch. 

> Implement Retry Logic in HiveDruidSplit for Scan Queries
> 
>
> Key: HIVE-20349
> URL: https://issues.apache.org/jira/browse/HIVE-20349
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20349.1.patch, HIVE-20349.patch
>
>
> While distributing a Druid scan query, we check where the segments are loaded 
> and then each HiveDruidSplit directly queries the historical node. 
> There are a few cases where we need to retry and refetch the segments: 
> # The segment is loaded on multiple historical nodes and one of them went 
> down. In this case, when we do not get a response from one of them, we query 
> the next replica. 
> # The segment was loaded onto a realtime task and has been handed over; by the 
> time we query, the realtime task has already finished. In this case there is no 
> replica. The split needs to query the broker again for the location of the 
> segment and then send the query to the correct historical node. 
> This is also the root cause of the druidkafkamini_basic.q test failure, where 
> the segment handover happens before the scan query is executed.
> Note: This is not a problem when we query Druid brokers directly, as the 
> broker handles the retry logic. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20349) Implement Retry Logic in HiveDruidSplit for Scan Queries

2018-08-17 Thread slim bouguerra (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16584342#comment-16584342
 ] 

slim bouguerra commented on HIVE-20349:
---

[~nishantbangarwa] can you please fix the style checks? 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13306/yetus/diff-checkstyle-druid-handler.txt

> Implement Retry Logic in HiveDruidSplit for Scan Queries
> 
>
> Key: HIVE-20349
> URL: https://issues.apache.org/jira/browse/HIVE-20349
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20349.patch
>
>
> While distributing a Druid scan query, we check where the segments are loaded 
> and then each HiveDruidSplit directly queries the historical node. 
> There are a few cases where we need to retry and refetch the segments: 
> # The segment is loaded on multiple historical nodes and one of them went 
> down. In this case, when we do not get a response from one of them, we query 
> the next replica. 
> # The segment was loaded onto a realtime task and has been handed over; by the 
> time we query, the realtime task has already finished. In this case there is no 
> replica. The split needs to query the broker again for the location of the 
> segment and then send the query to the correct historical node. 
> This is also the root cause of the druidkafkamini_basic.q test failure, where 
> the segment handover happens before the scan query is executed.
> Note: This is not a problem when we query Druid brokers directly, as the 
> broker handles the retry logic. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20349) Implement Retry Logic in HiveDruidSplit for Scan Queries

2018-08-17 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16584320#comment-16584320
 ] 

Hive QA commented on HIVE-20349:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12936005/HIVE-20349.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14884 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13306/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13306/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13306/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12936005 - PreCommit-HIVE-Build

> Implement Retry Logic in HiveDruidSplit for Scan Queries
> 
>
> Key: HIVE-20349
> URL: https://issues.apache.org/jira/browse/HIVE-20349
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20349.patch
>
>
> While distributing a Druid scan query, we check where the segments are loaded 
> and then each HiveDruidSplit directly queries the historical node. 
> There are a few cases where we need to retry and refetch the segments: 
> # The segment is loaded on multiple historical nodes and one of them went 
> down. In this case, when we do not get a response from one of them, we query 
> the next replica. 
> # The segment was loaded onto a realtime task and has been handed over; by the 
> time we query, the realtime task has already finished. In this case there is no 
> replica. The split needs to query the broker again for the location of the 
> segment and then send the query to the correct historical node. 
> This is also the root cause of the druidkafkamini_basic.q test failure, where 
> the segment handover happens before the scan query is executed.
> Note: This is not a problem when we query Druid brokers directly, as the 
> broker handles the retry logic. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20349) Implement Retry Logic in HiveDruidSplit for Scan Queries

2018-08-17 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16584291#comment-16584291
 ] 

Hive QA commented on HIVE-20349:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
27s{color} | {color:blue} druid-handler in master has 13 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
54s{color} | {color:blue} ql in master has 2307 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} druid-handler: The patch generated 21 new + 30 
unchanged - 2 fixed = 51 total (was 32) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13306/dev-support/hive-personality.sh
 |
| git revision | master / 5681647 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13306/yetus/diff-checkstyle-druid-handler.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13306/yetus/patch-asflicense-problems.txt
 |
| modules | C: druid-handler ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13306/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Implement Retry Logic in HiveDruidSplit for Scan Queries
> 
>
> Key: HIVE-20349
> URL: https://issues.apache.org/jira/browse/HIVE-20349
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20349.patch
>
>
> While distributing a Druid scan query, we check where the segments are loaded 
> and then each HiveDruidSplit directly queries the historical node. 
> There are a few cases where we need to retry and refetch the segments: 
> # The segment is loaded on multiple historical nodes and one of them went 
> down. In this case, when we do not get a response from one of them, we query 
> the next replica. 
> # The segment was loaded onto a realtime task and has been handed over; by the 
> time we query, the realtime task has 

[jira] [Commented] (HIVE-20349) Implement Retry Logic in HiveDruidSplit for Scan Queries

2018-08-17 Thread slim bouguerra (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16584147#comment-16584147
 ] 

slim bouguerra commented on HIVE-20349:
---

can you add an RB link please? 

> Implement Retry Logic in HiveDruidSplit for Scan Queries
> 
>
> Key: HIVE-20349
> URL: https://issues.apache.org/jira/browse/HIVE-20349
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20349.patch
>
>
> While distributing a Druid scan query, we check where the segments are loaded 
> and then each HiveDruidSplit directly queries the historical node. 
> There are a few cases where we need to retry and refetch the segments: 
> # The segment is loaded on multiple historical nodes and one of them went 
> down. In this case, when we do not get a response from one of them, we query 
> the next replica. 
> # The segment was loaded onto a realtime task and has been handed over; by the 
> time we query, the realtime task has already finished. In this case there is no 
> replica. The split needs to query the broker again for the location of the 
> segment and then send the query to the correct historical node. 
> This is also the root cause of the druidkafkamini_basic.q test failure, where 
> the segment handover happens before the scan query is executed.
> Note: This is not a problem when we query Druid brokers directly, as the 
> broker handles the retry logic. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20349) Implement Retry Logic in HiveDruidSplit for Scan Queries

2018-08-17 Thread Nishant Bangarwa (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16583816#comment-16583816
 ] 

Nishant Bangarwa commented on HIVE-20349:
-

+cc [~ashutoshc] please review. 

> Implement Retry Logic in HiveDruidSplit for Scan Queries
> 
>
> Key: HIVE-20349
> URL: https://issues.apache.org/jira/browse/HIVE-20349
> Project: Hive
>  Issue Type: Bug
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20349.patch
>
>
> While distributing a Druid scan query, we check where the segments are loaded 
> and then each HiveDruidSplit directly queries the historical node. 
> There are a few cases where we need to retry and refetch the segments: 
> # The segment is loaded on multiple historical nodes and one of them went 
> down. In this case, when we do not get a response from one of them, we query 
> the next replica. 
> # The segment was loaded onto a realtime task and has been handed over; by the 
> time we query, the realtime task has already finished. In this case there is no 
> replica. The split needs to query the broker again for the location of the 
> segment and then send the query to the correct historical node. 
> This is also the root cause of the druidkafkamini_basic.q test failure, where 
> the segment handover happens before the scan query is executed.
> Note: This is not a problem when we query Druid brokers directly, as the 
> broker handles the retry logic. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)