[jira] [Commented] (HIVE-21188) SemanticException for query on view with masked table

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758055#comment-16758055
 ] 

Hive QA commented on HIVE-21188:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 39s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 39s{color} | {color:blue} ql in master has 2304 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 45s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-15877/dev-support/hive-personality.sh |
| git revision | master / f2e10f2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-15877/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> SemanticException for query on view with masked table
> -
>
> Key: HIVE-21188
> URL: https://issues.apache.org/jira/browse/HIVE-21188
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21188.01.patch, HIVE-21188.02.patch, 
> HIVE-21188.02.patch, HIVE-21188.02.patch, HIVE-21188.patch
>
>
> This happens when the view reference is fully qualified. The following q file 
> can be used to reproduce the issue:
> {code}
> --! qt:dataset:srcpart
> --! qt:dataset:src
> set hive.mapred.mode=nonstrict;
> set 
> hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest;
> create database atlasmask;
> use atlasmask;
> create table masking_test_n8 (key int, value int);
> insert into masking_test_n8 values(1,1), (2,2);
> create view testv(c,d) as select * from masking_test_n8;
> select * from `atlasmask`.`testv`;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21194) DruidStorageHandler should set a version of segment to UTC

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758041#comment-16758041
 ] 

Hive QA commented on HIVE-21194:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957218/image-2019-02-01-16-17-52-419.png

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15876/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15876/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15876/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-02-01 07:26:51.936
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-15876/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-02-01 07:26:51.946
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at f2e10f2 HIVE-21029: External table replication for existing deployments running incremental replication (Sankar Hariappan, reviewed by Mahesh Kumar Behera)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at f2e10f2 HIVE-21029: External table replication for existing deployments running incremental replication (Sankar Hariappan, reviewed by Mahesh Kumar Behera)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-02-01 07:26:52.612
+ rm -rf ../yetus_PreCommit-HIVE-Build-15876
+ mkdir ../yetus_PreCommit-HIVE-Build-15876
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-15876
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-15876/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch
fatal: unrecognized input
fatal: unrecognized input
fatal: unrecognized input
The patch does not appear to apply with p0, p1, or p2
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf yetus_PreCommit-HIVE-Build-15876
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957218 - PreCommit-HIVE-Build

> DruidStorageHandler should set a version of segment to UTC
> --
>
> Key: HIVE-21194
> URL: https://issues.apache.org/jira/browse/HIVE-21194
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Seung-Hyun Cheong
>Assignee: Seung-Hyun Cheong
>Priority: Minor
> Attachments: image-2019-02-01-12-34-44-731.png, 
> image-2019-02-01-12-44-22-331.png, image-2019-02-01-15-07-06-893.png, 
> image-2019-02-01-16-17-36-598.png, image-2019-02-01-16-17-52-419.png
>
>
> h1. Exception while running a KillTask
> {code:java}
> 2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
> io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
> task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
>  type=kill, dataSource=upload}]
> io.druid.java.util.common.ISE: WTF?! Unused 
> segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
>  has version[2019-01-31T01:12:32.289+09:00] > task 
> version[2019-01-30T16:58:29.992Z]
>   at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
> ~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
>  

[jira] [Updated] (HIVE-21194) DruidStorageHandler should set a version of segment to UTC

2019-01-31 Thread Seung-Hyun Cheong (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seung-Hyun Cheong updated HIVE-21194:
-
Attachment: image-2019-02-01-16-32-17-093.png

> DruidStorageHandler should set a version of segment to UTC
> --
>
> Key: HIVE-21194
> URL: https://issues.apache.org/jira/browse/HIVE-21194
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Seung-Hyun Cheong
>Assignee: Seung-Hyun Cheong
>Priority: Minor
> Attachments: image-2019-02-01-12-34-44-731.png, 
> image-2019-02-01-12-44-22-331.png, image-2019-02-01-15-07-06-893.png, 
> image-2019-02-01-16-17-36-598.png, image-2019-02-01-16-17-52-419.png, 
> image-2019-02-01-16-31-56-958.png, image-2019-02-01-16-32-17-093.png
>
>
> h1. Exception while running a KillTask
> {code:java}
> 2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
> io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
> task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
>  type=kill, dataSource=upload}]
> io.druid.java.util.common.ISE: WTF?! Unused 
> segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
>  has version[2019-01-31T01:12:32.289+09:00] > task 
> version[2019-01-30T16:58:29.992Z]
>   at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
> ~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
>  [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
>  [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [?:1.8.0_112]
>   at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
> {code}
>  
> h1. Reason
> h3. KillTask compares versions
> [KillTask.java#L88|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]
> {code:java}
> if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
>   throw new ISE(
>   "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
>   unusedSegment.getId(),
>   unusedSegment.getVersion(),
>   myLock.getVersion()
>   );
> }
> {code}
>   
> h3. KillTask version (UTC, e.g. "2019-01-30T16:58:29.992Z")
> [TaskLockbox.java#L593|https://github.com/apache/incubator-druid/blob/8eae26fd4e7572060d112864dd3d5f6a865b9c89/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java#L593]
> {code:java}
> version = DateTimes.nowUtc().toString();
> {code}
>   
> h3. Segment version (UTC+9, e.g. "2019-01-31T01:12:32.289+09:00")
> [DruidStorageHandler.java#L755|https://github.com/apache/hive/blob/master/druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandler.java#L755]
> {code:java}
> jobProperties.put(DruidConstants.DRUID_SEGMENT_VERSION, new 
> DateTime().toString());
> {code}
>  
>  
> h1. Suggestion
> h3. Because Druid uses UTC only, DruidStorageHandler should set the segment 
> version to UTC.
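> A minimal sketch of the suggested fix (an assumption, not a committed patch; 
> it keeps the handler on joda-time and pins the zone with DateTimeZone.UTC so 
> the segment version matches Druid's UTC task-lock versions):
> {code:java}
> import org.joda.time.DateTime;
> import org.joda.time.DateTimeZone;
> 
> // The version string now renders with a Z suffix (UTC), e.g.
> // "2019-01-31T01:12:32.289Z", so KillTask's version comparison no longer
> // sees a segment version "newer" than its task lock version.
> jobProperties.put(DruidConstants.DRUID_SEGMENT_VERSION,
>     new DateTime(DateTimeZone.UTC).toString());
> {code}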
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21194) DruidStorageHandler should set a version of segment to UTC

2019-01-31 Thread Seung-Hyun Cheong (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seung-Hyun Cheong updated HIVE-21194:
-
Attachment: image-2019-02-01-16-31-56-958.png

> DruidStorageHandler should set a version of segment to UTC
> --
>
> Key: HIVE-21194
> URL: https://issues.apache.org/jira/browse/HIVE-21194
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Seung-Hyun Cheong
>Assignee: Seung-Hyun Cheong
>Priority: Minor
> Attachments: image-2019-02-01-12-34-44-731.png, 
> image-2019-02-01-12-44-22-331.png, image-2019-02-01-15-07-06-893.png, 
> image-2019-02-01-16-17-36-598.png, image-2019-02-01-16-17-52-419.png, 
> image-2019-02-01-16-31-56-958.png, image-2019-02-01-16-32-17-093.png
>
>
> h1. Exception while running a KillTask
> {code:java}
> 2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
> io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
> task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
>  type=kill, dataSource=upload}]
> io.druid.java.util.common.ISE: WTF?! Unused 
> segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
>  has version[2019-01-31T01:12:32.289+09:00] > task 
> version[2019-01-30T16:58:29.992Z]
>   at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
> ~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
>  [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
>  [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [?:1.8.0_112]
>   at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
> {code}
>  
> h1. Reason
> h3. KillTask compares versions
> [KillTask.java#L88|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]
> {code:java}
> if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
>   throw new ISE(
>   "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
>   unusedSegment.getId(),
>   unusedSegment.getVersion(),
>   myLock.getVersion()
>   );
> }
> {code}
>   
> h3. KillTask version (UTC, e.g. "2019-01-30T16:58:29.992Z")
> [TaskLockbox.java#L593|https://github.com/apache/incubator-druid/blob/8eae26fd4e7572060d112864dd3d5f6a865b9c89/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java#L593]
> {code:java}
> version = DateTimes.nowUtc().toString();
> {code}
>   
> h3. Segment version (UTC+9, e.g. "2019-01-31T01:12:32.289+09:00")
> [DruidStorageHandler.java#L755|https://github.com/apache/hive/blob/master/druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandler.java#L755]
> {code:java}
> jobProperties.put(DruidConstants.DRUID_SEGMENT_VERSION, new 
> DateTime().toString());
> {code}
>  
>  
> h1. Suggestion
> h3. Because Druid uses UTC only, DruidStorageHandler should set the segment 
> version to UTC.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21194) DruidStorageHandler should set a version of segment to UTC

2019-01-31 Thread Seung-Hyun Cheong (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seung-Hyun Cheong updated HIVE-21194:
-
Attachment: image-2019-02-01-16-17-36-598.png

> DruidStorageHandler should set a version of segment to UTC
> --
>
> Key: HIVE-21194
> URL: https://issues.apache.org/jira/browse/HIVE-21194
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Seung-Hyun Cheong
>Assignee: Seung-Hyun Cheong
>Priority: Minor
> Attachments: image-2019-02-01-12-34-44-731.png, 
> image-2019-02-01-12-44-22-331.png, image-2019-02-01-15-07-06-893.png, 
> image-2019-02-01-16-17-36-598.png, image-2019-02-01-16-17-52-419.png
>
>
> h1. Exception while running a KillTask
> {code:java}
> 2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
> io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
> task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
>  type=kill, dataSource=upload}]
> io.druid.java.util.common.ISE: WTF?! Unused 
> segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
>  has version[2019-01-31T01:12:32.289+09:00] > task 
> version[2019-01-30T16:58:29.992Z]
>   at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
> ~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
>  [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
>  [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [?:1.8.0_112]
>   at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
> {code}
>  
> h1. Reason
> h3. KillTask compares versions
> [KillTask.java#L88|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]
> {code:java}
> if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
>   throw new ISE(
>   "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
>   unusedSegment.getId(),
>   unusedSegment.getVersion(),
>   myLock.getVersion()
>   );
> }
> {code}
>   
> h3. KillTask version (UTC, e.g. "2019-01-30T16:58:29.992Z")
> [TaskLockbox.java#L593|https://github.com/apache/incubator-druid/blob/8eae26fd4e7572060d112864dd3d5f6a865b9c89/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java#L593]
> {code:java}
> version = DateTimes.nowUtc().toString();
> {code}
>   
> h3. Segment version (UTC+9, e.g. "2019-01-31T01:12:32.289+09:00")
> [DruidStorageHandler.java#L755|https://github.com/apache/hive/blob/master/druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandler.java#L755]
> {code:java}
> jobProperties.put(DruidConstants.DRUID_SEGMENT_VERSION, new 
> DateTime().toString());
> {code}
>  
>  
> h1. Suggestion
> h3. Because Druid uses UTC only, DruidStorageHandler should set the segment 
> version to UTC.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17938) Enable parallel query compilation in HS2

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758039#comment-16758039
 ] 

Hive QA commented on HIVE-17938:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957206/HIVE-17938.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15721 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15875/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15875/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15875/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957206 - PreCommit-HIVE-Build

> Enable parallel query compilation in HS2
> 
>
> Key: HIVE-17938
> URL: https://issues.apache.org/jira/browse/HIVE-17938
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Major
> Attachments: HIVE-17938.1.patch, HIVE-17938.2.patch, 
> HIVE-17938.3.patch, HIVE-17938.3.patch
>
>
> This (hive.driver.parallel.compilation) has been enabled in many production 
> environments for a while (Hortonworks customers), and it has been stable.
> Just realized that this is not yet enabled in Apache by default.
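> For reference, a sketch of what enabling this would look like in 
> hive-site.xml (an illustration of the standard config mechanism, not text 
> from the patch):
> {code:xml}
> <property>
>   <name>hive.driver.parallel.compilation</name>
>   <value>true</value>
> </property>
> {code}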



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21194) DruidStorageHandler should set a version of segment to UTC

2019-01-31 Thread Seung-Hyun Cheong (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seung-Hyun Cheong updated HIVE-21194:
-
Attachment: image-2019-02-01-16-17-52-419.png

> DruidStorageHandler should set a version of segment to UTC
> --
>
> Key: HIVE-21194
> URL: https://issues.apache.org/jira/browse/HIVE-21194
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Seung-Hyun Cheong
>Assignee: Seung-Hyun Cheong
>Priority: Minor
> Attachments: image-2019-02-01-12-34-44-731.png, 
> image-2019-02-01-12-44-22-331.png, image-2019-02-01-15-07-06-893.png, 
> image-2019-02-01-16-17-36-598.png, image-2019-02-01-16-17-52-419.png
>
>
> h1. Exception while running a KillTask
> {code:java}
> 2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
> io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
> task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
>  type=kill, dataSource=upload}]
> io.druid.java.util.common.ISE: WTF?! Unused 
> segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
>  has version[2019-01-31T01:12:32.289+09:00] > task 
> version[2019-01-30T16:58:29.992Z]
>   at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
> ~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
>  [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
>  [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [?:1.8.0_112]
>   at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
> {code}
>  
> h1. Reason
> h3. KillTask compares versions
> [KillTask.java#L88|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]
> {code:java}
> if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
>   throw new ISE(
>   "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
>   unusedSegment.getId(),
>   unusedSegment.getVersion(),
>   myLock.getVersion()
>   );
> }
> {code}
>   
> h3. KillTask version (UTC, e.g. "2019-01-30T16:58:29.992Z")
> [TaskLockbox.java#L593|https://github.com/apache/incubator-druid/blob/8eae26fd4e7572060d112864dd3d5f6a865b9c89/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java#L593]
> {code:java}
> version = DateTimes.nowUtc().toString();
> {code}
>   
> h3. Segment version (UTC+9, e.g. "2019-01-31T01:12:32.289+09:00")
> [DruidStorageHandler.java#L755|https://github.com/apache/hive/blob/master/druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandler.java#L755]
> {code:java}
> jobProperties.put(DruidConstants.DRUID_SEGMENT_VERSION, new 
> DateTime().toString());
> {code}
>  
>  
> h1. Suggestion
> h3. Because Druid uses UTC only, DruidStorageHandler should set the segment 
> version to UTC.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21197) Hive Replication can add duplicate data during migration from 3.0 to 4

2019-01-31 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera reassigned HIVE-21197:
--


> Hive Replication can add duplicate data during migration from 3.0 to 4
> --
>
> Key: HIVE-21197
> URL: https://issues.apache.org/jira/browse/HIVE-21197
> Project: Hive
>  Issue Type: Task
>  Components: repl
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>
> During the bootstrap phase it may happen that the files copied to the target 
> were created by events which are not part of the bootstrap. This is because 
> bootstrap first gets the last event id and then the file list, so if an 
> event happens in this window, bootstrap will also include the files created 
> by it. The same files are then copied again during the first incremental 
> replication just after the bootstrap. In the normal scenario the duplicate 
> copy does not cause any issue, as Hive allows use of the target database 
> only after the first incremental. But in the migration case, the files at 
> source and target are copied to different locations (based on the write id 
> at target), which may lead to duplicate data at the target. This can be 
> avoided by checking for duplicate files at load time. The check needs to be 
> done only for the first incremental, and the search can be limited to the 
> bootstrap directory (write id 1): if the file is already present, just skip 
> the copy (see the sketch below).
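> A hypothetical sketch of that load-time check (helper names such as 
> isFirstIncrementalAfterBootstrap, bootstrapRoot and copyFile are 
> illustrative only, not from an actual patch):
> {code:java}
> // Only the first incremental after a bootstrap can see these duplicates.
> if (isFirstIncrementalAfterBootstrap(replScope)) {
>   // Bootstrap data lands under write id 1 at the target.
>   Path bootstrapCopy = new Path(bootstrapRoot, file.getName());
>   if (fs.exists(bootstrapCopy)) {
>     // Already copied during bootstrap; skip the duplicate copy.
>     return;
>   }
> }
> copyFile(file, targetDir);
> {code}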



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20295) Remove !isNumber check after failed constant interpretation

2019-01-31 Thread Ivan Suller (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Suller updated HIVE-20295:
---
Attachment: HIVE-20295.05.patch

> Remove !isNumber check after failed constant interpretation
> ---
>
> Key: HIVE-20295
> URL: https://issues.apache.org/jira/browse/HIVE-20295
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Ivan Suller
>Priority: Major
> Attachments: HIVE-20295.01.patch, HIVE-20295.02.patch, 
> HIVE-20295.03.patch, HIVE-20295.04.patch, HIVE-20295.05.patch
>
>
> During constant interpretation, if the number can't be parsed, it might be 
> that the comparison is out of range for the type in question, in which case 
> it could be removed (see the illustration below).
> https://github.com/apache/hive/blob/2cabb8da150b8fb980223fbd6c2c93b842ca3ee5/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java#L1163
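> A hypothetical illustration (table and column names are made up): 1000 lies 
> outside the TINYINT range of -128..127, so it cannot be interpreted as a 
> TINYINT constant and the comparison can never hold:
> {code:sql}
> -- tiny_col is a TINYINT; the predicate below can be folded to false
> -- instead of being evaluated row by row.
> select * from t where tiny_col = 1000;
> {code}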



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17938) Enable parallel query compilation in HS2

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758020#comment-16758020
 ] 

Hive QA commented on HIVE-17938:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m  6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 15s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 30s{color} | {color:blue} common in master has 65 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 12s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 15s{color} | {color:red} common: The patch generated 1 new + 427 unchanged - 1 fixed = 428 total (was 428) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 22s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-15875/dev-support/hive-personality.sh |
| git revision | master / f2e10f2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-15875/yetus/diff-checkstyle-common.txt |
| modules | C: common U: common |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-15875/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Enable parallel query compilation in HS2
> 
>
> Key: HIVE-17938
> URL: https://issues.apache.org/jira/browse/HIVE-17938
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Major
> Attachments: HIVE-17938.1.patch, HIVE-17938.2.patch, 
> HIVE-17938.3.patch, HIVE-17938.3.patch
>
>
> This (hive.driver.parallel.compilation) has been enabled in many production 
> environments for a while (Hortonworks customers), and it has been stable.
> Just realized that this is not yet enabled in Apache by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-14631) HiveServer2 regularly fails to connect to metastore

2019-01-31 Thread powerinf (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-14631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758027#comment-16758027
 ] 

powerinf commented on HIVE-14631:
-

Have you already resolved it?

> HiveServer2 regularly fails to connect to metastore
> ---
>
> Key: HIVE-14631
> URL: https://issues.apache.org/jira/browse/HIVE-14631
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.2.1, 2.0.0, 2.1.0
> Environment: Hive 2.1.0, Hue 3.10.0, Hadoop 2.7.2, Tez 0.8.3
>Reporter: Alexandre Linte
>Priority: Major
>
> I have a cluster secured with Kerberos and Hive is configured to work with 
> Tez by default. Everything works well through hive-cli and beeline; however, 
> I'm facing strange behavior through Hue.
> I can have a lot of client connections (these can reach 600), and after a 
> day the client connections fail. But this is not the case for all client 
> connection attempts.
> When it fails, I have the following logs on the HiveServer2:
> {noformat}
> Aug  3 09:28:04 hiveserver2.bigdata.fr Executing 
> command(queryId=hiveserver2_20160803092803_a216edf1-bb51-43a7-81a6-f40f1574b112):
>  INSERT INTO TABLE shfs3453.camille_test VALUES ('coucou')
> Aug  3 09:28:04 hiveserver2.bigdata.fr Query ID = 
> hiveserver2_20160803092803_a216edf1-bb51-43a7-81a6-f40f1574b112
> Aug  3 09:28:04 hiveserver2.bigdata.fr Total jobs = 1
> Aug  3 09:28:04 hiveserver2.bigdata.fr Launching Job 1 out of 1
> Aug  3 09:28:04 hiveserver2.bigdata.fr Starting task [Stage-1:MAPRED] in 
> parallel
> Aug  3 09:28:04 hiveserver2.bigdata.fr Trying to connect to metastore with 
> URI thrift://metastore01.bigdata.fr:9083
> Aug  3 09:28:04 hiveserver2.bigdata.fr Failed to connect to the MetaStore 
> Server...
> Aug  3 09:28:04 hiveserver2.bigdata.fr Waiting 1 seconds before next 
> connection attempt.
> Aug  3 09:28:05 hiveserver2.bigdata.fr Trying to connect to metastore with 
> URI thrift://metastore01.bigdata.fr:9083
> Aug  3 09:28:05 hiveserver2.bigdata.fr Failed to connect to the MetaStore 
> Server...
> Aug  3 09:28:05 hiveserver2.bigdata.fr Waiting 1 seconds before next 
> connection attempt.
> Aug  3 09:28:06 hiveserver2.bigdata.fr Trying to connect to metastore with 
> URI thrift://metastore01.bigdata.fr:9083
> Aug  3 09:28:06 hiveserver2.bigdata.fr Failed to connect to the MetaStore 
> Server...
> Aug  3 09:28:06 hiveserver2.bigdata.fr Waiting 1 seconds before next 
> connection attempt.
> Aug  3 09:28:08 hiveserver2.bigdata.fr FAILED: Execution Error, return code 
> -1 from org.apache.hadoop.hive.ql.exec.tez.TezTask
> Aug  3 09:28:08 hiveserver2.bigdata.fr Completed executing 
> command(queryId=hiveserver2_20160803092803_a216edf1-bb51-43a7-81a6-f40f1574b112);
>  Time taken: 4.002 seconds
> {noformat}
> At the same time, the logs on the Metastore are:
> {noformat}
> Aug  3 09:28:03 metastore01.bigdata.fr 180: get_table : db=shfs3453 
> tbl=camille_test
> Aug  3 09:28:03 metastore01.bigdata.fr 
> ugi=shfs3453#011ip=10.77.64.228#011cmd=get_table : db=shfs3453 
> tbl=camille_test#011
> Aug  3 09:28:04 metastore01.bigdata.fr 180: get_table : db=shfs3453 
> tbl=camille_test
> Aug  3 09:28:04 metastore01.bigdata.fr 
> ugi=shfs3453#011ip=10.77.64.228#011cmd=get_table : db=shfs3453 
> tbl=camille_test#011
> Aug  3 09:28:04 metastore01.bigdata.fr 180: get_table : db=shfs3453 
> tbl=camille_test
> Aug  3 09:28:04 metastore01.bigdata.fr 
> ugi=shfs3453#011ip=10.77.64.228#011cmd=get_table : db=shfs3453 
> tbl=camille_test#011
> Aug  3 09:28:04 metastore01.bigdata.fr SASL negotiation failure
> Aug  3 09:28:04 metastore01.bigdata.fr Error occurred during processing of 
> message.
> Aug  3 09:28:05 metastore01.bigdata.fr SASL negotiation failure
> Aug  3 09:28:05 metastore01.bigdata.fr Error occurred during processing of 
> message.
> Aug  3 09:28:06 metastore01.bigdata.fr SASL negotiation failure
> Aug  3 09:28:06 metastore01.bigdata.fr Error occurred during processing of 
> message.
> {noformat}
> To solve the connection issue, I have to restart the HiveServer2.
> Note: I also created a JIRA for Hue: 
> https://issues.cloudera.org/browse/HUE-4748



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-15211) Provide support for complex expressions in ON clauses for INNER joins

2019-01-31 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-15211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-15211:
---
Labels:   (was: TODOC2.2)

> Provide support for complex expressions in ON clauses for INNER joins
> -
>
> Key: HIVE-15211
> URL: https://issues.apache.org/jira/browse/HIVE-15211
> Project: Hive
>  Issue Type: Bug
>  Components: CBO, Parser
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Fix For: 2.2.0
>
> Attachments: HIVE-15211.01.patch, HIVE-15211.patch
>
>
> Currently, we have some restrictions on the predicates that we can use in ON 
> clauses for inner joins (we have those restrictions for outer joins too, but 
> we will tackle that in a follow-up). Semantically equivalent queries can be 
> expressed if the predicate is introduced in the WHERE clause, but we would 
> like users to be able to express it in both the ON and the WHERE clause, as 
> in standard SQL.
> This patch is an extension to overcome these restrictions for inner joins.
> It will allow writing queries that currently fail in Hive, such as:
> {code:sql}
> -- Disjunctions
> SELECT *
> FROM src1 JOIN src
> ON (src1.key=src.key
>   OR src1.value between 100 and 102
>   OR src.value between 100 and 102)
> LIMIT 10;
> -- Conjunction with multiple inputs references in one side
> SELECT *
> FROM src1 JOIN src
> ON (src1.key+src.key >= 100
>   AND src1.key+src.key <= 102)
> LIMIT 10;
> -- Conjunct with no references
> SELECT *
> FROM src1 JOIN src
> ON (src1.value between 100 and 102
>   AND src.value between 100 and 102
>   AND true)
> LIMIT 10;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21188) SemanticException for query on view with masked table

2019-01-31 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-21188:
---
Attachment: HIVE-21188.02.patch

> SemanticException for query on view with masked table
> -
>
> Key: HIVE-21188
> URL: https://issues.apache.org/jira/browse/HIVE-21188
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21188.01.patch, HIVE-21188.02.patch, 
> HIVE-21188.02.patch, HIVE-21188.02.patch, HIVE-21188.patch
>
>
> This happens when the view reference is fully qualified. The following q file 
> can be used to reproduce the issue:
> {code}
> --! qt:dataset:srcpart
> --! qt:dataset:src
> set hive.mapred.mode=nonstrict;
> set 
> hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest;
> create database atlasmask;
> use atlasmask;
> create table masking_test_n8 (key int, value int);
> insert into masking_test_n8 values(1,1), (2,2);
> create view testv(c,d) as select * from masking_test_n8;
> select * from `atlasmask`.`testv`;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21194) DruidStorageHandler should set a version of segment to UTC

2019-01-31 Thread Seung-Hyun Cheong (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seung-Hyun Cheong updated HIVE-21194:
-
Attachment: image-2019-02-01-15-07-06-893.png

> DruidStorageHandler should set a version of segment to UTC
> --
>
> Key: HIVE-21194
> URL: https://issues.apache.org/jira/browse/HIVE-21194
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Seung-Hyun Cheong
>Assignee: Seung-Hyun Cheong
>Priority: Minor
> Attachments: image-2019-02-01-12-34-44-731.png, 
> image-2019-02-01-12-44-22-331.png, image-2019-02-01-15-07-06-893.png
>
>
> h1. Exception while running a KillTask
> {code:java}
> 2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
> io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
> task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
>  type=kill, dataSource=upload}]
> io.druid.java.util.common.ISE: WTF?! Unused 
> segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
>  has version[2019-01-31T01:12:32.289+09:00] > task 
> version[2019-01-30T16:58:29.992Z]
>   at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
> ~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
>  [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
>  [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [?:1.8.0_112]
>   at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
> {code}
>  
> h1. Reason
> h3. KillTask compares versions
> [KillTask.java#L88|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]
> {code:java}
> if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
>   throw new ISE(
>   "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
>   unusedSegment.getId(),
>   unusedSegment.getVersion(),
>   myLock.getVersion()
>   );
> }
> {code}
>   
> h3. KillTask version (UTC, e.g. "2019-01-30T16:58:29.992Z")
> [TaskLockbox.java#L593|https://github.com/apache/incubator-druid/blob/8eae26fd4e7572060d112864dd3d5f6a865b9c89/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java#L593]
> {code:java}
> version = DateTimes.nowUtc().toString();
> {code}
>   
> h3. Segment version (UTC+9, e.g. "2019-01-31T01:12:32.289+09:00")
> [DruidStorageHandler.java#L755|https://github.com/apache/hive/blob/master/druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandler.java#L755]
> {code:java}
> jobProperties.put(DruidConstants.DRUID_SEGMENT_VERSION, new 
> DateTime().toString());
> {code}
>  
>  
> h1. Suggestion
> h3. Because Druid uses UTC only, DruidStorageHandler should set the segment 
> version to UTC.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17938) Enable parallel query compilation in HS2

2019-01-31 Thread Thejas M Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-17938:
-
Attachment: HIVE-17938.3.patch

> Enable parallel query compilation in HS2
> 
>
> Key: HIVE-17938
> URL: https://issues.apache.org/jira/browse/HIVE-17938
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Major
> Attachments: HIVE-17938.1.patch, HIVE-17938.2.patch, 
> HIVE-17938.3.patch, HIVE-17938.3.patch
>
>
> This (hive.driver.parallel.compilation) has been enabled in many production 
> environments for a while (Hortonworks customers), and it has been stable.
> Just realized that this is not yet enabled in Apache by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17938) Enable parallel query compilation in HS2

2019-01-31 Thread Thejas M Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-17938:
-
Status: Patch Available  (was: Open)

> Enable parallel query compilation in HS2
> 
>
> Key: HIVE-17938
> URL: https://issues.apache.org/jira/browse/HIVE-17938
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Major
> Attachments: HIVE-17938.1.patch, HIVE-17938.2.patch, 
> HIVE-17938.3.patch, HIVE-17938.3.patch
>
>
> This (hive.driver.parallel.compilation) has been enabled in many production 
> environments for a while (Hortonworks customers), and it has been stable.
> Just realized that this is not yet enabled in Apache by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17938) Enable parallel query compilation in HS2

2019-01-31 Thread Thejas M Nair (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-17938:
-
Status: Open  (was: Patch Available)

> Enable parallel query compilation in HS2
> 
>
> Key: HIVE-17938
> URL: https://issues.apache.org/jira/browse/HIVE-17938
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>Priority: Major
> Attachments: HIVE-17938.1.patch, HIVE-17938.2.patch, 
> HIVE-17938.3.patch, HIVE-17938.3.patch
>
>
> This (hive.driver.parallel.compilation) has been enabled in many production 
> environments for a while (Hortonworks customers), and it has been stable.
> Just realized that this is not yet enabled in Apache by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21194) DruidStorageHandler should set a version of segment to UTC

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757945#comment-16757945
 ] 

Hive QA commented on HIVE-21194:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957197/image-2019-02-01-12-44-22-331.png

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15874/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15874/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15874/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-02-01 04:03:49.322
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-15874/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-02-01 04:03:49.325
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at f2e10f2 HIVE-21029: External table replication for existing deployments running incremental replication (Sankar Hariappan, reviewed by Mahesh Kumar Behera)
+ git clean -f -d
Removing ${project.basedir}/
Removing itests/${project.basedir}/
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at f2e10f2 HIVE-21029: External table replication for existing deployments running incremental replication (Sankar Hariappan, reviewed by Mahesh Kumar Behera)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-02-01 04:03:53.607
+ rm -rf ../yetus_PreCommit-HIVE-Build-15874
+ mkdir ../yetus_PreCommit-HIVE-Build-15874
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-15874
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-15874/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch
fatal: unrecognized input
fatal: unrecognized input
fatal: unrecognized input
The patch does not appear to apply with p0, p1, or p2
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf yetus_PreCommit-HIVE-Build-15874
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957197 - PreCommit-HIVE-Build

> DruidStorageHandler should set a version of segment to UTC
> --
>
> Key: HIVE-21194
> URL: https://issues.apache.org/jira/browse/HIVE-21194
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Seung-Hyun Cheong
>Assignee: Seung-Hyun Cheong
>Priority: Minor
> Attachments: image-2019-02-01-12-34-44-731.png, 
> image-2019-02-01-12-44-22-331.png
>
>
> h1. Exception while running a KillTask
> {code:java}
> 2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
> io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
> task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
>  type=kill, dataSource=upload}]
> io.druid.java.util.common.ISE: WTF?! Unused 
> segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
>  has version[2019-01-31T01:12:32.289+09:00] > task 
> version[2019-01-30T16:58:29.992Z]
>   at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
> ~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
>  [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 

[jira] [Updated] (HIVE-21194) DruidStorageHandler should set a version of segment to UTC

2019-01-31 Thread Seung-Hyun Cheong (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seung-Hyun Cheong updated HIVE-21194:
-
Attachment: image-2019-02-01-12-44-22-331.png

> DruidStorageHandler should set a version of segment to UTC
> --
>
> Key: HIVE-21194
> URL: https://issues.apache.org/jira/browse/HIVE-21194
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Seung-Hyun Cheong
>Assignee: Seung-Hyun Cheong
>Priority: Minor
> Attachments: image-2019-02-01-12-34-44-731.png, 
> image-2019-02-01-12-44-22-331.png
>
>
> h1. Exception while running a KillTask
> {code:java}
> 2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
> io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
> task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
>  type=kill, dataSource=upload}]
> io.druid.java.util.common.ISE: WTF?! Unused 
> segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
>  has version[2019-01-31T01:12:32.289+09:00] > task 
> version[2019-01-30T16:58:29.992Z]
>   at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
> ~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
>  [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
>  [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [?:1.8.0_112]
>   at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
> {code}
>  
> h1. Reason
> h3. KillTask compares versions
> [KillTask.java#L88|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]
> {code:java}
> if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
>   throw new ISE(
>   "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
>   unusedSegment.getId(),
>   unusedSegment.getVersion(),
>   myLock.getVersion()
>   );
> }
> {code}
>   
> h3. KillTask version (UTC, e.g. "2019-01-30T16:58:29.992Z")
> [TaskLockbox.java#L593|https://github.com/apache/incubator-druid/blob/8eae26fd4e7572060d112864dd3d5f6a865b9c89/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java#L593]
> {code:java}
> version = DateTimes.nowUtc().toString();
> {code}
>   
> h3. Segment version (UTC+9, e.g. "2019-01-31T01:12:32.289+09:00")
> [DruidStorageHandler.java#L755|https://github.com/apache/hive/blob/master/druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandler.java#L755]
> {code:java}
> jobProperties.put(DruidConstants.DRUID_SEGMENT_VERSION, new 
> DateTime().toString());
> {code}
>  
>  
> h1. Suggestion
> h3. Because Druid uses UTC only, DruidStorageHandler should set the segment 
> version to UTC.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21194) DruidStorageHandler should set a version of segment to UTC

2019-01-31 Thread Seung-Hyun Cheong (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seung-Hyun Cheong updated HIVE-21194:
-
Attachment: image-2019-02-01-12-34-44-731.png

> DruidStorageHandler should set a version of segment to UTC
> --
>
> Key: HIVE-21194
> URL: https://issues.apache.org/jira/browse/HIVE-21194
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Seung-Hyun Cheong
>Assignee: Seung-Hyun Cheong
>Priority: Minor
> Attachments: image-2019-02-01-12-34-44-731.png
>
>
> h1. Exception while running a KillTask
> {code:java}
> 2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
> io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
> task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
>  type=kill, dataSource=upload}]
> io.druid.java.util.common.ISE: WTF?! Unused 
> segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
>  has version[2019-01-31T01:12:32.289+09:00] > task 
> version[2019-01-30T16:58:29.992Z]
>   at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
> ~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
>  [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
>  [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [?:1.8.0_112]
>   at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
> {code}
>  
> h1. Reason
> h3. KillTask compares versions
> [KillTask.java#L88|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]
> {code:java}
> if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
>   throw new ISE(
>   "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
>   unusedSegment.getId(),
>   unusedSegment.getVersion(),
>   myLock.getVersion()
>   );
> }
> {code}
>   
> h3. KillTask version (UTC, e.g. "2019-01-30T16:58:29.992Z")
> [TaskLockbox.java#L593|https://github.com/apache/incubator-druid/blob/8eae26fd4e7572060d112864dd3d5f6a865b9c89/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java#L593]
> {code:java}
> version = DateTimes.nowUtc().toString();
> {code}
>   
> h3. Segment version (UTC+9, e.g. "2019-01-31T01:12:32.289+09:00")
> [DruidStorageHandler.java#L755|https://github.com/apache/hive/blob/master/druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandler.java#L755]
> {code:java}
> jobProperties.put(DruidConstants.DRUID_SEGMENT_VERSION, new DateTime().toString());
> {code}
>  
>  
> h1. Suggestion
> h3. Because Druid uses UTC only, DruidStorageHandler should set the segment version in UTC.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-5312) Let HiveServer2 run simultaneously in HTTP (over thrift) and Binary (normal thrift transport) mode

2019-01-31 Thread Narendra Penagulur (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-5312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Narendra Penagulur updated HIVE-5312:
-
Flags: Patch

> Let HiveServer2 run simultaneously in HTTP (over thrift) and Binary (normal 
> thrift transport) mode 
> ---
>
> Key: HIVE-5312
> URL: https://issues.apache.org/jira/browse/HIVE-5312
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Reporter: Vaibhav Gumashta
>Priority: Major
>
> [HIVE-4763|https://issues.apache.org/jira/browse/HIVE-4763] adds support for 
> HTTP transport over thrift. With that, HS2 can be configured to run using 
> either HTTP or the normal thrift binary transport. Ideally, HS2 should 
> support both modes simultaneously, and the client should be able to specify 
> the mode used to serve the request.
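> For illustration only (assuming the standard HS2 JDBC URL parameters 
> {{transportMode}} and {{httpPath}}; host and ports are placeholders), a 
> client could then choose the transport per connection:
> {code:java}
> // Binary (normal thrift) transport:
> Connection binary = DriverManager.getConnection(
>     "jdbc:hive2://hs2-host:10000/default", "user", "");
> // HTTP transport against the same server:
> Connection http = DriverManager.getConnection(
>     "jdbc:hive2://hs2-host:10001/default;transportMode=http;httpPath=cliservice",
>     "user", "");
> {code}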



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-5312) Let HiveServer2 run simultaneously in HTTP (over thrift) and Binary (normal thrift transport) mode

2019-01-31 Thread Narendra Penagulur (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-5312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Narendra Penagulur updated HIVE-5312:
-
Flags:   (was: Patch)

> Let HiveServer2 run simultaneously in HTTP (over thrift) and Binary (normal 
> thrift transport) mode 
> ---
>
> Key: HIVE-5312
> URL: https://issues.apache.org/jira/browse/HIVE-5312
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Reporter: Vaibhav Gumashta
>Priority: Major
>
> [HIVE-4763|https://issues.apache.org/jira/browse/HIVE-4763] adds support for 
> HTTP transport over thrift. With that, HS2 can be configured to run using 
> either HTTP or the normal thrift binary transport. Ideally, HS2 should 
> support both modes simultaneously, and the client should be able to specify 
> the mode used to serve the request.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21194) DruidStorageHandler should set a version of segment to UTC

2019-01-31 Thread Seung-Hyun Cheong (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seung-Hyun Cheong updated HIVE-21194:
-
Attachment: (was: HIVE-21194.patch)

> DruidStorageHandler should set a version of segment to UTC
> --
>
> Key: HIVE-21194
> URL: https://issues.apache.org/jira/browse/HIVE-21194
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Seung-Hyun Cheong
>Assignee: Seung-Hyun Cheong
>Priority: Minor
>
> h1. Exception while running a KillTask
> {code:java}
> 2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
> io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
> task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
>  type=kill, dataSource=upload}]
> io.druid.java.util.common.ISE: WTF?! Unused 
> segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
>  has version[2019-01-31T01:12:32.289+09:00] > task 
> version[2019-01-30T16:58:29.992Z]
>   at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
> ~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
>  [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
>  [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [?:1.8.0_112]
>   at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
> {code}
>  
> h1. Reason
> h3. KillTask compares versions
> [KillTask.java#L88|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]
> {code:java}
> if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
>   throw new ISE(
>   "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
>   unusedSegment.getId(),
>   unusedSegment.getVersion(),
>   myLock.getVersion()
>   );
> }
> {code}
>   
> h3. KillTask version (UTC, e.g. "2019-01-30T16:58:29.992Z")
> [TaskLockbox.java#L593|https://github.com/apache/incubator-druid/blob/8eae26fd4e7572060d112864dd3d5f6a865b9c89/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java#L593]
> {code:java}
> version = DateTimes.nowUtc().toString();
> {code}
>   
> h3. Segment version (UTC+9, e.g. "2019-01-31T01:12:32.289+09:00")
> [DruidStorageHandler.java#L755|https://github.com/apache/hive/blob/master/druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandler.java#L755]
> {code:java}
> jobProperties.put(DruidConstants.DRUID_SEGMENT_VERSION, new DateTime().toString());
> {code}
>  
>  
> h1. Suggestion
> h3. Because Druid uses UTC only, DruidStorageHandler should set the segment version in UTC.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21188) SemanticException for query on view with masked table

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757906#comment-16757906
 ] 

Hive QA commented on HIVE-21188:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957181/HIVE-21188.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 15722 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitions
 (batchId=264)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitionsUnionAll
 (batchId=264)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomNonExistent
 (batchId=264)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighBytesRead 
(batchId=264)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighShuffleBytes
 (batchId=264)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerSlowQueryElapsedTime
 (batchId=264)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerSlowQueryExecutionTime
 (batchId=264)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15873/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15873/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15873/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957181 - PreCommit-HIVE-Build

> SemanticException for query on view with masked table
> -
>
> Key: HIVE-21188
> URL: https://issues.apache.org/jira/browse/HIVE-21188
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21188.01.patch, HIVE-21188.02.patch, 
> HIVE-21188.02.patch, HIVE-21188.patch
>
>
> The issue occurs when the view reference is fully qualified. The following q 
> file can be used to reproduce it:
> {code}
> --! qt:dataset:srcpart
> --! qt:dataset:src
> set hive.mapred.mode=nonstrict;
> set 
> hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest;
> create database atlasmask;
> use atlasmask;
> create table masking_test_n8 (key int, value int);
> insert into masking_test_n8 values(1,1), (2,2);
> create view testv(c,d) as select * from masking_test_n8;
> select * from `atlasmask`.`testv`;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21188) SemanticException for query on view with masked table

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757878#comment-16757878
 ] 

Hive QA commented on HIVE-21188:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
32s{color} | {color:blue} ql in master has 2304 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15873/dev-support/hive-personality.sh
 |
| git revision | master / f2e10f2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15873/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> SemanticException for query on view with masked table
> -
>
> Key: HIVE-21188
> URL: https://issues.apache.org/jira/browse/HIVE-21188
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21188.01.patch, HIVE-21188.02.patch, 
> HIVE-21188.02.patch, HIVE-21188.patch
>
>
> The issue occurs when the view reference is fully qualified. The following q 
> file can be used to reproduce it:
> {code}
> --! qt:dataset:srcpart
> --! qt:dataset:src
> set hive.mapred.mode=nonstrict;
> set 
> hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest;
> create database atlasmask;
> use atlasmask;
> create table masking_test_n8 (key int, value int);
> insert into masking_test_n8 values(1,1), (2,2);
> create view testv(c,d) as select * from masking_test_n8;
> select * from `atlasmask`.`testv`;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21188) SemanticException for query on view with masked table

2019-01-31 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-21188:
---
Attachment: HIVE-21188.02.patch

> SemanticException for query on view with masked table
> -
>
> Key: HIVE-21188
> URL: https://issues.apache.org/jira/browse/HIVE-21188
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21188.01.patch, HIVE-21188.02.patch, 
> HIVE-21188.02.patch, HIVE-21188.patch
>
>
> The issue occurs when the view reference is fully qualified. The following q 
> file can be used to reproduce it:
> {code}
> --! qt:dataset:srcpart
> --! qt:dataset:src
> set hive.mapred.mode=nonstrict;
> set 
> hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest;
> create database atlasmask;
> use atlasmask;
> create table masking_test_n8 (key int, value int);
> insert into masking_test_n8 values(1,1), (2,2);
> create view testv(c,d) as select * from masking_test_n8;
> select * from `atlasmask`.`testv`;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21196) Support semijoin reduction on multiple column join

2019-01-31 Thread Vineet Garg (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757831#comment-16757831
 ] 

Vineet Garg commented on HIVE-21196:


[~djaiswal] The example query in the JIRA description has only one semijoin branch (for key1). 

> Support semijoin reduction on multiple column join
> --
>
> Key: HIVE-21196
> URL: https://issues.apache.org/jira/browse/HIVE-21196
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
>
> Currently, a query involving a join on multiple columns creates a separate 
> semijoin edge for each key, which in turn creates a bloom filter for each of 
> them, as shown below:
> EXPLAIN select count(*) from srcpart_date_n7 join srcpart_small_n3 on 
> (srcpart_date_n7.key = srcpart_small_n3.key1 and srcpart_date_n7.value = 
> srcpart_small_n3.value1)
> {code:java}
> Map 1 <- Reducer 5 (BROADCAST_EDGE)
> Reducer 2 <- Map 1 (SIMPLE_EDGE), Map 4 (SIMPLE_EDGE)
> Reducer 3 <- Reducer 2 (CUSTOM_SIMPLE_EDGE)
> Reducer 5 <- Map 4 (CUSTOM_SIMPLE_EDGE)
>  A masked pattern was here 
>   Vertices:
> Map 1 
> Map Operator Tree:
> TableScan
>   alias: srcpart_date_n7
>   filterExpr: (key is not null and value is not null and (key 
> BETWEEN DynamicValue(RS_7_srcpart_small_n3_key1_min) AND 
> DynamicValue(RS_7_srcpart_small_n3_key1_max) and in_bloom_filter(key, 
> DynamicValue(RS_7_srcpart_small_n3_key1_bloom_filter (type: boolean)
>   Statistics: Num rows: 2000 Data size: 356000 Basic stats: 
> COMPLETE Column stats: COMPLETE
>   Filter Operator
> predicate: ((key BETWEEN 
> DynamicValue(RS_7_srcpart_small_n3_key1_min) AND 
> DynamicValue(RS_7_srcpart_small_n3_key1_max) and in_bloom_filter(key, 
> DynamicValue(RS_7_srcpart_small_n3_key1_bloom_filter))) and key is not null 
> and value is not null) (type: boolean)
> Statistics: Num rows: 2000 Data size: 356000 Basic stats: 
> COMPLETE Column stats: COMPLETE
> Select Operator
>   expressions: key (type: string), value (type: string)
>   outputColumnNames: _col0, _col1
>   Statistics: Num rows: 2000 Data size: 356000 Basic 
> stats: COMPLETE Column stats: COMPLETE
>   Reduce Output Operator
> key expressions: _col0 (type: string), _col1 (type: 
> string)
> sort order: ++
> Map-reduce partition columns: _col0 (type: string), 
> _col1 (type: string)
> Statistics: Num rows: 2000 Data size: 356000 Basic 
> stats: COMPLETE Column stats: COMPLETE
> Execution mode: vectorized, llap
> LLAP IO: all inputs
> Map 4 
> Map Operator Tree:
> TableScan
>   alias: srcpart_small_n3
>   filterExpr: (key1 is not null and value1 is not null) 
> (type: boolean)
>   Statistics: Num rows: 20 Data size: 3560 Basic stats: 
> PARTIAL Column stats: PARTIAL
>   Filter Operator
> predicate: (key1 is not null and value1 is not null) 
> (type: boolean)
> Statistics: Num rows: 20 Data size: 3560 Basic stats: 
> PARTIAL Column stats: PARTIAL
> Select Operator
>   expressions: key1 (type: string), value1 (type: string)
>   outputColumnNames: _col0, _col1
>   Statistics: Num rows: 20 Data size: 3560 Basic stats: 
> PARTIAL Column stats: PARTIAL
>   Reduce Output Operator
> key expressions: _col0 (type: string), _col1 (type: 
> string)
> sort order: ++
> Map-reduce partition columns: _col0 (type: string), 
> _col1 (type: string)
> Statistics: Num rows: 20 Data size: 3560 Basic stats: 
> PARTIAL Column stats: PARTIAL
>   Select Operator
> expressions: _col0 (type: string)
> outputColumnNames: _col0
> Statistics: Num rows: 20 Data size: 3560 Basic stats: 
> PARTIAL Column stats: PARTIAL
> Group By Operator
>   aggregations: min(_col0), max(_col0), 
> bloom_filter(_col0, expectedEntries=20)
>   mode: hash
>   outputColumnNames: _col0, _col1, _col2
>   Statistics: Num rows: 1 Data size: 730 Basic stats: 
> PARTIAL Column stats: PARTIAL
>   Reduce Output Operator
>

[jira] [Commented] (HIVE-21195) Review of DefaultGraphWalker Class

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757830#comment-16757830
 ] 

Hive QA commented on HIVE-21195:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957175/HIVE-21195.1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15872/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15872/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15872/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-02-01 00:34:31.938
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-15872/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-02-01 00:34:31.941
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at f2e10f2 HIVE-21029: External table replication for existing 
deployments running incremental replication (Sankar Hariappan, reviewed by 
Mahesh Kumar Behera)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at f2e10f2 HIVE-21029: External table replication for existing 
deployments running incremental replication (Sankar Hariappan, reviewed by 
Mahesh Kumar Behera)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-02-01 00:34:33.077
+ rm -rf ../yetus_PreCommit-HIVE-Build-15872
+ mkdir ../yetus_PreCommit-HIVE-Build-15872
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-15872
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-15872/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: a/ql/src/java/org/apache/hadoop/hive/ql/lib/DefaultGraphWalker.java: 
does not exist in index
Going to apply patch with: git apply -p1
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q 
-Dmaven.repo.local=/data/hiveptest/working/maven
protoc-jar: executing: [/tmp/protoc5402898439180160781.exe, --version]
libprotoc 2.5.0
protoc-jar: executing: [/tmp/protoc5402898439180160781.exe, 
-I/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore,
 
--java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/target/generated-sources,
 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto]
ANTLR Parser Generator  Version 3.5.2
protoc-jar: executing: [/tmp/protoc54735899005268837.exe, --version]
libprotoc 2.5.0
ANTLR Parser Generator  Version 3.5.2
Output file 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/target/generated-sources/org/apache/hadoop/hive/metastore/parser/FilterParser.java
 does not exist: must build 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/parser/Filter.g
org/apache/hadoop/hive/metastore/parser/Filter.g
log4j:WARN No appenders could be found for logger (DataNucleus.Persistence).
log4j:WARN Please initialize the log4j system properly.
DataNucleus Enhancer (version 4.1.17) for API "JDO"
DataNucleus Enhancer completed with success for 41 classes.
ANTLR Parser Generator  Version 3.5.2
Output file 

[jira] [Commented] (HIVE-21188) SemanticException for query on view with masked table

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757829#comment-16757829
 ] 

Hive QA commented on HIVE-21188:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957137/HIVE-21188.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 15722 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_groupby_reduce] 
(batchId=61)
org.apache.hive.service.cli.session.TestSessionManagerMetrics.testAbandonedSessionMetrics
 (batchId=234)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15871/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15871/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15871/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957137 - PreCommit-HIVE-Build

> SemanticException for query on view with masked table
> -
>
> Key: HIVE-21188
> URL: https://issues.apache.org/jira/browse/HIVE-21188
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21188.01.patch, HIVE-21188.02.patch, 
> HIVE-21188.patch
>
>
> The issue occurs when the view reference is fully qualified. The following q 
> file can be used to reproduce it:
> {code}
> --! qt:dataset:srcpart
> --! qt:dataset:src
> set hive.mapred.mode=nonstrict;
> set 
> hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest;
> create database atlasmask;
> use atlasmask;
> create table masking_test_n8 (key int, value int);
> insert into masking_test_n8 values(1,1), (2,2);
> create view testv(c,d) as select * from masking_test_n8;
> select * from `atlasmask`.`testv`;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21183) Interrupt wait time for FileCacheCleanupThread

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757788#comment-16757788
 ] 

Hive QA commented on HIVE-21183:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957134/HIVE-21183.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 15722 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.TestWarehouseExternalDir.org.apache.hadoop.hive.ql.TestWarehouseExternalDir
 (batchId=243)
org.apache.hadoop.hive.ql.TestWarehouseExternalDir.testExternalDefaultPaths 
(batchId=243)
org.apache.hive.hcatalog.mapreduce.TestHCatPartitioned.testHCatPartitionedTable[2]
 (batchId=209)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15870/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15870/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15870/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957134 - PreCommit-HIVE-Build

> Interrupt wait time for FileCacheCleanupThread
> --
>
> Key: HIVE-21183
> URL: https://issues.apache.org/jira/browse/HIVE-21183
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Reporter: Oliver Draese
>Assignee: Oliver Draese
>Priority: Minor
> Attachments: HIVE-21183.1.patch, HIVE-21183.patch
>
>
> The FileCacheCleanupThread is waiting unnecessarily long for eviction counts 
> to increment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21188) SemanticException for query on view with masked table

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757806#comment-16757806
 ] 

Hive QA commented on HIVE-21188:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
42s{color} | {color:blue} ql in master has 2304 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15871/dev-support/hive-personality.sh
 |
| git revision | master / f2e10f2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15871/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> SemanticException for query on view with masked table
> -
>
> Key: HIVE-21188
> URL: https://issues.apache.org/jira/browse/HIVE-21188
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21188.01.patch, HIVE-21188.02.patch, 
> HIVE-21188.patch
>
>
> The issue occurs when the view reference is fully qualified. The following q 
> file can be used to reproduce it:
> {code}
> --! qt:dataset:srcpart
> --! qt:dataset:src
> set hive.mapred.mode=nonstrict;
> set 
> hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest;
> create database atlasmask;
> use atlasmask;
> create table masking_test_n8 (key int, value int);
> insert into masking_test_n8 values(1,1), (2,2);
> create view testv(c,d) as select * from masking_test_n8;
> select * from `atlasmask`.`testv`;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21196) Support semijoin reduction on multiple column join

2019-01-31 Thread Deepak Jaiswal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal reassigned HIVE-21196:
-


> Support semijoin reduction on multiple column join
> --
>
> Key: HIVE-21196
> URL: https://issues.apache.org/jira/browse/HIVE-21196
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
>
> Currently, a query involving a join on multiple columns creates a separate 
> semijoin edge for each key, which in turn creates a bloom filter for each of 
> them, as shown below:
> EXPLAIN select count(*) from srcpart_date_n7 join srcpart_small_n3 on 
> (srcpart_date_n7.key = srcpart_small_n3.key1 and srcpart_date_n7.value = 
> srcpart_small_n3.value1)
> {code:java}
> Map 1 <- Reducer 5 (BROADCAST_EDGE)
> Reducer 2 <- Map 1 (SIMPLE_EDGE), Map 4 (SIMPLE_EDGE)
> Reducer 3 <- Reducer 2 (CUSTOM_SIMPLE_EDGE)
> Reducer 5 <- Map 4 (CUSTOM_SIMPLE_EDGE)
>  A masked pattern was here 
>   Vertices:
> Map 1 
> Map Operator Tree:
> TableScan
>   alias: srcpart_date_n7
>   filterExpr: (key is not null and value is not null and (key 
> BETWEEN DynamicValue(RS_7_srcpart_small_n3_key1_min) AND 
> DynamicValue(RS_7_srcpart_small_n3_key1_max) and in_bloom_filter(key, 
> DynamicValue(RS_7_srcpart_small_n3_key1_bloom_filter (type: boolean)
>   Statistics: Num rows: 2000 Data size: 356000 Basic stats: 
> COMPLETE Column stats: COMPLETE
>   Filter Operator
> predicate: ((key BETWEEN 
> DynamicValue(RS_7_srcpart_small_n3_key1_min) AND 
> DynamicValue(RS_7_srcpart_small_n3_key1_max) and in_bloom_filter(key, 
> DynamicValue(RS_7_srcpart_small_n3_key1_bloom_filter))) and key is not null 
> and value is not null) (type: boolean)
> Statistics: Num rows: 2000 Data size: 356000 Basic stats: 
> COMPLETE Column stats: COMPLETE
> Select Operator
>   expressions: key (type: string), value (type: string)
>   outputColumnNames: _col0, _col1
>   Statistics: Num rows: 2000 Data size: 356000 Basic 
> stats: COMPLETE Column stats: COMPLETE
>   Reduce Output Operator
> key expressions: _col0 (type: string), _col1 (type: 
> string)
> sort order: ++
> Map-reduce partition columns: _col0 (type: string), 
> _col1 (type: string)
> Statistics: Num rows: 2000 Data size: 356000 Basic 
> stats: COMPLETE Column stats: COMPLETE
> Execution mode: vectorized, llap
> LLAP IO: all inputs
> Map 4 
> Map Operator Tree:
> TableScan
>   alias: srcpart_small_n3
>   filterExpr: (key1 is not null and value1 is not null) 
> (type: boolean)
>   Statistics: Num rows: 20 Data size: 3560 Basic stats: 
> PARTIAL Column stats: PARTIAL
>   Filter Operator
> predicate: (key1 is not null and value1 is not null) 
> (type: boolean)
> Statistics: Num rows: 20 Data size: 3560 Basic stats: 
> PARTIAL Column stats: PARTIAL
> Select Operator
>   expressions: key1 (type: string), value1 (type: string)
>   outputColumnNames: _col0, _col1
>   Statistics: Num rows: 20 Data size: 3560 Basic stats: 
> PARTIAL Column stats: PARTIAL
>   Reduce Output Operator
> key expressions: _col0 (type: string), _col1 (type: 
> string)
> sort order: ++
> Map-reduce partition columns: _col0 (type: string), 
> _col1 (type: string)
> Statistics: Num rows: 20 Data size: 3560 Basic stats: 
> PARTIAL Column stats: PARTIAL
>   Select Operator
> expressions: _col0 (type: string)
> outputColumnNames: _col0
> Statistics: Num rows: 20 Data size: 3560 Basic stats: 
> PARTIAL Column stats: PARTIAL
> Group By Operator
>   aggregations: min(_col0), max(_col0), 
> bloom_filter(_col0, expectedEntries=20)
>   mode: hash
>   outputColumnNames: _col0, _col1, _col2
>   Statistics: Num rows: 1 Data size: 730 Basic stats: 
> PARTIAL Column stats: PARTIAL
>   Reduce Output Operator
> sort order: 
> Statistics: Num rows: 1 Data size: 730 Basic 
> 

[jira] [Assigned] (HIVE-21195) Review of DefaultGraphWalker Class

2019-01-31 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR reassigned HIVE-21195:
--

Assignee: BELUGA BEHR

> Review of DefaultGraphWalker Class
> --
>
> Key: HIVE-21195
> URL: https://issues.apache.org/jira/browse/HIVE-21195
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
>
> {code:java}
> protected final List<Node> toWalk = new ArrayList<Node>();
> ...
> while (toWalk.size() > 0) {
>   Node nd = toWalk.remove(0);
> {code}
> Every time this loop runs, the first item of the list is removed.  For an 
> {{ArrayList}}, removing the first item copies all of the remaining items 
> down one position so that the head stays at array index 0.  This is 
> expensive in a tight loop.  Use a {{Queue}} implementation that does not 
> have this behavior, such as {{ArrayDeque}}:
> {quote}
> This class is likely to be faster than Stack when used as a stack, and faster 
> than LinkedList when used as a queue.
> {quote}
> https://docs.oracle.com/javase/7/docs/api/java/util/ArrayDeque.html
> Also apply a little extra cleanup while the class is being looked at.
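> A minimal sketch of the suggested change (a sketch only, reusing the 
> {{Node}} type from the snippet above):
> {code:java}
> // ArrayDeque removes from the head in O(1), avoiding the O(n) array copy
> // that ArrayList.remove(0) performs on every iteration.
> protected final Deque<Node> toWalk = new ArrayDeque<Node>();
> ...
> while (!toWalk.isEmpty()) {
>   Node nd = toWalk.poll();
> {code}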



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21195) Review of DefaultGraphWalker Class

2019-01-31 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-21195:
---
Status: Patch Available  (was: Open)

> Review of DefaultGraphWalker Class
> --
>
> Key: HIVE-21195
> URL: https://issues.apache.org/jira/browse/HIVE-21195
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-21195.1.patch
>
>
> {code:java}
> protected final List<Node> toWalk = new ArrayList<Node>();
> ...
> while (toWalk.size() > 0) {
>   Node nd = toWalk.remove(0);
> {code}
> Every time this loop runs, the first item of the list is removed.  For an 
> {{ArrayList}}, removing the first item copies all of the remaining items 
> down one position so that the head stays at array index 0.  This is 
> expensive in a tight loop.  Use a {{Queue}} implementation that does not 
> have this behavior, such as {{ArrayDeque}}:
> {quote}
> This class is likely to be faster than Stack when used as a stack, and faster 
> than LinkedList when used as a queue.
> {quote}
> https://docs.oracle.com/javase/7/docs/api/java/util/ArrayDeque.html
> Also apply a little extra cleanup while the class is being looked at.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21195) Review of DefaultGraphWalker Class

2019-01-31 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-21195:
---
Attachment: HIVE-21195.1.patch

> Review of DefaultGraphWalker Class
> --
>
> Key: HIVE-21195
> URL: https://issues.apache.org/jira/browse/HIVE-21195
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-21195.1.patch
>
>
> {code:java}
> protected final List<Node> toWalk = new ArrayList<Node>();
> ...
> while (toWalk.size() > 0) {
>   Node nd = toWalk.remove(0);
> {code}
> Every time this loop runs, the first item of the list is removed.  For an 
> {{ArrayList}}, removing the first item copies all of the remaining items 
> down one position so that the head stays at array index 0.  This is 
> expensive in a tight loop.  Use a {{Queue}} implementation that does not 
> have this behavior, such as {{ArrayDeque}}:
> {quote}
> This class is likely to be faster than Stack when used as a stack, and faster 
> than LinkedList when used as a queue.
> {quote}
> https://docs.oracle.com/javase/7/docs/api/java/util/ArrayDeque.html
> Also apply a little extra cleanup while the class is being looked at.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18897) Hive is Double-Logging Invalid UDF Error

2019-01-31 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-18897:
---
Labels: noob  (was: )

> Hive is Double-Logging Invalid UDF Error
> 
>
> Key: HIVE-18897
> URL: https://issues.apache.org/jira/browse/HIVE-18897
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0, 2.4.0
>Reporter: BELUGA BEHR
>Priority: Minor
>  Labels: noob
>
> Hive logs the "invalid function" error twice: once at ERROR level and once 
> at WARN level.  Please change this so that the error is logged only once, at 
> the WARN level.  The full stack trace also seems like overkill for such a 
> trivial error... it is usually a user typo or a function that still needs to 
> be registered.
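> A minimal sketch of the requested change (hypothetical call site; the real 
> one is in the compile path shown in the trace below):
> {code:java}
> // Log the user-facing error once, at WARN, and without the stack trace.
> LOG.warn("FAILED: {}", e.getMessage());
> {code}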
> {code:java}
> 2018-03-05 07:50:44,473  ERROR org.apache.hadoop.hive.ql.Driver: 
> [HiveServer2-Handler-Pool: Thread-43]: FAILED: SemanticException [Error 
> 10011]: Line 1:7 Invalid function 'aes_encrypt'
> org.apache.hadoop.hive.ql.parse.SemanticException: Line 1:7 Invalid function 
> 'aes_encrypt'
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.getXpathOrFuncExprNodeDesc(TypeCheckProcFactory.java:836)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:1176)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:132)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:193)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:146)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genAllExprNodeDesc(SemanticAnalyzer.java:10422)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:10378)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:3771)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:3550)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:8830)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:8785)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9652)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9545)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:10018)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10029)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9909)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:223)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:488)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1274)
>   at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1261)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:143)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:215)
>   at 
> org.apache.hive.service.cli.operation.Operation.run(Operation.java:337)
>   at 
> org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:425)
>   at 
> org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:402)
>   at 
> org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:258)
>   at 
> org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:500)
>   at 
> org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1313)
>   at 
> org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1298)
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>   at 
> org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:746)
> 

[jira] [Updated] (HIVE-18897) Hive is Double-Logging Invalid UDF Error

2019-01-31 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-18897:
---
Labels: newbie noob  (was: noob)

> Hive is Double-Logging Invalid UDF Error
> 
>
> Key: HIVE-18897
> URL: https://issues.apache.org/jira/browse/HIVE-18897
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0, 2.4.0
>Reporter: BELUGA BEHR
>Priority: Minor
>  Labels: newbie, noob
>
> Hive logs the "invalid function" error twice: once at ERROR level and once 
> at WARN level.  Please change this so that the error is logged only once, at 
> the WARN level.  The full stack trace also seems like overkill for such a 
> trivial error... it is usually a user typo or a function that still needs to 
> be registered.
> {code:java}
> 2018-03-05 07:50:44,473  ERROR org.apache.hadoop.hive.ql.Driver: 
> [HiveServer2-Handler-Pool: Thread-43]: FAILED: SemanticException [Error 
> 10011]: Line 1:7 Invalid function 'aes_encrypt'
> org.apache.hadoop.hive.ql.parse.SemanticException: Line 1:7 Invalid function 
> 'aes_encrypt'
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.getXpathOrFuncExprNodeDesc(TypeCheckProcFactory.java:836)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:1176)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:94)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:78)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:132)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:109)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:193)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:146)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genAllExprNodeDesc(SemanticAnalyzer.java:10422)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:10378)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:3771)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:3550)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:8830)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:8785)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9652)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:9545)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:10018)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10029)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:9909)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:223)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:488)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1274)
>   at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1261)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:143)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:215)
>   at 
> org.apache.hive.service.cli.operation.Operation.run(Operation.java:337)
>   at 
> org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:425)
>   at 
> org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:402)
>   at 
> org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:258)
>   at 
> org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:500)
>   at 
> org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1313)
>   at 
> org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1298)
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>   at 
> 

[jira] [Commented] (HIVE-21183) Interrupt wait time for FileCacheCleanupThread

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757759#comment-16757759
 ] 

Hive QA commented on HIVE-21183:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
39s{color} | {color:blue} llap-server in master has 81 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
48s{color} | {color:red} llap-server generated 1 new + 81 unchanged - 0 fixed = 
82 total (was 81) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:llap-server |
|  |  Synchronization performed on java.util.concurrent.atomic.AtomicInteger in 
org.apache.hadoop.hive.llap.cache.SerDeLowLevelCacheImpl.notifyEvicted(MemoryBuffer)
  At 
SerDeLowLevelCacheImpl.java:org.apache.hadoop.hive.llap.cache.SerDeLowLevelCacheImpl.notifyEvicted(MemoryBuffer)
  At SerDeLowLevelCacheImpl.java:[line 666] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15870/dev-support/hive-personality.sh
 |
| git revision | master / f2e10f2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15870/yetus/whitespace-eol.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15870/yetus/new-findbugs-llap-server.html
 |
| modules | C: llap-server U: llap-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15870/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Interrupt wait time for FileCacheCleanupThread
> --
>
> Key: HIVE-21183
> URL: https://issues.apache.org/jira/browse/HIVE-21183
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Reporter: Oliver Draese
>Assignee: Oliver Draese
>Priority: Minor
> Attachments: HIVE-21183.1.patch, HIVE-21183.patch
>
>
> The FileCacheCleanupThread is waiting unnecessarily long for eviction counts 
> to increment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21194) DruidStorageHandler should set a version of segment to UTC

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757749#comment-16757749
 ] 

Hive QA commented on HIVE-21194:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957117/HIVE-21194.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15869/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15869/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15869/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-01-31 21:49:07.699
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-15869/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-01-31 21:49:07.702
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at f2e10f2 HIVE-21029: External table replication for existing 
deployments running incremental replication (Sankar Hariappan, reviewed by 
Mahesh Kumar Behera)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at f2e10f2 HIVE-21029: External table replication for existing 
deployments running incremental replication (Sankar Hariappan, reviewed by 
Mahesh Kumar Behera)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-01-31 21:49:08.357
+ rm -rf ../yetus_PreCommit-HIVE-Build-15869
+ mkdir ../yetus_PreCommit-HIVE-Build-15869
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-15869
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-15869/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: patch failed: 
druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandler.java:752
error: repository lacks the necessary blob to fall back on 3-way merge.
error: 
druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandler.java: 
patch does not apply
error: src/java/org/apache/hadoop/hive/druid/DruidStorageHandler.java: does not 
exist in index
error: java/org/apache/hadoop/hive/druid/DruidStorageHandler.java: does not 
exist in index
The patch does not appear to apply with p0, p1, or p2
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf yetus_PreCommit-HIVE-Build-15869
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957117 - PreCommit-HIVE-Build

> DruidStorageHandler should set a version of segment to UTC
> --
>
> Key: HIVE-21194
> URL: https://issues.apache.org/jira/browse/HIVE-21194
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Seung-Hyun Cheong
>Assignee: Seung-Hyun Cheong
>Priority: Minor
> Attachments: HIVE-21194.patch
>
>
> h1. Exception while running a KillTask
> {code:java}
> 2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
> io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
> task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
>  type=kill, dataSource=upload}]
> io.druid.java.util.common.ISE: WTF?! Unused 
> segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
>  has version[2019-01-31T01:12:32.289+09:00] > task 
> version[2019-01-30T16:58:29.992Z]
>   at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
> 

[jira] [Commented] (HIVE-21193) Support LZO Compression with CombineHiveInputFormat

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757748#comment-16757748
 ] 

Hive QA commented on HIVE-21193:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957114/HIVE-21193.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15721 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.service.cli.session.TestSessionManagerMetrics.testAbandonedSessionMetrics
 (batchId=234)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15868/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15868/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15868/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957114 - PreCommit-HIVE-Build

> Support LZO Compression with CombineHiveInputFormat
> ---
>
> Key: HIVE-21193
> URL: https://issues.apache.org/jira/browse/HIVE-21193
> Project: Hive
>  Issue Type: Improvement
>  Components: Compression
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HIVE-21193.1.patch
>
>
> In regards to LZO compression with Hive...
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+LZO
> It does not work out of the box if there are {{.lzo.index}} files present.  
> As I understand it, this is because the default Hive input format 
> {{CombineHiveInputFormat}} does not handle this correctly.  It does not like 
> that there is a mix of data files and index files; it lumps them 
> all together when making the combined splits, and Mappers fail when they try 
> to process the {{.lzo.index}} files as data.  When using the original 
> {{HiveInputFormat}}, it correctly identifies the {{.lzo.index}} files because 
> it considers each file individually.
> Allow {{CombineHiveInputFormat}} to short-circuit LZO files and to not 
> combine them.
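
A minimal sketch of such a short-circuit (hypothetical helper, not the actual patch): the combine path could route LZO data and index files to the per-file {{HiveInputFormat}} handling instead of combining them:

{code:java}
import org.apache.hadoop.fs.Path;

// Hypothetical predicate a combine-based input format could use to keep LZO
// data files and their .lzo.index companions out of combined splits.
static boolean isLzoInput(Path path) {
  String name = path.getName();
  return name.endsWith(".lzo") || name.endsWith(".lzo.index");
}
{code}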



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21193) Support LZO Compression with CombineHiveInputFormat

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757720#comment-16757720
 ] 

Hive QA commented on HIVE-21193:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
42s{color} | {color:blue} ql in master has 2304 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15868/dev-support/hive-personality.sh
 |
| git revision | master / f2e10f2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15868/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Support LZO Compression with CombineHiveInputFormat
> ---
>
> Key: HIVE-21193
> URL: https://issues.apache.org/jira/browse/HIVE-21193
> Project: Hive
>  Issue Type: Improvement
>  Components: Compression
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HIVE-21193.1.patch
>
>
> In regards to LZO compression with Hive...
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+LZO
> It does not work out of the box if there are {{.lzo.index}} files present.  
> As I understand it, this is because the default Hive input format 
> {{CombineHiveInputFormat}} does not handle this correctly.  It does not like 
> that there is a mix of data files and index files; it lumps them 
> all together when making the combined splits, and Mappers fail when they try 
> to process the {{.lzo.index}} files as data.  When using the original 
> {{HiveInputFormat}}, it correctly identifies the {{.lzo.index}} files because 
> it considers each file individually.
> Allow {{CombineHiveInputFormat}} to short-circuit LZO files and to not 
> combine them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-685) add UDFquote

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757701#comment-16757701
 ] 

Hive QA commented on HIVE-685:
--



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957076/HIVE.685.04.PATCH

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 889 failed/errored test(s), 7689 tests 
executed
*Failed tests:*
{noformat}
TestAcidInputFormat - did not produce a TEST-*.xml file (likely timed out) 
(batchId=300)
TestAcidOnTez - did not produce a TEST-*.xml file (likely timed out) 
(batchId=241)
TestAcidTableSerializer - did not produce a TEST-*.xml file (likely timed out) 
(batchId=216)
TestAcidTableSetup - did not produce a TEST-*.xml file (likely timed out) 
(batchId=238)
TestActivePassiveHA - did not produce a TEST-*.xml file (likely timed out) 
(batchId=261)
TestAddResource - did not produce a TEST-*.xml file (likely timed out) 
(batchId=324)
TestAlterTableMetadata - did not produce a TEST-*.xml file (likely timed out) 
(batchId=253)
TestAuthorizationPreEventListener - did not produce a TEST-*.xml file (likely 
timed out) (batchId=255)
TestAuthzApiEmbedAuthorizerInEmbed - did not produce a TEST-*.xml file (likely 
timed out) (batchId=240)
TestAvroGenericRecordReader - did not produce a TEST-*.xml file (likely timed 
out) (batchId=300)
TestAvroHCatLoader - did not produce a TEST-*.xml file (likely timed out) 
(batchId=207)
TestAvroHCatStorer - did not produce a TEST-*.xml file (likely timed out) 
(batchId=207)
TestBasicStats - did not produce a TEST-*.xml file (likely timed out) 
(batchId=296)
TestBeeLineExceptionHandling - did not produce a TEST-*.xml file (likely timed 
out) (batchId=201)
TestBeeLineHistory - did not produce a TEST-*.xml file (likely timed out) 
(batchId=201)
TestBeeLineOpts - did not produce a TEST-*.xml file (likely timed out) 
(batchId=201)
TestBeelineArgParsing - did not produce a TEST-*.xml file (likely timed out) 
(batchId=201)
TestBeelineConnectionUsingHiveSite - did not produce a TEST-*.xml file (likely 
timed out) (batchId=256)
TestBeelineSiteParser - did not produce a TEST-*.xml file (likely timed out) 
(batchId=201)
TestBeelineWithUserHs2ConnectionFile - did not produce a TEST-*.xml file 
(likely timed out) (batchId=256)
TestBlockedUdf - did not produce a TEST-*.xml file (likely timed out) 
(batchId=286)
TestBucketIdResolverImpl - did not produce a TEST-*.xml file (likely timed out) 
(batchId=216)
TestBufferedRows - did not produce a TEST-*.xml file (likely timed out) 
(batchId=201)
TestCBOMaxNumToCNF - did not produce a TEST-*.xml file (likely timed out) 
(batchId=305)
TestCLIAuthzSessionContext - did not produce a TEST-*.xml file (likely timed 
out) (batchId=265)
TestCLIServiceConnectionLimits - did not produce a TEST-*.xml file (likely 
timed out) (batchId=233)
TestCLIServiceRestore - did not produce a TEST-*.xml file (likely timed out) 
(batchId=233)
TestCachedStoreUpdateUsingEvents - did not produce a TEST-*.xml file (likely 
timed out) (batchId=240)
TestCleaner - did not produce a TEST-*.xml file (likely timed out) (batchId=296)
TestCleaner2 - did not produce a TEST-*.xml file (likely timed out) 
(batchId=296)
TestCleanerWithReplication - did not produce a TEST-*.xml file (likely timed 
out) (batchId=243)
TestClearDanglingScratchDir - did not produce a TEST-*.xml file (likely timed 
out) (batchId=254)
TestCliDriverMethods - did not produce a TEST-*.xml file (likely timed out) 
(batchId=200)
TestClientCommandHookFactory - did not produce a TEST-*.xml file (likely timed 
out) (batchId=201)
TestClientSideAuthorizationProvider - did not produce a TEST-*.xml file (likely 
timed out) (batchId=256)
TestColumnAccess - did not produce a TEST-*.xml file (likely timed out) 
(batchId=305)
TestCommands - did not produce a TEST-*.xml file (likely timed out) 
(batchId=201)
TestCommands - did not produce a TEST-*.xml file (likely timed out) 
(batchId=204)
TestCompactor - did not produce a TEST-*.xml file (likely timed out) 
(batchId=242)
TestConditionalResolverCommonJoin - did not produce a TEST-*.xml file (likely 
timed out) (batchId=325)
TestContext - did not produce a TEST-*.xml file (likely timed out) (batchId=311)
TestCopyUtils - did not produce a TEST-*.xml file (likely timed out) 
(batchId=253)
TestCounterMapping - did not produce a TEST-*.xml file (likely timed out) 
(batchId=324)
TestCreateMacroDesc - did not produce a TEST-*.xml file (likely timed out) 
(batchId=325)
TestCreateUdfEntities - did not produce a TEST-*.xml file (likely timed out) 
(batchId=243)
TestCustomPartitionVertex - did not produce a TEST-*.xml file (likely timed 
out) (batchId=318)
TestDBTokenStore - did not produce a TEST-*.xml file (likely timed out) 
(batchId=240)
TestDagUtils - did not produce a TEST-*.xml file (likely timed out) 
(batchId=318)
TestDataSourceProviderFactory - did not 

[jira] [Updated] (HIVE-21192) TestReplicationScenariosIncrementalLoadAcidTables Flakey Test

2019-01-31 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-21192:
---
Summary: TestReplicationScenariosIncrementalLoadAcidTables Flakey Test  
(was: TestReplicationScenariosIncrementalLoadAcidTables Flakey test)

> TestReplicationScenariosIncrementalLoadAcidTables Flakey Test
> -
>
> Key: HIVE-21192
> URL: https://issues.apache.org/jira/browse/HIVE-21192
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0
>Reporter: BELUGA BEHR
>Priority: Major
>
> Several of my patches are failing in YETUS due to the following unit test 
> failure:
> {code}
> TestReplicationScenariosIncrementalLoadAcidTables - did not produce a 
> TEST-*.xml file (likely timed out) (batchId=251)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21192) TestReplicationScenariosIncrementalLoadAcidTables Flakey test

2019-01-31 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-21192:
---
Summary: TestReplicationScenariosIncrementalLoadAcidTables Flakey test  
(was: TestReplicationScenariosIncrementalLoadAcidTables Fails Regularly)

> TestReplicationScenariosIncrementalLoadAcidTables Flakey test
> -
>
> Key: HIVE-21192
> URL: https://issues.apache.org/jira/browse/HIVE-21192
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0
>Reporter: BELUGA BEHR
>Priority: Major
>
> Several of my patches are failing in YETUS due to the following unit test 
> failure:
> {code}
> TestReplicationScenariosIncrementalLoadAcidTables - did not produce a 
> TEST-*.xml file (likely timed out) (batchId=251)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20255) Review LevelOrderWalker.java

2019-01-31 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757628#comment-16757628
 ] 

BELUGA BEHR commented on HIVE-20255:


[~ngangam] Can you check out this one too please? :)

> Review LevelOrderWalker.java
> 
>
> Key: HIVE-20255
> URL: https://issues.apache.org/jira/browse/HIVE-20255
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20255.10.patch, HIVE-20255.11.patch, 
> HIVE-20255.12.patch, HIVE-20255.13.patch, HIVE-20255.14.patch, 
> HIVE-20255.15.patch, HIVE-20255.16.patch, HIVE-20255.17.patch, 
> HIVE-20255.18.patch, HIVE-20255.9.patch
>
>
> https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/ql/src/java/org/apache/hadoop/hive/ql/lib/LevelOrderWalker.java
> * Make code more concise
> * Fix some check style issues
> {code}
>   if (toWalk.get(index).getChildren() != null) {
> for(Node child : toWalk.get(index).getChildren()) {
> {code}
> Actually, the underlying implementation of {{getChildren()}} has to do some 
> real work, so do not throw away the work after checking for null.  Simply 
> call once and store the results.
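
A minimal sketch of the suggested change (variable names illustrative):

{code:java}
// Call getChildren() once and reuse the result instead of doing the work twice.
List<Node> children = toWalk.get(index).getChildren();
if (children != null) {
  for (Node child : children) {
    // ... existing per-child handling ...
  }
}
{code}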



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-685) add UDFquote

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757608#comment-16757608
 ] 

Hive QA commented on HIVE-685:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
54s{color} | {color:blue} ql in master has 2304 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15867/dev-support/hive-personality.sh
 |
| git revision | master / f2e10f2 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15867/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> add UDFquote
> 
>
> Key: HIVE-685
> URL: https://issues.apache.org/jira/browse/HIVE-685
> Project: Hive
>  Issue Type: New Feature
>Reporter: Namit Jain
>Assignee: Mani M
>Priority: Major
>  Labels: todoc4.0, udf
> Fix For: 4.0.0
>
> Attachments: HIVE.685.02.PATCH, HIVE.685.03.PATCH, HIVE.685.04.PATCH, 
> HIVE.685.PATCH
>
>
> add UDFquote
> look at
> http://dev.mysql.com/doc/refman/5.0/en/func-op-summary-ref.html
> for details
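
Per the linked MySQL reference, QUOTE() returns its argument enclosed in single quotes, with embedded quotes and backslashes escaped, and returns the word NULL for a NULL argument. A minimal illustration of the expected behavior (not taken from the patch):

{code:sql}
SELECT quote('Don''t');  -- expected: 'Don\'t'
SELECT quote(NULL);      -- expected: NULL
{code}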



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21180) Fix branch-3 metastore test timeouts

2019-01-31 Thread Vihang Karajgaonkar (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757594#comment-16757594
 ] 

Vihang Karajgaonkar commented on HIVE-21180:


It looks like the test batches are created based off the master branch. Many of 
these supposedly timed-out tests don't even exist on branch-3.

> Fix branch-3 metastore test timeouts
> 
>
> Key: HIVE-21180
> URL: https://issues.apache.org/jira/browse/HIVE-21180
> Project: Hive
>  Issue Type: Test
>Affects Versions: 3.2.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Major
>
> The module name below is wrong since metastore-server doesn't exist on 
> branch-3. This is most likely the reason why test batches are timing out on 
> branch-3
> {noformat}
> 2019-01-29 00:32:17,765  INFO [HostExecutor 3] 
> HostExecutor.executeTestBatch:262 Drone [user=hiveptest, 
> host=104.198.216.224, instance=0] executing UnitTestBatch 
> [name=228_UTBatch_standalone-metastore__metastore-server_20_tests, id=228, 
> moduleName=standalone-metastore/metastore-server, batchSize=20, 
> isParallel=true, testList=[TestPartitionManagement, 
> TestCatalogNonDefaultClient, TestCatalogOldClient, TestHiveAlterHandler, 
> TestTxnHandlerNegative, TestTxnUtils, TestFilterHooks, TestRawStoreProxy, 
> TestLockRequestBuilder, TestHiveMetastoreCli, TestCheckConstraint, 
> TestAddPartitions, TestListPartitions, TestFunctions, TestGetTableMeta, 
> TestTablesCreateDropAlterTruncate, TestRuntimeStats, TestDropPartitions, 
> TestTablesList, TestUniqueConstraint]] with bash 
> /home/hiveptest/104.198.216.224-hiveptest-0/scratch/hiveptest-228_UTBatch_standalone-metastore__metastore-server_20_tests.sh
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21143) Add rewrite rules to open/close Between operators

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757572#comment-16757572
 ] 

Hive QA commented on HIVE-21143:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957039/HIVE-21143.11.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15720 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_interval_2]
 (batchId=177)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15865/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15865/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15865/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957039 - PreCommit-HIVE-Build

> Add rewrite rules to open/close Between operators
> -
>
> Key: HIVE-21143
> URL: https://issues.apache.org/jira/browse/HIVE-21143
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21143.01.patch, HIVE-21143.02.patch, 
> HIVE-21143.03.patch, HIVE-21143.03.patch, HIVE-21143.04.patch, 
> HIVE-21143.05.patch, HIVE-21143.06.patch, HIVE-21143.07.patch, 
> HIVE-21143.08.patch, HIVE-21143.08.patch, HIVE-21143.09.patch, 
> HIVE-21143.10.patch, HIVE-21143.11.patch
>
>
> During query compilation it's better to have BETWEEN statements in open form, 
> as Calcite currently does not consider them during simplification.
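
For reference, the "open form" expands BETWEEN into the equivalent conjunction of comparisons, which Calcite can then simplify together with other predicates:

{code:sql}
-- closed form
WHERE c BETWEEN 10 AND 20
-- open form (equivalent)
WHERE c >= 10 AND c <= 20
{code}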



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21143) Add rewrite rules to open/close Between operators

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757573#comment-16757573
 ] 

Hive QA commented on HIVE-21143:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957039/HIVE-21143.11.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15866/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15866/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15866/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL 
https://issues.apache.org/jira/secure/attachment/12957039/HIVE-21143.11.patch 
was found in seen patch url's cache and a test was probably run already on it. 
Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957039 - PreCommit-HIVE-Build

> Add rewrite rules to open/close Between operators
> -
>
> Key: HIVE-21143
> URL: https://issues.apache.org/jira/browse/HIVE-21143
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21143.01.patch, HIVE-21143.02.patch, 
> HIVE-21143.03.patch, HIVE-21143.03.patch, HIVE-21143.04.patch, 
> HIVE-21143.05.patch, HIVE-21143.06.patch, HIVE-21143.07.patch, 
> HIVE-21143.08.patch, HIVE-21143.08.patch, HIVE-21143.09.patch, 
> HIVE-21143.10.patch, HIVE-21143.11.patch
>
>
> During query compilation it's better to have BETWEEN statements in open form, 
> as Calcite currently does not consider them during simplification.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21188) SemanticException for query on view with masked table

2019-01-31 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-21188:
---
Attachment: HIVE-21188.02.patch

> SemanticException for query on view with masked table
> -
>
> Key: HIVE-21188
> URL: https://issues.apache.org/jira/browse/HIVE-21188
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21188.01.patch, HIVE-21188.02.patch, 
> HIVE-21188.patch
>
>
> This occurs when the view reference is fully qualified. The following q file 
> can be used to reproduce the issue:
> {code}
> --! qt:dataset:srcpart
> --! qt:dataset:src
> set hive.mapred.mode=nonstrict;
> set 
> hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest;
> create database atlasmask;
> use atlasmask;
> create table masking_test_n8 (key int, value int);
> insert into masking_test_n8 values(1,1), (2,2);
> create view testv(c,d) as select * from masking_test_n8;
> select * from `atlasmask`.`testv`;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21143) Add rewrite rules to open/close Between operators

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757530#comment-16757530
 ] 

Hive QA commented on HIVE-21143:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
45s{color} | {color:blue} ql in master has 2304 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
39s{color} | {color:red} ql: The patch generated 11 new + 116 unchanged - 3 
fixed = 127 total (was 119) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch 14 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15865/dev-support/hive-personality.sh
 |
| git revision | master / 5634140 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15865/yetus/diff-checkstyle-ql.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15865/yetus/whitespace-eol.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15865/yetus/whitespace-tabs.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15865/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Add rewrite rules to open/close Between operators
> -
>
> Key: HIVE-21143
> URL: https://issues.apache.org/jira/browse/HIVE-21143
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21143.01.patch, HIVE-21143.02.patch, 
> HIVE-21143.03.patch, HIVE-21143.03.patch, HIVE-21143.04.patch, 
> HIVE-21143.05.patch, HIVE-21143.06.patch, HIVE-21143.07.patch, 
> HIVE-21143.08.patch, HIVE-21143.08.patch, HIVE-21143.09.patch, 
> HIVE-21143.10.patch, HIVE-21143.11.patch
>
>
> During query compilation it's better to have BETWEEN statements in open form, 
> as Calcite currently does not consider them during simplification.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20484) Disable Block Cache By Default With HBase SerDe

2019-01-31 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-20484:
---
Description: 
{quote}
Scan instances can be set to use the block cache in the RegionServer via the 
setCacheBlocks method. For input Scans to MapReduce jobs, this should be false. 

https://hbase.apache.org/book.html#perf.hbase.client.blockcache
{quote}

However, from the Hive code, we can see that this is not the case.

{code}
public static final String HBASE_SCAN_CACHEBLOCKS = "hbase.scan.cacheblock";

...

String scanCacheBlocks = 
tableProperties.getProperty(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS);
if (scanCacheBlocks != null) {
  jobProperties.put(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS, scanCacheBlocks);
}

...

String scanCacheBlocks = jobConf.get(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS);
if (scanCacheBlocks != null) {
  scan.setCacheBlocks(Boolean.parseBoolean(scanCacheBlocks));
}
{code}

In the Hive code, we can see that if {{hbase.scan.cacheblock}} is not specified 
in the {{SERDEPROPERTIES}} then {{setCacheBlocks}} is not called and the 
default value of the HBase {{Scan}} class is used.

{code:java|title=Scan.java}
  /**
   * Set whether blocks should be cached for this Scan.
   * 
   * This is true by default.  When true, default settings of the table and
   * family are used (this will never override caching blocks if the block
   * cache is disabled for that family or entirely).
   *
   * @param cacheBlocks if false, default settings are overridden and blocks
   * will not be cached
   */
  public Scan setCacheBlocks(boolean cacheBlocks) {
this.cacheBlocks = cacheBlocks;
return this;
  }
{code}

Hive is doing full scans of the table with MapReduce/Spark and therefore, 
according to the HBase docs, the default behavior here should be that blocks 
are not cached.  Hive should set this value to "false" by default unless the 
table {{SERDEPROPERTIES}} override this.

{code:sql}
-- Commands for HBase
-- create 'test', 't'

CREATE EXTERNAL TABLE test(value map<string,string>, row_key string) 
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
"hbase.columns.mapping" = "t:,:key",
"hbase.scan.cacheblock" = "true"
);
{code}

  was:
{quote}
Scan instances can be set to use the block cache in the RegionServer via the 
setCacheBlocks method. For input Scans to MapReduce jobs, this should be false. 

https://hbase.apache.org/book.html#perf.hbase.client.blockcache
{quote}

However, from the Hive code, we can see that this is not the case.

{code}
public static final String HBASE_SCAN_CACHEBLOCKS = "hbase.scan.cacheblock";

...

String scanCacheBlocks = 
tableProperties.getProperty(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS);
if (scanCacheBlocks != null) {
  jobProperties.put(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS, scanCacheBlocks);
}

...

String scanCacheBlocks = jobConf.get(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS);
if (scanCacheBlocks != null) {
  scan.setCacheBlocks(Boolean.parseBoolean(scanCacheBlocks));
}
{code}

In the Hive code, we can see that if {{hbase.scan.cacheblock}} is not specified 
in the {{SERDEPROPERTIES}} then {{setCacheBlocks}} is not called and the 
default value of the HBase {{Scan}} class is used.

{code:java|title=Scan.java}
  /**
   * Set whether blocks should be cached for this Scan.
   * 
   * This is true by default.  When true, default settings of the table and
   * family are used (this will never override caching blocks if the block
   * cache is disabled for that family or entirely).
   *
   * @param cacheBlocks if false, default settings are overridden and blocks
   * will not be cached
   */
  public Scan setCacheBlocks(boolean cacheBlocks) {
this.cacheBlocks = cacheBlocks;
return this;
  }
{code}

Hive is doing full scans of the table with MapReduce/Spark and therefore, 
according to the HBase docs, the default behavior here should be that blocks 
are not cached.  Hive should set this value to "false" by default unless the 
table {{SERDEPROPERTIES}} override this.

{code:sql}
-- Commands for HBase
-- create 'test', 't'

CREATE EXTERNAL TABLE test(value map<string,string>, row_key string) 
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
"hbase.columns.mapping" = "t:,:key",
"hbase.scan.cacheblock" = "false"
);
{code}


> Disable Block Cache By Default With HBase SerDe
> ---
>
> Key: HIVE-20484
> URL: https://issues.apache.org/jira/browse/HIVE-20484
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler
>Affects Versions: 1.2.3, 2.4.0, 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-20484.1.patch, HIVE-20484.2.patch, 
> HIVE-20484.3.patch, HIVE-20484.4.patch, HIVE-20484.5.patch
>
>
> {quote}
> Scan instances can be set to use the block cache in the RegionServer via the 
> setCacheBlocks method. For input Scans to MapReduce jobs, this should be false.

[jira] [Updated] (HIVE-21029) External table replication for existing deployments running incremental replication.

2019-01-31 Thread Sankar Hariappan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-21029:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

04.patch is committed to master.
Thanks [~maheshk114] for the review!

> External table replication for existing deployments running incremental 
> replication.
> 
>
> Key: HIVE-21029
> URL: https://issues.apache.org/jira/browse/HIVE-21029
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 3.0.0, 3.1.0, 3.1.1
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21029.01.patch, HIVE-21029.02.patch, 
> HIVE-21029.03.patch, HIVE-21029.04.patch
>
>
> Existing deployments using hive replication do not get external tables 
> replicated. For such deployments to enable external table replication they 
> will have to provide a specific switch to first bootstrap external tables as 
> part of hive incremental replication, following which the incremental 
> replication will take care of further changes in external tables.
> The switch will be provided by an additional hive configuration (for ex: 
> hive.repl.bootstrap.external.tables) and is to be used in 
> {code} WITH {code}  clause of 
> {code} REPL DUMP {code} command. 
> Additionally the existing hive config _hive.repl.include.external.tables_  
> will always have to be set to "true" in the above clause. 
> Proposed usage for enabling external tables replication on existing 
> replication policy.
> 1. Consider an ongoing repl policy  in incremental phase.
> Enable hive.repl.include.external.tables=true and 
> hive.repl.bootstrap.external.tables=true in next incremental REPL DUMP.
> - Dumps all events but skips events related to external tables.
> - Instead, combine bootstrap dump for all external tables under “_bootstrap” 
> directory.
> - Also, includes the data locations file "_external_tables_info”.
> - LIMIT or TO clause shouldn’t be there to ensure the latest events are 
> dumped before bootstrap dumping external tables.
> 2. REPL LOAD on this dump applies all the events first, copies external 
> tables data and then bootstrap external tables (metadata).
> - It is possible that the external tables (metadata) are not point-in time 
> consistent with rest of the tables.
> - But, it would be eventually consistent when the next incremental load is 
> applied.
> - This REPL LOAD is fault tolerant and can be retried if failed.
> 3. All future REPL DUMPs on this repl policy should set 
> hive.repl.bootstrap.external.tables=false.
> - If not set to false, then target might end up having inconsistent set of 
> external tables as bootstrap wouldn’t clean-up any dropped external tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21183) Interrupt wait time for FileCacheCleanupThread

2019-01-31 Thread Oliver Draese (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oliver Draese updated HIVE-21183:
-
Attachment: HIVE-21183.1.patch

> Interrupt wait time for FileCacheCleanupThread
> --
>
> Key: HIVE-21183
> URL: https://issues.apache.org/jira/browse/HIVE-21183
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Reporter: Oliver Draese
>Assignee: Oliver Draese
>Priority: Minor
> Attachments: HIVE-21183.1.patch, HIVE-21183.patch
>
>
> The FileCacheCleanupThread is waiting unnecessarily long for eviction counts 
> to increment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20484) Disable Block Cache By Default With HBase SerDe

2019-01-31 Thread Naveen Gangam (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-20484:
-
   Resolution: Fixed
Fix Version/s: 3.2.0
   4.0.0
   Status: Resolved  (was: Patch Available)

Fix has been pushed to master and branch-3. Thank you for your contribution 
[~belugabehr]. Closing the jira.

> Disable Block Cache By Default With HBase SerDe
> ---
>
> Key: HIVE-20484
> URL: https://issues.apache.org/jira/browse/HIVE-20484
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler
>Affects Versions: 1.2.3, 2.4.0, 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-20484.1.patch, HIVE-20484.2.patch, 
> HIVE-20484.3.patch, HIVE-20484.4.patch, HIVE-20484.5.patch
>
>
> {quote}
> Scan instances can be set to use the block cache in the RegionServer via the 
> setCacheBlocks method. For input Scans to MapReduce jobs, this should be 
> false. 
> https://hbase.apache.org/book.html#perf.hbase.client.blockcache
> {quote}
> However, from the Hive code, we can see that this is not the case.
> {code}
> public static final String HBASE_SCAN_CACHEBLOCKS = "hbase.scan.cacheblock";
> ...
> String scanCacheBlocks = 
> tableProperties.getProperty(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS);
> if (scanCacheBlocks != null) {
>   jobProperties.put(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS, scanCacheBlocks);
> }
> ...
> String scanCacheBlocks = jobConf.get(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS);
> if (scanCacheBlocks != null) {
>   scan.setCacheBlocks(Boolean.parseBoolean(scanCacheBlocks));
> }
> {code}
> In the Hive code, we can see that if {{hbase.scan.cacheblock}} is not 
> specified in the {{SERDEPROPERTIES}} then {{setCacheBlocks}} is not called 
> and the default value of the HBase {{Scan}} class is used.
> {code:java|title=Scan.java}
>   /**
>* Set whether blocks should be cached for this Scan.
>* 
>* This is true by default.  When true, default settings of the table and
>* family are used (this will never override caching blocks if the block
>* cache is disabled for that family or entirely).
>*
>* @param cacheBlocks if false, default settings are overridden and blocks
>* will not be cached
>*/
>   public Scan setCacheBlocks(boolean cacheBlocks) {
> this.cacheBlocks = cacheBlocks;
> return this;
>   }
> {code}
> Hive is doing full scans of the table with MapReduce/Spark and therefore, 
> according to the HBase docs, the default behavior here should be that blocks 
> are not cached.  Hive should set this value to "false" by default unless the 
> table {{SERDEPROPERTIES}} override this.
> {code:sql}
> -- Commands for HBase
> -- create 'test', 't'
> CREATE EXTERNAL TABLE test(value map<string,string>, row_key string) 
> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> WITH SERDEPROPERTIES (
> "hbase.columns.mapping" = "t:,:key",
> "hbase.scan.cacheblock" = "false"
> );
> {code}
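
Based on the snippet quoted above, the fix presumably amounts to defaulting the scan's block caching to false when the table property is absent; a sketch, not the exact patch:

{code:java}
// Default to not caching blocks for full-scan jobs; SERDEPROPERTIES can still
// override this via hbase.scan.cacheblock.
String scanCacheBlocks = jobConf.get(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS, "false");
scan.setCacheBlocks(Boolean.parseBoolean(scanCacheBlocks));
{code}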



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21188) SemanticException for query on view with masked table

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757496#comment-16757496
 ] 

Hive QA commented on HIVE-21188:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957018/HIVE-21188.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 15719 tests 
executed
*Failed tests:*
{noformat}
TestReplicationScenariosIncrementalLoadAcidTables - did not produce a 
TEST-*.xml file (likely timed out) (batchId=251)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_ppr] (batchId=31)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_1] (batchId=91)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_disablecbo_1] 
(batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppr_pushdown3] 
(batchId=30)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15864/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15864/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15864/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957018 - PreCommit-HIVE-Build

> SemanticException for query on view with masked table
> -
>
> Key: HIVE-21188
> URL: https://issues.apache.org/jira/browse/HIVE-21188
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21188.01.patch, HIVE-21188.patch
>
>
> This occurs when the view reference is fully qualified. The following q file 
> can be used to reproduce the issue:
> {code}
> --! qt:dataset:srcpart
> --! qt:dataset:src
> set hive.mapred.mode=nonstrict;
> set 
> hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest;
> create database atlasmask;
> use atlasmask;
> create table masking_test_n8 (key int, value int);
> insert into masking_test_n8 values(1,1), (2,2);
> create view testv(c,d) as select * from masking_test_n8;
> select * from `atlasmask`.`testv`;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21029) External table replication for existing deployments running incremental replication.

2019-01-31 Thread mahesh kumar behera (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757497#comment-16757497
 ] 

mahesh kumar behera commented on HIVE-21029:


04.patch looks fine to me 

+1

> External table replication for existing deployments running incremental 
> replication.
> 
>
> Key: HIVE-21029
> URL: https://issues.apache.org/jira/browse/HIVE-21029
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 3.0.0, 3.1.0, 3.1.1
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21029.01.patch, HIVE-21029.02.patch, 
> HIVE-21029.03.patch, HIVE-21029.04.patch
>
>
> Existing deployments using hive replication do not get external tables 
> replicated. For such deployments to enable external table replication they 
> will have to provide a specific switch to first bootstrap external tables as 
> part of hive incremental replication, following which the incremental 
> replication will take care of further changes in external tables.
> The switch will be provided by an additional hive configuration (for ex: 
> hive.repl.bootstrap.external.tables) and is to be used in 
> {code} WITH {code}  clause of 
> {code} REPL DUMP {code} command. 
> Additionally the existing hive config _hive.repl.include.external.tables_  
> will always have to be set to "true" in the above clause. 
> Proposed usage for enabling external tables replication on existing 
> replication policy.
> 1. Consider an ongoing repl policy  in incremental phase.
> Enable hive.repl.include.external.tables=true and 
> hive.repl.bootstrap.external.tables=true in next incremental REPL DUMP.
> - Dumps all events but skips events related to external tables.
> - Instead, combine bootstrap dump for all external tables under “_bootstrap” 
> directory.
> - Also, includes the data locations file "_external_tables_info”.
> - LIMIT or TO clause shouldn’t be there to ensure the latest events are 
> dumped before bootstrap dumping external tables.
> 2. REPL LOAD on this dump applies all the events first, copies external 
> tables data and then bootstrap external tables (metadata).
> - It is possible that the external tables (metadata) are not point-in time 
> consistent with rest of the tables.
> - But, it would be eventually consistent when the next incremental load is 
> applied.
> - This REPL LOAD is fault tolerant and can be retried if failed.
> 3. All future REPL DUMPs on this repl policy should set 
> hive.repl.bootstrap.external.tables=false.
> - If not set to false, then target might end up having inconsistent set of 
> external tables as bootstrap wouldn’t clean-up any dropped external tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21190) hive thrift server may be blocked by session level waiting,caused by udf!

2019-01-31 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned HIVE-21190:
-

Assignee: (was: Josh Elser)

> hive thrift server may be blocked by session level waiting,caused by udf!
> -
>
> Key: HIVE-21190
> URL: https://issues.apache.org/jira/browse/HIVE-21190
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
> Environment: hdp ambari 26
> hive1.2.0
>Reporter: kongxianghe
>Priority: Critical
> Attachments: session_1.jpg, session_2.jpg
>
>
> # Caused by an erroneous UDF function: time_waiting(Long sleepSeconds)
> {code}
> public class UDFTimeWaiting extends UDF {
>   public String evaluate(String sleepSeconds) throws Exception {
>     ...
>     Thread.sleep(Long.parseLong(sleepSeconds) * 1000);
>     return "ok";
>   }
> }
> {code}
> 
> # in session_1:
> {code}select time_waiting(100);{code}
> # in session_2:
> {code}select 1;  or show tables;{code}
> # session_2 will not get any response from the thrift server until session_1 
> has finished waiting 100 seconds!
> This bug can put HiveServer2 into an unavailable state!
> 
> # session_1 runs, waiting 200s:
>  !session_1.jpg! 
> # session_2 runs at the same time but is blocked by session_1; see the 
> picture: it waited 197s and only returned after session_1 had returned.
>  !session_2.jpg! 
> 
> # If someone wants to attack or abuse this, HiveServer2 will not go down, but 
> it will no longer be available!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21194) DruidStorageHandler should set a version of segment to UTC

2019-01-31 Thread slim bouguerra (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757448#comment-16757448
 ] 

slim bouguerra commented on HIVE-21194:
---

[~seunghyun.cheong] can you please explain in more detail how you are running into this issue?

Are you creating the data segments using Hive and removing them using a Druid 
JSON kill task?

Have you actually tested the patch? Does it solve your issue?

> DruidStorageHandler should set a version of segment to UTC
> --
>
> Key: HIVE-21194
> URL: https://issues.apache.org/jira/browse/HIVE-21194
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Seung-Hyun Cheong
>Assignee: Seung-Hyun Cheong
>Priority: Minor
> Attachments: HIVE-21194.patch
>
>
> h1. Exception while running a KillTask
> {code:java}
> 2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
> io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
> task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
>  type=kill, dataSource=upload}]
> io.druid.java.util.common.ISE: WTF?! Unused 
> segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
>  has version[2019-01-31T01:12:32.289+09:00] > task 
> version[2019-01-30T16:58:29.992Z]
>   at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
> ~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
>  [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
>  [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [?:1.8.0_112]
>   at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
> {code}
>  
> h1. Reason
> h3. KillTask compares versions
> [KillTask.java#L88|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]
> {code:java}
> if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
>   throw new ISE(
>   "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
>   unusedSegment.getId(),
>   unusedSegment.getVersion(),
>   myLock.getVersion()
>   );
> }
> {code}
>   
> h3. KillTask version (UTC, e.g. "2019-01-30T16:58:29.992Z")
> [TaskLockbox.java#L593|https://github.com/apache/incubator-druid/blob/8eae26fd4e7572060d112864dd3d5f6a865b9c89/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java#L593]
> {code:java}
> version = DateTimes.nowUtc().toString();
> {code}
>   
> h3. Segment version (UTC+9, e.g. "2019-01-31T01:12:32.289+09:00")
> [DruidStorageHandler.java#L755|https://github.com/apache/hive/blob/master/druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandler.java#L755]
> {code:java}
> jobProperties.put(DruidConstants.DRUID_SEGMENT_VERSION, new 
> DateTime().toString());
> {code}
>  
>  
> h1. Suggestion
> h3. Because Druid uses UTC only, DruidStorageHandler should set the segment 
> version in UTC.
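
To make the failure mode concrete: the two versions are compared 
lexicographically as strings, so a local-zone timestamp can sort after a UTC 
one even when it denotes an earlier instant. A small illustration in plain 
Java, using the values from the stack trace above:

{code:java}
public class VersionCompareDemo {
  public static void main(String[] args) {
    // As instants, the segment version (2019-01-31T01:12+09:00, i.e.
    // 2019-01-30T16:12Z) is EARLIER than the task version (16:58Z); as
    // strings it compares GREATER, which trips KillTask's sanity check.
    String segmentVersion = "2019-01-31T01:12:32.289+09:00"; // local zone
    String taskVersion    = "2019-01-30T16:58:29.992Z";      // UTC
    System.out.println(segmentVersion.compareTo(taskVersion) > 0); // true
  }
}
{code}

The attached patch presumably pins the Hive-side version to UTC, e.g. with 
Joda-Time's {{new DateTime(DateTimeZone.UTC).toString()}}, mirroring Druid's 
own {{DateTimes.nowUtc()}}.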



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21188) SemanticException for query on view with masked table

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757401#comment-16757401
 ] 

Hive QA commented on HIVE-21188:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
44s{color} | {color:blue} ql in master has 2304 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15864/dev-support/hive-personality.sh
 |
| git revision | master / 4271bbf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15864/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> SemanticException for query on view with masked table
> -
>
> Key: HIVE-21188
> URL: https://issues.apache.org/jira/browse/HIVE-21188
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21188.01.patch, HIVE-21188.patch
>
>
> The issue occurs when the view reference is fully qualified. The following q 
> file can be used to reproduce it:
> {code}
> --! qt:dataset:srcpart
> --! qt:dataset:src
> set hive.mapred.mode=nonstrict;
> set 
> hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdHiveAuthorizerFactoryForTest;
> create database atlasmask;
> use atlasmask;
> create table masking_test_n8 (key int, value int);
> insert into masking_test_n8 values(1,1), (2,2);
> create view testv(c,d) as select * from masking_test_n8;
> select * from `atlasmask`.`testv`;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21194) DruidStorageHandler should set a version of segment to UTC

2019-01-31 Thread Seung-Hyun Cheong (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seung-Hyun Cheong updated HIVE-21194:
-
Description: 
h1. Exception while running a KillTask
{code:java}
2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
 type=kill, dataSource=upload}]
io.druid.java.util.common.ISE: WTF?! Unused 
segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
 has version[2019-01-31T01:12:32.289+09:00] > task 
version[2019-01-30T16:58:29.992Z]
at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
{code}
 
h1. Reason
h3. KillTask compares versions

[KillTask.java#L88|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]
{code:java}
if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
  throw new ISE(
  "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
  unusedSegment.getId(),
  unusedSegment.getVersion(),
  myLock.getVersion()
  );
}
{code}
  
h3. KillTask version (UTC, e.g. "2019-01-30T16:58:29.992Z")

[TaskLockbox.java#L593|https://github.com/apache/incubator-druid/blob/8eae26fd4e7572060d112864dd3d5f6a865b9c89/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java#L593]
{code:java}
version = DateTimes.nowUtc().toString();
{code}
  
h3. Segment version (UTC+9, e.g. "2019-01-31T01:12:32.289+09:00")

[DruidStorageHandler.java#L755|https://github.com/apache/hive/blob/master/druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandler.java#L755]
{code:java}
jobProperties.put(DruidConstants.DRUID_SEGMENT_VERSION, new 
DateTime().toString());
{code}
 

 
h1. Suggestion
h3. Because Druid uses UTC only, DruidStorageHandler should set the segment 
version in UTC.

  was:
h1. Exception while running a KillTask
{code:java}
2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
 type=kill, dataSource=upload}]
io.druid.java.util.common.ISE: WTF?! Unused 
segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
 has version[2019-01-31T01:12:32.289+09:00] > task 
version[2019-01-30T16:58:29.992Z]
at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
{code}
 
h1. Reason
h3. KillTask compares versions

[KillTask.java#L88|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]
{code:java}
if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
  throw new ISE(
  "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
  unusedSegment.getId(),
  unusedSegment.getVersion(),
  myLock.getVersion()
  );
}
{code}
  
h3. KillTask version (UTC, e.g. "2019-01-30T16:58:29.992Z")


[jira] [Updated] (HIVE-21194) DruidStorageHandler should set a version of segment to UTC

2019-01-31 Thread Seung-Hyun Cheong (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seung-Hyun Cheong updated HIVE-21194:
-
Description: 
h1. Exception while running a KillTask
{code:java}
2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
 type=kill, dataSource=upload}]
io.druid.java.util.common.ISE: WTF?! Unused 
segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
 has version[2019-01-31T01:12:32.289+09:00] > task 
version[2019-01-30T16:58:29.992Z]
at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
{code}
 
h1. Reason
h3. KillTask compares versions

[KillTask.java#L88|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]
{code:java}
if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
  throw new ISE(
  "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
  unusedSegment.getId(),
  unusedSegment.getVersion(),
  myLock.getVersion()
  );
}
{code}
  
h3. KillTask version (UTC, e.g. "2019-01-30T16:58:29.992Z")

[TaskLockbox.java#L593|https://github.com/apache/incubator-druid/blob/8eae26fd4e7572060d112864dd3d5f6a865b9c89/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java#L593]
{code:java}
version = DateTimes.nowUtc().toString();
{code}
  
h3. Segment version (UTC+9, e.g. "2019-01-31T01:12:32.289+09:00")

[DruidStorageHandler.java#L755|https://github.com/apache/hive/blob/master/druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandler.java#L755]
{code:java}
jobProperties.put(DruidConstants.DRUID_SEGMENT_VERSION, new 
DateTime().toString());
{code}
 

 
h1. Suggestion
h3. Because Druid uses UTC only for now, DruidStorageHandler should set the 
segment version in UTC.

  was:
h1. Exception while running a KillTask
{code:java}
2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
 type=kill, dataSource=upload}]
io.druid.java.util.common.ISE: WTF?! Unused 
segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
 has version[2019-01-31T01:12:32.289+09:00] > task 
version[2019-01-30T16:58:29.992Z]
at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
{code}
 
h1. Reason
h3. KillTask compares versions 
[KillTask.java#L88|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]
{code:java}
if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
  throw new ISE(
  "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
  unusedSegment.getId(),
  unusedSegment.getVersion(),
  myLock.getVersion()
  );
}
{code}
  
h3. KillTask version (UTC, e.g. "2019-01-30T16:58:29.992Z") 

[jira] [Updated] (HIVE-21194) DruidStorageHandler should set a version of segment to UTC

2019-01-31 Thread Seung-Hyun Cheong (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seung-Hyun Cheong updated HIVE-21194:
-
Description: 
h1. Exception while running a KillTask
{code:java}
2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
 type=kill, dataSource=upload}]
io.druid.java.util.common.ISE: WTF?! Unused 
segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
 has version[2019-01-31T01:12:32.289+09:00] > task 
version[2019-01-30T16:58:29.992Z]
at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
{code}
 
h1. Reason
h3. KillTask compares versions 
[KillTask.java#L88|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]
{code:java}
if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
  throw new ISE(
  "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
  unusedSegment.getId(),
  unusedSegment.getVersion(),
  myLock.getVersion()
  );
}
{code}
  
h3. KillTask version (UTC, e.g. "2019-01-30T16:58:29.992Z") 
[TaskLockbox.java#L593|https://github.com/apache/incubator-druid/blob/8eae26fd4e7572060d112864dd3d5f6a865b9c89/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java#L593]
{code:java}
version = DateTimes.nowUtc().toString();
{code}
  
h3. Segment version (UTC+9, e.g. "2019-01-31T01:12:32.289+09:00") 
[DruidStorageHandler.java#L755|https://github.com/apache/hive/blob/master/druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandler.java#L755]
{code:java}
jobProperties.put(DruidConstants.DRUID_SEGMENT_VERSION, new 
DateTime().toString());
{code}
 

 
h1. Suggestion
h3. Because Druid uses UTC only for now, DruidStorageHandler should set the 
segment version in UTC.

  was:
h1. Exception while running a KillTask
{code:java}
2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
 type=kill, dataSource=upload}]
io.druid.java.util.common.ISE: WTF?! Unused 
segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
 has version[2019-01-31T01:12:32.289+09:00] > task 
version[2019-01-30T16:58:29.992Z]
at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
{code}
 
h1. Reason
h3. [KillTask compares 
versions|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]
{code:java}
if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
  throw new ISE(
  "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
  unusedSegment.getId(),
  unusedSegment.getVersion(),
  myLock.getVersion()
  );
}
{code}
  
h3. [KillTask version (UTC, e.g. 

[jira] [Updated] (HIVE-21194) DruidStorageHandler should set a version of segment to UTC

2019-01-31 Thread Seung-Hyun Cheong (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seung-Hyun Cheong updated HIVE-21194:
-
Attachment: HIVE-21194.patch
Status: Patch Available  (was: In Progress)

> DruidStorageHandler should set a version of segment to UTC
> --
>
> Key: HIVE-21194
> URL: https://issues.apache.org/jira/browse/HIVE-21194
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Seung-Hyun Cheong
>Assignee: Seung-Hyun Cheong
>Priority: Minor
> Attachments: HIVE-21194.patch
>
>
> h1. Exception while running a KillTask
> {code:java}
> 2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
> io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
> task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
>  type=kill, dataSource=upload}]
> io.druid.java.util.common.ISE: WTF?! Unused 
> segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
>  has version[2019-01-31T01:12:32.289+09:00] > task 
> version[2019-01-30T16:58:29.992Z]
>   at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
> ~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
>  [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
>  [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [?:1.8.0_112]
>   at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
> {code}
>  
> h1. Reason
> h3. [KillTask compares 
> versions|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]
> {code:java}
> if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
>   throw new ISE(
>   "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
>   unusedSegment.getId(),
>   unusedSegment.getVersion(),
>   myLock.getVersion()
>   );
> }
> {code}
>   
> h3. [KillTask version (UTC, e.g. 
> "2019-01-30T16:58:29.992Z")|https://github.com/apache/incubator-druid/blob/8eae26fd4e7572060d112864dd3d5f6a865b9c89/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java#L593]
> {code:java}
> version = DateTimes.nowUtc().toString();
> {code}
>   
> h3. [Segment version (UTC+9, e.g. 
> "2019-01-31T01:12:32.289+09:00")|https://github.com/apache/hive/blob/master/druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandler.java#L755]
> {code:java}
> jobProperties.put(DruidConstants.DRUID_SEGMENT_VERSION, new 
> DateTime().toString());
> {code}
>  
>  
> h1. Suggestion
> h3. Because Druid uses UTC only for now, DruidStorageHandler should set the 
> segment version in UTC.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21029) External table replication for existing deployments running incremental replication.

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757359#comment-16757359
 ] 

Hive QA commented on HIVE-21029:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12957007/HIVE-21029.04.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15721 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15863/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15863/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15863/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12957007 - PreCommit-HIVE-Build

> External table replication for existing deployments running incremental 
> replication.
> 
>
> Key: HIVE-21029
> URL: https://issues.apache.org/jira/browse/HIVE-21029
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 3.0.0, 3.1.0, 3.1.1
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21029.01.patch, HIVE-21029.02.patch, 
> HIVE-21029.03.patch, HIVE-21029.04.patch
>
>
> Existing deployments using Hive replication do not get external tables 
> replicated. To enable external table replication, such deployments will have 
> to provide a specific switch that first bootstraps external tables as part of 
> Hive incremental replication; afterwards, incremental replication will take 
> care of further changes to external tables.
> The switch will be provided by an additional Hive configuration (for example, 
> hive.repl.bootstrap.external.tables) and is to be used in the {{WITH}} clause 
> of the {{REPL DUMP}} command. 
> Additionally, the existing Hive config _hive.repl.include.external.tables_ 
> will always have to be set to "true" in the above clause. 
> Proposed usage for enabling external table replication on an existing 
> replication policy (a sketch of the commands follows this list):
> 1. Consider an ongoing repl policy in its incremental phase.
> Enable hive.repl.include.external.tables=true and 
> hive.repl.bootstrap.external.tables=true in the next incremental REPL DUMP.
> - Dumps all events but skips events related to external tables.
> - Instead, it combines a bootstrap dump of all external tables under the 
> "_bootstrap" directory.
> - It also includes the data-locations file "_external_tables_info".
> - No LIMIT or TO clause should be given, to ensure the latest events are 
> dumped before the external tables are bootstrap-dumped.
> 2. REPL LOAD on this dump applies all the events first, copies external 
> table data, and then bootstraps the external tables (metadata).
> - It is possible that the external tables (metadata) are not point-in-time 
> consistent with the rest of the tables.
> - But they would become eventually consistent once the next incremental load 
> is applied.
> - This REPL LOAD is fault-tolerant and can be retried if it fails.
> 3. All future REPL DUMPs on this repl policy should set 
> hive.repl.bootstrap.external.tables=false.
> - If it is not set to false, the target might end up with an inconsistent set 
> of external tables, as bootstrap wouldn't clean up any dropped external 
> tables.
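
To make the procedure concrete, here is a sketch of the proposed commands 
(property names come from the description above; the database name and the 
FROM event ids are placeholders, and the exact WITH-clause syntax may vary by 
version):

{code}
-- Next incremental dump on the existing policy, now also bootstrapping
-- external tables (500 stands in for the last replicated event id):
REPL DUMP src_db FROM 500 WITH (
  'hive.repl.include.external.tables'='true',
  'hive.repl.bootstrap.external.tables'='true'
);

-- All later incremental dumps turn the bootstrap switch back off:
REPL DUMP src_db FROM 1000 WITH (
  'hive.repl.include.external.tables'='true',
  'hive.repl.bootstrap.external.tables'='false'
);
{code}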



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HIVE-21194) DruidStorageHandler should set a version of segment to UTC

2019-01-31 Thread Seung-Hyun Cheong (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-21194 started by Seung-Hyun Cheong.

> DruidStorageHandler should set a version of segment to UTC
> --
>
> Key: HIVE-21194
> URL: https://issues.apache.org/jira/browse/HIVE-21194
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Seung-Hyun Cheong
>Assignee: Seung-Hyun Cheong
>Priority: Minor
>
> h1. Exception while running a KillTask
> {code:java}
> 2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
> io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
> task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
>  type=kill, dataSource=upload}]
> io.druid.java.util.common.ISE: WTF?! Unused 
> segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
>  has version[2019-01-31T01:12:32.289+09:00] > task 
> version[2019-01-30T16:58:29.992Z]
>   at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
> ~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
>  [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
>  [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [?:1.8.0_112]
>   at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
> {code}
>  
> h1. Reason
> h3. [KillTask compares 
> versions|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]
> {code:java}
> if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
>   throw new ISE(
>   "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
>   unusedSegment.getId(),
>   unusedSegment.getVersion(),
>   myLock.getVersion()
>   );
> }
> {code}
>   
> h3. [KillTask version (UTC, e.g. 
> "2019-01-30T16:58:29.992Z")|https://github.com/apache/incubator-druid/blob/8eae26fd4e7572060d112864dd3d5f6a865b9c89/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java#L593]
> {code:java}
> version = DateTimes.nowUtc().toString();
> {code}
>   
> h3. [Segment version (UTC+9, e.g. 
> "2019-01-31T01:12:32.289+09:00")|https://github.com/apache/hive/blob/master/druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandler.java#L755]
> {code:java}
> jobProperties.put(DruidConstants.DRUID_SEGMENT_VERSION, new 
> DateTime().toString());
> {code}
>  
>  
> h1. Suggestion
> h3. Because Druid uses UTC only for now, DruidStorageHandler should set the 
> segment version in UTC.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21194) DruidStorageHandler should set a version of segment to UTC

2019-01-31 Thread Seung-Hyun Cheong (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seung-Hyun Cheong updated HIVE-21194:
-
Description: 
h1. Exception while running a KillTask
{code:java}
2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
 type=kill, dataSource=upload}]
io.druid.java.util.common.ISE: WTF?! Unused 
segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
 has version[2019-01-31T01:12:32.289+09:00] > task 
version[2019-01-30T16:58:29.992Z]
at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
{code}
 
h1. Reason
h3. [KillTask compares 
versions|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]
{code:java}
if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
  throw new ISE(
  "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
  unusedSegment.getId(),
  unusedSegment.getVersion(),
  myLock.getVersion()
  );
}
{code}
 

 
h3. [KillTask version (UTC, e.g. 
"2019-01-30T16:58:29.992Z")|https://github.com/apache/incubator-druid/blob/8eae26fd4e7572060d112864dd3d5f6a865b9c89/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java#L593]
{code:java}
version = DateTimes.nowUtc().toString();
{code}
 

 
h3. [Segment version (UTC+9, e.g. 
"2019-01-31T01:12:32.289+09:00")|https://github.com/apache/hive/blob/master/druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandler.java#L755]
{code:java}
jobProperties.put(DruidConstants.DRUID_SEGMENT_VERSION, new 
DateTime().toString());
{code}
 

 
h1. Suggestion
h3. Because Druid uses UTC only for now, DruidStorageHandler should set the 
segment version in UTC.

  was:
h1. Exception while running a KillTask
{code:java}
2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
 type=kill, dataSource=upload}]
io.druid.java.util.common.ISE: WTF?! Unused 
segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
 has version[2019-01-31T01:12:32.289+09:00] > task 
version[2019-01-30T16:58:29.992Z]
at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
{code}
 
h1. Reason
h3. [KillTask compares 
versions|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]
{code:java}
if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
  throw new ISE(
  "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
  unusedSegment.getId(),
  unusedSegment.getVersion(),
  myLock.getVersion()
  );
}
{code}
 

 
h3. [KillTask version (UTC, 
"{{2019-01-30T16:58:29.992Z}}{{")}}|https://github.com/apache/incubator-druid/blob/8eae26fd4e7572060d112864dd3d5f6a865b9c89/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java#L593]
{code:java}
version = 

[jira] [Updated] (HIVE-21194) DruidStorageHandler should set a version of segment to UTC

2019-01-31 Thread Seung-Hyun Cheong (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seung-Hyun Cheong updated HIVE-21194:
-
Description: 
h1. Exception while running a KillTask
{code:java}
2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
 type=kill, dataSource=upload}]
io.druid.java.util.common.ISE: WTF?! Unused 
segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
 has version[2019-01-31T01:12:32.289+09:00] > task 
version[2019-01-30T16:58:29.992Z]
at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
{code}
 
h1. Reason
h3. [KillTask compares 
versions|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]
{code:java}
if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
  throw new ISE(
  "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
  unusedSegment.getId(),
  unusedSegment.getVersion(),
  myLock.getVersion()
  );
}
{code}
  
h3. [KillTask version (UTC, e.g. 
"2019-01-30T16:58:29.992Z")|https://github.com/apache/incubator-druid/blob/8eae26fd4e7572060d112864dd3d5f6a865b9c89/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java#L593]
{code:java}
version = DateTimes.nowUtc().toString();
{code}
  
h3. [Segment version (UTC+9, e.g. 
"2019-01-31T01:12:32.289+09:00")|https://github.com/apache/hive/blob/master/druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandler.java#L755]
{code:java}
jobProperties.put(DruidConstants.DRUID_SEGMENT_VERSION, new 
DateTime().toString());
{code}
 

 
h1. Suggestion
h3. Because Druid uses UTC only for now, DruidStorageHandler should set the 
segment version in UTC.

  was:
h1. Exception while running a KillTask
{code:java}
2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
 type=kill, dataSource=upload}]
io.druid.java.util.common.ISE: WTF?! Unused 
segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
 has version[2019-01-31T01:12:32.289+09:00] > task 
version[2019-01-30T16:58:29.992Z]
at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
{code}
 
h1. Reason
h3. [KillTask compares 
versions|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]
{code:java}
if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
  throw new ISE(
  "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
  unusedSegment.getId(),
  unusedSegment.getVersion(),
  myLock.getVersion()
  );
}
{code}
 

 
h3. [KillTask version (UTC, e.g. 
"2019-01-30T16:58:29.992Z")|https://github.com/apache/incubator-druid/blob/8eae26fd4e7572060d112864dd3d5f6a865b9c89/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java#L593]
{code:java}
version = 

[jira] [Updated] (HIVE-21194) DruidStorageHandler should set a version of segment to UTC

2019-01-31 Thread Seung-Hyun Cheong (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seung-Hyun Cheong updated HIVE-21194:
-
Description: 
h1. Exception while running a KillTask
{code:java}
2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
 type=kill, dataSource=upload}]
io.druid.java.util.common.ISE: WTF?! Unused 
segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
 has version[2019-01-31T01:12:32.289+09:00] > task 
version[2019-01-30T16:58:29.992Z]
at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
{code}
 
h1. Reason
h3. [KillTask compares 
versions|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]
{code:java}
if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
  throw new ISE(
  "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
  unusedSegment.getId(),
  unusedSegment.getVersion(),
  myLock.getVersion()
  );
}
{code}
 

 
h3. [KillTask version (UTC, 
"{{2019-01-30T16:58:29.992Z}}{{")}}|https://github.com/apache/incubator-druid/blob/8eae26fd4e7572060d112864dd3d5f6a865b9c89/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java#L593]
{code:java}
version = DateTimes.nowUtc().toString();
{code}
 

 
h3. [Segment version (UTC+9, 
"2019-01-31T01:12:32.289+09:00")|https://github.com/apache/hive/blob/master/druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandler.java#L755]
{code:java}
jobProperties.put(DruidConstants.DRUID_SEGMENT_VERSION, new 
DateTime().toString());
{code}
 

 
h1. Suggestion
h3. Because Druid uses UTC only for now, DruidStorageHandler should set the 
segment version in UTC.

  was:
h1. Exception while running a KillTask
{code:java}
2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
 type=kill, dataSource=upload}]
io.druid.java.util.common.ISE: WTF?! Unused 
segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
 has version[2019-01-31T01:12:32.289+09:00] > task 
version[2019-01-30T16:58:29.992Z]
at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
{code}
 
h1. AS-IS
h3. [KillTask compares 
versions|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]
{code:java}
if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
  throw new ISE(
  "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
  unusedSegment.getId(),
  unusedSegment.getVersion(),
  myLock.getVersion()
  );
}
{code}
 

 
h3. [KillTask version (UTC, 
"{{2019-01-30T16:58:29.992Z}}{{")}}|https://github.com/apache/incubator-druid/blob/8eae26fd4e7572060d112864dd3d5f6a865b9c89/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java#L593]
{code:java}
version = 

[jira] [Updated] (HIVE-21193) Support LZO Compression with CombineHiveInputFormat

2019-01-31 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-21193:
---
Status: Patch Available  (was: Open)

> Support LZO Compression with CombineHiveInputFormat
> ---
>
> Key: HIVE-21193
> URL: https://issues.apache.org/jira/browse/HIVE-21193
> Project: Hive
>  Issue Type: Improvement
>  Components: Compression
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HIVE-21193.1.patch
>
>
> Regarding LZO compression with Hive...
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+LZO
> It does not work out of the box if there are {{.lzo.index}} files present.  
> As I understand it, this is because the default Hive input format, 
> {{CombineHiveInputFormat}}, does not handle this correctly.  It does not cope 
> with a mix of data files and index files: it lumps them all together when 
> making the combined splits, and Mappers fail when they try to process the 
> {{.lzo.index}} files as data.  When using the original {{HiveInputFormat}}, 
> the {{.lzo.index}} files are identified correctly because each file is 
> considered individually.
> Allow {{CombineHiveInputFormat}} to short-circuit LZO files and not combine 
> them.
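
A rough sketch of the short-circuit being proposed (a hypothetical helper, not 
the actual patch): when building combined splits, treat LZO data files as 
non-combinable and never treat their companion index files as data inputs.

{code:java}
import org.apache.hadoop.fs.Path;

// Hypothetical helper, not the actual patch: decide per file whether it may
// join a combined split. LZO data files are routed to the non-combining
// (HiveInputFormat-style) path, and .lzo.index files are never read as data.
public final class LzoCombineFilter {
  private LzoCombineFilter() {}

  static boolean isLzoIndex(Path p) { return p.getName().endsWith(".lzo.index"); }
  static boolean isLzoData(Path p)  { return p.getName().endsWith(".lzo"); }

  /** True only for ordinary files that are safe to lump into combined splits. */
  static boolean combinable(Path p) {
    return !isLzoData(p) && !isLzoIndex(p);
  }
}
{code}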



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21194) DruidStorageHandler should set a version of segment to UTC

2019-01-31 Thread Seung-Hyun Cheong (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seung-Hyun Cheong updated HIVE-21194:
-
Description: 
h1. Exception while running a KillTask
{code:java}
2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
 type=kill, dataSource=upload}]
io.druid.java.util.common.ISE: WTF?! Unused 
segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
 has version[2019-01-31T01:12:32.289+09:00] > task 
version[2019-01-30T16:58:29.992Z]
at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
{code}
 
h1. AS-IS
h3. [KillTask compares 
versions|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]
{code:java}
if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
  throw new ISE(
  "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
  unusedSegment.getId(),
  unusedSegment.getVersion(),
  myLock.getVersion()
  );
}
{code}
 

 
h3. [KillTask version (UTC, 
"{{2019-01-30T16:58:29.992Z}}{{")}}|https://github.com/apache/incubator-druid/blob/8eae26fd4e7572060d112864dd3d5f6a865b9c89/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java#L593]
{code:java}
version = DateTimes.nowUtc().toString();
{code}
 

 
h3. [Segment version (UTC+9, 
"2019-01-31T01:12:32.289+09:00")|https://github.com/apache/hive/blob/master/druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandler.java#L755]
{code:java}
jobProperties.put(DruidConstants.DRUID_SEGMENT_VERSION, new 
DateTime().toString());
{code}
 

 
h1. TO-BE

Because Druid uses UTC only for now, DruidStorageHandler should set the 
segment version in UTC.

  was:
h1. Exception while running a KillTask
{code:java}
2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
 type=kill, dataSource=upload}]
io.druid.java.util.common.ISE: WTF?! Unused 
segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
 has version[2019-01-31T01:12:32.289+09:00] > task 
version[2019-01-30T16:58:29.992Z]
at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at 
io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
 [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_112]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
{code}
 
h1. AS-IS
h3. [KillTask compares 
versions|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]

 
{code:java}
if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
  throw new ISE(
  "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
  unusedSegment.getId(),
  unusedSegment.getVersion(),
  myLock.getVersion()
  );
}
{code}
 

 
h3. [KillTask version (UTC, 
"{{2019-01-30T16:58:29.992Z}}{{")}}|https://github.com/apache/incubator-druid/blob/8eae26fd4e7572060d112864dd3d5f6a865b9c89/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java#L593]

 
{code:java}
version = 

[jira] [Assigned] (HIVE-21194) DruidStorageHandler should set a version of segment to UTC

2019-01-31 Thread Seung-Hyun Cheong (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seung-Hyun Cheong reassigned HIVE-21194:



> DruidStorageHandler should set a version of segment to UTC
> --
>
> Key: HIVE-21194
> URL: https://issues.apache.org/jira/browse/HIVE-21194
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Seung-Hyun Cheong
>Assignee: Seung-Hyun Cheong
>Priority: Minor
>
> h1. Exception while running a KillTask
> {code:java}
> 2019-01-30T16:58:35,354 ERROR [task-runner-0-priority-0] 
> io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running 
> task[KillTask{id=kill_upload_2018-12-31T00:00:00.000Z_2019-02-05T00:00:00.000Z_2019-02-01T16:52:31.851Z,
>  type=kill, dataSource=upload}]
> io.druid.java.util.common.ISE: WTF?! Unused 
> segment[upload_2019-01-01T00:00:00.000Z_2019-01-02T00:00:00.000Z_2019-01-31T01:12:32.289+09:00]
>  has version[2019-01-31T01:12:32.289+09:00] > task 
> version[2019-01-30T16:58:29.992Z]
>   at io.druid.indexing.common.task.KillTask.run(KillTask.java:94) 
> ~[druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444)
>  [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at 
> io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416)
>  [druid-indexing-service-0.12.1.3.1.0.0-78.jar:0.12.1.3.1.0.0-78]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> [?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [?:1.8.0_112]
>   at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
> {code}
>  
> h1. AS-IS
> h3. [KillTask compares 
> versions|https://github.com/apache/incubator-druid/blob/master/indexing-service/src/main/java/org/apache/druid/indexing/common/task/KillTask.java#L88]
>  
> {code:java}
> if (unusedSegment.getVersion().compareTo(myLock.getVersion()) > 0) {
>   throw new ISE(
>   "WTF?! Unused segment[%s] has version[%s] > task version[%s]",
>   unusedSegment.getId(),
>   unusedSegment.getVersion(),
>   myLock.getVersion()
>   );
> }
> {code}
>  
>  
> h3. [KillTask version (UTC, 
> "{{2019-01-30T16:58:29.992Z}}{{")}}|https://github.com/apache/incubator-druid/blob/8eae26fd4e7572060d112864dd3d5f6a865b9c89/indexing-service/src/main/java/org/apache/druid/indexing/overlord/TaskLockbox.java#L593]
>  
> {code:java}
> version = DateTimes.nowUtc().toString();
> {code}
>  
>  
> h3. [Segment version (UTC+9, 
> "2019-01-31T01:12:32.289+09:00")|https://github.com/apache/hive/blob/master/druid-handler/src/java/org/apache/hadoop/hive/druid/DruidStorageHandler.java#L755]
> {code:java}
> jobProperties.put(DruidConstants.DRUID_SEGMENT_VERSION, new 
> DateTime().toString());
> {code}
>  
>  
> h1. TO-BE
>  
> Because Druid uses UTC only for now, DruidStorageHandler should set the 
> segment version in UTC.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-15406) Consider vectorizing the new 'trunc' function

2019-01-31 Thread Laszlo Bodor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-15406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor reassigned HIVE-15406:
---

Assignee: Laszlo Bodor

> Consider vectorizing the new 'trunc' function
> -
>
> Key: HIVE-15406
> URL: https://issues.apache.org/jira/browse/HIVE-15406
> Project: Hive
>  Issue Type: Bug
>Reporter: Matt McCline
>Assignee: Laszlo Bodor
>Priority: Critical
>
> Rounding function 'trunc' added by HIVE-14582.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21045) Add HMS total api count stats and connection pool stats to metrics

2019-01-31 Thread Yongzhi Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongzhi Chen updated HIVE-21045:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Merged into branch-3. 

> Add HMS total api count stats and connection pool stats to metrics
> --
>
> Key: HIVE-21045
> URL: https://issues.apache.org/jira/browse/HIVE-21045
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Reporter: Karthik Manamcheri
>Assignee: Karthik Manamcheri
>Priority: Minor
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-21045.1.patch, HIVE-21045.2.branch-3.patch, 
> HIVE-21045.2.patch, HIVE-21045.3.patch, HIVE-21045.4.patch, 
> HIVE-21045.5.patch, HIVE-21045.6.patch, HIVE-21045.7.patch, 
> HIVE-21045.branch-3.patch
>
>
> There are two key metrics which I think we lack and which would really help 
> with scaling visibility in HMS.
> *Total API call duration stats*
> We already compute and log the duration of API calls in the {{PerfLogger}}, 
> but we have no gauge or timer for the average duration of an API call over 
> some recent window of time. Such a metric would give us insight into whether 
> load on the server is increasing the average API response time.
>  
> *Connection Pool stats*
> We can use different connection pooling libraries, such as BoneCP or HikariCP. 
> These pool managers expose statistics such as the average time spent waiting 
> for a connection, the number of active connections, etc. We should expose 
> these as metrics so that we can track whether the configured connection pool 
> size is too small and the pool is saturating.
> These metrics would help catch HMS resource contention problems before they 
> actually cause job failures.
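> 
> For illustration only, a minimal sketch of an API-call timer using the 
> Dropwizard Metrics library (class and metric names here are hypothetical, 
> not the actual patch):
> {code:java}
> import com.codahale.metrics.MetricRegistry;
> import com.codahale.metrics.Timer;
> 
> class HmsApiMetrics {
>   private final MetricRegistry registry = new MetricRegistry();
>   // Hypothetical metric name; the actual patch may choose differently.
>   private final Timer apiTimer = registry.timer("hms.api_call_duration");
> 
>   void timedCall(Runnable apiCall) {
>     // The call duration is recorded when the context is closed.
>     try (Timer.Context ignored = apiTimer.time()) {
>       apiCall.run();
>     }
>   }
> }
> {code}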



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21193) Support LZO Compression with CombineHiveInputFormat

2019-01-31 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-21193:
---
Attachment: HIVE-21193.1.patch

> Support LZO Compression with CombineHiveInputFormat
> ---
>
> Key: HIVE-21193
> URL: https://issues.apache.org/jira/browse/HIVE-21193
> Project: Hive
>  Issue Type: Improvement
>  Components: Compression
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HIVE-21193.1.patch
>
>
> In regards to LZO compression with Hive...
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+LZO
> It does not work out of the box if there are {{.lzo.index}} files present.  
> As I understand it, this is because the default Hive input format, 
> {{CombineHiveInputFormat}}, does not handle this correctly.  It does not 
> distinguish between data files and index files: it lumps them all together 
> when making the combined splits, and Mappers fail when they try to process 
> the {{.lzo.index}} files as data.  The original {{HiveInputFormat}} handles 
> the {{.lzo.index}} files correctly because it considers each file 
> individually.
> Allow {{CombineHiveInputFormat}} to short-circuit LZO files and not combine 
> them.
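> 
> A hypothetical sketch of the check involved (helper name is illustrative, 
> not the actual patch):
> {code:java}
> import org.apache.hadoop.fs.Path;
> 
> // Short-circuit: paths matching these suffixes would bypass split combining
> // entirely and be handled per-file, as HiveInputFormat already does.
> static boolean isLzoRelated(Path file) {
>   String name = file.getName();
>   return name.endsWith(".lzo") || name.endsWith(".lzo.index");
> }
> {code}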



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21193) Support LZO Compression with CombineHiveInputFormat

2019-01-31 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR reassigned HIVE-21193:
--

Assignee: BELUGA BEHR

> Support LZO Compression with CombineHiveInputFormat
> ---
>
> Key: HIVE-21193
> URL: https://issues.apache.org/jira/browse/HIVE-21193
> Project: Hive
>  Issue Type: Improvement
>  Components: Compression
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
>
> In regards to LZO compression with Hive...
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+LZO
> It does not work out of the box if there are {{.lzo.index}} files present.  
> As I understand it, this is because the default Hive input format, 
> {{CombineHiveInputFormat}}, does not handle this correctly.  It does not 
> distinguish between data files and index files: it lumps them all together 
> when making the combined splits, and Mappers fail when they try to process 
> the {{.lzo.index}} files as data.  The original {{HiveInputFormat}} handles 
> the {{.lzo.index}} files correctly because it considers each file 
> individually.
> Allow {{CombineHiveInputFormat}} to short-circuit LZO files and not combine 
> them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21029) External table replication for existing deployments running incremental replication.

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757310#comment-16757310
 ] 

Hive QA commented on HIVE-21029:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
32s{color} | {color:blue} common in master has 65 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
47s{color} | {color:blue} ql in master has 2304 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
39s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} The patch common passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} ql: The patch generated 0 new + 266 unchanged - 2 
fixed = 266 total (was 268) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} The patch hive-unit passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
55s{color} | {color:red} ql generated 1 new + 2303 unchanged - 1 fixed = 2304 
total (was 2304) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Write to static field 
org.apache.hadoop.hive.ql.exec.repl.incremental.IncrementalLoadTasksBuilder.numIteration
 from instance method 
org.apache.hadoop.hive.ql.exec.repl.incremental.IncrementalLoadTasksBuilder.build(DriverContext,
 Hive, Logger, TaskTracker)  At IncrementalLoadTasksBuilder.java:from instance 
method 
org.apache.hadoop.hive.ql.exec.repl.incremental.IncrementalLoadTasksBuilder.build(DriverContext,
 Hive, Logger, TaskTracker)  At IncrementalLoadTasksBuilder.java:[line 99] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15863/dev-support/hive-personality.sh
 |
| git revision | master / 4271bbf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15863/yetus/new-findbugs-ql.html
 |
| modules | C: common ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15863/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (HIVE-20484) Disable Block Cache By Default With HBase SerDe

2019-01-31 Thread Naveen Gangam (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757307#comment-16757307
 ] 

Naveen Gangam commented on HIVE-20484:
--

Thanks for the new patch [~belugabehr]. Looks good to me. I will commit it 
today, as it has already been reviewed and +1'ed in the past, and the fix 
hasn't changed much since.

> Disable Block Cache By Default With HBase SerDe
> ---
>
> Key: HIVE-20484
> URL: https://issues.apache.org/jira/browse/HIVE-20484
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler
>Affects Versions: 1.2.3, 2.4.0, 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HIVE-20484.1.patch, HIVE-20484.2.patch, 
> HIVE-20484.3.patch, HIVE-20484.4.patch, HIVE-20484.5.patch
>
>
> {quote}
> Scan instances can be set to use the block cache in the RegionServer via the 
> setCacheBlocks method. For input Scans to MapReduce jobs, this should be 
> false. 
> https://hbase.apache.org/book.html#perf.hbase.client.blockcache
> {quote}
> However, from the Hive code, we can see that this is not the case.
> {code}
> public static final String HBASE_SCAN_CACHEBLOCKS = "hbase.scan.cacheblock";
> ...
> String scanCacheBlocks = 
> tableProperties.getProperty(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS);
> if (scanCacheBlocks != null) {
>   jobProperties.put(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS, scanCacheBlocks);
> }
> ...
> String scanCacheBlocks = jobConf.get(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS);
> if (scanCacheBlocks != null) {
>   scan.setCacheBlocks(Boolean.parseBoolean(scanCacheBlocks));
> }
> {code}
> In the Hive code, we can see that if {{hbase.scan.cacheblock}} is not 
> specified in the {{SERDEPROPERTIES}} then {{setCacheBlocks}} is not called 
> and the default value of the HBase {{Scan}} class is used.
> {code:java|title=Scan.java}
>   /**
>* Set whether blocks should be cached for this Scan.
>* 
>* This is true by default.  When true, default settings of the table and
>* family are used (this will never override caching blocks if the block
>* cache is disabled for that family or entirely).
>*
>* @param cacheBlocks if false, default settings are overridden and blocks
>* will not be cached
>*/
>   public Scan setCacheBlocks(boolean cacheBlocks) {
> this.cacheBlocks = cacheBlocks;
> return this;
>   }
> {code}
> Hive is doing full scans of the table with MapReduce/Spark and therefore, 
> according to the HBase docs, the default behavior here should be that blocks 
> are not cached.  Hive should set this value to "false" by default unless the 
> table {{SERDEPROPERTIES}} override this.
> {code:sql}
> -- Commands for HBase
> -- create 'test', 't'
> CREATE EXTERNAL TABLE test(value map<string,string>, row_key string) 
> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> WITH SERDEPROPERTIES (
> "hbase.columns.mapping" = "t:,:key",
> "hbase.scan.cacheblock" = "false"
> );
> {code}
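> 
> A minimal sketch of the proposed default, reusing the property constant and 
> {{Scan}} handling shown above (an illustration, not the committed fix):
> {code:java}
> // Treat a missing "hbase.scan.cacheblock" property as "false" instead of
> // falling through to the HBase Scan default of "true".
> String scanCacheBlocks = jobConf.get(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS, "false");
> scan.setCacheBlocks(Boolean.parseBoolean(scanCacheBlocks));
> {code}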



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20255) Review LevelOrderWalker.java

2019-01-31 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757270#comment-16757270
 ] 

BELUGA BEHR commented on HIVE-20255:


[~pvary] [~aihuaxu] [~kgyrtkirk] After 18 attempts, the unit tests finally 
passed! Please consider this patch for inclusion in the project.

> Review LevelOrderWalker.java
> 
>
> Key: HIVE-20255
> URL: https://issues.apache.org/jira/browse/HIVE-20255
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20255.10.patch, HIVE-20255.11.patch, 
> HIVE-20255.12.patch, HIVE-20255.13.patch, HIVE-20255.14.patch, 
> HIVE-20255.15.patch, HIVE-20255.16.patch, HIVE-20255.17.patch, 
> HIVE-20255.18.patch, HIVE-20255.9.patch
>
>
> https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/ql/src/java/org/apache/hadoop/hive/ql/lib/LevelOrderWalker.java
> * Make code more concise
> * Fix some check style issues
> {code}
>   if (toWalk.get(index).getChildren() != null) {
> for(Node child : toWalk.get(index).getChildren()) {
> {code}
> Actually, the underlying implementation of {{getChildren()}} has to do some 
> real work, so do not throw away the work after checking for null.  Simply 
> call once and store the results.
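> 
> A minimal sketch of that change (using the walker's {{Node}} type, as in 
> the snippet above):
> {code:java}
> // Evaluate getChildren() once and reuse the result.
> List<? extends Node> children = toWalk.get(index).getChildren();
> if (children != null) {
>   for (Node child : children) {
>     // ... visit child as before ...
>   }
> }
> {code}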



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20255) Review LevelOrderWalker.java

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757264#comment-16757264
 ] 

Hive QA commented on HIVE-20255:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12956998/HIVE-20255.18.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15720 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15862/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15862/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15862/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12956998 - PreCommit-HIVE-Build

> Review LevelOrderWalker.java
> 
>
> Key: HIVE-20255
> URL: https://issues.apache.org/jira/browse/HIVE-20255
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20255.10.patch, HIVE-20255.11.patch, 
> HIVE-20255.12.patch, HIVE-20255.13.patch, HIVE-20255.14.patch, 
> HIVE-20255.15.patch, HIVE-20255.16.patch, HIVE-20255.17.patch, 
> HIVE-20255.18.patch, HIVE-20255.9.patch
>
>
> https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/ql/src/java/org/apache/hadoop/hive/ql/lib/LevelOrderWalker.java
> * Make code more concise
> * Fix some check style issues
> {code}
>   if (toWalk.get(index).getChildren() != null) {
> for(Node child : toWalk.get(index).getChildren()) {
> {code}
> Actually, the underlying implementation of {{getChildren()}} has to do some 
> real work, so do not throw away the work after checking for null.  Simply 
> call once and store the results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20255) Review LevelOrderWalker.java

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757226#comment-16757226
 ] 

Hive QA commented on HIVE-20255:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
43s{color} | {color:blue} ql in master has 2304 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} ql: The patch generated 0 new + 1 unchanged - 2 
fixed = 1 total (was 3) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15862/dev-support/hive-personality.sh
 |
| git revision | master / 4271bbf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15862/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Review LevelOrderWalker.java
> 
>
> Key: HIVE-20255
> URL: https://issues.apache.org/jira/browse/HIVE-20255
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20255.10.patch, HIVE-20255.11.patch, 
> HIVE-20255.12.patch, HIVE-20255.13.patch, HIVE-20255.14.patch, 
> HIVE-20255.15.patch, HIVE-20255.16.patch, HIVE-20255.17.patch, 
> HIVE-20255.18.patch, HIVE-20255.9.patch
>
>
> https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/ql/src/java/org/apache/hadoop/hive/ql/lib/LevelOrderWalker.java
> * Make code more concise
> * Fix some check style issues
> {code}
>   if (toWalk.get(index).getChildren() != null) {
> for(Node child : toWalk.get(index).getChildren()) {
> {code}
> Actually, the underlying implementation of {{getChildren()}} has to do some 
> real work, so do not throw away the work after checking for null.  Simply 
> call once and store the results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20849) Review of ConstantPropagateProcFactory

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757186#comment-16757186
 ] 

Hive QA commented on HIVE-20849:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12956990/HIVE-20849.5.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15718 tests 
executed
*Failed tests:*
{noformat}
TestReplicationScenariosIncrementalLoadAcidTables - did not produce a 
TEST-*.xml file (likely timed out) (batchId=251)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15861/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15861/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15861/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12956990 - PreCommit-HIVE-Build

> Review of ConstantPropagateProcFactory
> --
>
> Key: HIVE-20849
> URL: https://issues.apache.org/jira/browse/HIVE-20849
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Affects Versions: 3.1.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20849.1.patch, HIVE-20849.1.patch, 
> HIVE-20849.2.patch, HIVE-20849.3.patch, HIVE-20849.4.patch, HIVE-20849.5.patch
>
>
> I was looking at this class because it blasts a lot of useless (to an admin) 
> information to the logs.  Especially if the table has a lot of columns, I see 
> big blocks of logging that are meaningless to me.  I request that the logging 
> be toned down to DEBUG level, along with some other improvements to the code.
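> 
> A hypothetical sketch of the kind of demotion requested (logger and variable 
> names are illustrative, not the actual patch):
> {code:java}
> // Guarded DEBUG logging: per-column folding details leave the INFO logs.
> if (LOG.isDebugEnabled()) {
>   LOG.debug("Folding expression {} -> {}", originalExpr, foldedExpr);
> }
> {code}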



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21191) I want to extend lag/lead functions to implement some special functions, and I met some problems

2019-01-31 Thread one (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

one updated HIVE-21191:
---
Description: 
I want a distinctLag function. It is like lag, but the difference is that it 
returns the nearest preceding value that differs from the current row's value.
 Example:
 {color:#14892c}select * from active{color}
||session||sq||channel||
|1|1|A|
|1|2|B|
|1|3|B|
|1|4|C|
|1|5|B|
|2|1|C|
|2|2|B|
|2|3|B|
|2|4|A|
|2|5|B|

{color:#14892c}
 select session,sq,lag(channel)over(partition by session order by sq) from 
active{color}
||session||sq||channel||
|1|1|null|
|1|2|A|
|1|3|B|
|1|4|B|
|1|5|C|
|2|1|null|
|2|2|C|
|2|3|B|
|2|4|B|
|2|5|A|

The function I want is:{color:#14892c}
 select session,sq,distinctLag(channel)over(partition by session order by sq) 
from active{color}
||session||sq||channel||
|1|1|null|
|1|2|A|
|1|3|A|
|1|4|B|
|1|5|C|
|2|1|null|
|2|2|C|
|2|3|C|
|2|4|B|
|2|5|A|

 

I tried to extend GenericUDFLeadLag and override:
{code:java}
import org.apache.hadoop.hive.ql.exec.Description;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.UDFType;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDFLeadLag;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils;
import 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.ObjectInspectorCopyOption;
@Description(
name = "distinctLag",
value = "distinctLag  (scalar_expression [,offset] [,default]) OVER 
([query_partition_clause] order_by_clause); "
+ "The distinctLag function is used to access data from a 
distinct previous row.",
extended = "Example:\n "
+ "select p1.p_mfgr, p1.p_name, p1.p_size,\n"
+ " p1.p_size - distinctLag(p1.p_size,1,p1.p_size) over( distribute 
by p1.p_mfgr sort by p1.p_name) as deltaSz\n"
+ " from part p1 join part p2 on p1.p_partkey = p2.p_partkey")

@UDFType(impliesOrder = true)
public class GenericUDFDistinctLag extends GenericUDFLeadLag {

@Override
public Object evaluate(DeferredObject[] arguments) throws HiveException 
{
Object defaultVal = null;
if (arguments.length == 3) {
defaultVal = 
ObjectInspectorUtils.copyToStandardObject(getDefaultValueConverter().convert(arguments[2].get()),
 getDefaultArgOI());
}

int idx = getpItr().getIndex() - 1;
int start = 0;
int end = getpItr().getPartition().size();
try {
Object currValue = 
ObjectInspectorUtils.copyToStandardObject(getExprEvaluator().evaluate(getpItr().resetToIndex(idx)),
 getFirstArgOI(), ObjectInspectorCopyOption.WRITABLE);
Object ret = null;
int newIdx = idx;
do {
--newIdx;
if (newIdx >= end || newIdx < start) {
ret = defaultVal;
return ret;
}else{
ret = 
ObjectInspectorUtils.copyToStandardObject(getExprEvaluator().evaluate(getpItr().lag(1)),
 getFirstArgOI(), ObjectInspectorCopyOption.WRITABLE);
if(ret.equals(currValue)){
setAmt(getAmt() - 1);
}
}
} while (getAmt() > 0);
return ret;
} finally {
Object currRow = getpItr().resetToIndex(idx);
// reevaluate expression on current Row, to trigger the 
Lazy object
// caches to be reset to the current row.
getExprEvaluator().evaluate(currRow);
}

}

@Override
protected  String _getFnName(){
 return "distinctLag";
}

@Override
protected Object getRow(int amt) throws HiveException {
throw new HiveException("distinctLag error: cannot call 
getRow");
}

@Override
protected int getIndex(int amt) {
// TODO Auto-generated method stub
return 0;
}
}{code}
and packaged it as a jar, added it to Hive, and created a temporary function.

Then I ran:

{color:#14892c}select session,sq,distinctLag(channel)over(partition by session 
order by sq) from active;{color}

{color:#333333}It reported an error:{color}

{color:#d04437}FAILED: SemanticException Failed to breakup Windowing 
invocations into Groups. At least 1 group must only depend on input columns. 
Also check for circular dependencies.
 Underlying error: Invalid function distinctLag{color}

{color:#333333}I don't know exactly what the problem is. I hope 

[jira] [Updated] (HIVE-21191) I want to extend lag/lead functions to implement some special functions, and I met some problems

2019-01-31 Thread one (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

one updated HIVE-21191:
---
Labels: LAG() UDAF UDF window_function  (was: LAG() UDAF UDF window_funcion)

> I want to extend lag/lead functions to implement some special functions, 
> and I met some problems
> -
>
> Key: HIVE-21191
> URL: https://issues.apache.org/jira/browse/HIVE-21191
> Project: Hive
>  Issue Type: Wish
>  Components: Hive, UDF, Windows
>Affects Versions: 1.1.0
>Reporter: one
>Priority: Minor
>  Labels: LAG(), UDAF, UDF, window_function
>
> I want a distinctLag function. It is like lag, but the difference is that it 
> returns the nearest preceding value that differs from the current row's value.
>  Example:
>  {color:#14892c}select * from active{color}
> ||session||sq||channel||
> |1|1|A|
> |1|2|B|
> |1|3|B|
> |1|4|C|
> |1|5|B|
> |2|1|C|
> |2|2|B|
> |2|3|B|
> |2|4|A|
> |2|5|B|
> {color:#14892c}
>  select session,sq,lag(channel)over(partition by session order by sq) from 
> active{color}
> ||session||sq||channel||
> |1|1|null|
> |1|2|A|
> |1|3|B|
> |1|4|B|
> |1|5|C|
> |2|1|null|
> |2|2|C|
> |2|3|B|
> |2|4|B|
> |2|5|A|
> The function I want is:{color:#14892c}
>  select session,sq,distinctLag(channel)over(partition by session order by sq) 
> from active{color}
> ||session||sq||channel||
> |1|1|null|
> |1|2|A|
> |1|3|A|
> |1|4|B|
> |1|5|C|
> |2|1|null|
> |2|2|C|
> |2|3|C|
> |2|4|B|
> |2|5|A|
>  
> I tried to extend GenericUDFLeadLag and override:
> {code:java}
> import org.apache.hadoop.hive.ql.exec.Description;
> import org.apache.hadoop.hive.ql.metadata.HiveException;
> import org.apache.hadoop.hive.ql.udf.UDFType;
> import org.apache.hadoop.hive.ql.udf.generic.GenericUDFLeadLag;
> import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils;
> import 
> org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.ObjectInspectorCopyOption;
> @Description(
>   name = "distinctLag",
>   value = "distinctLag  (scalar_expression [,offset] [,default]) OVER 
> ([query_partition_clause] order_by_clause); "
>   + "The distinctLag function is used to access data from a 
> distinct previous row.",
>   extended = "Example:\n "
>   + "select p1.p_mfgr, p1.p_name, p1.p_size,\n"
>   + " p1.p_size - distinctLag(p1.p_size,1,p1.p_size) over( distribute 
> by p1.p_mfgr sort by p1.p_name) as deltaSz\n"
>   + " from part p1 join part p2 on p1.p_partkey = p2.p_partkey")
> @UDFType(impliesOrder = true)
> public class GenericUDFDistinctLag extends GenericUDFLeadLag {
>   @Override
>   public Object evaluate(DeferredObject[] arguments) throws HiveException 
> {
>   Object defaultVal = null;
>   if (arguments.length == 3) {
>   defaultVal = 
> ObjectInspectorUtils.copyToStandardObject(getDefaultValueConverter().convert(arguments[2].get()),
>  getDefaultArgOI());
>   }
>   int idx = getpItr().getIndex() - 1;
>   int start = 0;
>   int end = getpItr().getPartition().size();
>   try {
>   Object currValue = 
> ObjectInspectorUtils.copyToStandardObject(getExprEvaluator().evaluate(getpItr().resetToIndex(idx)),
>  getFirstArgOI(), ObjectInspectorCopyOption.WRITABLE);
>   Object ret = null;
>   int newIdx = idx;
>   do {
>   --newIdx;
>   if (newIdx >= end || newIdx < start) {
>   ret = defaultVal;
>   return ret;
>   }else{
>   ret = 
> ObjectInspectorUtils.copyToStandardObject(getExprEvaluator().evaluate(getpItr().lag(1)),
>  getFirstArgOI(), ObjectInspectorCopyOption.WRITABLE);
>   if(ret.equals(currValue)){
>   setAmt(getAmt() - 1);
>   }
>   }
>   } while (getAmt() > 0);
>   return ret;
>   } finally {
>   Object currRow = getpItr().resetToIndex(idx);
>   // reevaluate expression on current Row, to trigger the 
> Lazy object
>   // caches to be reset to the current row.
>   getExprEvaluator().evaluate(currRow);
>   }
>   }
>   @Override
>   protected  String _getFnName(){
>return "distinctLag";
>   }
>   @Override
>   protected Object getRow(int amt) throws HiveException {
>   throw new 

[jira] [Commented] (HIVE-20849) Review of ConstantPropagateProcFactory

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757163#comment-16757163
 ] 

Hive QA commented on HIVE-20849:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
38s{color} | {color:blue} ql in master has 2304 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} ql: The patch generated 0 new + 85 unchanged - 9 
fixed = 85 total (was 94) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
10s{color} | {color:green} ql generated 0 new + 2301 unchanged - 3 fixed = 2301 
total (was 2304) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15861/dev-support/hive-personality.sh
 |
| git revision | master / 4271bbf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15861/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Review of ConstantPropagateProcFactory
> --
>
> Key: HIVE-20849
> URL: https://issues.apache.org/jira/browse/HIVE-20849
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Affects Versions: 3.1.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20849.1.patch, HIVE-20849.1.patch, 
> HIVE-20849.2.patch, HIVE-20849.3.patch, HIVE-20849.4.patch, HIVE-20849.5.patch
>
>
> I was looking at this class because it blasts a lot of useless (to an admin) 
> information to the logs.  Especially if the table has a lot of columns, I see 
> big blocks of logging that are meaningless to me.  I request that the logging 
> be toned down to DEBUG level, along with some other improvements to the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20484) Disable Block Cache By Default With HBase SerDe

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757135#comment-16757135
 ] 

Hive QA commented on HIVE-20484:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12956992/HIVE-20484.5.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15720 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/15860/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/15860/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-15860/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12956992 - PreCommit-HIVE-Build

> Disable Block Cache By Default With HBase SerDe
> ---
>
> Key: HIVE-20484
> URL: https://issues.apache.org/jira/browse/HIVE-20484
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler
>Affects Versions: 1.2.3, 2.4.0, 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HIVE-20484.1.patch, HIVE-20484.2.patch, 
> HIVE-20484.3.patch, HIVE-20484.4.patch, HIVE-20484.5.patch
>
>
> {quote}
> Scan instances can be set to use the block cache in the RegionServer via the 
> setCacheBlocks method. For input Scans to MapReduce jobs, this should be 
> false. 
> https://hbase.apache.org/book.html#perf.hbase.client.blockcache
> {quote}
> However, from the Hive code, we can see that this is not the case.
> {code}
> public static final String HBASE_SCAN_CACHEBLOCKS = "hbase.scan.cacheblock";
> ...
> String scanCacheBlocks = 
> tableProperties.getProperty(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS);
> if (scanCacheBlocks != null) {
>   jobProperties.put(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS, scanCacheBlocks);
> }
> ...
> String scanCacheBlocks = jobConf.get(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS);
> if (scanCacheBlocks != null) {
>   scan.setCacheBlocks(Boolean.parseBoolean(scanCacheBlocks));
> }
> {code}
> In the Hive code, we can see that if {{hbase.scan.cacheblock}} is not 
> specified in the {{SERDEPROPERTIES}} then {{setCacheBlocks}} is not called 
> and the default value of the HBase {{Scan}} class is used.
> {code:java|title=Scan.java}
>   /**
>* Set whether blocks should be cached for this Scan.
>* 
>* This is true by default.  When true, default settings of the table and
>* family are used (this will never override caching blocks if the block
>* cache is disabled for that family or entirely).
>*
>* @param cacheBlocks if false, default settings are overridden and blocks
>* will not be cached
>*/
>   public Scan setCacheBlocks(boolean cacheBlocks) {
> this.cacheBlocks = cacheBlocks;
> return this;
>   }
> {code}
> Hive is doing full scans of the table with MapReduce/Spark and therefore, 
> according to the HBase docs, the default behavior here should be that blocks 
> are not cached.  Hive should set this value to "false" by default unless the 
> table {{SERDEPROPERTIES}} override this.
> {code:sql}
> -- Commands for HBase
> -- create 'test', 't'
> CREATE EXTERNAL TABLE test(value map<string,string>, row_key string) 
> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> WITH SERDEPROPERTIES (
> "hbase.columns.mapping" = "t:,:key",
> "hbase.scan.cacheblock" = "false"
> );
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-685) add UDFquote

2019-01-31 Thread Mani M (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mani M updated HIVE-685:

Status: In Progress  (was: Patch Available)

> add UDFquote
> 
>
> Key: HIVE-685
> URL: https://issues.apache.org/jira/browse/HIVE-685
> Project: Hive
>  Issue Type: New Feature
>Reporter: Namit Jain
>Assignee: Mani M
>Priority: Major
>  Labels: todoc4.0, udf
> Fix For: 4.0.0
>
> Attachments: HIVE.685.02.PATCH, HIVE.685.03.PATCH, HIVE.685.PATCH
>
>
> add UDFquote
> look at
> http://dev.mysql.com/doc/refman/5.0/en/func-op-summary-ref.html
> for details



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-685) add UDFquote

2019-01-31 Thread Mani M (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mani M updated HIVE-685:

Status: Patch Available  (was: In Progress)

> add UDFquote
> 
>
> Key: HIVE-685
> URL: https://issues.apache.org/jira/browse/HIVE-685
> Project: Hive
>  Issue Type: New Feature
>Reporter: Namit Jain
>Assignee: Mani M
>Priority: Major
>  Labels: todoc4.0, udf
> Fix For: 4.0.0
>
> Attachments: HIVE.685.02.PATCH, HIVE.685.03.PATCH, HIVE.685.04.PATCH, 
> HIVE.685.PATCH
>
>
> add UDFquote
> look at
> http://dev.mysql.com/doc/refman/5.0/en/func-op-summary-ref.html
> for details



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-685) add UDFquote

2019-01-31 Thread Mani M (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mani M updated HIVE-685:

Attachment: HIVE.685.04.PATCH

> add UDFquote
> 
>
> Key: HIVE-685
> URL: https://issues.apache.org/jira/browse/HIVE-685
> Project: Hive
>  Issue Type: New Feature
>Reporter: Namit Jain
>Assignee: Mani M
>Priority: Major
>  Labels: todoc4.0, udf
> Fix For: 4.0.0
>
> Attachments: HIVE.685.02.PATCH, HIVE.685.03.PATCH, HIVE.685.04.PATCH, 
> HIVE.685.PATCH
>
>
> add UDFquote
> look at
> http://dev.mysql.com/doc/refman/5.0/en/func-op-summary-ref.html
> for details



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20484) Disable Block Cache By Default With HBase SerDe

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757082#comment-16757082
 ] 

Hive QA commented on HIVE-20484:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
28s{color} | {color:blue} hbase-handler in master has 15 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15860/dev-support/hive-personality.sh
 |
| git revision | master / 4271bbf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: hbase-handler U: hbase-handler |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15860/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Disable Block Cache By Default With HBase SerDe
> ---
>
> Key: HIVE-20484
> URL: https://issues.apache.org/jira/browse/HIVE-20484
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler
>Affects Versions: 1.2.3, 2.4.0, 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HIVE-20484.1.patch, HIVE-20484.2.patch, 
> HIVE-20484.3.patch, HIVE-20484.4.patch, HIVE-20484.5.patch
>
>
> {quote}
> Scan instances can be set to use the block cache in the RegionServer via the 
> setCacheBlocks method. For input Scans to MapReduce jobs, this should be 
> false. 
> https://hbase.apache.org/book.html#perf.hbase.client.blockcache
> {quote}
> However, from the Hive code, we can see that this is not the case.
> {code}
> public static final String HBASE_SCAN_CACHEBLOCKS = "hbase.scan.cacheblock";
> ...
> String scanCacheBlocks = 
> tableProperties.getProperty(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS);
> if (scanCacheBlocks != null) {
>   jobProperties.put(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS, scanCacheBlocks);
> }
> ...
> String scanCacheBlocks = jobConf.get(HBaseSerDe.HBASE_SCAN_CACHEBLOCKS);
> if (scanCacheBlocks != null) {
>   scan.setCacheBlocks(Boolean.parseBoolean(scanCacheBlocks));
> }
> {code}
> In the Hive code, we can see that if {{hbase.scan.cacheblock}} is not 
> specified in the {{SERDEPROPERTIES}} then {{setCacheBlocks}} is not called 
> and the default value of the HBase {{Scan}} class is used.
> {code:java|title=Scan.java}
>   /**
>* Set whether blocks should be cached 

[jira] [Commented] (HIVE-20797) Print Number of Locks Acquired

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757068#comment-16757068
 ] 

Hive QA commented on HIVE-20797:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12956988/HIVE-20797.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 784 failed/errored test(s), 12107 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_index] 
(batchId=267)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=267)
org.apache.hadoop.hive.cli.TestBeeLineDriver.org.apache.hadoop.hive.cli.TestBeeLineDriver
 (batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[colstats_all_nulls] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[create_merge_compressed]
 (batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[drop_with_concurrency]
 (batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[escape_comments] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[explain_outputs] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[insert_overwrite_local_directory_1]
 (batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[mapjoin2] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[select_dummy_source] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_10] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_11] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_12] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_13] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_16] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_1] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_2] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_3] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_7] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[udf_unix_timestamp] 
(batchId=275)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[buckets] 
(batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[create_database]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[create_like] 
(batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ctas_blobstore_to_blobstore]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ctas_blobstore_to_hdfs]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ctas_hdfs_to_blobstore]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[explain] 
(batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[having] 
(batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_blobstore_to_blobstore]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_blobstore_to_local]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_blobstore_to_warehouse]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_local_to_blobstore]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_blobstore]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_blobstore_nonpart]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_local]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_warehouse]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_warehouse_nonpart]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_local_to_blobstore]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_blobstore_to_blobstore]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_empty_into_blobstore]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_into_dynamic_partitions]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_into_table]
 (batchId=278)

[jira] [Commented] (HIVE-20797) Print Number of Locks Acquired

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757051#comment-16757051
 ] 

Hive QA commented on HIVE-20797:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
41s{color} | {color:blue} ql in master has 2304 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-15859/dev-support/hive-personality.sh
 |
| git revision | master / 4271bbf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-15859/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Print Number of Locks Acquired
> --
>
> Key: HIVE-20797
> URL: https://issues.apache.org/jira/browse/HIVE-20797
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2, Locking
>Affects Versions: 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
>  Labels: newbie, noob
> Attachments: HIVE-20797.1.patch, HIVE-20797.2.patch, 
> HIVE-20797.3.patch
>
>
> The number of locks acquired by a query can greatly influence the performance 
> and stability of the system, especially for ZK locks.  Please add INFO level 
> logging with the number of locks each query obtains.
> Log here:
> https://github.com/apache/hive/blob/3963c729fabf90009cb67d277d40fe5913936358/ql/src/java/org/apache/hadoop/hive/ql/Driver.java#L1670-L1672
> {quote}
> A list of acquired locks will be stored in the 
> org.apache.hadoop.hive.ql.Context object and can be retrieved via 
> org.apache.hadoop.hive.ql.Context#getHiveLocks.
> {quote}
> https://github.com/apache/hive/blob/758ff449099065a84c46d63f9418201c8a6731b1/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/HiveTxnManager.java#L115-L127
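> 
> A minimal sketch of the requested log line, using {{Context#getHiveLocks}} 
> as noted above (variable names are illustrative):
> {code:java}
> // Log how many locks this query acquired; large counts can stress ZooKeeper.
> List<HiveLock> hiveLocks = ctx.getHiveLocks();
> int numLocks = (hiveLocks == null) ? 0 : hiveLocks.size();
> LOG.info("Acquired {} locks for query: {}", numLocks, queryId);
> {code}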



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21189) hive.merge.nway.joins should default to false

2019-01-31 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757025#comment-16757025
 ] 

Hive QA commented on HIVE-21189:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12956980/HIVE-21189.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 228 failed/errored test(s), 15718 tests 
executed
*Failed tests:*
{noformat}
TestReplicationScenariosIncrementalLoadAcidTables - did not produce a 
TEST-*.xml file (likely timed out) (batchId=251)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[map_join_on_filter]
 (batchId=278)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_join] 
(batchId=58)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_join_pkfk]
 (batchId=15)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join12] (batchId=26)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join20] (batchId=96)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join21] (batchId=88)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join28] (batchId=77)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join29] (batchId=59)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join31] (batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join3] (batchId=88)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join7] (batchId=28)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join_stats2] 
(batchId=94)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join_stats] 
(batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[correlationoptimizer5] 
(batchId=76)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cross_join_merge] 
(batchId=7)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[empty_join] (batchId=86)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[explain_logical] 
(batchId=70)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[fold_to_null] 
(batchId=74)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_bucket_sort] 
(batchId=90)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_bucket_sort_reducers_power_two]
 (batchId=14)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_join_preds] 
(batchId=26)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join12] (batchId=44)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join20] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join21] (batchId=43)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join26] (batchId=21)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join28] (batchId=91)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join3] (batchId=85)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join40] (batchId=58)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join45] (batchId=21)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join47] (batchId=34)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join7] (batchId=5)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_alt_syntax] 
(batchId=85)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_cond_pushdown_1] 
(batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_cond_pushdown_2] 
(batchId=64)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_cond_pushdown_3] 
(batchId=26)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_cond_pushdown_4] 
(batchId=90)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_cond_pushdown_unqual1]
 (batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_cond_pushdown_unqual2]
 (batchId=18)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_cond_pushdown_unqual3]
 (batchId=51)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_cond_pushdown_unqual4]
 (batchId=3)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_filters_overlap] 
(batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_grp_diff_keys] 
(batchId=26)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_map_ppr] 
(batchId=82)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_merge_multi_expressions]
 (batchId=23)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_reorder2] 
(batchId=19)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_reorder3] 
(batchId=70)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_reorder4] 
(batchId=5)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[keep_uniform] 
(batchId=78)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin47] (batchId=64)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_filter_on_outerjoin]
 (batchId=66)

[jira] [Updated] (HIVE-21143) Add rewrite rules to open/close Between operators

2019-01-31 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-21143:

Attachment: HIVE-21143.11.patch

> Add rewrite rules to open/close Between operators
> -
>
> Key: HIVE-21143
> URL: https://issues.apache.org/jira/browse/HIVE-21143
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21143.01.patch, HIVE-21143.02.patch, 
> HIVE-21143.03.patch, HIVE-21143.03.patch, HIVE-21143.04.patch, 
> HIVE-21143.05.patch, HIVE-21143.06.patch, HIVE-21143.07.patch, 
> HIVE-21143.08.patch, HIVE-21143.08.patch, HIVE-21143.09.patch, 
> HIVE-21143.10.patch, HIVE-21143.11.patch
>
>
> During query compilation it's better to have BETWEEN statements in open form, 
> as Calcite currently does not consider them during simplification.
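For illustration, a minimal Calcite-based sketch of what "opening" a BETWEEN 
predicate means: c BETWEEN lo AND hi becomes lo <= c AND c <= hi, so the 
simplifier can fold each bound independently. The BetweenOpener class and open 
method are hypothetical names, not the rule in the attached patches:

{code:java}
import java.util.Arrays;

import org.apache.calcite.rex.RexBuilder;
import org.apache.calcite.rex.RexNode;
import org.apache.calcite.sql.fun.SqlStdOperatorTable;

// Hypothetical sketch; the actual rewrite rules live in Hive's Calcite planner.
public class BetweenOpener {

  /** Rewrites "operand BETWEEN low AND high" into "low <= operand AND operand <= high". */
  public static RexNode open(RexBuilder rexBuilder, RexNode operand,
      RexNode low, RexNode high) {
    RexNode lowerBound =
        rexBuilder.makeCall(SqlStdOperatorTable.LESS_THAN_OR_EQUAL, low, operand);
    RexNode upperBound =
        rexBuilder.makeCall(SqlStdOperatorTable.LESS_THAN_OR_EQUAL, operand, high);
    // In the open form each comparison can be simplified on its own.
    return rexBuilder.makeCall(SqlStdOperatorTable.AND,
        Arrays.asList(lowerBound, upperBound));
  }
}
{code}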



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21143) Add rewrite rules to open/close Between operators

2019-01-31 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-21143:

Attachment: HIVE-21143.10.patch

> Add rewrite rules to open/close Between operators
> -
>
> Key: HIVE-21143
> URL: https://issues.apache.org/jira/browse/HIVE-21143
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21143.01.patch, HIVE-21143.02.patch, 
> HIVE-21143.03.patch, HIVE-21143.03.patch, HIVE-21143.04.patch, 
> HIVE-21143.05.patch, HIVE-21143.06.patch, HIVE-21143.07.patch, 
> HIVE-21143.08.patch, HIVE-21143.08.patch, HIVE-21143.09.patch, 
> HIVE-21143.10.patch
>
>
> During query compilation it's better to have BETWEEN statements in open form, 
> as Calcite currently does not consider them during simplification.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)