[jira] [Updated] (HIVE-17300) WebUI query plan graphs

2018-09-12 Thread Karen Coppage (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-17300:
-
Attachment: HIVE-17300.7.patch
Status: Patch Available  (was: Open)

> WebUI query plan graphs
> ---
>
> Key: HIVE-17300
> URL: https://issues.apache.org/jira/browse/HIVE-17300
> Project: Hive
>  Issue Type: Sub-task
>  Components: Web UI
>Affects Versions: 4.0.0
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
>  Labels: beginner, features, patch
> Attachments: HIVE-17300.3.patch, HIVE-17300.4.patch, 
> HIVE-17300.5.patch, HIVE-17300.6.patch, HIVE-17300.7.patch, 
> HIVE-17300.7.patch, HIVE-17300.patch, complete_success.png, 
> full_mapred_stats.png, graph_with_mapred_stats.png, last_stage_error.png, 
> last_stage_running.png, non_mapred_task_selected.png
>
>
> Hi all,
> I’m working on a feature of the Hive WebUI Query Plan tab that would provide 
> the option to display the query plan as a nice graph (scroll down for 
> screenshots). If you click on one of the graph’s stages, the plan for that 
> stage appears as text below. 
> Stages are color-coded if they have a status (Success, Error, Running), and 
> the rest are grayed out. Coloring is based on status already available in the 
> WebUI, under the Stages tab.
> There is an additional option to display stats for MapReduce tasks. This 
> includes the job’s ID, tracking URL (where the logs are found), and mapper 
> and reducer numbers/progress, among other info. 
> The library I’m using for the graph is called vis.js (http://visjs.org/). It 
> has an Apache license, and the only necessary file to be included from this 
> library is about 700 KB.
> I tried to keep server-side changes minimal, and graph generation is taken 
> care of by the client. Plans with more than a given number of stages 
> (default: 25) won't be displayed in order to preserve resources.
> I’d love to hear any and all input from the community about this feature: do 
> you think it’s useful, and is there anything important I’m missing?
> Thanks,
> Karen Coppage
> Review request: https://reviews.apache.org/r/61663/
> Any input is welcome!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17300) WebUI query plan graphs

2018-09-12 Thread Karen Coppage (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-17300:
-
Status: Open  (was: Patch Available)

> WebUI query plan graphs
> ---
>
> Key: HIVE-17300
> URL: https://issues.apache.org/jira/browse/HIVE-17300
> Project: Hive
>  Issue Type: Sub-task
>  Components: Web UI
>Affects Versions: 4.0.0
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
>  Labels: beginner, features, patch
> Attachments: HIVE-17300.3.patch, HIVE-17300.4.patch, 
> HIVE-17300.5.patch, HIVE-17300.6.patch, HIVE-17300.7.patch, 
> HIVE-17300.7.patch, HIVE-17300.patch, complete_success.png, 
> full_mapred_stats.png, graph_with_mapred_stats.png, last_stage_error.png, 
> last_stage_running.png, non_mapred_task_selected.png
>
>
> Hi all,
> I’m working on a feature of the Hive WebUI Query Plan tab that would provide 
> the option to display the query plan as a nice graph (scroll down for 
> screenshots). If you click on one of the graph’s stages, the plan for that 
> stage appears as text below. 
> Stages are color-coded if they have a status (Success, Error, Running), and 
> the rest are grayed out. Coloring is based on status already available in the 
> WebUI, under the Stages tab.
> There is an additional option to display stats for MapReduce tasks. This 
> includes the job’s ID, tracking URL (where the logs are found), and mapper 
> and reducer numbers/progress, among other info. 
> The library I’m using for the graph is called vis.js (http://visjs.org/). It 
> has an Apache license, and the only necessary file to be included from this 
> library is about 700 KB.
> I tried to keep server-side changes minimal, and graph generation is taken 
> care of by the client. Plans with more than a given number of stages 
> (default: 25) won't be displayed in order to preserve resources.
> I’d love to hear any and all input from the community about this feature: do 
> you think it’s useful, and is there anything important I’m missing?
> Thanks,
> Karen Coppage
> Review request: https://reviews.apache.org/r/61663/
> Any input is welcome!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19847) Create Separate getInputSummary Service

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16613045#comment-16613045
 ] 

Hive QA commented on HIVE-19847:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12939397/HIVE-19847.5.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13750/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13750/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13750/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-09-13 05:40:03.257
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-13750/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-09-13 05:40:03.261
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at a3b7a24 HIVE-19814: RPC Server port is always random for spark 
(Bharathkrishna Guruvayoor Murali, reviewed by Sahil Takiar)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at a3b7a24 HIVE-19814: RPC Server port is always random for spark 
(Bharathkrishna Guruvayoor Murali, reviewed by Sahil Takiar)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-09-13 05:40:03.943
+ rm -rf ../yetus_PreCommit-HIVE-Build-13750
+ mkdir ../yetus_PreCommit-HIVE-Build-13750
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-13750
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-13750/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java: does not 
exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java: does not 
exist in index
error: a/ql/src/test/org/apache/hadoop/hive/ql/exec/TestUtilities.java: does 
not exist in index
Going to apply patch with: git apply -p1
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q 
-Dmaven.repo.local=/data/hiveptest/working/maven
protoc-jar: executing: [/tmp/protoc9088563118744787995.exe, --version]
protoc-jar: executing: [/tmp/protoc9088563118744787995.exe, 
-I/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore,
 
--java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/target/generated-sources,
 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto]
libprotoc 2.5.0
ANTLR Parser Generator  Version 3.5.2
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process 
(process-resource-bundles) on project hive-shims-0.23: Execution 
process-resource-bundles of goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process failed. 
ConcurrentModificationException -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn 

[jira] [Commented] (HIVE-20503) Use datastructure aware estimations during mapjoin selection

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16613044#comment-16613044
 ] 

Hive QA commented on HIVE-20503:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12939382/HIVE-20503.05.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14938 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13749/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13749/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13749/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12939382 - PreCommit-HIVE-Build

> Use datastructure aware estimations during mapjoin selection
> 
>
> Key: HIVE-20503
> URL: https://issues.apache.org/jira/browse/HIVE-20503
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-20503.01.patch, HIVE-20503.01wip01.patch, 
> HIVE-20503.01wip01.patch, HIVE-20503.02.patch, HIVE-20503.03.patch, 
> HIVE-20503.04.patch, HIVE-20503.05.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20503) Use datastructure aware estimations during mapjoin selection

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16613026#comment-16613026
 ] 

Hive QA commented on HIVE-20503:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
5s{color} | {color:blue} ql in master has 2311 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13749/dev-support/hive-personality.sh
 |
| git revision | master / a3b7a24 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13749/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Use datastructure aware estimations during mapjoin selection
> 
>
> Key: HIVE-20503
> URL: https://issues.apache.org/jira/browse/HIVE-20503
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-20503.01.patch, HIVE-20503.01wip01.patch, 
> HIVE-20503.01wip01.patch, HIVE-20503.02.patch, HIVE-20503.03.patch, 
> HIVE-20503.04.patch, HIVE-20503.05.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20548) Can not start llp via yarn service

2018-09-12 Thread zhangbutao (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16613015#comment-16613015
 ] 

zhangbutao commented on HIVE-20548:
---

Hi, yes, we use Ambari to start LLAP. Do you have any tutorials for setting up 
LLAP via YARN service? Thanks!

Email: zhangbu...@cmss.chinamobile.com

 



From: Gopal V (JIRA)

Date: 2018/09/13 (Thursday) 12:24

To: zhangbutao;

Subject: [jira] [Commented] (HIVE-20548) Can not start llp via yarn service
[ 
https://issues.apache.org/jira/browse/HIVE-20548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=16612999#comment-16612999
 ] 

Gopal V commented on HIVE-20548:


This is a YARN setup problem - are you using Ambari? Ambari does this while 
setting up YARN.

https://github.com/apache/ambari/blob/5460e8952729854f1c032a781c9a8de608ba4475/ambari-common/src/main/python/resource_management/libraries/functions/copy_tarball.py#L213



> Can not start llp via yarn service
> --
>
> Key: HIVE-20548
> URL: https://issues.apache.org/jira/browse/HIVE-20548
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 3.1.0
>Reporter: zhangbutao
>Priority: Major
>
> We start LLAP through YARN service instead of Slider, and the following 
> problems occur:
> {code:java}
> 2018-09-12 19:32:48,629 - LLAP start command: 
> /usr/bch/current/hive-server2/bin/hive --service llap --size 10930m 
> --startImmediately --name llap0 --cache 0m --xmx 8m --loglevel INFO --output 
> /var/lib/ambari-agent/tmp/llap-yarn-service_2018-09-12_11-32-48 
> --service-placement 4 --skiphadoopversion --skiphbasecp --instances 1 
> --logger query-routing --args " -XX:+AlwaysPreTouch -XX:+UseG1GC 
> -XX:TLABSize=8m -XX:+ResizeTLAB -XX:+UseNUMA -XX:+AggressiveOpts 
> -XX:InitiatingHeapOccupancyPercent=70 -XX:+UnlockExperimentalVMOptions 
> -XX:G1MaxNewSizePercent=40 -XX:G1ReservePercent=20 -XX:MaxGCPauseMillis=200 
> -XX:MetaspaceSize=1024m"
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/usr/bch/3.0.0/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/bch/3.0.0/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> WARN conf.HiveConf: HiveConf of name hive.hook.proto.base-directory does not 
> exist
> WARN conf.HiveConf: HiveConf of name hive.strict.managed.tables does not exist
> WARN conf.HiveConf: HiveConf of name hive.stats.fetch.partition.stats does 
> not exist
> WARN conf.HiveConf: HiveConf of name hive.heapsize does not exist
> WARN conf.HiveConf: HiveConf of name hive.druid.select.distribute does not 
> exist
> WARN cli.LlapServiceDriver: Ignoring unknown llap server parameter: 
> [hive.aux.jars.path]
> WARN cli.LlapServiceDriver: Java versions might not match : 
> JAVA_HOME=[/usr/jdk64/jdk1.8.0_112],process jre=[/usr/jdk64/jdk1.8.0_112/jre]
> WARN conf.HiveConf: HiveConf of name hive.hook.proto.base-directory does not 
> exist
> WARN conf.HiveConf: HiveConf of name hive.strict.managed.tables does not exist
> WARN conf.HiveConf: HiveConf of name hive.stats.fetch.partition.stats does 
> not exist
> WARN conf.HiveConf: HiveConf of name hive.heapsize does not exist
> WARN conf.HiveConf: HiveConf of name hive.druid.select.distribute does not 
> exist
> WARN conf.HiveConf: HiveConf of name hive.hook.proto.base-directory does not 
> exist
> WARN conf.HiveConf: HiveConf of name hive.strict.managed.tables does not exist
> WARN conf.HiveConf: HiveConf of name hive.stats.fetch.partition.stats does 
> not exist
> WARN conf.HiveConf: HiveConf of name hive.heapsize does not exist
> WARN conf.HiveConf: HiveConf of name hive.druid.select.distribute does not 
> exist
> 11:32:54 Running as a child of LlapServiceDriver
> 11:32:54 Prepared the files
> 11:33:13 Packaged the files
> WARN curator.CuratorZookeeperClient: session timeout [1] is less than 
> connection timeout [15000]
> ERROR client.ServiceClient: Error on destroy 'llap0': not found.
> WARN client.ServiceClient: Property yarn.service.framework.path has a value 
> /bch/apps/3.0.0/yarn/service-dep.tar.gz, but is not a valid file
> 2018-09-12 19:33:17,385 - 
> 2018-09-12 19:33:17,385 - LLAP status command : 
> /usr/bch/current/hive-server2/bin/hive --service llapstatus -w -r 0.8 -i 2 -t 
> 400
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/usr/bch/3.0.0/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> 

[jira] [Commented] (HIVE-20420) Provide a fallback authorizer when no other authorizer is in use

2018-09-12 Thread Daniel Dai (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16613014#comment-16613014
 ] 

Daniel Dai commented on HIVE-20420:
---

Fixing ptest failures.

> Provide a fallback authorizer when no other authorizer is in use
> 
>
> Key: HIVE-20420
> URL: https://issues.apache.org/jira/browse/HIVE-20420
> Project: Hive
>  Issue Type: New Feature
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-20420.1.patch, HIVE-20420.2.patch, 
> HIVE-20420.3.patch, HIVE-20420.4.patch, HIVE-20420.5.patch, HIVE-20420.6.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20420) Provide a fallback authorizer when no other authorizer is in use

2018-09-12 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-20420:
--
Attachment: HIVE-20420.6.patch

> Provide a fallback authorizer when no other authorizer is in use
> 
>
> Key: HIVE-20420
> URL: https://issues.apache.org/jira/browse/HIVE-20420
> Project: Hive
>  Issue Type: New Feature
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-20420.1.patch, HIVE-20420.2.patch, 
> HIVE-20420.3.patch, HIVE-20420.4.patch, HIVE-20420.5.patch, HIVE-20420.6.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20541) REPL DUMP on external table with add partition event throws NoSuchElementException.

2018-09-12 Thread anishek (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16613011#comment-16613011
 ] 

anishek commented on HIVE-20541:


+1 

> REPL DUMP on external table with add partition event throws 
> NoSuchElementException.
> ---
>
> Key: HIVE-20541
> URL: https://issues.apache.org/jira/browse/HIVE-20541
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, Replication, pull-request-available
> Attachments: HIVE-20541.01.patch
>
>
> REPL DUMP on an external table with an add partition event throws 
> NoSuchElementException. We need to check hasNext on the file iterator before 
> accessing the next element.
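
A minimal illustration (not the actual patch) of the guard described above: call 
hasNext() before next() so an exhausted file iterator returns nothing instead of 
throwing NoSuchElementException. The class and method names here are hypothetical.

{code:java}
import java.util.Collections;
import java.util.Iterator;

public class FileIteratorGuard {
  // Returns the next file path if one exists; returns null instead of
  // letting next() throw NoSuchElementException on an exhausted iterator.
  static String nextFileOrNull(Iterator<String> fileIterator) {
    if (fileIterator != null && fileIterator.hasNext()) {
      return fileIterator.next();
    }
    return null;
  }

  public static void main(String[] args) {
    Iterator<String> empty = Collections.emptyIterator();
    System.out.println(nextFileOrNull(empty)); // prints null, no exception
  }
}
{code}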



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20423) Set NULLS LAST as the default null ordering

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16613000#comment-16613000
 ] 

Hive QA commented on HIVE-20423:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12939365/HIVE-20423.8.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14938 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13748/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13748/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13748/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12939365 - PreCommit-HIVE-Build

> Set NULLS LAST as the default null ordering
> ---
>
> Key: HIVE-20423
> URL: https://issues.apache.org/jira/browse/HIVE-20423
> Project: Hive
>  Issue Type: Bug
>Reporter: Teddy Choi
>Assignee: Teddy Choi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-20423.1.patch, HIVE-20423.2.patch, 
> HIVE-20423.3.patch, HIVE-20423.4.patch, HIVE-20423.4.patch, 
> HIVE-20423.5.patch, HIVE-20423.6.patch, HIVE-20423.7.patch, HIVE-20423.8.patch
>
>
> HIVE-20150 TopNKeyOperator pushdown can be more efficient if NULLS LAST 
> becomes the default null ordering.
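
As background for readers, NULLS LAST simply means null keys sort after all 
non-null keys. A small standalone Java illustration of that ordering (unrelated 
to the patch itself):

{code:java}
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class NullsLastDemo {
  public static void main(String[] args) {
    List<Integer> keys = Arrays.asList(3, null, 1, null, 2);
    // NULLS LAST ordering: non-null keys ascending, nulls moved to the end.
    keys.sort(Comparator.nullsLast(Comparator.<Integer>naturalOrder()));
    System.out.println(keys); // [1, 2, 3, null, null]
  }
}
{code}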



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20548) Can not start llp via yarn service

2018-09-12 Thread Gopal V (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612999#comment-16612999
 ] 

Gopal V commented on HIVE-20548:


This is a YARN setup problem - are you using Ambari? Ambari does this while 
setting up YARN.

https://github.com/apache/ambari/blob/5460e8952729854f1c032a781c9a8de608ba4475/ambari-common/src/main/python/resource_management/libraries/functions/copy_tarball.py#L213



> Can not start llp via yarn service
> --
>
> Key: HIVE-20548
> URL: https://issues.apache.org/jira/browse/HIVE-20548
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 3.1.0
>Reporter: zhangbutao
>Priority: Major
>
> We start LLAP through YARN service instead of Slider, and the following 
> problems occur:
> {code:java}
> 2018-09-12 19:32:48,629 - LLAP start command: 
> /usr/bch/current/hive-server2/bin/hive --service llap --size 10930m 
> --startImmediately --name llap0 --cache 0m --xmx 8m --loglevel INFO --output 
> /var/lib/ambari-agent/tmp/llap-yarn-service_2018-09-12_11-32-48 
> --service-placement 4 --skiphadoopversion --skiphbasecp --instances 1 
> --logger query-routing --args " -XX:+AlwaysPreTouch -XX:+UseG1GC 
> -XX:TLABSize=8m -XX:+ResizeTLAB -XX:+UseNUMA -XX:+AggressiveOpts 
> -XX:InitiatingHeapOccupancyPercent=70 -XX:+UnlockExperimentalVMOptions 
> -XX:G1MaxNewSizePercent=40 -XX:G1ReservePercent=20 -XX:MaxGCPauseMillis=200 
> -XX:MetaspaceSize=1024m"
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/usr/bch/3.0.0/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/bch/3.0.0/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> WARN conf.HiveConf: HiveConf of name hive.hook.proto.base-directory does not 
> exist
> WARN conf.HiveConf: HiveConf of name hive.strict.managed.tables does not exist
> WARN conf.HiveConf: HiveConf of name hive.stats.fetch.partition.stats does 
> not exist
> WARN conf.HiveConf: HiveConf of name hive.heapsize does not exist
> WARN conf.HiveConf: HiveConf of name hive.druid.select.distribute does not 
> exist
> WARN cli.LlapServiceDriver: Ignoring unknown llap server parameter: 
> [hive.aux.jars.path]
> WARN cli.LlapServiceDriver: Java versions might not match : 
> JAVA_HOME=[/usr/jdk64/jdk1.8.0_112],process jre=[/usr/jdk64/jdk1.8.0_112/jre]
> WARN conf.HiveConf: HiveConf of name hive.hook.proto.base-directory does not 
> exist
> WARN conf.HiveConf: HiveConf of name hive.strict.managed.tables does not exist
> WARN conf.HiveConf: HiveConf of name hive.stats.fetch.partition.stats does 
> not exist
> WARN conf.HiveConf: HiveConf of name hive.heapsize does not exist
> WARN conf.HiveConf: HiveConf of name hive.druid.select.distribute does not 
> exist
> WARN conf.HiveConf: HiveConf of name hive.hook.proto.base-directory does not 
> exist
> WARN conf.HiveConf: HiveConf of name hive.strict.managed.tables does not exist
> WARN conf.HiveConf: HiveConf of name hive.stats.fetch.partition.stats does 
> not exist
> WARN conf.HiveConf: HiveConf of name hive.heapsize does not exist
> WARN conf.HiveConf: HiveConf of name hive.druid.select.distribute does not 
> exist
> 11:32:54 Running as a child of LlapServiceDriver
> 11:32:54 Prepared the files
> 11:33:13 Packaged the files
> WARN curator.CuratorZookeeperClient: session timeout [1] is less than 
> connection timeout [15000]
> ERROR client.ServiceClient: Error on destroy 'llap0': not found.
> WARN client.ServiceClient: Property yarn.service.framework.path has a value 
> /bch/apps/3.0.0/yarn/service-dep.tar.gz, but is not a valid file
> 2018-09-12 19:33:17,385 - 
> 2018-09-12 19:33:17,385 - LLAP status command : 
> /usr/bch/current/hive-server2/bin/hive --service llapstatus -w -r 0.8 -i 2 -t 
> 400
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/usr/bch/3.0.0/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/usr/bch/3.0.0/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> WARN conf.HiveConf: HiveConf of name hive.hook.proto.base-directory does not 
> exist
> WARN conf.HiveConf: HiveConf of name hive.strict.managed.tables does not exist
> WARN conf.HiveConf: HiveConf of name hive.stats.fetch.partition.stats does 
> not exist
> WARN conf.HiveConf: HiveConf of 

[jira] [Comment Edited] (HIVE-20509) Plan: fix wasted memory in plans with large partition counts

2018-09-12 Thread Gopal V (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612987#comment-16612987
 ] 

Gopal V edited comment on HIVE-20509 at 9/13/18 4:20 AM:
-

[~b.maidics]: You are right, my calculation is wrong - I'm assuming the whole 
entry can be skipped (so 72 is what we save).

However, now that I think about it a bit more, I can't see why we end up with 
thousands of identical arraylist objects - we can generate default 1 element 
list for each alias with a more functional list object & save more space for 
large partitioned tables.

Basically instead of mutating it in-place, allocate a new one to add a new 
item, so that we can reuse the 1-element case object (this might help 
serializing that hashmap as well).


was (Author: gopalv):
[~b.maidics]: You are right, my calculation is wrong - I'm assuming the whole 
entry can be skipped (so 72 is what we save).

However, now that I think about it a bit more, I can't see why we end up with 
thousands of identical arraylist objects - we can generate default 1 element 
list for each alias with a CopyOnWriteArrayList() (well possibly something 
cheaper than that) & save more space for large partitioned tables.
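
A rough sketch (not the actual patch) of the copy-on-add idea described in the 
comment above: keep the common one-alias case as an exactly-sized singleton list 
and only copy when a second alias is added. Class and field names are 
hypothetical, and plain String keys stand in for Hadoop Path objects.

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PathToAliasSketch {
  private final Map<String, List<String>> pathToAliases = new HashMap<>();

  // Instead of mutating a default-capacity ArrayList (10 slots) in place,
  // allocate exactly-sized lists on each add; the frequent one-alias case
  // uses Collections.singletonList, which is much smaller than ArrayList.
  public void addPathToAlias(String path, String newAlias) {
    List<String> aliases = pathToAliases.get(path);
    if (aliases == null) {
      pathToAliases.put(path, Collections.singletonList(newAlias.intern()));
    } else {
      List<String> grown = new ArrayList<>(aliases.size() + 1);
      grown.addAll(aliases);
      grown.add(newAlias.intern());
      pathToAliases.put(path, grown);
    }
  }
}
{code}

The trade-off is an extra copy on the (rare) multi-alias path in exchange for a 
much smaller footprint for the common one-alias case.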



> Plan: fix wasted memory in plans with large partition counts
> 
>
> Key: HIVE-20509
> URL: https://issues.apache.org/jira/browse/HIVE-20509
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Gopal V
>Assignee: Barnabas Maidics
>Priority: Minor
>  Labels: newbie
> Attachments: HIVE-20509.2.patch, HIVE-20509.patch, after.png, 
> before.png
>
>
> {code}
>   public void addPathToAlias(Path path, String newAlias){
>     ArrayList<String> aliases = pathToAliases.get(path);
>     if (aliases == null) {
>       aliases = new ArrayList<>();
>       StringInternUtils.internUriStringsInPath(path);
>       pathToAliases.put(path, aliases);
>     }
>     aliases.add(newAlias.intern());
>   }
> {code}
> ArrayList::DEFAULT_CAPACITY is 10, so this wastes 500 bytes of memory due to 
> the {{new ArrayList<>();}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20549) Allow user set query tag, and kill query with tag

2018-09-12 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-20549:
--
Attachment: HIVE-20549.1.patch

> Allow user set query tag, and kill query with tag
> -
>
> Key: HIVE-20549
> URL: https://issues.apache.org/jira/browse/HIVE-20549
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-20549.1.patch
>
>
> HIVE-19924 added the capability for a replication job to set a query tag and 
> kill the replication distcp job with that tag. Here I make it more general: a 
> user can set an arbitrary "hive.query.tag" in a SQL script, and kill the query 
> with the tag. Hive will cancel the corresponding operation in HS2, along with 
> the Tez/MR application launched for the query. For example:
> {code}
> set hive.query.tag=mytag;
> select . -- long running query
> {code}
> In another session:
> {code}
> kill query 'mytag';
> {code}
> There are limitations in the implementation:
> 1. No tag duplication check. Nothing prevents conflicting tags for the same 
> user, and kill query will kill all queries sharing the same tag. However, kill 
> query will not kill queries from a different user unless the issuer is an 
> admin, so different users might end up sharing the same tag.
> 2. In a multi-HS2 environment, the kill statement should be issued to every 
> HS2 instance to make sure the corresponding operation is canceled. When 
> beeline/jdbc connects to HS2 the regular way (ZooKeeper URL), the session 
> connects to a random HS2, which might be different from the HS2 where the 
> query runs. Users can use HiveConnection.getAllUrls or beeline 
> --getUrlsFromBeelineSite (HIVE-20507) to get a list of all HS2 instances.
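
Since a kill statement only reaches the HS2 instance it is sent to, one 
client-side workaround is to issue it against every HS2 URL. Below is a minimal 
JDBC sketch under the assumption that the list of per-instance JDBC URLs has 
already been obtained (for example via HiveConnection.getAllUrls or beeline 
--getUrlsFromBeelineSite mentioned above); the URLs, credentials, and tag are 
placeholders.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.Arrays;
import java.util.List;

public class KillTaggedQuery {
  public static void main(String[] args) throws Exception {
    // Placeholder direct URLs, one per HS2 instance (not the ZooKeeper URL).
    List<String> hs2Urls = Arrays.asList(
        "jdbc:hive2://hs2-host-1:10000/default",
        "jdbc:hive2://hs2-host-2:10000/default");

    // Issue the kill on every instance so the one actually running the
    // tagged query cancels it; placeholder admin credentials.
    for (String url : hs2Urls) {
      try (Connection conn = DriverManager.getConnection(url, "admin", "");
           Statement stmt = conn.createStatement()) {
        stmt.execute("kill query 'mytag'");
      }
    }
  }
}
{code}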



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20549) Allow user set query tag, and kill query with tag

2018-09-12 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-20549:
--
Status: Patch Available  (was: Open)

> Allow user set query tag, and kill query with tag
> -
>
> Key: HIVE-20549
> URL: https://issues.apache.org/jira/browse/HIVE-20549
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-20549.1.patch
>
>
> HIVE-19924 added the capability for a replication job to set a query tag and 
> kill the replication distcp job with that tag. Here I make it more general: a 
> user can set an arbitrary "hive.query.tag" in a SQL script, and kill the query 
> with the tag. Hive will cancel the corresponding operation in HS2, along with 
> the Tez/MR application launched for the query. For example:
> {code}
> set hive.query.tag=mytag;
> select . -- long running query
> {code}
> In another session:
> {code}
> kill query 'mytag';
> {code}
> There are limitations in the implementation:
> 1. No tag duplication check. Nothing prevents conflicting tags for the same 
> user, and kill query will kill all queries sharing the same tag. However, kill 
> query will not kill queries from a different user unless the issuer is an 
> admin, so different users might end up sharing the same tag.
> 2. In a multi-HS2 environment, the kill statement should be issued to every 
> HS2 instance to make sure the corresponding operation is canceled. When 
> beeline/jdbc connects to HS2 the regular way (ZooKeeper URL), the session 
> connects to a random HS2, which might be different from the HS2 where the 
> query runs. Users can use HiveConnection.getAllUrls or beeline 
> --getUrlsFromBeelineSite (HIVE-20507) to get a list of all HS2 instances.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20549) Allow user set query tag, and kill query with tag

2018-09-12 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai reassigned HIVE-20549:
-


> Allow user set query tag, and kill query with tag
> -
>
> Key: HIVE-20549
> URL: https://issues.apache.org/jira/browse/HIVE-20549
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
>
> HIVE-19924 added the capability for a replication job to set a query tag and 
> kill the replication distcp job with that tag. Here I make it more general: a 
> user can set an arbitrary "hive.query.tag" in a SQL script, and kill the query 
> with the tag. Hive will cancel the corresponding operation in HS2, along with 
> the Tez/MR application launched for the query. For example:
> {code}
> set hive.query.tag=mytag;
> select . -- long running query
> {code}
> In another session:
> {code}
> kill query 'mytag';
> {code}
> There are limitations in the implementation:
> 1. No tag duplication check. Nothing prevents conflicting tags for the same 
> user, and kill query will kill all queries sharing the same tag. However, kill 
> query will not kill queries from a different user unless the issuer is an 
> admin, so different users might end up sharing the same tag.
> 2. In a multi-HS2 environment, the kill statement should be issued to every 
> HS2 instance to make sure the corresponding operation is canceled. When 
> beeline/jdbc connects to HS2 the regular way (ZooKeeper URL), the session 
> connects to a random HS2, which might be different from the HS2 where the 
> query runs. Users can use HiveConnection.getAllUrls or beeline 
> --getUrlsFromBeelineSite (HIVE-20507) to get a list of all HS2 instances.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-20509) Plan: fix wasted memory in plans with large partition counts

2018-09-12 Thread Gopal V (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612987#comment-16612987
 ] 

Gopal V edited comment on HIVE-20509 at 9/13/18 4:17 AM:
-

[~b.maidics]: You are right, my calculation is wrong - I'm assuming the whole 
entry can be skipped (so 72 is what we save).

However, now that I think about it a bit more, I can't see why we end up with 
thousands of identical arraylist objects - we can generate default 1 element 
list for each alias with a CopyOnWriteArrayList() (well possibly something 
cheaper than that) & save more space for large partitioned tables.




was (Author: gopalv):
[~b.maidics]: You are right, my calculation is wrong - I'm assuming the whole 
entry can be skipped (so 72 is what we save).

However, now that I think about it a bit more, I can't see why we end up with 
thousands of identical arraylist objects - we can generate default 1 element 
list for each alias with a CopyOnWriteArrayList() & save more space for large 
partitioned tables.



> Plan: fix wasted memory in plans with large partition counts
> 
>
> Key: HIVE-20509
> URL: https://issues.apache.org/jira/browse/HIVE-20509
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Gopal V
>Assignee: Barnabas Maidics
>Priority: Minor
>  Labels: newbie
> Attachments: HIVE-20509.2.patch, HIVE-20509.patch, after.png, 
> before.png
>
>
> {code}
>   public void addPathToAlias(Path path, String newAlias){
>     ArrayList<String> aliases = pathToAliases.get(path);
>     if (aliases == null) {
>       aliases = new ArrayList<>();
>       StringInternUtils.internUriStringsInPath(path);
>       pathToAliases.put(path, aliases);
>     }
>     aliases.add(newAlias.intern());
>   }
> {code}
> ArrayList::DEFAULT_CAPACITY is 10, so this wastes 500 bytes of memory due to 
> the {{new ArrayList<>();}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20509) Plan: fix wasted memory in plans with large partition counts

2018-09-12 Thread Gopal V (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612987#comment-16612987
 ] 

Gopal V commented on HIVE-20509:


[~b.maidics]: You are right, my calculation is wrong - I'm assuming the whole 
entry can be skipped (so 72 is what we save).

However, now that I think about it a bit more, I can't see why we end up with 
thousands of identical arraylist objects - we can generate default 1 element 
list for each alias with a CopyOnWriteArrayList() & save more space for large 
partitioned tables.



> Plan: fix wasted memory in plans with large partition counts
> 
>
> Key: HIVE-20509
> URL: https://issues.apache.org/jira/browse/HIVE-20509
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: Gopal V
>Assignee: Barnabas Maidics
>Priority: Minor
>  Labels: newbie
> Attachments: HIVE-20509.2.patch, HIVE-20509.patch, after.png, 
> before.png
>
>
> {code}
>   public void addPathToAlias(Path path, String newAlias){
>     ArrayList<String> aliases = pathToAliases.get(path);
>     if (aliases == null) {
>       aliases = new ArrayList<>();
>       StringInternUtils.internUriStringsInPath(path);
>       pathToAliases.put(path, aliases);
>     }
>     aliases.add(newAlias.intern());
>   }
> {code}
> ArrayList::DEFAULT_CAPACITY is 10, so this wastes 500 bytes of memory due to 
> the {{new ArrayList<>();}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HIVE-20543) Support replication of Materialized views

2018-09-12 Thread Sankar Hariappan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-20543 started by Sankar Hariappan.
---
> Support replication of Materialized views
> -
>
> Key: HIVE-20543
> URL: https://issues.apache.org/jira/browse/HIVE-20543
> Project: Hive
>  Issue Type: Sub-task
>  Components: Materialized views, repl
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, Replication
>
> Currently, materialized views are replicated but don't work if the DB is 
> renamed after load. Also, replication doesn't handle the ALTER MATERIALIZED 
> VIEW [db_name.]materialized_view_name REBUILD; command, so the MV remains 
> stale and out of sync with the source.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20423) Set NULLS LAST as the default null ordering

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612975#comment-16612975
 ] 

Hive QA commented on HIVE-20423:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
34s{color} | {color:blue} common in master has 64 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
5s{color} | {color:blue} ql in master has 2311 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
49s{color} | {color:red} ql: The patch generated 16 new + 576 unchanged - 26 
fixed = 592 total (was 602) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13748/dev-support/hive-personality.sh
 |
| git revision | master / a3b7a24 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13748/yetus/diff-checkstyle-ql.txt
 |
| modules | C: common itests/hive-blobstore ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13748/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Set NULLS LAST as the default null ordering
> ---
>
> Key: HIVE-20423
> URL: https://issues.apache.org/jira/browse/HIVE-20423
> Project: Hive
>  Issue Type: Bug
>Reporter: Teddy Choi
>Assignee: Teddy Choi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-20423.1.patch, HIVE-20423.2.patch, 
> HIVE-20423.3.patch, HIVE-20423.4.patch, HIVE-20423.4.patch, 
> HIVE-20423.5.patch, HIVE-20423.6.patch, HIVE-20423.7.patch, HIVE-20423.8.patch
>
>
> HIVE-20150 TopNKeyOperator pushdown can be more efficient if NULLS LAST 
> becomes the default null ordering.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20541) REPL DUMP on external table with add partition event throws NoSuchElementException.

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612963#comment-16612963
 ] 

Hive QA commented on HIVE-20541:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12939363/HIVE-20541.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14939 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13747/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13747/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13747/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12939363 - PreCommit-HIVE-Build

> REPL DUMP on external table with add partition event throws 
> NoSuchElementException.
> ---
>
> Key: HIVE-20541
> URL: https://issues.apache.org/jira/browse/HIVE-20541
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, Replication, pull-request-available
> Attachments: HIVE-20541.01.patch
>
>
> REPL DUMP on an external table with an add partition event throws 
> NoSuchElementException. We need to check hasNext on the file iterator before 
> accessing the next element.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18908) FULL OUTER JOIN to MapJoin

2018-09-12 Thread Teddy Choi (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612953#comment-16612953
 ] 

Teddy Choi commented on HIVE-18908:
---

+1 LGTM. Both failed tests ran successfully on my laptop, and the patch doesn't 
have any conflicts.

> FULL OUTER JOIN to MapJoin
> --
>
> Key: HIVE-18908
> URL: https://issues.apache.org/jira/browse/HIVE-18908
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: FULL OUTER MapJoin Code Changes.docx, 
> HIVE-18908.01.patch, HIVE-18908.02.patch, HIVE-18908.03.patch, 
> HIVE-18908.04.patch, HIVE-18908.05.patch, HIVE-18908.06.patch, 
> HIVE-18908.08.patch, HIVE-18908.09.patch, HIVE-18908.091.patch, 
> HIVE-18908.092.patch, HIVE-18908.093.patch, HIVE-18908.096.patch, 
> HIVE-18908.097.patch, HIVE-18908.098.patch, HIVE-18908.099.patch, 
> HIVE-18908.0991.patch, HIVE-18908.0992.patch, HIVE-18908.0993.patch, 
> HIVE-18908.0994.patch, HIVE-18908.0995.patch, HIVE-18908.0996.patch, 
> HIVE-18908.0997.patch, HIVE-18908.0998.patch, HIVE-18908.0999.patch, 
> HIVE-18908.09991.patch, HIVE-18908.09992.patch, HIVE-18908.09993.patch, 
> HIVE-18908.09994.patch, HIVE-18908.09995.patch, JOIN to MAPJOIN 
> Transformation.pdf, SHARED-MEMORY FULL OUTER MapJoin.pdf
>
>
> Currently, we do not support FULL OUTER JOIN in MapJoin.
> Rough TPC-DS timings run on laptop:
> (NOTE: Query 51 has PTF as a bigger serial portion -- Amdahl's law at play)
> FULL OUTER MapJoin OFF =  MergeJoin
> Query 51:
> o Vectorization OFF
> • FULL OUTER MapJoin OFF: 4:30 minutes
> • FULL OUTER MapJoin ON: 4:37 minutes
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 2:35 minutes
> • FULL OUTER MapJoin ON: 1:47 minutes
> Query 97:
> o Vectorization OFF
> • FULL OUTER MapJoin OFF: 2:37 minutes
> • FULL OUTER MapJoin ON: 2:42 minutes
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 1:17 minutes
> • FULL OUTER MapJoin ON: 0:06 minutes
> FULL OUTER Join 10,000,000 rows against 323,910 small table keys
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 14:56 minutes
> • FULL OUTER MapJoin ON: 1:45 minutes
> FULL OUTER Join 10,000,000 rows against 1,000 small table keys
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 12:37 minutes
> • FULL OUTER MapJoin ON: 1:38 minutes
> Hopefully, someone will do large scale cluster testing.  
> [DynamicPartitionedHashJoin] MapJoin should scale dramatically better than 
> [Sort] MergeJoin reduce-shuffle.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20547) HS2: support Tez sessions started by someone else (part 1)

2018-09-12 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-20547:

Attachment: HIVE-20547.patch

> HS2: support Tez sessions started by someone else (part 1)
> --
>
> Key: HIVE-20547
> URL: https://issues.apache.org/jira/browse/HIVE-20547
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-20547.patch
>
>
> The registry/configs/some code is based on a private patch by [~prasanth_j].
> The patch refactors tez pool session to use composition instead of 
> implementation inheritance from TezSessionState, to allow for two 
> implementations of TezSessionState.
> For now it's blocked on getClient API in Tez that will be available after 
> 0.9.3 release; however I commented out that path to check that refactoring 
> passes tests.
> When 0.9.3 becomes available, we can uncomment and commit.
> In part 2, we may add some tests, and also consider other changes that are 
> required for external sessions (e.g. KillQuery, where we cannot assume YARN 
> is present).
> We may also consider a WM change that allows for proportional session 
> distribution when the number of external sessions and the number of 
> admin-specified sessions don't match, or at least some validation to see 
> that the external sessions are available when applying a RP.
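
A schematic sketch of the composition-over-inheritance direction described 
above, with entirely hypothetical type names (the real patch's class layout may 
differ): the pool entry holds a session object rather than extending a concrete 
session class, so an HS2-started and an externally-started implementation can 
be swapped in.

{code:java}
// Hypothetical names; illustrates composition over inheritance only.
interface TezSession {
  void open();
  void close();
}

// Sessions that HS2 launches itself.
class Hs2StartedTezSession implements TezSession {
  public void open()  { /* launch a Tez AM */ }
  public void close() { /* shut the AM down */ }
}

// Sessions started by someone else; HS2 only attaches to an existing AM
// (e.g. via a getClient-style API) and never kills it on close.
class ExternalTezSession implements TezSession {
  public void open()  { /* attach to the existing AM */ }
  public void close() { /* detach, leave the AM running */ }
}

// The pool entry composes a TezSession instead of inheriting from a concrete
// class, so either implementation can back it.
class PoolSession {
  private final TezSession delegate;

  PoolSession(TezSession delegate) { this.delegate = delegate; }

  void open()  { delegate.open(); }
  void close() { delegate.close(); }
}
{code}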



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20547) HS2: support Tez sessions started by someone else (part 1)

2018-09-12 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-20547:
---


> HS2: support Tez sessions started by someone else (part 1)
> --
>
> Key: HIVE-20547
> URL: https://issues.apache.org/jira/browse/HIVE-20547
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-20547.patch
>
>
> The registry/configs/some code is based on a private patch by [~prasanth_j].
> The patch refactors tez pool session to use composition instead of 
> implementation inheritance from TezSessionState, to allow for two 
> implementations of TezSessionState.
> For now it's blocked on getClient API in Tez that will be available after 
> 0.9.3 release; however I commented out that path to check that refactoring 
> passes tests.
> When 0.9.3 becomes available, we can uncomment and commit.
> In part 2, we may add some tests, and also consider other changes that are 
> required for external sessions (e.g. KillQuery, where we cannot assume YARN 
> is present).
> We may also consider a WM change that allows for proportional session 
> distribution when the number of external sessions and the number of 
> admin-specified sessions don't match, or at least some validation to see 
> that the external sessions are available when applying a RP.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20541) REPL DUMP on external table with add partition event throws NoSuchElementException.

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612949#comment-16612949
 ] 

Hive QA commented on HIVE-20541:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
38s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
2s{color} | {color:blue} ql in master has 2311 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
17s{color} | {color:red} itests/hive-unit: The patch generated 1 new + 102 
unchanged - 0 fixed = 103 total (was 102) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13747/dev-support/hive-personality.sh
 |
| git revision | master / a3b7a24 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13747/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| modules | C: itests/hive-unit ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13747/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> REPL DUMP on external table with add partition event throws 
> NoSuchElementException.
> ---
>
> Key: HIVE-20541
> URL: https://issues.apache.org/jira/browse/HIVE-20541
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, Replication, pull-request-available
> Attachments: HIVE-20541.01.patch
>
>
> REPL dump on an external table with an add partition event throws 
> NoSuchElementException. We need to check hasNext() on the file iterator before 
> accessing the next element.
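For illustration only, a minimal sketch of the kind of hasNext() guard described above, using a plain Iterator rather than the actual dump code:

{code:java}
import java.util.Iterator;

class FileListingSketch {
  // Returning null instead of calling next() unconditionally avoids the
  // NoSuchElementException thrown when the iterator is empty or exhausted.
  static String nextFileOrNull(Iterator<String> files) {
    return (files != null && files.hasNext()) ? files.next() : null;
  }
}
{code}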



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20547) HS2: support Tez sessions started by someone else (part 1)

2018-09-12 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612939#comment-16612939
 ] 

Sergey Shelukhin commented on HIVE-20547:
-

[~prasanth_j] can you take a look? I also renamed unmanaged to external, 
because unmanaged sessions already have a different meaning in WM.

> HS2: support Tez sessions started by someone else (part 1)
> --
>
> Key: HIVE-20547
> URL: https://issues.apache.org/jira/browse/HIVE-20547
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-20547.patch
>
>
> The registry/configs/some code is based on a private patch by [~prasanth_j].
> The patch refactors the Tez pool session to use composition instead of 
> implementation inheritance from TezSessionState, to allow for two 
> implementations of TezSessionState.
> For now it's blocked on the getClient API in Tez, which will be available after 
> the 0.9.3 release; however, I commented out that path to check that the 
> refactoring passes tests.
> When 0.9.3 becomes available, we can uncomment and commit.
> In part 2, we may add some tests, and also consider other changes that are 
> required for external sessions (e.g. KillQuery, where we cannot assume YARN 
> is present).
> We may also consider a WM change that allows for proportional session 
> distribution when the number of external sessions and the number of 
> admin-specified sessions don't match, or at least some validation to see 
> that the external sessions are available when applying an RP.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17684) HoS memory issues with MapJoinMemoryExhaustionHandler

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612936#comment-16612936
 ] 

Hive QA commented on HIVE-17684:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12939352/HIVE-17684.07.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 14936 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=77)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join_without_localtask]
 (batchId=1)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_bucket_sort_convert_join]
 (batchId=57)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=13)
org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testKillQuery (batchId=251)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13746/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13746/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13746/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12939352 - PreCommit-HIVE-Build

> HoS memory issues with MapJoinMemoryExhaustionHandler
> -
>
> Key: HIVE-17684
> URL: https://issues.apache.org/jira/browse/HIVE-17684
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Misha Dmitriev
>Priority: Major
> Attachments: HIVE-17684.01.patch, HIVE-17684.02.patch, 
> HIVE-17684.03.patch, HIVE-17684.04.patch, HIVE-17684.05.patch, 
> HIVE-17684.06.patch, HIVE-17684.07.patch
>
>
> We have seen a number of memory issues due to the {{HashSinkOperator}}'s use of 
> the {{MapJoinMemoryExhaustionHandler}}. This handler is meant to detect 
> scenarios where the small table is taking too much space in memory, in which 
> case a {{MapJoinMemoryExhaustionError}} is thrown.
> The configs to control this logic are:
> {{hive.mapjoin.localtask.max.memory.usage}} (default 0.90)
> {{hive.mapjoin.followby.gby.localtask.max.memory.usage}} (default 0.55)
> The handler works by using the {{MemoryMXBean}} and uses the following logic 
> to estimate how much memory the {{HashMap}} is consuming: 
> {{MemoryMXBean#getHeapMemoryUsage().getUsed() / 
> MemoryMXBean#getHeapMemoryUsage().getMax()}}
> The issue is that {{MemoryMXBean#getHeapMemoryUsage().getUsed()}} can be 
> inaccurate. The value returned by this method includes all reachable and 
> unreachable memory on the heap, so there may be a bunch of garbage data, and 
> the JVM just hasn't taken the time to reclaim it all. This can lead to 
> intermittent failures of this check even though a simple GC would have 
> reclaimed enough space for the process to continue working.
> We should re-think the usage of {{MapJoinMemoryExhaustionHandler}} for HoS. 
> In Hive-on-MR this probably made sense to use because every Hive task was run 
> in a dedicated container, so a Hive Task could assume it created most of the 
> data on the heap. However, in Hive-on-Spark there can be multiple Hive Tasks 
> running in a single executor, each doing different things.
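For reference, a self-contained sketch of the heap-usage check described above, using only the JDK MemoryMXBean API; the threshold handling is illustrative, not the actual Hive handler code.

{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

class MapJoinMemoryCheckSketch {
  public static void main(String[] args) {
    double maxMemoryUsage = 0.90; // e.g. hive.mapjoin.localtask.max.memory.usage
    MemoryMXBean memoryMXBean = ManagementFactory.getMemoryMXBean();

    long used = memoryMXBean.getHeapMemoryUsage().getUsed();
    long max = memoryMXBean.getHeapMemoryUsage().getMax(); // may be -1 if undefined
    double percentage = max > 0 ? (double) used / max : 0.0;

    // "used" also counts unreachable, not-yet-collected objects, which is exactly
    // why this check can fire even though a GC would free enough space.
    if (percentage > maxMemoryUsage) {
      throw new RuntimeException(
          "Hash table loading exceeded memory limit: " + percentage);
    }
  }
}
{code}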



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17684) HoS memory issues with MapJoinMemoryExhaustionHandler

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612933#comment-16612933
 ] 

Hive QA commented on HIVE-17684:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
21s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
32s{color} | {color:blue} common in master has 64 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
0s{color} | {color:blue} ql in master has 2311 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m  
1s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
16s{color} | {color:red} common: The patch generated 3 new + 425 unchanged - 0 
fixed = 428 total (was 425) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
26s{color} | {color:red} root: The patch generated 3 new + 425 unchanged - 0 
fixed = 428 total (was 425) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m  
8s{color} | {color:red} ql generated 4 new + 2310 unchanged - 1 fixed = 2314 
total (was 2311) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 28s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Incorrect lazy initialization and update of static field 
org.apache.hadoop.hive.ql.exec.Operator.hiveGcTimeMonitor in 
org.apache.hadoop.hive.ql.exec.Operator.initialize(Configuration, 
ObjectInspector[])  At Operator.java:of static field 
org.apache.hadoop.hive.ql.exec.Operator.hiveGcTimeMonitor in 
org.apache.hadoop.hive.ql.exec.Operator.initialize(Configuration, 
ObjectInspector[])  At Operator.java:[lines 429-432] |
|  |  Write to static field 
org.apache.hadoop.hive.ql.exec.Operator.criticalGcTimePercentage from instance 
method org.apache.hadoop.hive.ql.exec.Operator.initialize(Configuration, 
ObjectInspector[])  At Operator.java:from instance method 
org.apache.hadoop.hive.ql.exec.Operator.initialize(Configuration, 
ObjectInspector[])  At Operator.java:[line 430] |
|  |  Write to static field 
org.apache.hadoop.hive.ql.exec.Operator.hiveGcTimeMonitor from instance method 
org.apache.hadoop.hive.ql.exec.Operator.initialize(Configuration, 
ObjectInspector[])  At Operator.java:from instance method 
org.apache.hadoop.hive.ql.exec.Operator.initialize(Configuration, 
ObjectInspector[])  At Operator.java:[line 432] |
|  |  Write to static field 
org.apache.hadoop.hive.ql.exec.Operator.lastAlertGcTimePercentage from instance 
method 
org.apache.hadoop.hive.ql.exec.Operator$HiveGcTimeMonitor$1.alert(GcTimeMonitor$GcData)
  At Operator.java:from instance method 

[jira] [Assigned] (HIVE-17041) Aggregate elimination with UNIQUE and NOT NULL column

2018-09-12 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg reassigned HIVE-17041:
--

Assignee: Vineet Garg

> Aggregate elimination with UNIQUE and NOT NULL column
> -
>
> Key: HIVE-17041
> URL: https://issues.apache.org/jira/browse/HIVE-17041
> Project: Hive
>  Issue Type: Sub-task
>  Components: Logical Optimizer
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Vineet Garg
>Priority: Major
>
> If columns are part of a GROUP BY expression and they are UNIQUE and do not 
> accept NULL values, i.e. PK or UK+NOTNULL, the _Aggregate_ operator can be 
> transformed into a Project operator, as each row will end up in a different 
> group.
> For instance, given that _pk_ is the PRIMARY KEY for the table, the GROUP BY 
> could be eliminated for the following query:
> {code:sql}
> SELECT pk, value1
> FROM table_1
> GROUP BY value1, pk, value2;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19814) RPC Server port is always random for spark

2018-09-12 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-19814:

   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks Bharath for the contribution!

> RPC Server port is always random for spark
> --
>
> Key: HIVE-19814
> URL: https://issues.apache.org/jira/browse/HIVE-19814
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 2.3.0, 3.0.0, 2.4.0, 4.0.0
>Reporter: bounkong khamphousone
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-19814.1.patch, HIVE-19814.2.patch, 
> HIVE-19814.3.patch
>
>
> The RPC server port is always a random one. In fact, the problem is in 
> RpcConfiguration.HIVE_SPARK_RSC_CONFIGS, which doesn't include 
> SPARK_RPC_SERVER_PORT.
>  
> I've found this issue while trying to get hive-on-spark running inside 
> docker.
>  
> HIVE_SPARK_RSC_CONFIGS is used by HiveSparkClientFactory.initiateSparkConf 
> > SparkSessionManagerImpl.setup, and the latter calls 
> SparkClientFactory.initialize(conf), which initializes the RPC server. This 
> RPCServer is then used to create the SparkClient, which uses the RPC server 
> port as the --remote-port arg. Since initiateSparkConf ignores 
> SPARK_RPC_SERVER_PORT, the port will always be random.
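A rough sketch of the intended behavior, assuming a hypothetical property name and helper; the real fix presumably just includes SPARK_RPC_SERVER_PORT in RpcConfiguration.HIVE_SPARK_RSC_CONFIGS so initiateSparkConf stops dropping it.

{code:java}
import java.util.HashMap;
import java.util.Map;

class RpcPortConfigSketch {
  // Assumed property name, for illustration only.
  static final String RPC_SERVER_PORT = "hive.spark.client.rpc.server.port";

  // Copy the configured RPC server port (if any) into the Spark client conf so the
  // launched process gets a fixed --remote-port instead of a random one.
  static Map<String, String> initiateSparkConfSketch(Map<String, String> hiveConf) {
    Map<String, String> sparkConf = new HashMap<>();
    String port = hiveConf.get(RPC_SERVER_PORT);
    if (port != null) {
      sparkConf.put(RPC_SERVER_PORT, port);
    }
    return sparkConf;
  }
}
{code}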



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20526) Add test case for HIVE-20489

2018-09-12 Thread Janaki Lahorani (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612912#comment-16612912
 ] 

Janaki Lahorani commented on HIVE-20526:


The test failure is not related to this patch.

> Add test case for HIVE-20489
> 
>
> Key: HIVE-20526
> URL: https://issues.apache.org/jira/browse/HIVE-20526
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-20526.1.patch
>
>
> Add a test case for the issue discussed in HIVE-20489.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20516) alter table drop partition should be compatible with old metastore, as partition pruner does

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612904#comment-16612904
 ] 

Hive QA commented on HIVE-20516:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12939345/temp.diff

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14936 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniDruidKafkaCliDriver.testCliDriver[druidkafkamini_basic]
 (batchId=264)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13745/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13745/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13745/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12939345 - PreCommit-HIVE-Build

> alter table drop partition should be compatible with old metastore, as 
> partition pruner does
> 
>
> Key: HIVE-20516
> URL: https://issues.apache.org/jira/browse/HIVE-20516
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 3.0.0
> Environment: all
>Reporter: jinzheng
>Assignee: jinzheng
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: temp.diff
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
>     After the change in HIVE-4914, we always push the partition expression 
> to the metastore, to avoid filtering partitions by partition names.
>     HIVE-4914 also added some protection in the partition pruner, in case the 
> metastore does not have the get_partitions_by_expr API.
>     Therefore, we should also add similar protection to another calling 
> point, when dealing with "alter table drop partition".
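A hedged sketch of the kind of fallback meant here; the interface and method names are made up for illustration, and the real protection would live in the drop-partition code path (and, per HIVE-4914, in the partition pruner).

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

class DropPartitionFallbackSketch {
  interface MetastoreCalls {
    List<String> partitionsByExpr(byte[] expr) throws UnsupportedOperationException;
    List<String> partitionNames();
  }

  // Prefer the expression-based call; if an old metastore cannot evaluate the
  // expression, fall back to listing partition names and filtering client-side.
  static List<String> partitionsToDrop(MetastoreCalls ms, byte[] expr,
      Predicate<String> nameFilter) {
    try {
      return ms.partitionsByExpr(expr);
    } catch (UnsupportedOperationException e) {
      List<String> result = new ArrayList<>();
      for (String name : ms.partitionNames()) {
        if (nameFilter.test(name)) {
          result.add(name);
        }
      }
      return result;
    }
  }
}
{code}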



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20519) Remove 30m min value for hive.spark.session.timeout

2018-09-12 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-20519:

Status: Patch Available  (was: Open)

> Remove 30m min value for hive.spark.session.timeout
> ---
>
> Key: HIVE-20519
> URL: https://issues.apache.org/jira/browse/HIVE-20519
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-20519.1.patch
>
>
> In HIVE-14162 we added the config {{hive.spark.session.timeout}}, which 
> provides a way to time out Spark sessions that are active for a long period 
> of time. The config has a lower bound of 30m, which we should remove. It 
> should be possible for users to configure this value so the HoS session is 
> closed as soon as the query is complete.
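For example, once the floor is gone a user could set a short timeout; a minimal sketch using the generic Hadoop Configuration API (which HiveConf extends), with an arbitrary value:

{code:java}
import org.apache.hadoop.conf.Configuration;

class SparkSessionTimeoutExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // With the 30m lower bound removed, a small value like this becomes legal,
    // letting the HoS session shut down shortly after the query finishes.
    conf.set("hive.spark.session.timeout", "5m");
    System.out.println(conf.get("hive.spark.session.timeout"));
  }
}
{code}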



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20519) Remove 30m min value for hive.spark.session.timeout

2018-09-12 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-20519:

Attachment: HIVE-20519.1.patch

> Remove 30m min value for hive.spark.session.timeout
> ---
>
> Key: HIVE-20519
> URL: https://issues.apache.org/jira/browse/HIVE-20519
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-20519.1.patch
>
>
> In HIVE-14162 we added the config {{hive.spark.session.timeout}}, which 
> provides a way to time out Spark sessions that are active for a long period 
> of time. The config has a lower bound of 30m, which we should remove. It 
> should be possible for users to configure this value so the HoS session is 
> closed as soon as the query is complete.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20516) alter table drop partition should be compatible with old metastore, as partition pruner does

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612884#comment-16612884
 ] 

Hive QA commented on HIVE-20516:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
4s{color} | {color:blue} ql in master has 2311 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
45s{color} | {color:red} ql: The patch generated 1 new + 399 unchanged - 1 
fixed = 400 total (was 400) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13745/dev-support/hive-personality.sh
 |
| git revision | master / 84e5b93 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13745/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13745/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> alter table drop partition should be compatible with old metastore, as 
> partition pruner does
> 
>
> Key: HIVE-20516
> URL: https://issues.apache.org/jira/browse/HIVE-20516
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 3.0.0
> Environment: all
>Reporter: jinzheng
>Assignee: jinzheng
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: temp.diff
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
>     After the change in HIVE-4914, we always push the partition expression 
> to the metastore, to avoid filtering partitions by partition names.
>     HIVE-4914 also added some protection in the partition pruner, in case the 
> metastore does not have the get_partitions_by_expr API.
>     Therefore, we should also add similar protection to another calling 
> point, when dealing with "alter table drop partition".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20483) Really move metastore common classes into metastore-common

2018-09-12 Thread Alexander Kolbasov (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612866#comment-16612866
 ] 

Alexander Kolbasov commented on HIVE-20483:
---

[~ngangam]

The goal is to have the following structure:
 # hive-metastore-server contains everything needed for the metastore server 
itself. Nothing there should be needed by clients (unless they use the embedded 
metastore).
 # hive-metastore-common contains shared code that is used by both client and 
server.
 # The plan is to introduce a hive-metastore-client module later, which will have 
everything needed for a metastore client; it isn't done yet, so for now every 
client should just use hive-metastore-common.

All the classes that I moved into common are used outside of the metastore server 
one way or another. The goal is to untangle all such dependencies eventually, 
but for now all such classes just live in hive-metastore-common and will be 
required by the client as well.

> Really move metastore common classes into metastore-common
> --
>
> Key: HIVE-20483
> URL: https://issues.apache.org/jira/browse/HIVE-20483
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Affects Versions: 3.0.1, 4.0.0
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
>Priority: Major
> Attachments: HIVE-20483.01.patch, HIVE-20483.02.patch, 
> HIVE-20483.03.patch, HIVE-20483.04.patch, HIVE-20483.05.patch, 
> HIVE-20483.06.patch
>
>
>  HIVE-20388 patch was supposed to move a bunch of files from metastore-server 
> to metastore-common but for some reason it didn't happen, so now these files 
> should be moved.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20483) Really move metastore common classes into metastore-common

2018-09-12 Thread Alexander Kolbasov (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612863#comment-16612863
 ] 

Alexander Kolbasov commented on HIVE-20483:
---

[~pvary] Yes, my goal is to untangle as much as possible, but I am doing this in 
pieces.

> Really move metastore common classes into metastore-common
> --
>
> Key: HIVE-20483
> URL: https://issues.apache.org/jira/browse/HIVE-20483
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Affects Versions: 3.0.1, 4.0.0
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
>Priority: Major
> Attachments: HIVE-20483.01.patch, HIVE-20483.02.patch, 
> HIVE-20483.03.patch, HIVE-20483.04.patch, HIVE-20483.05.patch, 
> HIVE-20483.06.patch
>
>
>  HIVE-20388 patch was supposed to move a bunch of files from metastore-server 
> to metastore-common but for some reason it didn't happen, so now these files 
> should be moved.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20526) Add test case for HIVE-20489

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612861#comment-16612861
 ] 

Hive QA commented on HIVE-20526:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12939344/HIVE-20526.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14937 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.jdbc.miniHS2.TestHs2ConnectionMetricsBinary.testOpenConnectionMetrics
 (batchId=255)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13744/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13744/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13744/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12939344 - PreCommit-HIVE-Build

> Add test case for HIVE-20489
> 
>
> Key: HIVE-20526
> URL: https://issues.apache.org/jira/browse/HIVE-20526
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-20526.1.patch
>
>
> Add a test case for the issue discussed in HIVE-20489.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20306) Implement projection spec for fetching only requested fields from partitions

2018-09-12 Thread Alexander Kolbasov (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612860#comment-16612860
 ] 

Alexander Kolbasov commented on HIVE-20306:
---

[~aihuaxu] Here is a summary of my changes:
 # I changed the Thrift definition to move \{catalog, dbName, tableName} from the 
spec structure to the request structure
 # Updated the RawStore API to avoid exposing directSQL parameters at the API 
level - it is now strictly the internal business of ObjectStore
 # Added extra unit tests
 # Modified the equals() method for the new key class used in the multimap - the 
previous equals() was using hashCode for equality (see the sketch below).
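A minimal illustration of the pitfall in item 4, using a hypothetical key class (not the class from the patch): equal hash codes do not imply equal keys, so equals() must compare the fields themselves.

{code:java}
import java.util.Objects;

// Hypothetical key class, for illustration only.
final class PartitionKeySketch {
  private final String dbName;
  private final String tableName;

  PartitionKeySketch(String dbName, String tableName) {
    this.dbName = dbName;
    this.tableName = tableName;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (!(o instanceof PartitionKeySketch)) {
      return false;
    }
    PartitionKeySketch other = (PartitionKeySketch) o;
    // Compare the fields; "hashCode() == other.hashCode()" would also accept
    // unrelated keys whose hashes merely collide.
    return Objects.equals(dbName, other.dbName)
        && Objects.equals(tableName, other.tableName);
  }

  @Override
  public int hashCode() {
    return Objects.hash(dbName, tableName);
  }
}
{code}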

> Implement projection spec for fetching only requested fields from partitions
> 
>
> Key: HIVE-20306
> URL: https://issues.apache.org/jira/browse/HIVE-20306
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Vihang Karajgaonkar
>Assignee: Alexander Kolbasov
>Priority: Major
> Attachments: HIVE-20306.02.patch, HIVE-20306.03.patch, 
> HIVE-20306.04.patch, HIVE-20306.05.patch, HIVE-20306.06.patch, 
> HIVE-20306.07.patch, HIVE-20306.08.patch, HIVE-20306.09.patch, 
> HIVE-20306.10.patch, HIVE-20306.11.patch, HIVE-20306.12.patch, 
> HIVE-20306.13.patch, HIVE-20306.14.patch, HIVE-20306.15.patch, 
> HIVE-20306.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20546) Upgrade to Druid 0.13.0

2018-09-12 Thread Nishant Bangarwa (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612858#comment-16612858
 ] 

Nishant Bangarwa commented on HIVE-20546:
-

Attached a work-in-progress patch. 
Main changes include:
# Upgrade the Druid version
# Package renames from io.druid to org.apache.druid
# Some test results changed due to double precision.


> Upgrade to Druid 0.13.0
> ---
>
> Key: HIVE-20546
> URL: https://issues.apache.org/jira/browse/HIVE-20546
> Project: Hive
>  Issue Type: Task
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20546.patch
>
>
> This task is to upgrade to Druid 0.13.0 when it is released. Note that it 
> will hopefully be the first Apache release for Druid.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20546) Upgrade to Druid 0.13.0

2018-09-12 Thread Nishant Bangarwa (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishant Bangarwa updated HIVE-20546:

Attachment: HIVE-20546.patch

> Upgrade to Druid 0.13.0
> ---
>
> Key: HIVE-20546
> URL: https://issues.apache.org/jira/browse/HIVE-20546
> Project: Hive
>  Issue Type: Task
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20546.patch
>
>
> This task is to upgrade to Druid 0.13.0 when it is released. Note that it 
> will hopefully be the first Apache release for Druid.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20546) Upgrade to Druid 0.13.0

2018-09-12 Thread Nishant Bangarwa (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishant Bangarwa reassigned HIVE-20546:
---


> Upgrade to Druid 0.13.0
> ---
>
> Key: HIVE-20546
> URL: https://issues.apache.org/jira/browse/HIVE-20546
> Project: Hive
>  Issue Type: Task
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
>
> This task is to upgrade to Druid 0.13.0 when it is released. Note that it 
> will hopefully be the first Apache release for Druid.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20545) Exclude large-sized parameters from serialization of Table and Partition thrift objects in HMS notifications

2018-09-12 Thread Bharathkrishna Guruvayoor Murali (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharathkrishna Guruvayoor Murali updated HIVE-20545:

Affects Version/s: 4.0.0
   3.1.0

> Exclude large-sized parameters from serialization of Table and Partition 
> thrift objects in HMS notifications
> 
>
> Key: HIVE-20545
> URL: https://issues.apache.org/jira/browse/HIVE-20545
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.1.0, 4.0.0
>Reporter: Bharathkrishna Guruvayoor Murali
>Priority: Major
>
> Clients can add large-sized parameters to Table/Partition objects, so we need 
> to support adding regex patterns through HiveConf that match the parameters to 
> be filtered from Table and Partition objects before serialization in HMS 
> notifications.
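A self-contained sketch of the filtering described above; the patterns here are placeholders, and the real patch would read them from HiveConf.

{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Pattern;

class NotificationParamFilterSketch {
  // Drop any table/partition parameter whose key matches one of the exclude
  // patterns before the object is serialized into an HMS notification message.
  static Map<String, String> filterParameters(Map<String, String> params,
      List<Pattern> excludePatterns) {
    Map<String, String> filtered = new HashMap<>();
    for (Map.Entry<String, String> e : params.entrySet()) {
      boolean excluded = excludePatterns.stream()
          .anyMatch(p -> p.matcher(e.getKey()).matches());
      if (!excluded) {
        filtered.put(e.getKey(), e.getValue());
      }
    }
    return filtered;
  }
}
{code}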



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20524) Schema Evolution checking is broken in going from Hive version 2 to version 3 for ALTER TABLE VARCHAR to DECIMAL

2018-09-12 Thread Jason Dere (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612830#comment-16612830
 ] 

Jason Dere commented on HIVE-20524:
---

+1 pending test results

> Schema Evolution checking is broken in going from Hive version 2 to version 3 
> for ALTER TABLE VARCHAR to DECIMAL
> 
>
> Key: HIVE-20524
> URL: https://issues.apache.org/jira/browse/HIVE-20524
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-20524.01.patch, HIVE-20524.02.patch
>
>
> Issue that started this JIRA:
> {code}
> create external table varchar_decimal (c1 varchar(25));
> alter table varchar_decimal change c1 c1 decimal(31,0);
> ERROR : FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. The following 
> columns have types incompatible with the existing columns in their respective 
> positions :
> c1
> {code}
> There appear to be 2 issues here:
> 1) When hive.metastore.disallow.incompatible.col.type.changes is true (the 
> default) we only allow StringFamily (STRING, CHAR, VARCHAR) conversion to a 
> number that can hold the largest numbers.  The theory is that we don't want 
> the data loss you would get by converting the StringFamily field into 
> integers, etc.  In Hive version 2 the hierarchy of numbers had DECIMAL at the 
> top.  At some point during Hive version 2 we realized this was incorrect and 
> put DOUBLE at the top.
> However, the Hive version 2 TypeInfoUtils.implicitConversion method allows 
> StringFamily conversion to either DOUBLE or DECIMAL.
> The checkColTypeChangeCompatible method of the new 
> org.apache.hadoop.hive.metastore.ColumnType class under the Hive version 3 
> hive-standalone-metadata-server only allows DOUBLE.
> This JIRA fixes that problem.
> 2) Also, the checkColTypeChangeCompatible method lost a version 2 series bug 
> fix that drops CHAR/VARCHAR (and DECIMAL, I think) type decorations when 
> checking for Schema Evolution compatibility.  So, when that code checks 
> whether a data type "varchar(25)" is StringFamily, it fails because the "(25)" 
> didn't get removed properly.
> This JIRA fixes issue #2 also.
> NOTE: Hive version 1 did undecoratedTypeName(oldType) and Hive version 2 
> performed the logic in TypeInfoUtils.implicitConvertible on the 
> PrimitiveCategory, not the raw type string.
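A hedged sketch of the two fixes described above; the names are illustrative and do not reflect the actual ColumnType code: strip the type decorations before the family check, and accept DOUBLE (as well as DECIMAL) as a widening target for StringFamily.

{code:java}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

class ColTypeChangeSketch {
  private static final Set<String> STRING_FAMILY =
      new HashSet<>(Arrays.asList("string", "char", "varchar"));

  // Drop decorations such as "(25)" or "(31,0)" so "varchar(25)" becomes "varchar".
  static String undecorate(String type) {
    int paren = type.indexOf('(');
    return (paren < 0 ? type : type.substring(0, paren)).trim().toLowerCase();
  }

  // Illustrative compatibility rule: StringFamily may widen to DOUBLE or DECIMAL.
  static boolean isCompatibleStringToNumber(String oldType, String newType) {
    String from = undecorate(oldType);
    String to = undecorate(newType);
    return STRING_FAMILY.contains(from)
        && (to.equals("double") || to.equals("decimal"));
  }

  public static void main(String[] args) {
    // varchar(25) -> decimal(31,0) should be accepted once both fixes are in.
    System.out.println(isCompatibleStringToNumber("varchar(25)", "decimal(31,0)"));
  }
}
{code}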



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20306) Implement projection spec for fetching only requested fields from partitions

2018-09-12 Thread Aihua Xu (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612828#comment-16612828
 ] 

Aihua Xu commented on HIVE-20306:
-

[~akolb] Can you summarize your additional change on top of Vihang's original 
change or upload the patch on Vihang's RB? I reviewed Vihang's patch and want 
to see the difference. Thanks.

> Implement projection spec for fetching only requested fields from partitions
> 
>
> Key: HIVE-20306
> URL: https://issues.apache.org/jira/browse/HIVE-20306
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Vihang Karajgaonkar
>Assignee: Alexander Kolbasov
>Priority: Major
> Attachments: HIVE-20306.02.patch, HIVE-20306.03.patch, 
> HIVE-20306.04.patch, HIVE-20306.05.patch, HIVE-20306.06.patch, 
> HIVE-20306.07.patch, HIVE-20306.08.patch, HIVE-20306.09.patch, 
> HIVE-20306.10.patch, HIVE-20306.11.patch, HIVE-20306.12.patch, 
> HIVE-20306.13.patch, HIVE-20306.14.patch, HIVE-20306.15.patch, 
> HIVE-20306.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20526) Add test case for HIVE-20489

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612822#comment-16612822
 ] 

Hive QA commented on HIVE-20526:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 2s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
55s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  1m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13744/dev-support/hive-personality.sh
 |
| git revision | master / 84e5b93 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13744/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> Add test case for HIVE-20489
> 
>
> Key: HIVE-20526
> URL: https://issues.apache.org/jira/browse/HIVE-20526
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-20526.1.patch
>
>
> Add a test case for the issue discussed in HIVE-20489.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20420) Provide a fallback authorizer when no other authorizer is in use

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612813#comment-16612813
 ] 

Hive QA commented on HIVE-20420:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12939331/HIVE-20420.5.patch

{color:green}SUCCESS:{color} +1 due to 8 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 14944 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[fallbackauth_addjar]
 (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[fallbackauth_compile]
 (batchId=98)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[fallbackauth_dfs]
 (batchId=97)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13743/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13743/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13743/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12939331 - PreCommit-HIVE-Build

> Provide a fallback authorizer when no other authorizer is in use
> 
>
> Key: HIVE-20420
> URL: https://issues.apache.org/jira/browse/HIVE-20420
> Project: Hive
>  Issue Type: New Feature
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-20420.1.patch, HIVE-20420.2.patch, 
> HIVE-20420.3.patch, HIVE-20420.4.patch, HIVE-20420.5.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20095) Fix jdbc external table feature

2018-09-12 Thread Jonathan Doron (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Doron updated HIVE-20095:
--
Attachment: HIVE-20095.7.patch
Status: Patch Available  (was: Open)

[~jcamachorodriguez]

It seems like all tests have finally passed; can you please apply the patch?

(I have fixed the checkstyle issues)

> Fix jdbc external table feature
> ---
>
> Key: HIVE-20095
> URL: https://issues.apache.org/jira/browse/HIVE-20095
> Project: Hive
>  Issue Type: Bug
>Reporter: Jonathan Doron
>Assignee: Jonathan Doron
>Priority: Major
> Attachments: HIVE-20095.1.patch, HIVE-20095.2.patch, 
> HIVE-20095.3.patch, HIVE-20095.4.patch, HIVE-20095.5.patch, 
> HIVE-20095.6.patch, HIVE-20095.7.patch
>
>
> It seems like the committed code for HIVE-19161 
> (7584b3276bebf64aa006eaa162c0a6264d8fcb56) reverted some of the HIVE-18423 
> updates, and therefore some of the external table queries are not working 
> correctly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20095) Fix jdbc external table feature

2018-09-12 Thread Jonathan Doron (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Doron updated HIVE-20095:
--
Status: Open  (was: Patch Available)

> Fix jdbc external table feature
> ---
>
> Key: HIVE-20095
> URL: https://issues.apache.org/jira/browse/HIVE-20095
> Project: Hive
>  Issue Type: Bug
>Reporter: Jonathan Doron
>Assignee: Jonathan Doron
>Priority: Major
> Attachments: HIVE-20095.1.patch, HIVE-20095.2.patch, 
> HIVE-20095.3.patch, HIVE-20095.4.patch, HIVE-20095.5.patch, HIVE-20095.6.patch
>
>
> It seems like the committed code for HIVE-19161 
> (7584b3276bebf64aa006eaa162c0a6264d8fcb56) reverted some of the HIVE-18423 
> updates, and therefore some of the external table queries are not working 
> correctly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20524) Schema Evolution checking is broken in going from Hive version 2 to version 3 for ALTER TABLE VARCHAR to DECIMAL

2018-09-12 Thread Matt McCline (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-20524:

Status: In Progress  (was: Patch Available)

> Schema Evolution checking is broken in going from Hive version 2 to version 3 
> for ALTER TABLE VARCHAR to DECIMAL
> 
>
> Key: HIVE-20524
> URL: https://issues.apache.org/jira/browse/HIVE-20524
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-20524.01.patch, HIVE-20524.02.patch
>
>
> Issue that started this JIRA:
> {code}
> create external table varchar_decimal (c1 varchar(25));
> alter table varchar_decimal change c1 c1 decimal(31,0);
> ERROR : FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. The following 
> columns have types incompatible with the existing columns in their respective 
> positions :
> c1
> {code}
> There appear to be 2 issues here:
> 1) When hive.metastore.disallow.incompatible.col.type.changes is true (the 
> default) we only allow StringFamily (STRING, CHAR, VARCHAR) conversion to a 
> number that can hold the largest numbers.  The theory is that we don't want 
> the data loss you would get by converting the StringFamily field into 
> integers, etc.  In Hive version 2 the hierarchy of numbers had DECIMAL at the 
> top.  At some point during Hive version 2 we realized this was incorrect and 
> put DOUBLE at the top.
> However, the Hive version 2 TypeInfoUtils.implicitConversion method allows 
> StringFamily conversion to either DOUBLE or DECIMAL.
> The checkColTypeChangeCompatible method of the new 
> org.apache.hadoop.hive.metastore.ColumnType class under the Hive version 3 
> hive-standalone-metadata-server only allows DOUBLE.
> This JIRA fixes that problem.
> 2) Also, the checkColTypeChangeCompatible method lost a version 2 series bug 
> fix that drops CHAR/VARCHAR (and DECIMAL, I think) type decorations when 
> checking for Schema Evolution compatibility.  So, when that code checks 
> whether a data type "varchar(25)" is StringFamily, it fails because the "(25)" 
> didn't get removed properly.
> This JIRA fixes issue #2 also.
> NOTE: Hive version 1 did undecoratedTypeName(oldType) and Hive version 2 
> performed the logic in TypeInfoUtils.implicitConvertible on the 
> PrimitiveCategory, not the raw type string.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20524) Schema Evolution checking is broken in going from Hive version 2 to version 3 for ALTER TABLE VARCHAR to DECIMAL

2018-09-12 Thread Matt McCline (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-20524:

Status: Patch Available  (was: In Progress)

> Schema Evolution checking is broken in going from Hive version 2 to version 3 
> for ALTER TABLE VARCHAR to DECIMAL
> 
>
> Key: HIVE-20524
> URL: https://issues.apache.org/jira/browse/HIVE-20524
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-20524.01.patch, HIVE-20524.02.patch
>
>
> Issue that started this JIRA:
> {code}
> create external table varchar_decimal (c1 varchar(25));
> alter table varchar_decimal change c1 c1 decimal(31,0);
> ERROR : FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. The following 
> columns have types incompatible with the existing columns in their respective 
> positions :
> c1
> {code}
> There appear to be 2 issues here:
> 1) When hive.metastore.disallow.incompatible.col.type.changes is true (the 
> default) we only allow StringFamily (STRING, CHAR, VARCHAR) conversion to a 
> number that can hold the largest numbers.  The theory is that we don't want 
> the data loss you would get by converting the StringFamily field into 
> integers, etc.  In Hive version 2 the hierarchy of numbers had DECIMAL at the 
> top.  At some point during Hive version 2 we realized this was incorrect and 
> put DOUBLE at the top.
> However, the Hive version 2 TypeInfoUtils.implicitConversion method allows 
> StringFamily conversion to either DOUBLE or DECIMAL.
> The checkColTypeChangeCompatible method of the new 
> org.apache.hadoop.hive.metastore.ColumnType class under the Hive version 3 
> hive-standalone-metadata-server only allows DOUBLE.
> This JIRA fixes that problem.
> 2) Also, the checkColTypeChangeCompatible method lost a version 2 series bug 
> fix that drops CHAR/VARCHAR (and DECIMAL, I think) type decorations when 
> checking for Schema Evolution compatibility.  So, when that code checks 
> whether a data type "varchar(25)" is StringFamily, it fails because the "(25)" 
> didn't get removed properly.
> This JIRA fixes issue #2 also.
> NOTE: Hive version 1 did undecoratedTypeName(oldType) and Hive version 2 
> performed the logic in TypeInfoUtils.implicitConvertible on the 
> PrimitiveCategory, not the raw type string.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20524) Schema Evolution checking is broken in going from Hive version 2 to version 3 for ALTER TABLE VARCHAR to DECIMAL

2018-09-12 Thread Matt McCline (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-20524:

Attachment: HIVE-20524.02.patch

> Schema Evolution checking is broken in going from Hive version 2 to version 3 
> for ALTER TABLE VARCHAR to DECIMAL
> 
>
> Key: HIVE-20524
> URL: https://issues.apache.org/jira/browse/HIVE-20524
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-20524.01.patch, HIVE-20524.02.patch
>
>
> Issue that started this JIRA:
> {code}
> create external table varchar_decimal (c1 varchar(25));
> alter table varchar_decimal change c1 c1 decimal(31,0);
> ERROR : FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. The following 
> columns have types incompatible with the existing columns in their respective 
> positions :
> c1
> {code}
> There appear to be 2 issues here:
> 1) When hive.metastore.disallow.incompatible.col.type.changes is true (the 
> default) we only allow StringFamily (STRING, CHAR, VARCHAR) conversion to a 
> number that can hold the largest numbers.  The theory is that we don't want 
> the data loss you would get by converting the StringFamily field into 
> integers, etc.  In Hive version 2 the hierarchy of numbers had DECIMAL at the 
> top.  At some point during Hive version 2 we realized this was incorrect and 
> put DOUBLE at the top.
> However, the Hive version 2 TypeInfoUtils.implicitConversion method allows 
> StringFamily conversion to either DOUBLE or DECIMAL.
> The checkColTypeChangeCompatible method of the new 
> org.apache.hadoop.hive.metastore.ColumnType class under the Hive version 3 
> hive-standalone-metadata-server only allows DOUBLE.
> This JIRA fixes that problem.
> 2) Also, the checkColTypeChangeCompatible method lost a version 2 series bug 
> fix that drops CHAR/VARCHAR (and DECIMAL, I think) type decorations when 
> checking for Schema Evolution compatibility.  So, when that code checks 
> whether a data type "varchar(25)" is StringFamily, it fails because the "(25)" 
> didn't get removed properly.
> This JIRA fixes issue #2 also.
> NOTE: Hive version 1 did undecoratedTypeName(oldType) and Hive version 2 
> performed the logic in TypeInfoUtils.implicitConvertible on the 
> PrimitiveCategory, not the raw type string.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20420) Provide a fallback authorizer when no other authorizer is in use

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612787#comment-16612787
 ] 

Hive QA commented on HIVE-20420:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
 8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
51s{color} | {color:blue} ql in master has 2311 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
41s{color} | {color:red} ql: The patch generated 12 new + 1 unchanged - 0 fixed 
= 13 total (was 1) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  findbugs  
checkstyle  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13743/dev-support/hive-personality.sh
 |
| git revision | master / 84e5b93 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13743/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13743/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> Provide a fallback authorizer when no other authorizer is in use
> 
>
> Key: HIVE-20420
> URL: https://issues.apache.org/jira/browse/HIVE-20420
> Project: Hive
>  Issue Type: New Feature
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-20420.1.patch, HIVE-20420.2.patch, 
> HIVE-20420.3.patch, HIVE-20420.4.patch, HIVE-20420.5.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18908) FULL OUTER JOIN to MapJoin

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612771#comment-16612771
 ] 

Hive QA commented on HIVE-18908:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
58s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
25s{color} | {color:blue} storage-api in master has 48 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
30s{color} | {color:blue} common in master has 64 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
44s{color} | {color:blue} serde in master has 195 extant Findbugs warnings. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
51s{color} | {color:red} branch/itests/hive-jmh cannot run convertXmlToText 
from findbugs {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
54s{color} | {color:blue} ql in master has 2311 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
51s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} The patch storage-api passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} The patch common passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} The patch serde passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} The patch . passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} itests/hive-jmh: The patch generated 0 new + 11 
unchanged - 6 fixed = 11 total (was 17) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
35s{color} | {color:red} ql: The patch generated 312 new + 3561 unchanged - 174 
fixed = 3873 total (was 3735) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 13 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
22s{color} | {color:red} patch/storage-api cannot run setBugDatabaseInfo from 
findbugs {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
33s{color} | {color:red} patch/common cannot run setBugDatabaseInfo from 
findbugs {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
40s{color} | {color:red} patch/serde cannot run setBugDatabaseInfo from 
findbugs {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
23s{color} | {color:red} patch/itests/hive-jmh cannot run setBugDatabaseInfo 
from findbugs {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  8m 
19s{color} | {color:red} ql in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  9m 
19s{color} | {color:red} root generated 2 new + 386 unchanged - 0 fixed = 388 
total (was 386) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
14s{color} | 

[jira] [Updated] (HIVE-18908) FULL OUTER JOIN to MapJoin

2018-09-12 Thread Matt McCline (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-18908:

Status: Patch Available  (was: In Progress)

> FULL OUTER JOIN to MapJoin
> --
>
> Key: HIVE-18908
> URL: https://issues.apache.org/jira/browse/HIVE-18908
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: FULL OUTER MapJoin Code Changes.docx, 
> HIVE-18908.01.patch, HIVE-18908.02.patch, HIVE-18908.03.patch, 
> HIVE-18908.04.patch, HIVE-18908.05.patch, HIVE-18908.06.patch, 
> HIVE-18908.08.patch, HIVE-18908.09.patch, HIVE-18908.091.patch, 
> HIVE-18908.092.patch, HIVE-18908.093.patch, HIVE-18908.096.patch, 
> HIVE-18908.097.patch, HIVE-18908.098.patch, HIVE-18908.099.patch, 
> HIVE-18908.0991.patch, HIVE-18908.0992.patch, HIVE-18908.0993.patch, 
> HIVE-18908.0994.patch, HIVE-18908.0995.patch, HIVE-18908.0996.patch, 
> HIVE-18908.0997.patch, HIVE-18908.0998.patch, HIVE-18908.0999.patch, 
> HIVE-18908.09991.patch, HIVE-18908.09992.patch, HIVE-18908.09993.patch, 
> HIVE-18908.09994.patch, HIVE-18908.09995.patch, JOIN to MAPJOIN 
> Transformation.pdf, SHARED-MEMORY FULL OUTER MapJoin.pdf
>
>
> Currently, we do not support FULL OUTER JOIN in MapJoin.
> Rough TPC-DS timings run on laptop:
> (NOTE: Query 51 has PTF as a bigger serial portion -- Amdahl's law at play)
> FULL OUTER MapJoin OFF =  MergeJoin
> Query 51:
> o Vectorization OFF
> • FULL OUTER MapJoin OFF: 4:30 minutes
> • FULL OUTER MapJoin ON: 4:37 minutes
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 2:35 minutes
> • FULL OUTER MapJoin ON: 1:47 minutes
> Query 97:
> o Vectorization OFF
> • FULL OUTER MapJoin OFF: 2:37 minutes
> • FULL OUTER MapJoin ON: 2:42 minutes
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 1:17 minutes
> • FULL OUTER MapJoin ON: 0:06 minutes
> FULL OUTER Join 10,000,000 rows against 323,910 small table keys
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 14:56 minutes
> • FULL OUTER MapJoin ON: 1:45 minutes
> FULL OUTER Join 10,000,000 rows against 1,000 small table keys
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 12:37 minutes
> • FULL OUTER MapJoin ON: 1:38 minutes
> Hopefully, someone will do large scale cluster testing.  
> [DynamicPartitionedHashJoin] MapJoin should scale dramatically better than 
> [Sort] MergeJoin reduce-shuffle.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18908) FULL OUTER JOIN to MapJoin

2018-09-12 Thread Matt McCline (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-18908:

Attachment: HIVE-18908.09995.patch

> FULL OUTER JOIN to MapJoin
> --
>
> Key: HIVE-18908
> URL: https://issues.apache.org/jira/browse/HIVE-18908
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: FULL OUTER MapJoin Code Changes.docx, 
> HIVE-18908.01.patch, HIVE-18908.02.patch, HIVE-18908.03.patch, 
> HIVE-18908.04.patch, HIVE-18908.05.patch, HIVE-18908.06.patch, 
> HIVE-18908.08.patch, HIVE-18908.09.patch, HIVE-18908.091.patch, 
> HIVE-18908.092.patch, HIVE-18908.093.patch, HIVE-18908.096.patch, 
> HIVE-18908.097.patch, HIVE-18908.098.patch, HIVE-18908.099.patch, 
> HIVE-18908.0991.patch, HIVE-18908.0992.patch, HIVE-18908.0993.patch, 
> HIVE-18908.0994.patch, HIVE-18908.0995.patch, HIVE-18908.0996.patch, 
> HIVE-18908.0997.patch, HIVE-18908.0998.patch, HIVE-18908.0999.patch, 
> HIVE-18908.09991.patch, HIVE-18908.09992.patch, HIVE-18908.09993.patch, 
> HIVE-18908.09994.patch, HIVE-18908.09995.patch, JOIN to MAPJOIN 
> Transformation.pdf, SHARED-MEMORY FULL OUTER MapJoin.pdf
>
>
> Currently, we do not support FULL OUTER JOIN in MapJoin.
> Rough TPC-DS timings run on laptop:
> (NOTE: Query 51 has PTF as a bigger serial portion -- Amdahl's law at play)
> FULL OUTER MapJoin OFF =  MergeJoin
> Query 51:
> o Vectorization OFF
> • FULL OUTER MapJoin OFF: 4:30 minutes
> • FULL OUTER MapJoin ON: 4:37 minutes
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 2:35 minutes
> • FULL OUTER MapJoin ON: 1:47 minutes
> Query 97:
> o Vectorization OFF
> • FULL OUTER MapJoin OFF: 2:37 minutes
> • FULL OUTER MapJoin ON: 2:42 minutes
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 1:17 minutes
> • FULL OUTER MapJoin ON: 0:06 minutes
> FULL OUTER Join 10,000,000 rows against 323,910 small table keys
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 14:56 minutes
> • FULL OUTER MapJoin ON: 1:45 minutes
> FULL OUTER Join 10,000,000 rows against 1,000 small table keys
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 12:37 minutes
> • FULL OUTER MapJoin ON: 1:38 minutes
> Hopefully, someone will do large scale cluster testing.  
> [DynamicPartitionedHashJoin] MapJoin should scale dramatically better than 
> [Sort] MergeJoin reduce-shuffle.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18908) FULL OUTER JOIN to MapJoin

2018-09-12 Thread Matt McCline (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-18908:

Status: In Progress  (was: Patch Available)

> FULL OUTER JOIN to MapJoin
> --
>
> Key: HIVE-18908
> URL: https://issues.apache.org/jira/browse/HIVE-18908
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: FULL OUTER MapJoin Code Changes.docx, 
> HIVE-18908.01.patch, HIVE-18908.02.patch, HIVE-18908.03.patch, 
> HIVE-18908.04.patch, HIVE-18908.05.patch, HIVE-18908.06.patch, 
> HIVE-18908.08.patch, HIVE-18908.09.patch, HIVE-18908.091.patch, 
> HIVE-18908.092.patch, HIVE-18908.093.patch, HIVE-18908.096.patch, 
> HIVE-18908.097.patch, HIVE-18908.098.patch, HIVE-18908.099.patch, 
> HIVE-18908.0991.patch, HIVE-18908.0992.patch, HIVE-18908.0993.patch, 
> HIVE-18908.0994.patch, HIVE-18908.0995.patch, HIVE-18908.0996.patch, 
> HIVE-18908.0997.patch, HIVE-18908.0998.patch, HIVE-18908.0999.patch, 
> HIVE-18908.09991.patch, HIVE-18908.09992.patch, HIVE-18908.09993.patch, 
> HIVE-18908.09994.patch, JOIN to MAPJOIN Transformation.pdf, SHARED-MEMORY 
> FULL OUTER MapJoin.pdf
>
>
> Currently, we do not support FULL OUTER JOIN in MapJoin.
> Rough TPC-DS timings run on laptop:
> (NOTE: Query 51 has PTF as a bigger serial portion -- Amdahl's law at play)
> FULL OUTER MapJoin OFF =  MergeJoin
> Query 51:
> o Vectorization OFF
> • FULL OUTER MapJoin OFF: 4:30 minutes
> • FULL OUTER MapJoin ON: 4:37 minutes
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 2:35 minutes
> • FULL OUTER MapJoin ON: 1:47 minutes
> Query 97:
> o Vectorization OFF
> • FULL OUTER MapJoin OFF: 2:37 minutes
> • FULL OUTER MapJoin ON: 2:42 minutes
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 1:17 minutes
> • FULL OUTER MapJoin ON: 0:06 minutes
> FULL OUTER Join 10,000,000 rows against 323,910 small table keys
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 14:56 minutes
> • FULL OUTER MapJoin ON: 1:45 minutes
> FULL OUTER Join 10,000,000 rows against 1,000 small table keys
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 12:37 minutes
> • FULL OUTER MapJoin ON: 1:38 minutes
> Hopefully, someone will do large scale cluster testing.  
> [DynamicPartitionedHashJoin] MapJoin should scale dramatically better than 
> [Sort] MergeJoin reduce-shuffle.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18908) FULL OUTER JOIN to MapJoin

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612741#comment-16612741
 ] 

Hive QA commented on HIVE-18908:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12939329/HIVE-18908.09994.patch

{color:green}SUCCESS:{color} +1 due to 64 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 14961 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.jdbc.miniHS2.TestHs2ConnectionMetricsHttp.testOpenConnectionMetrics
 (batchId=255)
org.apache.hive.service.auth.TestCustomAuthentication.org.apache.hive.service.auth.TestCustomAuthentication
 (batchId=247)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13742/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13742/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13742/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12939329 - PreCommit-HIVE-Build

> FULL OUTER JOIN to MapJoin
> --
>
> Key: HIVE-18908
> URL: https://issues.apache.org/jira/browse/HIVE-18908
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: FULL OUTER MapJoin Code Changes.docx, 
> HIVE-18908.01.patch, HIVE-18908.02.patch, HIVE-18908.03.patch, 
> HIVE-18908.04.patch, HIVE-18908.05.patch, HIVE-18908.06.patch, 
> HIVE-18908.08.patch, HIVE-18908.09.patch, HIVE-18908.091.patch, 
> HIVE-18908.092.patch, HIVE-18908.093.patch, HIVE-18908.096.patch, 
> HIVE-18908.097.patch, HIVE-18908.098.patch, HIVE-18908.099.patch, 
> HIVE-18908.0991.patch, HIVE-18908.0992.patch, HIVE-18908.0993.patch, 
> HIVE-18908.0994.patch, HIVE-18908.0995.patch, HIVE-18908.0996.patch, 
> HIVE-18908.0997.patch, HIVE-18908.0998.patch, HIVE-18908.0999.patch, 
> HIVE-18908.09991.patch, HIVE-18908.09992.patch, HIVE-18908.09993.patch, 
> HIVE-18908.09994.patch, JOIN to MAPJOIN Transformation.pdf, SHARED-MEMORY 
> FULL OUTER MapJoin.pdf
>
>
> Currently, we do not support FULL OUTER JOIN in MapJoin.
> Rough TPC-DS timings run on laptop:
> (NOTE: Query 51 has PTF as a bigger serial portion -- Amdahl's law at play)
> FULL OUTER MapJoin OFF =  MergeJoin
> Query 51:
> o Vectorization OFF
> • FULL OUTER MapJoin OFF: 4:30 minutes
> • FULL OUTER MapJoin ON: 4:37 minutes
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 2:35 minutes
> • FULL OUTER MapJoin ON: 1:47 minutes
> Query 97:
> o Vectorization OFF
> • FULL OUTER MapJoin OFF: 2:37 minutes
> • FULL OUTER MapJoin ON: 2:42 minutes
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 1:17 minutes
> • FULL OUTER MapJoin ON: 0:06 minutes
> FULL OUTER Join 10,000,000 rows against 323,910 small table keys
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 14:56 minutes
> • FULL OUTER MapJoin ON: 1:45 minutes
> FULL OUTER Join 10,000,000 rows against 1,000 small table keys
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 12:37 minutes
> • FULL OUTER MapJoin ON: 1:38 minutes
> Hopefully, someone will do large scale cluster testing.  
> [DynamicPartitionedHashJoin] MapJoin should scale dramatically better than 
> [Sort] MergeJoin reduce-shuffle.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20536) Add Surrogate Keys function to Hive

2018-09-12 Thread Miklos Gergely (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612709#comment-16612709
 ] 

Miklos Gergely commented on HIVE-20536:
---

Fixed; writeId is now set directly on GenericUDFSurrogateKey. Will add unit 
tests too.

> Add Surrogate Keys function to Hive
> ---
>
> Key: HIVE-20536
> URL: https://issues.apache.org/jira/browse/HIVE-20536
> Project: Hive
>  Issue Type: Task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-20536.01.patch, HIVE-20536.02.patch
>
>
> Surrogate keys are the ability to generate and use unique integers for each row 
> in a table. If we have that ability, then in conjunction with the default clause 
> we get surrogate key functionality. Consider the following DDL:
> create table t1 (a string, b bigint default unique_long());
> We already have the default clause, wherein you can specify a function to provide 
> values. So, what we need is a UDF which can generate unique longs for each row 
> across queries for a table. 
> The idea is to use write_id. This is a column in the metastore table TXN_COMPONENTS 
> whose value is determined at compile time and used during query execution. Each 
> query execution generates a new write_id, so we can seed the UDF with this value 
> during compilation.
> Then we statically allocate ranges for each task from which it can draw the next 
> long. Say we divvy up the 64-bit write_id so that 24 bits keep their original use 
> (txns), the next 16 bits are used for task attempts, and the remaining 24 bits 
> generate a new long for each row. This implies we can allow 17M txns, 65K tasks 
> and 17M rows per task. If any of those limits is hit, we can fail the query.
> Implementation-wise: serialize the write_id in the UDF's initialize(). Then during 
> execute() we find out which task_attempt the current task is, use it along with 
> the write_id to get a starting long, and give out a new value on each invocation 
> of execute().
> Here we are assuming the write_id can be determined at compile time, which should 
> be the case, but we need to figure out how to get a handle to it.
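
As a worked illustration of the 24/16/24 packing described above, here is a minimal, 
self-contained sketch; the class, constants, and method names are hypothetical 
stand-ins, not Hive's actual GenericUDFSurrogateKey implementation.

{code:java}
// Illustrative sketch only (hypothetical names); assumes a 24-bit write_id part,
// a 16-bit task-attempt part and a 24-bit per-row counter, as described above.
public class SurrogateKeySketch {
  private static final int ROW_BITS = 24;
  private static final int TASK_BITS = 16;
  private static final long ROW_LIMIT = 1L << ROW_BITS;       // ~17M rows per task
  private static final long TASK_LIMIT = 1L << TASK_BITS;     // 65K task attempts
  private static final long WRITE_ID_LIMIT = 1L << 24;        // ~17M txns

  private final long writeId;      // seeded at compile time, per the description
  private final long taskAttempt;  // discovered at execute() time
  private long rowCounter = 0;

  public SurrogateKeySketch(long writeId, long taskAttempt) {
    if (writeId >= WRITE_ID_LIMIT || taskAttempt >= TASK_LIMIT) {
      throw new IllegalStateException("write_id or task attempt out of range");
    }
    this.writeId = writeId;
    this.taskAttempt = taskAttempt;
  }

  /** Returns the next unique long for this (writeId, taskAttempt) pair. */
  public long next() {
    if (rowCounter >= ROW_LIMIT) {
      throw new IllegalStateException("row limit per task exceeded");
    }
    // Pack: [ 24-bit writeId | 16-bit taskAttempt | 24-bit row counter ]
    // (the top bit can land in the sign bit; values stay unique, just not
    // necessarily positive in this toy layout)
    return (writeId << (TASK_BITS + ROW_BITS))
        | (taskAttempt << ROW_BITS)
        | rowCounter++;
  }

  public static void main(String[] args) {
    SurrogateKeySketch gen = new SurrogateKeySketch(42L, 7L);
    System.out.println(gen.next());
    System.out.println(gen.next());  // strictly increasing within one task attempt
  }
}
{code}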



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20536) Add Surrogate Keys function to Hive

2018-09-12 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-20536:
--
Attachment: HIVE-20536.02.patch

> Add Surrogate Keys function to Hive
> ---
>
> Key: HIVE-20536
> URL: https://issues.apache.org/jira/browse/HIVE-20536
> Project: Hive
>  Issue Type: Task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-20536.01.patch, HIVE-20536.02.patch
>
>
> Surrogate keys are the ability to generate and use unique integers for each row 
> in a table. If we have that ability, then in conjunction with the default clause 
> we get surrogate key functionality. Consider the following DDL:
> create table t1 (a string, b bigint default unique_long());
> We already have the default clause, wherein you can specify a function to provide 
> values. So, what we need is a UDF which can generate unique longs for each row 
> across queries for a table. 
> The idea is to use write_id. This is a column in the metastore table TXN_COMPONENTS 
> whose value is determined at compile time and used during query execution. Each 
> query execution generates a new write_id, so we can seed the UDF with this value 
> during compilation.
> Then we statically allocate ranges for each task from which it can draw the next 
> long. Say we divvy up the 64-bit write_id so that 24 bits keep their original use 
> (txns), the next 16 bits are used for task attempts, and the remaining 24 bits 
> generate a new long for each row. This implies we can allow 17M txns, 65K tasks 
> and 17M rows per task. If any of those limits is hit, we can fail the query.
> Implementation-wise: serialize the write_id in the UDF's initialize(). Then during 
> execute() we find out which task_attempt the current task is, use it along with 
> the write_id to get a starting long, and give out a new value on each invocation 
> of execute().
> Here we are assuming the write_id can be determined at compile time, which should 
> be the case, but we need to figure out how to get a handle to it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20536) Add Surrogate Keys function to Hive

2018-09-12 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-20536:
--
Status: Patch Available  (was: Open)

> Add Surrogate Keys function to Hive
> ---
>
> Key: HIVE-20536
> URL: https://issues.apache.org/jira/browse/HIVE-20536
> Project: Hive
>  Issue Type: Task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-20536.01.patch, HIVE-20536.02.patch
>
>
> Surrogate keys are the ability to generate and use unique integers for each row 
> in a table. If we have that ability, then in conjunction with the default clause 
> we get surrogate key functionality. Consider the following DDL:
> create table t1 (a string, b bigint default unique_long());
> We already have the default clause, wherein you can specify a function to provide 
> values. So, what we need is a UDF which can generate unique longs for each row 
> across queries for a table. 
> The idea is to use write_id. This is a column in the metastore table TXN_COMPONENTS 
> whose value is determined at compile time and used during query execution. Each 
> query execution generates a new write_id, so we can seed the UDF with this value 
> during compilation.
> Then we statically allocate ranges for each task from which it can draw the next 
> long. Say we divvy up the 64-bit write_id so that 24 bits keep their original use 
> (txns), the next 16 bits are used for task attempts, and the remaining 24 bits 
> generate a new long for each row. This implies we can allow 17M txns, 65K tasks 
> and 17M rows per task. If any of those limits is hit, we can fail the query.
> Implementation-wise: serialize the write_id in the UDF's initialize(). Then during 
> execute() we find out which task_attempt the current task is, use it along with 
> the write_id to get a starting long, and give out a new value on each invocation 
> of execute().
> Here we are assuming the write_id can be determined at compile time, which should 
> be the case, but we need to figure out how to get a handle to it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20536) Add Surrogate Keys function to Hive

2018-09-12 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-20536:
--
Status: Open  (was: Patch Available)

> Add Surrogate Keys function to Hive
> ---
>
> Key: HIVE-20536
> URL: https://issues.apache.org/jira/browse/HIVE-20536
> Project: Hive
>  Issue Type: Task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-20536.01.patch, HIVE-20536.02.patch
>
>
> Surrogate keys are the ability to generate and use unique integers for each row 
> in a table. If we have that ability, then in conjunction with the default clause 
> we get surrogate key functionality. Consider the following DDL:
> create table t1 (a string, b bigint default unique_long());
> We already have the default clause, wherein you can specify a function to provide 
> values. So, what we need is a UDF which can generate unique longs for each row 
> across queries for a table. 
> The idea is to use write_id. This is a column in the metastore table TXN_COMPONENTS 
> whose value is determined at compile time and used during query execution. Each 
> query execution generates a new write_id, so we can seed the UDF with this value 
> during compilation.
> Then we statically allocate ranges for each task from which it can draw the next 
> long. Say we divvy up the 64-bit write_id so that 24 bits keep their original use 
> (txns), the next 16 bits are used for task attempts, and the remaining 24 bits 
> generate a new long for each row. This implies we can allow 17M txns, 65K tasks 
> and 17M rows per task. If any of those limits is hit, we can fail the query.
> Implementation-wise: serialize the write_id in the UDF's initialize(). Then during 
> execute() we find out which task_attempt the current task is, use it along with 
> the write_id to get a starting long, and give out a new value on each invocation 
> of execute().
> Here we are assuming the write_id can be determined at compile time, which should 
> be the case, but we need to figure out how to get a handle to it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19814) RPC Server port is always random for spark

2018-09-12 Thread Bharathkrishna Guruvayoor Murali (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612699#comment-16612699
 ] 

Bharathkrishna Guruvayoor Murali commented on HIVE-19814:
-

Yes, the test failure looks unrelated.

> RPC Server port is always random for spark
> --
>
> Key: HIVE-19814
> URL: https://issues.apache.org/jira/browse/HIVE-19814
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 2.3.0, 3.0.0, 2.4.0, 4.0.0
>Reporter: bounkong khamphousone
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-19814.1.patch, HIVE-19814.2.patch, 
> HIVE-19814.3.patch
>
>
> The RPC server port is always a random one. The problem is that 
> RpcConfiguration.HIVE_SPARK_RSC_CONFIGS doesn't include 
> SPARK_RPC_SERVER_PORT.
>  
> I found this issue while trying to make Hive-on-Spark run inside 
> Docker.
>  
> HIVE_SPARK_RSC_CONFIGS is used by HiveSparkClientFactory.initiateSparkConf 
> > SparkSessionManagerImpl.setup, and the latter calls 
> SparkClientFactory.initialize(conf), which initializes the RPC server. This 
> RPC server is then used to create the Spark client, which uses the RPC server 
> port as the --remote-port arg. Since initiateSparkConf ignores 
> SPARK_RPC_SERVER_PORT, the port will always be random.
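
A small, self-contained sketch of the failure mode described above; the key names 
and the plain Map stand in for the real RpcConfiguration / HiveSparkClientFactory 
classes and are made up for illustration. If the key carrying the server port is 
not in the set of forwarded configs, the RPC server ends up on an ephemeral 
(random) port.

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class RpcPortSketch {
  // Hypothetical stand-ins for the whitelist of configs forwarded to the RPC server.
  // The bug described above is that the real whitelist omitted the server-port key.
  static final Set<String> FORWARDED_KEYS_WITHOUT_PORT =
      Set.of("rpc.server.address", "rpc.client.connect.timeout");
  static final Set<String> FORWARDED_KEYS_WITH_PORT =
      Set.of("rpc.server.address", "rpc.client.connect.timeout", "rpc.server.port");

  static int resolvePort(Map<String, String> hiveConf, Set<String> forwardedKeys) {
    // Copy only the whitelisted keys, mimicking how the client conf is filtered.
    Map<String, String> rscConf = new HashMap<>();
    for (String key : forwardedKeys) {
      if (hiveConf.containsKey(key)) {
        rscConf.put(key, hiveConf.get(key));
      }
    }
    // Port 0 means "bind to a random ephemeral port".
    return Integer.parseInt(rscConf.getOrDefault("rpc.server.port", "0"));
  }

  public static void main(String[] args) {
    Map<String, String> hiveConf = Map.of("rpc.server.port", "30000");
    System.out.println(resolvePort(hiveConf, FORWARDED_KEYS_WITHOUT_PORT)); // 0 -> random port
    System.out.println(resolvePort(hiveConf, FORWARDED_KEYS_WITH_PORT));    // 30000 -> fixed port
  }
}
{code}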



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20536) Add Surrogate Keys function to Hive

2018-09-12 Thread Ashutosh Chauhan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612695#comment-16612695
 ] 

Ashutosh Chauhan commented on HIVE-20536:
-

Adding tableDesc to GenericUDF is not a good idea. It's a public interface, and 
exposing internal structures there isn't useful. Instead, in genFileSinkDesc(), 
test for the surrogate-key UDF and, if found, set the writeId directly on that UDF.
If a qtest is not possible, then let's write a JUnit test for the UDF and mock the 
Context object if needed.
Also, can you create an RB for this?
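
A rough, self-contained sketch of the wiring suggested here, with hypothetical 
stand-in types (this is not the real genFileSinkDesc() or GenericUDFSurrogateKey 
code): the planner-side code that already knows the writeId looks for the 
surrogate-key UDF among the default-value expressions and sets the value directly 
on it, instead of exposing tableDesc through the public UDF interface.

{code:java}
import java.util.List;

// Hypothetical stand-ins for the real Hive classes, used only to show the idea.
interface GenericUdf {}

class SurrogateKeyUdf implements GenericUdf {
  private long writeId;
  void setWriteId(long writeId) { this.writeId = writeId; }
  long getWriteId() { return writeId; }
}

public class WriteIdWiringSketch {
  /**
   * Scan the UDFs produced for the default constraints; when the surrogate-key
   * UDF is found, push the compile-time writeId onto it directly.
   */
  static void wireWriteId(List<? extends GenericUdf> defaultValueUdfs, long writeId) {
    for (GenericUdf udf : defaultValueUdfs) {
      if (udf instanceof SurrogateKeyUdf) {
        ((SurrogateKeyUdf) udf).setWriteId(writeId);
      }
    }
  }

  public static void main(String[] args) {
    SurrogateKeyUdf udf = new SurrogateKeyUdf();
    wireWriteId(List.of(udf), 12345L);
    System.out.println(udf.getWriteId()); // 12345
  }
}
{code}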

> Add Surrogate Keys function to Hive
> ---
>
> Key: HIVE-20536
> URL: https://issues.apache.org/jira/browse/HIVE-20536
> Project: Hive
>  Issue Type: Task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-20536.01.patch
>
>
> Surrogate keys are the ability to generate and use unique integers for each row 
> in a table. If we have that ability, then in conjunction with the default clause 
> we get surrogate key functionality. Consider the following DDL:
> create table t1 (a string, b bigint default unique_long());
> We already have the default clause, wherein you can specify a function to provide 
> values. So, what we need is a UDF which can generate unique longs for each row 
> across queries for a table. 
> The idea is to use write_id. This is a column in the metastore table TXN_COMPONENTS 
> whose value is determined at compile time and used during query execution. Each 
> query execution generates a new write_id, so we can seed the UDF with this value 
> during compilation.
> Then we statically allocate ranges for each task from which it can draw the next 
> long. Say we divvy up the 64-bit write_id so that 24 bits keep their original use 
> (txns), the next 16 bits are used for task attempts, and the remaining 24 bits 
> generate a new long for each row. This implies we can allow 17M txns, 65K tasks 
> and 17M rows per task. If any of those limits is hit, we can fail the query.
> Implementation-wise: serialize the write_id in the UDF's initialize(). Then during 
> execute() we find out which task_attempt the current task is, use it along with 
> the write_id to get a starting long, and give out a new value on each invocation 
> of execute().
> Here we are assuming the write_id can be determined at compile time, which should 
> be the case, but we need to figure out how to get a handle to it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20536) Add Surrogate Keys function to Hive

2018-09-12 Thread Miklos Gergely (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612677#comment-16612677
 ] 

Miklos Gergely commented on HIVE-20536:
---

No q test was added, as the task ID is not available in q tests.

> Add Surrogate Keys function to Hive
> ---
>
> Key: HIVE-20536
> URL: https://issues.apache.org/jira/browse/HIVE-20536
> Project: Hive
>  Issue Type: Task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-20536.01.patch
>
>
> Surrogate keys are the ability to generate and use unique integers for each row 
> in a table. If we have that ability, then in conjunction with the default clause 
> we get surrogate key functionality. Consider the following DDL:
> create table t1 (a string, b bigint default unique_long());
> We already have the default clause, wherein you can specify a function to provide 
> values. So, what we need is a UDF which can generate unique longs for each row 
> across queries for a table. 
> The idea is to use write_id. This is a column in the metastore table TXN_COMPONENTS 
> whose value is determined at compile time and used during query execution. Each 
> query execution generates a new write_id, so we can seed the UDF with this value 
> during compilation.
> Then we statically allocate ranges for each task from which it can draw the next 
> long. Say we divvy up the 64-bit write_id so that 24 bits keep their original use 
> (txns), the next 16 bits are used for task attempts, and the remaining 24 bits 
> generate a new long for each row. This implies we can allow 17M txns, 65K tasks 
> and 17M rows per task. If any of those limits is hit, we can fail the query.
> Implementation-wise: serialize the write_id in the UDF's initialize(). Then during 
> execute() we find out which task_attempt the current task is, use it along with 
> the write_id to get a starting long, and give out a new value on each invocation 
> of execute().
> Here we are assuming the write_id can be determined at compile time, which should 
> be the case, but we need to figure out how to get a handle to it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20536) Add Surrogate Keys function to Hive

2018-09-12 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-20536:
--
Attachment: HIVE-20536.01.patch

> Add Surrogate Keys function to Hive
> ---
>
> Key: HIVE-20536
> URL: https://issues.apache.org/jira/browse/HIVE-20536
> Project: Hive
>  Issue Type: Task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-20536.01.patch
>
>
> Surrogate keys are the ability to generate and use unique integers for each row 
> in a table. If we have that ability, then in conjunction with the default clause 
> we get surrogate key functionality. Consider the following DDL:
> create table t1 (a string, b bigint default unique_long());
> We already have the default clause, wherein you can specify a function to provide 
> values. So, what we need is a UDF which can generate unique longs for each row 
> across queries for a table. 
> The idea is to use write_id. This is a column in the metastore table TXN_COMPONENTS 
> whose value is determined at compile time and used during query execution. Each 
> query execution generates a new write_id, so we can seed the UDF with this value 
> during compilation.
> Then we statically allocate ranges for each task from which it can draw the next 
> long. Say we divvy up the 64-bit write_id so that 24 bits keep their original use 
> (txns), the next 16 bits are used for task attempts, and the remaining 24 bits 
> generate a new long for each row. This implies we can allow 17M txns, 65K tasks 
> and 17M rows per task. If any of those limits is hit, we can fail the query.
> Implementation-wise: serialize the write_id in the UDF's initialize(). Then during 
> execute() we find out which task_attempt the current task is, use it along with 
> the write_id to get a starting long, and give out a new value on each invocation 
> of execute().
> Here we are assuming the write_id can be determined at compile time, which should 
> be the case, but we need to figure out how to get a handle to it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20536) Add Surrogate Keys function to Hive

2018-09-12 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-20536:
--
Status: Patch Available  (was: Open)

> Add Surrogate Keys function to Hive
> ---
>
> Key: HIVE-20536
> URL: https://issues.apache.org/jira/browse/HIVE-20536
> Project: Hive
>  Issue Type: Task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-20536.01.patch
>
>
> Surrogate keys are the ability to generate and use unique integers for each row 
> in a table. If we have that ability, then in conjunction with the default clause 
> we get surrogate key functionality. Consider the following DDL:
> create table t1 (a string, b bigint default unique_long());
> We already have the default clause, wherein you can specify a function to provide 
> values. So, what we need is a UDF which can generate unique longs for each row 
> across queries for a table. 
> The idea is to use write_id. This is a column in the metastore table TXN_COMPONENTS 
> whose value is determined at compile time and used during query execution. Each 
> query execution generates a new write_id, so we can seed the UDF with this value 
> during compilation.
> Then we statically allocate ranges for each task from which it can draw the next 
> long. Say we divvy up the 64-bit write_id so that 24 bits keep their original use 
> (txns), the next 16 bits are used for task attempts, and the remaining 24 bits 
> generate a new long for each row. This implies we can allow 17M txns, 65K tasks 
> and 17M rows per task. If any of those limits is hit, we can fail the query.
> Implementation-wise: serialize the write_id in the UDF's initialize(). Then during 
> execute() we find out which task_attempt the current task is, use it along with 
> the write_id to get a starting long, and give out a new value on each invocation 
> of execute().
> Here we are assuming the write_id can be determined at compile time, which should 
> be the case, but we need to figure out how to get a handle to it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20095) Fix jdbc external table feature

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612663#comment-16612663
 ] 

Hive QA commented on HIVE-20095:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12939328/HIVE-20095.6.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14937 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13741/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13741/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13741/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12939328 - PreCommit-HIVE-Build

> Fix jdbc external table feature
> ---
>
> Key: HIVE-20095
> URL: https://issues.apache.org/jira/browse/HIVE-20095
> Project: Hive
>  Issue Type: Bug
>Reporter: Jonathan Doron
>Assignee: Jonathan Doron
>Priority: Major
> Attachments: HIVE-20095.1.patch, HIVE-20095.2.patch, 
> HIVE-20095.3.patch, HIVE-20095.4.patch, HIVE-20095.5.patch, HIVE-20095.6.patch
>
>
> It seems the committed code for HIVE-19161 
> (7584b3276bebf64aa006eaa162c0a6264d8fcb56) reverted some of the HIVE-18423 
> updates, and therefore some of the external table queries are not working 
> correctly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20412) NPE in HiveMetaHook

2018-09-12 Thread Jason Dere (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-20412:
--
   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Committed to master

> NPE in HiveMetaHook
> ---
>
> Key: HIVE-20412
> URL: https://issues.apache.org/jira/browse/HIVE-20412
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20412.1.patch, HIVE-20412.2.patch
>
>
> {noformat}
> java.lang.NullPointerException: null
> at 
> org.apache.hadoop.hive.metastore.HiveMetaHook.preAlterTable(HiveMetaHook.java:113)
>  ~[hive-exec-3.1.0.3.0.1.0-104.jar:3.1.0.3.0.1.0-104]
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_table(HiveMetaStoreClient.java:427)
>  ~[hive-exec-3.1.0.3.0.1.0-104.jar:3.1.0.3.0.1.0-104]
> at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.alter_table(SessionHiveMetaStoreClient.java:415)
>  ~[hive-exec-3.1.0.3.0.1.0-104.jar:3.1.0.3.0.1.0-104]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_112]
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_112]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_112]
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
>  ~[hive-exec-3.1.0.3.0.1.0-104.jar:3.1.0.3.0.1.0-104]
> at com.sun.proxy.$Proxy37.alter_table(Unknown Source) ~[?:?]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_112]
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_112]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_112]
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2933)
>  ~[hive-exec-3.1.0.3.0.1.0-104.jar:3.1.0.3.0.1.0-104]
> at com.sun.proxy.$Proxy37.alter_table(Unknown Source) ~[?:?]
> at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:708) 
> ~[hive-exec-3.1.0.3.0.1.0-104.jar:3.1.0.3.0.1.0-104]
> at 
> org.apache.hadoop.hive.ql.util.HiveStrictManagedMigration$HiveUpdater.updateTableProperties(HiveStrictManagedMigration.java:954)
>  ~[hive-exec-3.1.0.3.0.1.0-104.jar:3.1.0.3.0.1.0-104]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20095) Fix jdbc external table feature

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612637#comment-16612637
 ] 

Hive QA commented on HIVE-20095:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
43s{color} | {color:blue} serde in master has 195 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
27s{color} | {color:blue} druid-handler in master has 13 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
22s{color} | {color:blue} jdbc-handler in master has 8 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
57s{color} | {color:blue} ql in master has 2311 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
16s{color} | {color:red} druid-handler in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
13s{color} | {color:red} jdbc-handler in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} serde: The patch generated 1 new + 2 unchanged - 0 
fixed = 3 total (was 2) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} druid-handler: The patch generated 0 new + 10 
unchanged - 1 fixed = 10 total (was 11) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} jdbc-handler: The patch generated 78 new + 25 
unchanged - 0 fixed = 103 total (was 25) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} The patch ql passed checkstyle {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 11 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
32s{color} | {color:red} jdbc-handler generated 3 new + 8 unchanged - 0 fixed = 
11 total (was 8) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:jdbc-handler |
|  |  Exception is caught when Exception is not thrown in 
org.apache.hive.storage.jdbc.JdbcSerDe.initialize(Configuration, Properties)  
At JdbcSerDe.java:is not thrown in 
org.apache.hive.storage.jdbc.JdbcSerDe.initialize(Configuration, Properties)  
At JdbcSerDe.java:[line 114] |
|  |  
org.apache.hive.storage.jdbc.dao.GenericJdbcDatabaseAccessor.getColumnTypes(Configuration)
 may fail to clean up java.sql.ResultSet  Obligation to clean up resource 
created at GenericJdbcDatabaseAccessor.java:up java.sql.ResultSet  Obligation 
to clean up resource created at GenericJdbcDatabaseAccessor.java:[line 115] is 
not discharged |
|  |  

[jira] [Updated] (HIVE-20527) Intern table descriptors from spark task

2018-09-12 Thread Janaki Lahorani (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janaki Lahorani updated HIVE-20527:
---
Attachment: HIVE-20527.1.patch

> Intern table descriptors from spark task
> 
>
> Key: HIVE-20527
> URL: https://issues.apache.org/jira/browse/HIVE-20527
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-20527.1.patch, HIVE-20527.1.patch
>
>
> Table descriptors from MR tasks and Tez tasks are interned. This fix interns 
> table descriptors from Spark tasks as well.
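
A minimal sketch of the interning idea in plain Java, with hypothetical names 
rather than the actual Hive descriptor classes: when many tasks deserialize 
identical descriptor strings, routing them through a shared interner keeps a 
single canonical copy instead of many duplicates.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DescriptorInternerSketch {
  // A simple interner: returns one canonical instance per distinct value.
  static final class Interner<T> {
    private final Map<T, T> pool = new ConcurrentHashMap<>();
    T intern(T value) {
      T canonical = pool.putIfAbsent(value, value);
      return canonical != null ? canonical : value;
    }
  }

  private static final Interner<String> PROPERTY_INTERNER = new Interner<>();

  /** Stand-in for interning the string-heavy parts of a deserialized table descriptor. */
  static String internProperty(String deserializedValue) {
    return PROPERTY_INTERNER.intern(deserializedValue);
  }

  public static void main(String[] args) {
    // Two equal-but-distinct strings, as if deserialized from two task plans.
    String a = internProperty(new String("hdfs://nn/warehouse/db.db/tbl"));
    String b = internProperty(new String("hdfs://nn/warehouse/db.db/tbl"));
    System.out.println(a == b); // true: both tasks now share one canonical instance
  }
}
{code}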



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20412) NPE in HiveMetaHook

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612607#comment-16612607
 ] 

Hive QA commented on HIVE-20412:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12939323/HIVE-20412.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14936 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13740/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13740/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13740/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12939323 - PreCommit-HIVE-Build

> NPE in HiveMetaHook
> ---
>
> Key: HIVE-20412
> URL: https://issues.apache.org/jira/browse/HIVE-20412
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-20412.1.patch, HIVE-20412.2.patch
>
>
> {noformat}
> java.lang.NullPointerException: null
> at 
> org.apache.hadoop.hive.metastore.HiveMetaHook.preAlterTable(HiveMetaHook.java:113)
>  ~[hive-exec-3.1.0.3.0.1.0-104.jar:3.1.0.3.0.1.0-104]
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_table(HiveMetaStoreClient.java:427)
>  ~[hive-exec-3.1.0.3.0.1.0-104.jar:3.1.0.3.0.1.0-104]
> at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.alter_table(SessionHiveMetaStoreClient.java:415)
>  ~[hive-exec-3.1.0.3.0.1.0-104.jar:3.1.0.3.0.1.0-104]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_112]
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_112]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_112]
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
>  ~[hive-exec-3.1.0.3.0.1.0-104.jar:3.1.0.3.0.1.0-104]
> at com.sun.proxy.$Proxy37.alter_table(Unknown Source) ~[?:?]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_112]
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_112]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_112]
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2933)
>  ~[hive-exec-3.1.0.3.0.1.0-104.jar:3.1.0.3.0.1.0-104]
> at com.sun.proxy.$Proxy37.alter_table(Unknown Source) ~[?:?]
> at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:708) 
> ~[hive-exec-3.1.0.3.0.1.0-104.jar:3.1.0.3.0.1.0-104]
> at 
> org.apache.hadoop.hive.ql.util.HiveStrictManagedMigration$HiveUpdater.updateTableProperties(HiveStrictManagedMigration.java:954)
>  ~[hive-exec-3.1.0.3.0.1.0-104.jar:3.1.0.3.0.1.0-104]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20527) Intern table descriptors from spark task

2018-09-12 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612572#comment-16612572
 ] 

Andrew Sherman commented on HIVE-20527:
---

+1 LGTM pending clean test run

> Intern table descriptors from spark task
> 
>
> Key: HIVE-20527
> URL: https://issues.apache.org/jira/browse/HIVE-20527
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-20527.1.patch
>
>
> Table descriptors from MR tasks and Tez tasks are interned. This fix interns 
> table descriptors from Spark tasks as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20526) Add test case for HIVE-20489

2018-09-12 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612571#comment-16612571
 ] 

Andrew Sherman commented on HIVE-20526:
---

+1 LGTM

> Add test case for HIVE-20489
> 
>
> Key: HIVE-20526
> URL: https://issues.apache.org/jira/browse/HIVE-20526
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-20526.1.patch
>
>
> Add a test case for the issue discussed in HIVE-20489.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20537) Multi-column joins estimates with uncorrelated columns different in CBO and Hive

2018-09-12 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-20537:
---
Attachment: HIVE-20537.01.patch

> Multi-column joins estimates with uncorrelated columns different in CBO and 
> Hive
> 
>
> Key: HIVE-20537
> URL: https://issues.apache.org/jira/browse/HIVE-20537
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-20537.01.patch, HIVE-20537.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18908) FULL OUTER JOIN to MapJoin

2018-09-12 Thread Matt McCline (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-18908:

Description: 
Currently, we do not support FULL OUTER JOIN in MapJoin.

Rough TPC-DS timings run on laptop:

(NOTE: Query 51 has PTF as a bigger serial portion -- Amdahl's law at play)

FULL OUTER MapJoin OFF =  MergeJoin

Query 51:
o   Vectorization OFF
•   FULL OUTER MapJoin OFF: 4:30 minutes
•   FULL OUTER MapJoin ON: 4:37 minutes
o   Vectorization ON
•   FULL OUTER MapJoin OFF: 2:35 minutes
•   FULL OUTER MapJoin ON: 1:47 minutes

Query 97:
o   Vectorization OFF
•   FULL OUTER MapJoin OFF: 2:37 minutes
•   FULL OUTER MapJoin ON: 2:42 minutes
o   Vectorization ON
•   FULL OUTER MapJoin OFF: 1:17 minutes
•   FULL OUTER MapJoin ON: 0:06 minutes

FULL OUTER Join 10,000,000 rows against 323,910 small table keys
o   Vectorization ON
•   FULL OUTER MapJoin OFF: 14:56 minutes
•   FULL OUTER MapJoin ON: 1:45 minutes

FULL OUTER Join 10,000,000 rows against 1,000 small table keys
o   Vectorization ON
•   FULL OUTER MapJoin OFF: 12:37 minutes
•   FULL OUTER MapJoin ON: 1:38 minutes

Hopefully, someone will do large scale cluster testing.  
[DynamicPartitionedHashJoin] MapJoin should scale dramatically better than 
[Sort] MergeJoin reduce-shuffle.



  was:
Currently, we do not support FULL OUTER JOIN in MapJoin.

Rough TPC-DS timings run on laptop:

(NOTE: Query 51 has PTF as a bigger serial portion -- Amdahl's law at play)

Query 51:
o   Vectorization OFF
•   FULL OUTER MapJoin OFF: 4:30 minutes
•   FULL OUTER MapJoin ON: 4:37 minutes
o   Vectorization ON
•   FULL OUTER MapJoin OFF: 2:35 minutes
•   FULL OUTER MapJoin ON: 1:47 minutes

Query 97:
o   Vectorization OFF
•   FULL OUTER MapJoin OFF: 2:37 minutes
•   FULL OUTER MapJoin ON: 2:42 minutes
o   Vectorization ON
•   FULL OUTER MapJoin OFF: 1:17 minutes
•   FULL OUTER MapJoin ON: 0:06 minutes

FULL OUTER Join 10,000,000 rows against 323,910 small table keys
o   Vectorization ON
•   FULL OUTER MapJoin OFF: 14:56 minutes
•   FULL OUTER MapJoin ON: 1:45 minutes

FULL OUTER Join 10,000,000 rows against 1,000 small table keys
o   Vectorization ON
•   FULL OUTER MapJoin OFF: 12:37 minutes
•   FULL OUTER MapJoin ON: 1:38 minutes

Hopefully, someone will do large scale cluster testing.  
[DynamicPartitionedHashJoin] MapJoin should scale dramatically better than 
[Sort] MergeJoin reduce-shuffle.




> FULL OUTER JOIN to MapJoin
> --
>
> Key: HIVE-18908
> URL: https://issues.apache.org/jira/browse/HIVE-18908
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: FULL OUTER MapJoin Code Changes.docx, 
> HIVE-18908.01.patch, HIVE-18908.02.patch, HIVE-18908.03.patch, 
> HIVE-18908.04.patch, HIVE-18908.05.patch, HIVE-18908.06.patch, 
> HIVE-18908.08.patch, HIVE-18908.09.patch, HIVE-18908.091.patch, 
> HIVE-18908.092.patch, HIVE-18908.093.patch, HIVE-18908.096.patch, 
> HIVE-18908.097.patch, HIVE-18908.098.patch, HIVE-18908.099.patch, 
> HIVE-18908.0991.patch, HIVE-18908.0992.patch, HIVE-18908.0993.patch, 
> HIVE-18908.0994.patch, HIVE-18908.0995.patch, HIVE-18908.0996.patch, 
> HIVE-18908.0997.patch, HIVE-18908.0998.patch, HIVE-18908.0999.patch, 
> HIVE-18908.09991.patch, HIVE-18908.09992.patch, HIVE-18908.09993.patch, 
> HIVE-18908.09994.patch, JOIN to MAPJOIN Transformation.pdf, SHARED-MEMORY 
> FULL OUTER MapJoin.pdf
>
>
> Currently, we do not support FULL OUTER JOIN in MapJoin.
> Rough TPC-DS timings run on laptop:
> (NOTE: Query 51 has PTF as a bigger serial portion -- Amdahl's law at play)
> FULL OUTER MapJoin OFF =  MergeJoin
> Query 51:
> o Vectorization OFF
> • FULL OUTER MapJoin OFF: 4:30 minutes
> • FULL OUTER MapJoin ON: 4:37 minutes
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 2:35 minutes
> • FULL OUTER MapJoin ON: 1:47 minutes
> Query 97:
> o Vectorization OFF
> • FULL OUTER MapJoin OFF: 2:37 minutes
> • FULL OUTER MapJoin ON: 2:42 minutes
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 1:17 minutes
> • FULL OUTER MapJoin ON: 0:06 minutes
> FULL OUTER Join 10,000,000 rows against 323,910 small table keys
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 14:56 minutes
> • FULL OUTER MapJoin ON: 1:45 minutes
> FULL OUTER Join 10,000,000 rows against 1,000 small table keys
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 12:37 minutes
> • FULL OUTER MapJoin ON: 1:38 minutes
> Hopefully, someone will do large scale cluster testing.  
> [DynamicPartitionedHashJoin] MapJoin should scale dramatically better than 
> [Sort] MergeJoin 

[jira] [Commented] (HIVE-20412) NPE in HiveMetaHook

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612537#comment-16612537
 ] 

Hive QA commented on HIVE-20412:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 7s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
23s{color} | {color:blue} standalone-metastore/metastore-common in master has 9 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13740/dev-support/hive-personality.sh
 |
| git revision | master / f4380f3 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: standalone-metastore/metastore-common U: 
standalone-metastore/metastore-common |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13740/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> NPE in HiveMetaHook
> ---
>
> Key: HIVE-20412
> URL: https://issues.apache.org/jira/browse/HIVE-20412
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-20412.1.patch, HIVE-20412.2.patch
>
>
> {noformat}
> java.lang.NullPointerException: null
> at 
> org.apache.hadoop.hive.metastore.HiveMetaHook.preAlterTable(HiveMetaHook.java:113)
>  ~[hive-exec-3.1.0.3.0.1.0-104.jar:3.1.0.3.0.1.0-104]
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_table(HiveMetaStoreClient.java:427)
>  ~[hive-exec-3.1.0.3.0.1.0-104.jar:3.1.0.3.0.1.0-104]
> at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.alter_table(SessionHiveMetaStoreClient.java:415)
>  ~[hive-exec-3.1.0.3.0.1.0-104.jar:3.1.0.3.0.1.0-104]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_112]
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_112]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_112]
> at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112]
> at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
>  ~[hive-exec-3.1.0.3.0.1.0-104.jar:3.1.0.3.0.1.0-104]
> at com.sun.proxy.$Proxy37.alter_table(Unknown Source) ~[?:?]
> at 
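Editor's note on the trace above: the NPE comes from preAlterTable dereferencing something that can legitimately be null. The snippet below is only a minimal illustration of the defensive pattern (null-checking the EnvironmentContext and its properties before use); the EnvContext interface and the property key are stand-ins, not Hive's actual types or the actual HIVE-20412 fix.

{code:java}
import java.util.Map;

/** Illustration only -- stand-in types, not the actual HIVE-20412 fix. */
public class AlterTableHookSketch {

  /** Stand-in for org.apache.hadoop.hive.metastore.api.EnvironmentContext. */
  public interface EnvContext {
    Map<String, String> getProperties();
  }

  /** Returns the alter-table operation type, or null when the context carries none. */
  public static String operationType(EnvContext context) {
    if (context == null || context.getProperties() == null) {
      return null; // guards against the NPE seen in the stack trace above
    }
    return context.getProperties().get("alterTableOperationType"); // illustrative key name
  }
}
{code}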

[jira] [Commented] (HIVE-19814) RPC Server port is always random for spark

2018-09-12 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612515#comment-16612515
 ] 

Sahil Takiar commented on HIVE-19814:
-

I think this patch is good to merge. I have a fix for the flakiness of 
{{TestSparkSessionTimeout}} that I'm planning to merge in HIVE-20519; plus, you 
got a green run earlier and the code (besides the new unit test) hasn't changed 
since then.

> RPC Server port is always random for spark
> --
>
> Key: HIVE-19814
> URL: https://issues.apache.org/jira/browse/HIVE-19814
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 2.3.0, 3.0.0, 2.4.0, 4.0.0
>Reporter: bounkong khamphousone
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-19814.1.patch, HIVE-19814.2.patch, 
> HIVE-19814.3.patch
>
>
> The RPC server port is always a random one. In fact, the problem is in 
> RpcConfiguration.HIVE_SPARK_RSC_CONFIGS, which doesn't include 
> SPARK_RPC_SERVER_PORT.
>  
> I found this issue while trying to run Hive-on-Spark inside Docker.
>  
> HIVE_SPARK_RSC_CONFIGS is used by HiveSparkClientFactory.initiateSparkConf 
> > SparkSessionManagerImpl.setup, and the latter calls 
> SparkClientFactory.initialize(conf), which initializes the RPC server. This 
> RPCServer is then used to create the sparkClient, which uses the RPC server 
> port as the --remote-port arg. Since initiateSparkConf ignores 
> SPARK_RPC_SERVER_PORT, the port will always be random.
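Editor's note: a minimal sketch of the kind of plumbing the description above points at, i.e. making sure the server-port setting is actually carried into the configuration handed to the RPC server instead of being dropped. The property name and the helper are assumptions for illustration; the real change belongs in RpcConfiguration/HiveSparkClientFactory.

{code:java}
import java.util.HashMap;
import java.util.Map;

/** Illustration only -- assumed property name, not the actual HIVE-19814 patch. */
public class RscPortConfigSketch {

  // Assumed to correspond to the constant the description calls SPARK_RPC_SERVER_PORT.
  static final String RPC_SERVER_PORT = "hive.spark.client.rpc.server.port";

  /** Copies the port setting (if present) from the Hive conf into the RSC config. */
  static Map<String, String> withServerPort(Map<String, String> hiveConf,
                                            Map<String, String> rscConf) {
    Map<String, String> merged = new HashMap<>(rscConf);
    String port = hiveConf.get(RPC_SERVER_PORT);
    if (port != null) {
      merged.put(RPC_SERVER_PORT, port); // otherwise the RPC server binds to a random port
    }
    return merged;
  }
}
{code}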



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19253) HMS ignores tableType property for external tables

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612511#comment-16612511
 ] 

Hive QA commented on HIVE-19253:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12939321/HIVE-19253.11.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 95 failed/errored test(s), 14936 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query10] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query11] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query12] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query13] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query14] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query15] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query16] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query17] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query18] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query19] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query1] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query20] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query21] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query22] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query23] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query24] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query25] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query26] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query27] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query29] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query2] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query30] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query31] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query32] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query33] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query34] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query35] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query36] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query37] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query38] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query39] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query3] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query40] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query42] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query43] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query44] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query45] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query46] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query47] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query48] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query49] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query4] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query50] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query51] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query52] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query53] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query54] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query55] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query56] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query57] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query58] 
(batchId=264)

[jira] [Commented] (HIVE-18583) Enable DateRangeRules

2018-09-12 Thread Nishant Bangarwa (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612490#comment-16612490
 ] 

Nishant Bangarwa commented on HIVE-18583:
-

rebased and attached new patch. 

> Enable DateRangeRules 
> --
>
> Key: HIVE-18583
> URL: https://issues.apache.org/jira/browse/HIVE-18583
> Project: Hive
>  Issue Type: Improvement
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-18583.2.patch, HIVE-18583.3.patch, 
> HIVE-18583.4.patch, HIVE-18583.5.patch, HIVE-18583.patch
>
>
> Enable DateRangeRules to translate druid filters to date ranges. 
> Need calcite version to upgrade to 0.16.0 before merging this. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18583) Enable DateRangeRules

2018-09-12 Thread Nishant Bangarwa (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishant Bangarwa updated HIVE-18583:

Attachment: HIVE-18583.5.patch

> Enable DateRangeRules 
> --
>
> Key: HIVE-18583
> URL: https://issues.apache.org/jira/browse/HIVE-18583
> Project: Hive
>  Issue Type: Improvement
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-18583.2.patch, HIVE-18583.3.patch, 
> HIVE-18583.4.patch, HIVE-18583.5.patch, HIVE-18583.patch
>
>
> Enable DateRangeRules to translate druid filters to date ranges. 
> Need calcite version to upgrade to 0.16.0 before merging this. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19253) HMS ignores tableType property for external tables

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612474#comment-16612474
 ] 

Hive QA commented on HIVE-19253:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
30s{color} | {color:blue} hcatalog/webhcat/java-client in master has 3 extant 
Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
53s{color} | {color:blue} ql in master has 2311 extant Findbugs warnings. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
14s{color} | {color:red} metastore-server in master failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
14s{color} | {color:red} metastore-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13739/dev-support/hive-personality.sh
 |
| git revision | master / 294665a |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13739/yetus/branch-findbugs-standalone-metastore_metastore-server.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13739/yetus/patch-findbugs-standalone-metastore_metastore-server.txt
 |
| modules | C: hcatalog/webhcat/java-client ql 
standalone-metastore/metastore-server U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13739/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> HMS ignores tableType property for external tables
> --
>
> Key: HIVE-19253
> URL: https://issues.apache.org/jira/browse/HIVE-19253
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 3.0.0, 3.1.0, 4.0.0
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
>Priority: Major
>  Labels: newbie
> Attachments: HIVE-19253.01.patch, HIVE-19253.02.patch, 
> HIVE-19253.03.patch, HIVE-19253.03.patch, HIVE-19253.04.patch, 
> HIVE-19253.05.patch, HIVE-19253.06.patch, HIVE-19253.07.patch, 
> HIVE-19253.08.patch, HIVE-19253.09.patch, HIVE-19253.10.patch, 
> 

[jira] [Updated] (HIVE-19552) Enable TestMiniDruidKafkaCliDriver#druidkafkamini_basic.q

2018-09-12 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-19552:
---
   Resolution: Fixed
Fix Version/s: 3.2.0
   4.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master, branch-3. Thanks [~nishantbangarwa]

> Enable TestMiniDruidKafkaCliDriver#druidkafkamini_basic.q
> -
>
> Key: HIVE-19552
> URL: https://issues.apache.org/jira/browse/HIVE-19552
> Project: Hive
>  Issue Type: Test
>  Components: Test
>Affects Versions: 3.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Nishant Bangarwa
>Priority: Critical
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-19552.1.patch, HIVE-19552.patch
>
>
> The failure was caused by the following sequence of steps:
> # The test queries for the available hosts where a segment is located and gets 
> the location of the Kafka task. 
> # The Kafka task hands over the data and finishes.
> # Now the scan query is sent to the Kafka task, but the task has already 
> completed, so the query fails. 
> https://issues.apache.org/jira/browse/HIVE-20349 fixes this issue by retrying 
> the broker in this case. 
> One more cause of failure was latestOffsets and minimumLag not being reported 
> when there is no task. 
> This patch masks those two values as well. Query results are verified to ensure 
> that there is no lag. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18247) Use DB auto-increment for indexes

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612435#comment-16612435
 ] 

Hive QA commented on HIVE-18247:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12907546/HIVE-18247.02.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13738/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13738/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13738/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-09-12 16:29:40.122
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-13738/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-09-12 16:29:40.125
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 294665a HIVE-20539 : Remove dependency on com.metamx.java-util 
(Nishant Bangarwa via Ashutosh Chauhan)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 294665a HIVE-20539 : Remove dependency on com.metamx.java-util 
(Nishant Bangarwa via Ashutosh Chauhan)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-09-12 16:29:40.750
+ rm -rf ../yetus_PreCommit-HIVE-Build-13738
+ mkdir ../yetus_PreCommit-HIVE-Build-13738
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-13738
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-13738/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: a/standalone-metastore/src/main/resources/package.jdo: does not exist in 
index
error: standalone-metastore/src/main/resources/package.jdo: does not exist in 
index
error: src/main/resources/package.jdo: does not exist in index
The patch does not appear to apply with p0, p1, or p2
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf yetus_PreCommit-HIVE-Build-13738
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12907546 - PreCommit-HIVE-Build

> Use DB auto-increment for indexes
> -
>
> Key: HIVE-18247
> URL: https://issues.apache.org/jira/browse/HIVE-18247
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore
>Affects Versions: 3.0.0
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
>Priority: Major
>  Labels: datanucleus, perfomance
> Attachments: HIVE-18247.02.patch
>
>
> I initially noticed this problem in Apache Sentry - see SENTRY-1960. Hive has 
> the same issue. DataNucleus uses a SEQUENCE table to allocate IDs, which 
> requires raw locks on multiple tables during transactions and creates 
> scalability problems. 
> Instead, DN should rely on DB auto-increment mechanisms, which are much more 
> scalable.
> See SENTRY-1960 for extra details.
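Editor's note: a conceptual sketch of why a shared sequence table becomes a bottleneck, as the description above argues. The allocator below is purely illustrative (a synchronized counter standing in for the locked SEQUENCE row); it is not DataNucleus code or the proposed patch.

{code:java}
/** Illustration only -- why a shared sequence table serializes writers. */
public class IdAllocationSketch {

  // Central allocator: every insert, for every table, contends on this one lock,
  // which models the row lock taken on the shared SEQUENCE table.
  private static final Object SEQUENCE_TABLE_LOCK = new Object();
  private static long next = 0;

  static long allocateFromSequenceTable() {
    synchronized (SEQUENCE_TABLE_LOCK) {
      return ++next;
    }
  }

  // With DB auto-increment, the database assigns the ID as part of the INSERT itself,
  // so there is no separate hot row for all writers to lock.
}
{code}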



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20483) Really move metastore common classes into metastore-common

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612431#comment-16612431
 ] 

Hive QA commented on HIVE-20483:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12939319/HIVE-20483.06.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14935 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13737/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13737/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13737/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12939319 - PreCommit-HIVE-Build

> Really move metastore common classes into metastore-common
> --
>
> Key: HIVE-20483
> URL: https://issues.apache.org/jira/browse/HIVE-20483
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Affects Versions: 3.0.1, 4.0.0
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
>Priority: Major
> Attachments: HIVE-20483.01.patch, HIVE-20483.02.patch, 
> HIVE-20483.03.patch, HIVE-20483.04.patch, HIVE-20483.05.patch, 
> HIVE-20483.06.patch
>
>
>  HIVE-20388 patch was supposed to move a bunch of files from metastore-server 
> to metastore-common but for some reason it didn't happen, so now these files 
> should be moved.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20239) Do Not Print StackTraces to STDERR in MapJoinProcessor

2018-09-12 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612428#comment-16612428
 ] 

BELUGA BEHR commented on HIVE-20239:


Any update on this [~anuragmantri]? :) [~vihangk1]

> Do Not Print StackTraces to STDERR in MapJoinProcessor
> --
>
> Key: HIVE-20239
> URL: https://issues.apache.org/jira/browse/HIVE-20239
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: Anurag Mantripragada
>Priority: Minor
>  Labels: newbie, noob
> Fix For: 4.0.0
>
> Attachments: HIVE-20239.1.patch, HIVE-20239.2.patch, 
> HIVE-20239.3.patch
>
>
> {code:java|title=MapJoinProcessor.java}
> } catch (Exception e) {
>   e.printStackTrace();
>   throw new SemanticException("Failed to generate new mapJoin operator " +
>   "by exception : " + e.getMessage());
> }
> {code}
> Please change to... something like...
> {code}
> } catch (Exception e) {
>   throw new SemanticException("Failed to generate new mapJoin operator", 
> e);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20483) Really move metastore common classes into metastore-common

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612418#comment-16612418
 ] 

Hive QA commented on HIVE-20483:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m  
3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
56s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
28s{color} | {color:blue} standalone-metastore/metastore-common in master has 9 
extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
31s{color} | {color:blue} beeline in master has 53 extant Findbugs warnings. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
39s{color} | {color:red} branch/hcatalog no findbugs output file 
(hcatalog/target/findbugsXml.xml) {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
30s{color} | {color:blue} hcatalog/hcatalog-pig-adapter in master has 2 extant 
Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
2s{color} | {color:blue} ql in master has 2311 extant Findbugs warnings. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
14s{color} | {color:red} metastore-server in master failed. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
25s{color} | {color:blue} streaming in master has 2 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
40s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
6s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
42s{color} | {color:red} standalone-metastore/metastore-common generated 23 new 
+ 5 unchanged - 4 fixed = 28 total (was 9) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
40s{color} | {color:red} patch/hcatalog no findbugs output file 
(hcatalog/target/findbugsXml.xml) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
15s{color} | {color:red} metastore-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
55s{color} | {color:red} standalone-metastore_metastore-common generated 15 new 
+ 4 unchanged - 1 fixed = 19 total (was 5) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} metastore in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} beeline in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
36s{color} | {color:green} hcatalog in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} 

[jira] [Assigned] (HIVE-20544) TOpenSessionReq logs password and username

2018-09-12 Thread Karen Coppage (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage reassigned HIVE-20544:



> TOpenSessionReq logs password and username
> --
>
> Key: HIVE-20544
> URL: https://issues.apache.org/jira/browse/HIVE-20544
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
>  Labels: patch, security, beginner
>
> In 
> service-rpc/src/gen/thrift/gen-javabean/org/apache/hive/service/rpc/thrift/TOpenSessionReq,
>  if the client protocol is unset, validate() and toString() print both the 
> username and the password to the logs.
> Logging a password is a security risk. We should mask the password with ***.
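Editor's note: as an illustration of the intended behavior (not the Thrift-generated code itself), a toString-style helper can render the request while masking the credential. The class and field names below are stand-ins.

{code:java}
/** Illustration only -- stand-in for the generated TOpenSessionReq#toString. */
public class SessionReqToStringSketch {

  public static String describe(String username, String password) {
    StringBuilder sb = new StringBuilder("TOpenSessionReq(");
    sb.append("username:").append(username);
    // Never emit the real credential; keep only a fixed mask (or omit the field entirely).
    sb.append(", password:").append(password == null ? "null" : "***");
    return sb.append(")").toString();
  }
}
{code}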



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20483) Really move metastore common classes into metastore-common

2018-09-12 Thread Naveen Gangam (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612364#comment-16612364
 ] 

Naveen Gangam commented on HIVE-20483:
--

[~akolb] Can you please provide some context on how these classes are being 
organized? It would make sense to categorize classes into 
1) hive-metastore-server, 
2) hive-metastore-client, and 
3) hive-metastore-common (classes that are used by both the HMS server and the 
HMS client, like MetaStoreConf, *SpecProxy*, etc.).

In the patch, we are moving classes like IMetaStoreClient and MetaStoreClient 
to the common module, along with classes like FileUtils, HdfsUtils, 
MetaStoreUtils, ReplChangeManager, MetadataStore, and MetaStoreFS. I haven't 
looked at the usage of all the methods in these classes, but they sound like 
they will be used by the HMS server and should remain within the server module 
rather than in common. Are these classes somehow used by the client as well?


> Really move metastore common classes into metastore-common
> --
>
> Key: HIVE-20483
> URL: https://issues.apache.org/jira/browse/HIVE-20483
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Affects Versions: 3.0.1, 4.0.0
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
>Priority: Major
> Attachments: HIVE-20483.01.patch, HIVE-20483.02.patch, 
> HIVE-20483.03.patch, HIVE-20483.04.patch, HIVE-20483.05.patch, 
> HIVE-20483.06.patch
>
>
>  HIVE-20388 patch was supposed to move a bunch of files from metastore-server 
> to metastore-common but for some reason it didn't happen, so now these files 
> should be moved.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20375) Json SerDe ignoring the timestamp.formats property

2018-09-12 Thread slim bouguerra (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612359#comment-16612359
 ] 

slim bouguerra commented on HIVE-20375:
---

[~kgyrtkirk] I don't agree. As you can see from HIVE-19211 and commit 
https://github.com/apache/hive/commit/e6c0c8d5bbc99ac05f200e0fbc9c78ad6a4da9d8#diff-6ab52347d2832029cedb60df2bd97d83R140
 org/apache/hadoop/hive/serde2/JsonSerDe.java is supposed to support the 
timestamp format:

{code:java}
jsonFactory = new JsonFactory();
tsParser = new TimestampParser(
    HiveStringUtils.splitAndUnEscape(tbl.getProperty(serdeConstants.TIMESTAMP_FORMATS)));
{code}
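Editor's note: to make the wiring above concrete, here is a small self-contained sketch of how the "timestamp.formats" table property is meant to flow into a parser. It deliberately avoids the Hive classes (TimestampParser, HiveStringUtils) and only mimics the split step, so the example format string and pattern list are illustrative assumptions.

{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.Properties;

/** Illustration only -- mimics the property-to-parser plumbing quoted above. */
public class TimestampFormatsSketch {

  public static void main(String[] args) {
    Properties tbl = new Properties();
    // Comma-separated patterns, as the SerDe property is meant to be supplied.
    tbl.setProperty("timestamp.formats", "millis,yyyy-MM-dd'T'HH:mm:ss");

    // Stand-in for HiveStringUtils.splitAndUnEscape(...) feeding new TimestampParser(...)
    List<String> patterns =
        Arrays.asList(tbl.getProperty("timestamp.formats").split(","));
    System.out.println("Parser would be built with patterns: " + patterns);
  }
}
{code}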

> Json SerDe ignoring the timestamp.formats property
> --
>
> Key: HIVE-20375
> URL: https://issues.apache.org/jira/browse/HIVE-20375
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: slim bouguerra
>Assignee: Ashutosh Chauhan
>Priority: Major
>
> JsonSerDe is supposed to accept the "timestamp.formats" SerDe property to allow 
> different timestamp formats; after a recent refactor I see that this is not 
> working anymore.
> Looking at the code, I can see that the SerDe is not using the constructed 
> parser with the added formats 
> https://github.com/apache/hive/blob/1105ef3974d8a324637d3d35881a739af3aeb382/serde/src/java/org/apache/hadoop/hive/serde2/json/HiveJsonStructReader.java#L82
> but is instead using a Converter 
> https://github.com/apache/hive/blob/1105ef3974d8a324637d3d35881a739af3aeb382/serde/src/java/org/apache/hadoop/hive/serde2/json/HiveJsonStructReader.java#L324
> The Converter then uses 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorConverter.TimestampConverter
> This converter does not have any knowledge of user formats whatsoever...
> It is using the static converter 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils#getTimestampFromString



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20295) Remove !isNumber check after failed constant interpretation

2018-09-12 Thread Ivan Suller (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Suller updated HIVE-20295:
---
Attachment: HIVE-20295.03.patch

> Remove !isNumber check after failed constant interpretation
> ---
>
> Key: HIVE-20295
> URL: https://issues.apache.org/jira/browse/HIVE-20295
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Ivan Suller
>Priority: Major
> Attachments: HIVE-20295.01.patch, HIVE-20295.02.patch, 
> HIVE-20295.03.patch
>
>
> During constant interpretation, if the number can't be parsed it might be 
> possible that the comparison is out of range for the type in question, in 
> which case it could be removed.
> https://github.com/apache/hive/blob/2cabb8da150b8fb980223fbd6c2c93b842ca3ee5/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java#L1163
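Editor's note: for illustration only, the idea reads roughly like the helper below (assuming an int column compared for equality with a long literal); this is a hypothetical sketch, not the TypeCheckProcFactory code.

{code:java}
/** Illustration only -- folding a predicate whose literal is out of range for the column type. */
public class OutOfRangeFoldSketch {

  /**
   * For an equality comparison between an int column and a long literal:
   * returns FALSE when no int value can ever match, or null when the predicate must be kept.
   */
  static Boolean foldIntEquals(long literal) {
    if (literal > Integer.MAX_VALUE || literal < Integer.MIN_VALUE) {
      return Boolean.FALSE; // the comparison can never be true, so it can be folded away
    }
    return null; // in range: leave the original predicate in place
  }
}
{code}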



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19552) Enable TestMiniDruidKafkaCliDriver#druidkafkamini_basic.q

2018-09-12 Thread Nishant Bangarwa (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612316#comment-16612316
 ] 

Nishant Bangarwa commented on HIVE-19552:
-

[~jcamachorodriguez] Please merge. This is good to be merged now.

> Enable TestMiniDruidKafkaCliDriver#druidkafkamini_basic.q
> -
>
> Key: HIVE-19552
> URL: https://issues.apache.org/jira/browse/HIVE-19552
> Project: Hive
>  Issue Type: Test
>  Components: Test
>Affects Versions: 3.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Nishant Bangarwa
>Priority: Critical
> Attachments: HIVE-19552.1.patch, HIVE-19552.patch
>
>
> The failure was caused by the following sequence of steps:
> # The test queries for the available hosts where a segment is located and gets 
> the location of the Kafka task. 
> # The Kafka task hands over the data and finishes.
> # Now the scan query is sent to the Kafka task, but the task has already 
> completed, so the query fails. 
> https://issues.apache.org/jira/browse/HIVE-20349 fixes this issue by retrying 
> the broker in this case. 
> One more cause of failure was latestOffsets and minimumLag not being reported 
> when there is no task. 
> This patch masks those two values as well. Query results are verified to ensure 
> that there is no lag. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20349) Implement Retry Logic in HiveDruidSplit for Scan Queries

2018-09-12 Thread Nishant Bangarwa (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishant Bangarwa updated HIVE-20349:

Attachment: HIVE-20349.3.patch

> Implement Retry Logic in HiveDruidSplit for Scan Queries
> 
>
> Key: HIVE-20349
> URL: https://issues.apache.org/jira/browse/HIVE-20349
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20349.1.patch, HIVE-20349.2.patch, 
> HIVE-20349.3.patch, HIVE-20349.patch
>
>
> While distributing a Druid scan query, we check where the segments are loaded, 
> and then each HiveDruidSplit directly queries the historical node. 
> There are a few cases when we need to retry and refetch the segments. 
> # The segment is loaded on multiple historical nodes and one of them went 
> down. In this case, when we do not get a response from one replica, we query 
> the next one. 
> # The segment was loaded onto a realtime task and was handed over; by the time 
> we query, the realtime task has already finished. In this case there is no 
> replica. The split needs to query the broker again for the location of the 
> segment and then send the query to the correct historical node. 
> This is also the root cause of the failure of the druidkafkamini_basic.q test, 
> where the segment handover happens before the scan query is executed.
> Note: This is not a problem when we are directly querying Druid brokers, as 
> the broker handles the retry logic. 
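Editor's note: the retry shape spelled out in the description above can be summarized in a few lines. The sketch below uses hypothetical Broker/Fetcher interfaces purely for illustration; it is not the HiveDruidSplit code.

{code:java}
import java.io.IOException;
import java.util.List;

/** Illustration only -- hypothetical interfaces, not the actual HIVE-20349 patch. */
public class DruidSplitRetrySketch {

  interface Broker { List<String> locate(String segmentId); }                  // current hosts for a segment
  interface Fetcher { String scan(String host, String segmentId) throws IOException; }

  static String scanWithRetry(String segmentId, List<String> knownHosts,
                              Broker broker, Fetcher fetcher) throws IOException {
    // 1) try every replica the split was created with
    for (String host : knownHosts) {
      try {
        return fetcher.scan(host, segmentId);
      } catch (IOException ignored) {
        // replica down or realtime task already finished; try the next one
      }
    }
    // 2) all known locations failed: ask the broker where the segment lives now
    for (String host : broker.locate(segmentId)) {
      try {
        return fetcher.scan(host, segmentId);
      } catch (IOException ignored) {
        // keep trying the refreshed locations
      }
    }
    throw new IOException("Segment " + segmentId + " unreachable after broker refresh");
  }
}
{code}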



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20349) Implement Retry Logic in HiveDruidSplit for Scan Queries

2018-09-12 Thread Nishant Bangarwa (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612314#comment-16612314
 ] 

Nishant Bangarwa commented on HIVE-20349:
-

updated patch. 

> Implement Retry Logic in HiveDruidSplit for Scan Queries
> 
>
> Key: HIVE-20349
> URL: https://issues.apache.org/jira/browse/HIVE-20349
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20349.1.patch, HIVE-20349.2.patch, 
> HIVE-20349.3.patch, HIVE-20349.patch
>
>
> While distributing a Druid scan query, we check where the segments are loaded, 
> and then each HiveDruidSplit directly queries the historical node. 
> There are a few cases when we need to retry and refetch the segments. 
> # The segment is loaded on multiple historical nodes and one of them went 
> down. In this case, when we do not get a response from one replica, we query 
> the next one. 
> # The segment was loaded onto a realtime task and was handed over; by the time 
> we query, the realtime task has already finished. In this case there is no 
> replica. The split needs to query the broker again for the location of the 
> segment and then send the query to the correct historical node. 
> This is also the root cause of the failure of the druidkafkamini_basic.q test, 
> where the segment handover happens before the scan query is executed.
> Note: This is not a problem when we are directly querying Druid brokers, as 
> the broker handles the retry logic. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20306) Implement projection spec for fetching only requested fields from partitions

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612311#comment-16612311
 ] 

Hive QA commented on HIVE-20306:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12939317/HIVE-20306.15.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 14968 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druid_timestamptz2]
 (batchId=192)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_dynamic_partition]
 (batchId=193)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_expressions]
 (batchId=193)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_extractTime]
 (batchId=192)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_floorTime]
 (batchId=192)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] 
(batchId=192)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_test1]
 (batchId=193)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_test_alter]
 (batchId=193)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_test_insert]
 (batchId=193)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_test_ts]
 (batchId=192)
org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testKillQuery (batchId=251)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13736/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13736/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13736/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 11 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12939317 - PreCommit-HIVE-Build

> Implement projection spec for fetching only requested fields from partitions
> 
>
> Key: HIVE-20306
> URL: https://issues.apache.org/jira/browse/HIVE-20306
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Vihang Karajgaonkar
>Assignee: Alexander Kolbasov
>Priority: Major
> Attachments: HIVE-20306.02.patch, HIVE-20306.03.patch, 
> HIVE-20306.04.patch, HIVE-20306.05.patch, HIVE-20306.06.patch, 
> HIVE-20306.07.patch, HIVE-20306.08.patch, HIVE-20306.09.patch, 
> HIVE-20306.10.patch, HIVE-20306.11.patch, HIVE-20306.12.patch, 
> HIVE-20306.13.patch, HIVE-20306.14.patch, HIVE-20306.15.patch, 
> HIVE-20306.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20539) Remove dependency on com.metamx.java-util

2018-09-12 Thread Ashutosh Chauhan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-20539:

   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks, Nishant!

> Remove dependency on com.metamx.java-util
> -
>
> Key: HIVE-20539
> URL: https://issues.apache.org/jira/browse/HIVE-20539
> Project: Hive
>  Issue Type: Task
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20539.patch
>
>
> java-util was moved from com.metamx to the Druid code repository. 
> Currently we are packaging both com.metamx.java-util and io.druid.java-util; 
> this task is to remove the dependency on com.metamx.java-util.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18908) FULL OUTER JOIN to MapJoin

2018-09-12 Thread Matt McCline (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-18908:

Description: 
Currently, we do not support FULL OUTER JOIN in MapJoin.

Rough TPC-DS timings run on laptop:

(NOTE: Query 51 has PTF as a bigger serial portion -- Amdahl's law at play)

Query 51:
o   Vectorization OFF
•   FULL OUTER MapJoin OFF: 4:30 minutes
•   FULL OUTER MapJoin ON: 4:37 minutes
o   Vectorization ON
•   FULL OUTER MapJoin OFF: 2:35 minutes
•   FULL OUTER MapJoin ON: 1:47 minutes

Query 97:
o   Vectorization OFF
•   FULL OUTER MapJoin OFF: 2:37 minutes
•   FULL OUTER MapJoin ON: 2:42 minutes
o   Vectorization ON
•   FULL OUTER MapJoin OFF: 1:17 minutes
•   FULL OUTER MapJoin ON: 0:06 minutes

FULL OUTER Join 10,000,000 rows against 323,910 small table keys
o   Vectorization ON
•   FULL OUTER MapJoin OFF: 14:56 minutes
•   FULL OUTER MapJoin ON: 1:45 minutes

FULL OUTER Join 10,000,000 rows against 1,000 small table keys
o   Vectorization ON
•   FULL OUTER MapJoin OFF: 12:37 minutes
•   FULL OUTER MapJoin ON: 1:38 minutes

Hopefully, someone will do large scale cluster testing.  
[DynamicPartitionedHashJoin] MapJoin should scale dramatically better than 
[Sort] MergeJoin reduce-shuffle.



  was:
Currently, we do not support FULL OUTER JOIN in MapJoin.

Rough TPC-DS timings run on laptop:

(NOTE: Query 51 has PTF as a bigger serial portion -- Amdahl's law at play)

Query 51:
o   Vectorization OFF
•   FULL OUTER MapJoin OFF: 4:30 minutes
•   FULL OUTER MapJoin ON: 4:37 minutes
o   Vectorization ON
•   FULL OUTER MapJoin OFF: 2:35 minutes
•   FULL OUTER MapJoin ON: 1:47 minutes

Query 97:
o   Vectorization OFF
•   FULL OUTER MapJoin OFF: 2:37 minutes
•   FULL OUTER MapJoin ON: 2:42 minutes
o   Vectorization ON
•   FULL OUTER MapJoin OFF: 1:17 minutes
•   FULL OUTER MapJoin ON: 0:06 minutes

FULL OUTER Join 10,000,000 rows against 323,910 small table keys
o   Vectorization ON
•   FULL OUTER MapJoin OFF: 14:56 minutes
•   FULL OUTER MapJoin ON: 1:45 minutes

Hopefully, someone will do large scale cluster testing.  
[DynamicPartitionedHashJoin] MapJoin should scale dramatically better than 
[Sort] MergeJoin reduce-shuffle.




> FULL OUTER JOIN to MapJoin
> --
>
> Key: HIVE-18908
> URL: https://issues.apache.org/jira/browse/HIVE-18908
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: FULL OUTER MapJoin Code Changes.docx, 
> HIVE-18908.01.patch, HIVE-18908.02.patch, HIVE-18908.03.patch, 
> HIVE-18908.04.patch, HIVE-18908.05.patch, HIVE-18908.06.patch, 
> HIVE-18908.08.patch, HIVE-18908.09.patch, HIVE-18908.091.patch, 
> HIVE-18908.092.patch, HIVE-18908.093.patch, HIVE-18908.096.patch, 
> HIVE-18908.097.patch, HIVE-18908.098.patch, HIVE-18908.099.patch, 
> HIVE-18908.0991.patch, HIVE-18908.0992.patch, HIVE-18908.0993.patch, 
> HIVE-18908.0994.patch, HIVE-18908.0995.patch, HIVE-18908.0996.patch, 
> HIVE-18908.0997.patch, HIVE-18908.0998.patch, HIVE-18908.0999.patch, 
> HIVE-18908.09991.patch, HIVE-18908.09992.patch, HIVE-18908.09993.patch, 
> HIVE-18908.09994.patch, JOIN to MAPJOIN Transformation.pdf, SHARED-MEMORY 
> FULL OUTER MapJoin.pdf
>
>
> Currently, we do not support FULL OUTER JOIN in MapJoin.
> Rough TPC-DS timings run on laptop:
> (NOTE: Query 51 has PTF as a bigger serial portion -- Amdahl's law at play)
> Query 51:
> o Vectorization OFF
> • FULL OUTER MapJoin OFF: 4:30 minutes
> • FULL OUTER MapJoin ON: 4:37 minutes
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 2:35 minutes
> • FULL OUTER MapJoin ON: 1:47 minutes
> Query 97:
> o Vectorization OFF
> • FULL OUTER MapJoin OFF: 2:37 minutes
> • FULL OUTER MapJoin ON: 2:42 minutes
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 1:17 minutes
> • FULL OUTER MapJoin ON: 0:06 minutes
> FULL OUTER Join 10,000,000 rows against 323,910 small table keys
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 14:56 minutes
> • FULL OUTER MapJoin ON: 1:45 minutes
> FULL OUTER Join 10,000,000 rows against 1,000 small table keys
> o Vectorization ON
> • FULL OUTER MapJoin OFF: 12:37 minutes
> • FULL OUTER MapJoin ON: 1:38 minutes
> Hopefully, someone will do large scale cluster testing.  
> [DynamicPartitionedHashJoin] MapJoin should scale dramatically better than 
> [Sort] MergeJoin reduce-shuffle.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20349) Implement Retry Logic in HiveDruidSplit for Scan Queries

2018-09-12 Thread Ashutosh Chauhan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612269#comment-16612269
 ] 

Ashutosh Chauhan commented on HIVE-20349:
-

[~nishantbangarwa] Can you rebase and reupload the patch?

> Implement Retry Logic in HiveDruidSplit for Scan Queries
> 
>
> Key: HIVE-20349
> URL: https://issues.apache.org/jira/browse/HIVE-20349
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20349.1.patch, HIVE-20349.2.patch, HIVE-20349.patch
>
>
> While distributing a Druid scan query, we check where the segments are loaded, 
> and then each HiveDruidSplit directly queries the historical node. 
> There are a few cases when we need to retry and refetch the segments. 
> # The segment is loaded on multiple historical nodes and one of them went 
> down. In this case, when we do not get a response from one replica, we query 
> the next one. 
> # The segment was loaded onto a realtime task and was handed over; by the time 
> we query, the realtime task has already finished. In this case there is no 
> replica. The split needs to query the broker again for the location of the 
> segment and then send the query to the correct historical node. 
> This is also the root cause of the failure of the druidkafkamini_basic.q test, 
> where the segment handover happens before the scan query is executed.
> Note: This is not a problem when we are directly querying Druid brokers, as 
> the broker handles the retry logic. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20306) Implement projection spec for fetching only requested fields from partitions

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612228#comment-16612228
 ] 

Hive QA commented on HIVE-20306:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
26s{color} | {color:blue} standalone-metastore/metastore-common in master has 9 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
14s{color} | {color:red} metastore-server in master failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
21s{color} | {color:red} hcatalog-unit in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 53 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
13s{color} | {color:red} metastore-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13736/dev-support/hive-personality.sh
 |
| git revision | master / f18842e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13736/yetus/branch-findbugs-standalone-metastore_metastore-server.txt
 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13736/yetus/patch-mvninstall-itests_hcatalog-unit.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13736/yetus/whitespace-eol.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13736/yetus/whitespace-tabs.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13736/yetus/patch-findbugs-standalone-metastore_metastore-server.txt
 |
| modules | C: standalone-metastore/metastore-common itests/hcatalog-unit 
standalone-metastore/metastore-server U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13736/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Implement projection spec for fetching only requested fields from partitions
> 
>
> Key: HIVE-20306
> URL: https://issues.apache.org/jira/browse/HIVE-20306
> Project: Hive
>  Issue Type: 

[jira] [Commented] (HIVE-20539) Remove dependency on com.metamx.java-util

2018-09-12 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612184#comment-16612184
 ] 

Hive QA commented on HIVE-20539:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12939312/HIVE-20539.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14935 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13735/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13735/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13735/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12939312 - PreCommit-HIVE-Build

> Remove dependency on com.metamx.java-util
> -
>
> Key: HIVE-20539
> URL: https://issues.apache.org/jira/browse/HIVE-20539
> Project: Hive
>  Issue Type: Task
>Reporter: Nishant Bangarwa
>Assignee: Nishant Bangarwa
>Priority: Major
> Attachments: HIVE-20539.patch
>
>
> java-util was moved from com.metamx to the Druid code repository.
> Currently we are packaging both com.metamx.java-util and io.druid.java-util.
> This task is to remove the dependency on com.metamx.java-util.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20295) Remove !isNumber check after failed constant interpretation

2018-09-12 Thread Zoltan Haindrich (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612162#comment-16612162
 ] 

Zoltan Haindrich commented on HIVE-20295:
-

I think this patch may reject expressions like {{c = 1.0D}}, where c has a type
from the following: (short, long, double).


> Remove !isNumber check after failed constant interpretation
> ---
>
> Key: HIVE-20295
> URL: https://issues.apache.org/jira/browse/HIVE-20295
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Ivan Suller
>Priority: Major
> Attachments: HIVE-20295.01.patch, HIVE-20295.02.patch
>
>
> During constant interpretation, if the number can't be parsed, it might be
> that the comparison is out of range for the type in question, in which case
> the comparison could be removed (see the sketch below).
> https://github.com/apache/hive/blob/2cabb8da150b8fb980223fbd6c2c93b842ca3ee5/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java#L1163
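As an illustration of the out-of-range idea (a hypothetical sketch only, not
the actual TypeCheckProcFactory logic; the class and method names below are
made up): a double literal that cannot be represented in the column's integral
type can never equal that column, so the predicate could be folded, while an
in-range literal such as {{c = 1.0D}} against a short/long/double column must
still be accepted.

{code:java}
// Hypothetical sketch only -- not Hive's TypeCheckProcFactory code.
// If the literal is outside the range of the column's type, "col = literal"
// can never be true; if it is in range (e.g. c = 1.0D for a smallint c),
// the comparison must be kept.
public final class OutOfRangeCheck {

  static boolean outOfRange(String columnType, double literal) {
    switch (columnType) {
      case "tinyint":  return literal < Byte.MIN_VALUE    || literal > Byte.MAX_VALUE;
      case "smallint": return literal < Short.MIN_VALUE   || literal > Short.MAX_VALUE;
      case "int":      return literal < Integer.MIN_VALUE || literal > Integer.MAX_VALUE;
      case "bigint":   return literal < Long.MIN_VALUE    || literal > Long.MAX_VALUE;
      default:         return false; // float/double columns can represent the literal
    }
  }

  public static void main(String[] args) {
    System.out.println(outOfRange("smallint", 1.0d));     // false: keep c = 1.0D
    System.out.println(outOfRange("smallint", 40000.0d)); // true: predicate is always false
  }
}
{code}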



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

