[jira] [Commented] (HIVE-17043) Remove non unique columns from group by keys if not referenced later

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624505#comment-16624505
 ] 

Hive QA commented on HIVE-17043:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12940852/HIVE-17043.5.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 41 failed/errored test(s), 14994 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_join_pkfk]
 (batchId=15)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_spark4] 
(batchId=1)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_join_pushdown] 
(batchId=85)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_vc] (batchId=4)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[runtime_skewjoin_mapjoin_spark]
 (batchId=58)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[dynamic_semijoin_user_level]
 (batchId=154)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynamic_semijoin_reduction]
 (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynamic_semijoin_reduction_4]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynamic_semijoin_reduction_sw]
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_dynamic_partition_pruning_6]
 (batchId=188)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_dynamic_partition_pruning_recursive_mapjoin]
 (batchId=188)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[join_vc] 
(batchId=110)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[runtime_skewjoin_mapjoin_spark]
 (batchId=134)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query22] 
(batchId=266)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query24] 
(batchId=266)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query45] 
(batchId=266)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query54] 
(batchId=266)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query57] 
(batchId=266)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query58] 
(batchId=266)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query65] 
(batchId=266)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query66] 
(batchId=266)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query67] 
(batchId=266)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query70] 
(batchId=266)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query91] 
(batchId=266)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query99] 
(batchId=266)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query14] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query22] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query24] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query45] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query54] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query57] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query58] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query64] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query65] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query67] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query70] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query91] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query99] 
(batchId=264)
org.apache.hadoop.hive.metastore.TestHiveMetaStoreAlterColumnPar.org.apache.hadoop.hive.metastore.TestHiveMetaStoreAlterColumnPar
 (batchId=238)
org.apache.hadoop.hive.ql.exec.spark.TestSparkSessionTimeout.testMultiSparkSessionTimeout
 (batchId=245)
org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testKillQuery (batchId=251)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13965/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13965/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13965/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 41 tests failed
{noformat}

[jira] [Commented] (HIVE-17043) Remove non unique columns from group by keys if not referenced later

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624494#comment-16624494
 ] 

Hive QA commented on HIVE-17043:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
54s{color} | {color:blue} ql in master has 2326 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
40s{color} | {color:red} ql: The patch generated 14 new + 46 unchanged - 3 
fixed = 60 total (was 49) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 13 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13965/dev-support/hive-personality.sh
 |
| git revision | master / cdba00c |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13965/yetus/diff-checkstyle-ql.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13965/yetus/whitespace-eol.txt
 |
| modules | C: itests ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13965/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Remove non unique columns from group by keys if not referenced later
> 
>
> Key: HIVE-17043
> URL: https://issues.apache.org/jira/browse/HIVE-17043
> Project: Hive
>  Issue Type: Sub-task
>  Components: Logical Optimizer
>Affects Versions: 3.0.0
>Reporter: Ashutosh Chauhan
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-17043.1.patch, HIVE-17043.2.patch, 
> HIVE-17043.3.patch, HIVE-17043.4.patch, HIVE-17043.5.patch
>
>
> Group by keys may be a mix of unique (or primary) keys and regular columns. 
> In such cases the presence of the regular columns won't alter the cardinality 
> of the groups. So, if the regular columns are not referenced later, they can 
> be dropped from the group by keys. Depending on the operator tree, in the 
> best case this may result in those columns not being read from disk at all. 
> In the worst case, we will avoid shuffling and sorting the regular columns 
> from mapper to reducer, which still could be 
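For illustration, a minimal HiveQL sketch of the rewrite described above, assuming a hypothetical table t in which id is declared unique (or is a primary key) and name is a regular column; this is not taken from the attached patch:

{code}
-- Original query: id is unique, so grouping by (id, name) produces exactly
-- one group per id, and name is never referenced after the GROUP BY.
SELECT id, count(*)
FROM t
GROUP BY id, name;

-- Equivalent rewritten query: name can be dropped from the group by keys,
-- so it no longer needs to be read, shuffled, or sorted.
SELECT id, count(*)
FROM t
GROUP BY id;
{code}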

[jira] [Commented] (HIVE-20603) "Wrong FS" error when inserting to partition after changing table location filesystem

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624483#comment-16624483
 ] 

Hive QA commented on HIVE-20603:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12940703/HIVE-20603.2.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 14995 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.jdbc.TestActivePassiveHA.testActivePassiveHA (batchId=251)
org.apache.hive.jdbc.TestActivePassiveHA.testClientConnectionsOnFailover 
(batchId=251)
org.apache.hive.jdbc.TestActivePassiveHA.testConnectionActivePassiveHAServiceDiscovery
 (batchId=251)
org.apache.hive.jdbc.TestActivePassiveHA.testManualFailover (batchId=251)
org.apache.hive.jdbc.TestActivePassiveHA.testManualFailoverUnauthorized 
(batchId=251)
org.apache.hive.jdbc.TestActivePassiveHA.testNoConnectionOnPassive (batchId=251)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13964/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13964/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13964/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12940703 - PreCommit-HIVE-Build

> "Wrong FS" error when inserting to partition after changing table location 
> filesystem
> -
>
> Key: HIVE-20603
> URL: https://issues.apache.org/jira/browse/HIVE-20603
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-20603.1.patch, HIVE-20603.2.patch
>
>
> Inserting into an existing partition, after changing a table's location to 
> point to a different HDFS filesystem:
> {noformat}
>query += "CREATE TABLE test_managed_tbl (id int, name string, dept string) 
> PARTITIONED BY (year int);\n"
> query += "INSERT INTO test_managed_tbl PARTITION (year=2016) VALUES 
> (8,'Henry','CSE');\n"
> query += "ALTER TABLE test_managed_tbl ADD PARTITION (year=2017);\n"
> query += "ALTER TABLE test_managed_tbl SET LOCATION 
>   
> 'hdfs://ns2/warehouse/tablespace/managed/hive/test_managed_tbl'"
> query += "INSERT INTO test_managed_tbl PARTITION (year=2017) VALUES 
> (9,'Harris','CSE');\n"
> {noformat}
> Results in the following error:
> {noformat}
> java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://ns1/warehouse/tablespace/managed/hive/test_managed_tbl/year=2017, 
> expected: hdfs://ns2
> at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:781)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:240)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1583)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1580)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1595)
> at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1734)
> at org.apache.hadoop.hive.ql.metadata.Hive.copyFiles(Hive.java:4141)
> at 
> org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1966)
> at 
> org.apache.hadoop.hive.ql.exec.MoveTask.handleStaticParts(MoveTask.java:477)
> at org.apache.hadoop.hive.ql.exec.MoveTask.execute(MoveTask.java:397)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:210)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2701)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2372)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2048)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1746)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1740)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20603) "Wrong FS" error when inserting to partition after changing table location filesystem

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624474#comment-16624474
 ] 

Hive QA commented on HIVE-20603:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
33s{color} | {color:blue} standalone-metastore/metastore-common in master has 
28 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
9s{color} | {color:blue} ql in master has 2326 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13964/dev-support/hive-personality.sh
 |
| git revision | master / cdba00c |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: standalone-metastore/metastore-common itests ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13964/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> "Wrong FS" error when inserting to partition after changing table location 
> filesystem
> -
>
> Key: HIVE-20603
> URL: https://issues.apache.org/jira/browse/HIVE-20603
> Project: Hive
>  Issue Type: Bug
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-20603.1.patch, HIVE-20603.2.patch
>
>
> Inserting into an existing partition, after changing a table's location to 
> point to a different HDFS filesystem:
> {noformat}
>query += "CREATE TABLE test_managed_tbl (id int, name string, dept string) 
> PARTITIONED BY (year int);\n"
> query += "INSERT INTO test_managed_tbl PARTITION (year=2016) VALUES 
> (8,'Henry','CSE');\n"
> query += "ALTER TABLE test_managed_tbl ADD PARTITION (year=2017);\n"
> query += "ALTER TABLE test_managed_tbl SET LOCATION 
>   
> 'hdfs://ns2/warehouse/tablespace/managed/hive/test_managed_tbl'"
> query += "INSERT INTO test_managed_tbl PARTITION (year=2017) VALUES 
> (9,'Harris','CSE');\n"
> 

[jira] [Commented] (HIVE-20615) CachedStore: Background refresh thread bug fixes

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624456#comment-16624456
 ] 

Hive QA commented on HIVE-20615:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12940698/HIVE-20615.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 14967 tests 
executed
*Failed tests:*
{noformat}
TestCachedStore - did not produce a TEST-*.xml file (likely timed out) 
(batchId=227)
TestCatalogCaching - did not produce a TEST-*.xml file (likely timed out) 
(batchId=227)
TestDeadline - did not produce a TEST-*.xml file (likely timed out) 
(batchId=227)
TestHiveMetaStoreGetMetaConf - did not produce a TEST-*.xml file (likely timed 
out) (batchId=227)
TestMarkPartition - did not produce a TEST-*.xml file (likely timed out) 
(batchId=227)
TestMetaStoreEventListenerOnlyOnCommit - did not produce a TEST-*.xml file 
(likely timed out) (batchId=227)
TestMetaStoreInitListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=227)
TestMetaStoreListenersError - did not produce a TEST-*.xml file (likely timed 
out) (batchId=227)
TestMetaStoreSchemaInfo - did not produce a TEST-*.xml file (likely timed out) 
(batchId=227)
TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=194)

[druidmini_masking.q,druidmini_test1.q,druidkafkamini_basic.q,druidmini_joins.q,druid_timestamptz.q]
org.apache.hadoop.hive.ql.exec.spark.TestSparkSessionTimeout.testMultiSparkSessionTimeout
 (batchId=245)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13963/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13963/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13963/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 11 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12940698 - PreCommit-HIVE-Build

> CachedStore: Background refresh thread bug fixes
> 
>
> Key: HIVE-20615
> URL: https://issues.apache.org/jira/browse/HIVE-20615
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Affects Versions: 3.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-20615.1.patch
>
>
> Regression introduced in HIVE-18264. Fixes background thread starting and 
> refreshing of the table cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20615) CachedStore: Background refresh thread bug fixes

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624435#comment-16624435
 ] 

Hive QA commented on HIVE-20615:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 7s{color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
13s{color} | {color:red} metastore-server in master failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 7s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
14s{color} | {color:red} metastore-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13963/dev-support/hive-personality.sh
 |
| git revision | master / cdba00c |
| Default Java | 1.8.0_111 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13963/yetus/branch-findbugs-standalone-metastore_metastore-server.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13963/yetus/whitespace-eol.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13963/yetus/patch-findbugs-standalone-metastore_metastore-server.txt
 |
| modules | C: standalone-metastore/metastore-server U: 
standalone-metastore/metastore-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13963/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> CachedStore: Background refresh thread bug fixes
> 
>
> Key: HIVE-20615
> URL: https://issues.apache.org/jira/browse/HIVE-20615
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Affects Versions: 3.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-20615.1.patch
>
>
> Regression introduced in HIVE-18264. Fixes background thread starting and 
> refreshing of the table cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19748) Add appropriate null checks to DecimalColumnStatsAggregator

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624424#comment-16624424
 ] 

Hive QA commented on HIVE-19748:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12940696/HIVE-19748.1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13962/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13962/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13962/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-09-22 02:01:14.697
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-13962/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-09-22 02:01:14.700
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at cdba00c HIVE-20555: HiveServer2: Preauthenticated subject for 
http transport is not retained for entire duration of http communication in 
some cases (Vaibhav Gumashta reviewed by Daniel Dai)
+ git clean -f -d
Removing ${project.basedir}/
Removing itests/${project.basedir}/
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at cdba00c HIVE-20555: HiveServer2: Preauthenticated subject for 
http transport is not retained for entire duration of http communication in 
some cases (Vaibhav Gumashta reviewed by Daniel Dai)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-09-22 02:01:15.389
+ rm -rf ../yetus_PreCommit-HIVE-Build-13962
+ mkdir ../yetus_PreCommit-HIVE-Build-13962
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-13962
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-13962/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: 
a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/columnstats/aggr/DecimalColumnStatsAggregator.java:
 does not exist in index
error: 
standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/columnstats/aggr/DecimalColumnStatsAggregator.java:
 does not exist in index
error: 
src/main/java/org/apache/hadoop/hive/metastore/columnstats/aggr/DecimalColumnStatsAggregator.java:
 does not exist in index
The patch does not appear to apply with p0, p1, or p2
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf yetus_PreCommit-HIVE-Build-13962
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12940696 - PreCommit-HIVE-Build

> Add appropriate null checks to DecimalColumnStatsAggregator
> ---
>
> Key: HIVE-19748
> URL: https://issues.apache.org/jira/browse/HIVE-19748
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 3.0.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-19748.1.patch, HIVE-19748.1.patch, 
> HIVE-19748.1.patch
>
>
> In some of our internal testing, we noticed that calls to 
> MetaStoreUtils.decimalToDouble(Decimal decimal) from within 
> DecimalColumnStatsAggregator end up passing null Decimal values to the method.
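For illustration only, a rough sketch of the kind of null check the title asks for. ColumnStatisticsObj, getStatsData(), getDecimalStats(), and getLowValue() are from the metastore thrift API; the helper name and the fallback value are assumptions, not the actual patch:

{code}
// Hypothetical helper inside DecimalColumnStatsAggregator: guard against a null
// low/high value instead of passing it straight into MetaStoreUtils.decimalToDouble.
private double lowValueOrDefault(ColumnStatisticsObj statsObj) {
  Decimal lowValue = statsObj.getStatsData().getDecimalStats().getLowValue();
  return (lowValue == null) ? 0d : MetaStoreUtils.decimalToDouble(lowValue);
}
{code}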



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624423#comment-16624423
 ] 

Hive QA commented on HIVE-18871:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12940695/HIVE-18871.8.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14993 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13961/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13961/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13961/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12940695 - PreCommit-HIVE-Build

> hive on tez execution error due to set hive.aux.jars.path to hdfs://
> 
>
> Key: HIVE-18871
> URL: https://issues.apache.org/jira/browse/HIVE-18871
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 2.2.1, 4.0.0, 3.2.0
> Environment: hadoop 2.6.5
> hive 2.2.1
> tez 0.8.4
>Reporter: zhuwei
>Assignee: zhuwei
>Priority: Major
> Attachments: HIVE-18871.1.patch, HIVE-18871.2.patch, 
> HIVE-18871.3.patch, HIVE-18871.4.patch, HIVE-18871.5.patch, 
> HIVE-18871.6.patch, HIVE-18871.7.patch, HIVE-18871.8.patch
>
>
> When the properties 
> hive.aux.jars.path=hdfs://mycluster/apps/hive/lib/guava.jar
> and hive.execution.engine=tez are set, executing any query will fail with the 
> error log below:
> exec.Task: Failed to execute tez graph.
> java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://mycluster/apps/hive/lib/guava.jar, expected: file:///
>  at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) 
> ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:529)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409)
>  ~[hadoop-common-2.6.0.jar:?]
>  at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337) 
> ~[hadoop-common-2.6.0.jar:?]
>  at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1905) 
> ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:1007)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:902)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:845)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.refreshLocalResourcesFromConf(TezSessionState.java:466)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:252)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager$TezSessionPoolSession.openInternal(TezSessionPoolManager.java:622)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:206)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:283) 
> ~[hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:155) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at 

[jira] [Updated] (HIVE-20621) GetOperationStatus called in resultset.next causing incremental slowness

2018-09-21 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-20621:
-
Attachment: HIVE-20621.2.patch

> GetOperationStatus called in resultset.next causing incremental slowness
> 
>
> Key: HIVE-20621
> URL: https://issues.apache.org/jira/browse/HIVE-20621
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20621.1.patch, HIVE-20621.2.patch
>
>
> Fetching result set for a result cache hit query gets slower as more rows are 
> fetched. For fetching 10 row result set it took about 900ms but fetching 200 
> row result set took 8 seconds. 
> Reason for this slowness is GetOperationStatus is invoked inside 
> resultset.next() and it happens for every row even after operation has 
> completed. This is one RPC call per row fetched (there is also connection 
> overhead without keepalive). 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20621) GetOperationStatus called in resultset.next causing incremental slowness

2018-09-21 Thread Prasanth Jayachandran (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624413#comment-16624413
 ] 

Prasanth Jayachandran commented on HIVE-20621:
--

Added comment

> GetOperationStatus called in resultset.next causing incremental slowness
> 
>
> Key: HIVE-20621
> URL: https://issues.apache.org/jira/browse/HIVE-20621
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20621.1.patch, HIVE-20621.2.patch
>
>
> Fetching result set for a result cache hit query gets slower as more rows are 
> fetched. For fetching 10 row result set it took about 900ms but fetching 200 
> row result set took 8 seconds. 
> Reason for this slowness is GetOperationStatus is invoked inside 
> resultset.next() and it happens for every row even after operation has 
> completed. This is one RPC call per row fetched (there is also connection 
> overhead without keepalive). 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20621) GetOperationStatus called in resultset.next causing incremental slowness

2018-09-21 Thread Gopal V (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624409#comment-16624409
 ] 

Gopal V commented on HIVE-20621:


LGTM - +1

{code} {
operationStatus = ((HiveStatement) statement).waitForOperationToComplete();
{code}

add a comment saying that a query is expected to go from running -> complete 
and won't go back to running while fetching results (this is an implicit state 
machine).
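For context, a minimal sketch of the pattern under review, assuming a result-set-side flag that remembers completion; aside from waitForOperationToComplete (quoted above), the field and method names here are illustrative only:

{code}
// Illustrative sketch only: remember that the operation finished so that later
// next() calls do not trigger another GetOperationStatus RPC per row.
private boolean operationCompleted = false;

private void ensureOperationCompleted() throws SQLException {
  if (!operationCompleted) {
    // The query only moves running -> complete and never returns to running
    // while results are being fetched, so checking once is sufficient.
    ((HiveStatement) statement).waitForOperationToComplete();
    operationCompleted = true;
  }
}
{code}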

> GetOperationStatus called in resultset.next causing incremental slowness
> 
>
> Key: HIVE-20621
> URL: https://issues.apache.org/jira/browse/HIVE-20621
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20621.1.patch
>
>
> Fetching result set for a result cache hit query gets slower as more rows are 
> fetched. For fetching 10 row result set it took about 900ms but fetching 200 
> row result set took 8 seconds. 
> Reason for this slowness is GetOperationStatus is invoked inside 
> resultset.next() and it happens for every row even after operation has 
> completed. This is one RPC call per row fetched (there is also connection 
> overhead without keepalive). 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624396#comment-16624396
 ] 

Hive QA commented on HIVE-18871:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
9s{color} | {color:blue} ql in master has 2326 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
40s{color} | {color:red} ql: The patch generated 1 new + 42 unchanged - 0 fixed 
= 43 total (was 42) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13961/dev-support/hive-personality.sh
 |
| git revision | master / cdba00c |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13961/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13961/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> hive on tez execution error due to set hive.aux.jars.path to hdfs://
> 
>
> Key: HIVE-18871
> URL: https://issues.apache.org/jira/browse/HIVE-18871
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 2.2.1, 4.0.0, 3.2.0
> Environment: hadoop 2.6.5
> hive 2.2.1
> tez 0.8.4
>Reporter: zhuwei
>Assignee: zhuwei
>Priority: Major
> Attachments: HIVE-18871.1.patch, HIVE-18871.2.patch, 
> HIVE-18871.3.patch, HIVE-18871.4.patch, HIVE-18871.5.patch, 
> HIVE-18871.6.patch, HIVE-18871.7.patch, HIVE-18871.8.patch
>
>
> When the properties 
> hive.aux.jars.path=hdfs://mycluster/apps/hive/lib/guava.jar
> and hive.execution.engine=tez are set, executing any query will fail with the 
> error log below:
> exec.Task: Failed to execute tez graph.
> java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://mycluster/apps/hive/lib/guava.jar, expected: file:///
>  at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) 
> ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:529)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> 

[jira] [Updated] (HIVE-20621) GetOperationStatus called in resultset.next causing incremental slowness

2018-09-21 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-20621:
-
Description: 
Fetching result set for a result cache hit query gets slower as more rows are 
fetched. For fetching 10 row result set it took about 900ms but fetching 200 
row result set took 8 seconds. 

Reason for this slowness is GetOperationStatus is invoked inside 
resultset.next() and it happens for every row even after operation has 
completed. This is one RPC call per row fetched (there is also connection 
overhead without keepalive). 

  was:
Fetching result set for a result cache hit query gets slower as more rows are 
fetched. For fetching 10 row result set it took about 900ms but fetching 200 
row result set took 8 seconds. 

Reason for this slowness is GetOperationStatus is invoked inside 
resultset.next() and it happens for every row even after operation has 
completed. This is one RPC call per row fetched. 


> GetOperationStatus called in resultset.next causing incremental slowness
> 
>
> Key: HIVE-20621
> URL: https://issues.apache.org/jira/browse/HIVE-20621
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20621.1.patch
>
>
> Fetching result set for a result cache hit query gets slower as more rows are 
> fetched. For fetching 10 row result set it took about 900ms but fetching 200 
> row result set took 8 seconds. 
> Reason for this slowness is GetOperationStatus is invoked inside 
> resultset.next() and it happens for every row even after operation has 
> completed. This is one RPC call per row fetched (there is also connection 
> overhead without keepalive). 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-12812) Enable mapred.input.dir.recursive by default to support union with aggregate function

2018-09-21 Thread Alice Fan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-12812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624389#comment-16624389
 ] 

Alice Fan commented on HIVE-12812:
--

Hi [~ctang.ma], thank you very much for your response :) If you don't mind, I 
would like to assign the ticket to myself and complete the patch. I am thinking 
I could add a condition in MapReduceCompiler.init() so that 
mapred.input.dir.recursive=true is only set when hive.optimize.union.remove=true. 
That way the change stays focused on the issue caused by the union remove 
optimizer and may help avoid regressions in the test cases.
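A rough sketch of what that condition could look like; this is only an illustration of the suggestion above, using string-keyed Configuration accessors rather than any specific HiveConf constants:

{code}
// Illustrative only, inside MapReduceCompiler.init(...), where conf is the
// query's HiveConf (a Hadoop Configuration).
if (conf.getBoolean("hive.optimize.union.remove", false)) {
  // Only the union-remove optimization writes subquery output into
  // subdirectories, so only then is recursive input listing required.
  conf.setBoolean("mapred.input.dir.recursive", true);
}
{code}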

> Enable mapred.input.dir.recursive by default to support union with aggregate 
> function
> -
>
> Key: HIVE-12812
> URL: https://issues.apache.org/jira/browse/HIVE-12812
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.1, 2.1.0
>Reporter: Chaoyu Tang
>Priority: Major
> Attachments: HIVE-12812.patch, HIVE-12812.patch, HIVE-12812.patch
>
>
> When union remove optimization is enabled, union query with aggregate 
> function writes its subquery intermediate results to subdirs which needs 
> mapred.input.dir.recursive to be enabled in order to be fetched. This 
> property is not defined by default in Hive and often ignored by user, which 
> causes the query failure and is hard to be debugged.
> So we need set mapred.input.dir.recursive to true whenever union remove 
> optimization is enabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20621) GetOperationStatus called in resultset.next causing incremental slowness

2018-09-21 Thread Prasanth Jayachandran (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624381#comment-16624381
 ] 

Prasanth Jayachandran commented on HIVE-20621:
--

[~thejas] [~gopalv] can someone please review this patch?

> GetOperationStatus called in resultset.next causing incremental slowness
> 
>
> Key: HIVE-20621
> URL: https://issues.apache.org/jira/browse/HIVE-20621
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20621.1.patch
>
>
> Fetching result set for a result cache hit query gets slower as more rows are 
> fetched. For fetching 10 row result set it took about 900ms but fetching 200 
> row result set took 8 seconds. 
> Reason for this slowness is GetOperationStatus is invoked inside 
> resultset.next() and it happens for every row even after operation has 
> completed. This is one RPC call per row fetched. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20621) GetOperationStatus called in resultset.next causing incremental slowness

2018-09-21 Thread Prasanth Jayachandran (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624380#comment-16624380
 ] 

Prasanth Jayachandran commented on HIVE-20621:
--

After this patch, fetching the result set is independent of the result set 
size. I was able to fetch a 200-row result set in under a second.

> GetOperationStatus called in resultset.next causing incremental slowness
> 
>
> Key: HIVE-20621
> URL: https://issues.apache.org/jira/browse/HIVE-20621
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20621.1.patch
>
>
> Fetching result set for a result cache hit query gets slower as more rows are 
> fetched. For fetching 10 row result set it took about 900ms but fetching 200 
> row result set took 8 seconds. 
> Reason for this slowness is GetOperationStatus is invoked inside 
> resultset.next() and it happens for every row even after operation has 
> completed. This is one RPC call per row fetched. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20621) GetOperationStatus called in resultset.next causing incremental slowness

2018-09-21 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-20621:
-
Environment: (was: Fetching result set for a result cache hit query 
gets slower as more rows are fetched. For fetching 10 row result set it took 
about 900ms but fetching 200 row result set took 8 seconds. 

Reason for this slowness is GetOperationStatus is invoked inside 
resultset.next() and it happens for every row even after operation has 
completed. This is one RPC call per row fetched. )

> GetOperationStatus called in resultset.next causing incremental slowness
> 
>
> Key: HIVE-20621
> URL: https://issues.apache.org/jira/browse/HIVE-20621
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20621.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20621) GetOperationStatus called in resultset.next causing incremental slowness

2018-09-21 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-20621:
-
Description: 
Fetching result set for a result cache hit query gets slower as more rows are 
fetched. For fetching 10 row result set it took about 900ms but fetching 200 
row result set took 8 seconds. 

Reason for this slowness is GetOperationStatus is invoked inside 
resultset.next() and it happens for every row even after operation has 
completed. This is one RPC call per row fetched. 

> GetOperationStatus called in resultset.next causing incremental slowness
> 
>
> Key: HIVE-20621
> URL: https://issues.apache.org/jira/browse/HIVE-20621
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 4.0.0, 3.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20621.1.patch
>
>
> Fetching result set for a result cache hit query gets slower as more rows are 
> fetched. For fetching 10 row result set it took about 900ms but fetching 200 
> row result set took 8 seconds. 
> Reason for this slowness is GetOperationStatus is invoked inside 
> resultset.next() and it happens for every row even after operation has 
> completed. This is one RPC call per row fetched. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20593) Load Data for partitioned ACID tables fails with bucketId out of range: -1

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624376#comment-16624376
 ] 

Hive QA commented on HIVE-20593:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12940685/HIVE-20593.3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14993 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13960/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13960/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13960/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12940685 - PreCommit-HIVE-Build

> Load Data for partitioned ACID tables fails with bucketId out of range: -1
> --
>
> Key: HIVE-20593
> URL: https://issues.apache.org/jira/browse/HIVE-20593
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 3.1.0
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-20593.1.patch, HIVE-20593.2.patch, 
> HIVE-20593.3.patch
>
>
> Load data for ACID tables is failing to load ORC files when it is converted 
> to an IAS job.
>  
> The tempTblObj is inherited from the target table. However, the only table 
> property which needs to be inherited is the bucketing version. Properties 
> like transactional etc. should be ignored.
>  
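As a hedged illustration of that idea (the property key, the targetTable variable, and the surrounding code are assumptions, not the actual patch):

{code}
// Illustrative only: copy just the bucketing version from the target table
// into the temp table object, dropping transactional and other properties.
Map<String, String> tempParams = new HashMap<>();
String bucketingVersion = targetTable.getParameters().get("bucketing_version");
if (bucketingVersion != null) {
  tempParams.put("bucketing_version", bucketingVersion);
}
tempTblObj.setParameters(tempParams);
{code}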



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20621) GetOperationStatus called in resultset.next causing incremental slowness

2018-09-21 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-20621:
-
Attachment: HIVE-20621.1.patch

> GetOperationStatus called in resultset.next causing incremental slowness
> 
>
> Key: HIVE-20621
> URL: https://issues.apache.org/jira/browse/HIVE-20621
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 4.0.0, 3.2.0
> Environment: Fetching result set for a result cache hit query gets 
> slower as more rows are fetched. For fetching 10 row result set it took about 
> 900ms but fetching 200 row result set took 8 seconds. 
> Reason for this slowness is GetOperationStatus is invoked inside 
> resultset.next() and it happens for every row even after operation has 
> completed. This is one RPC call per row fetched. 
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20621.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20621) GetOperationStatus called in resultset.next causing incremental slowness

2018-09-21 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-20621:
-
Status: Patch Available  (was: Open)

> GetOperationStatus called in resultset.next causing incremental slowness
> 
>
> Key: HIVE-20621
> URL: https://issues.apache.org/jira/browse/HIVE-20621
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 4.0.0, 3.2.0
> Environment: Fetching result set for a result cache hit query gets 
> slower as more rows are fetched. For fetching 10 row result set it took about 
> 900ms but fetching 200 row result set took 8 seconds. 
> Reason for this slowness is GetOperationStatus is invoked inside 
> resultset.next() and it happens for every row even after operation has 
> completed. This is one RPC call per row fetched. 
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-20621.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20621) GetOperationStatus called in resultset.next causing incremental slowness

2018-09-21 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran reassigned HIVE-20621:



> GetOperationStatus called in resultset.next causing incremental slowness
> 
>
> Key: HIVE-20621
> URL: https://issues.apache.org/jira/browse/HIVE-20621
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 4.0.0, 3.2.0
> Environment: Fetching result set for a result cache hit query gets 
> slower as more rows are fetched. For fetching 10 row result set it took about 
> 900ms but fetching 200 row result set took 8 seconds. 
> Reason for this slowness is GetOperationStatus is invoked inside 
> resultset.next() and it happens for every row even after operation has 
> completed. This is one RPC call per row fetched. 
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20593) Load Data for partitioned ACID tables fails with bucketId out of range: -1

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624372#comment-16624372
 ] 

Hive QA commented on HIVE-20593:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
57s{color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  3m 
37s{color} | {color:red} root in master failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
2s{color} | {color:blue} ql in master has 2326 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
57s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  3m 
35s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  3m 35s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
38s{color} | {color:red} ql: The patch generated 1 new + 8 unchanged - 0 fixed 
= 9 total (was 8) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13960/dev-support/hive-personality.sh
 |
| git revision | master / cfdb433 |
| Default Java | 1.8.0_111 |
| compile | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13960/yetus/branch-compile-root.txt
 |
| findbugs | v3.0.0 |
| compile | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13960/yetus/patch-compile-root.txt
 |
| javac | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13960/yetus/patch-compile-root.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13960/yetus/diff-checkstyle-ql.txt
 |
| modules | C: . ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13960/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Load Data for partitioned ACID tables fails with bucketId out of range: -1
> --
>
> Key: HIVE-20593
> URL: https://issues.apache.org/jira/browse/HIVE-20593
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 3.1.0
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-20593.1.patch, HIVE-20593.2.patch, 
> HIVE-20593.3.patch
>
>
> Load data for ACID tables is failing to load ORC files when it is converted 
> to IAS job.
>  
> The tempTblObj is inherited from target table. However, the only table 
> property which needs to be inherited is bucketing version. Properties like 
> transactional etc should be ignored.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20556) Expose an API to retrieve the TBL_ID from TBLS in the metastore tables

2018-09-21 Thread Jaume M (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaume M updated HIVE-20556:
---
Attachment: HIVE-20556.13.patch
Status: Patch Available  (was: Open)

> Expose an API to retrieve the TBL_ID from TBLS in the metastore tables
> --
>
> Key: HIVE-20556
> URL: https://issues.apache.org/jira/browse/HIVE-20556
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore, Standalone Metastore
>Reporter: Jaume M
>Assignee: Jaume M
>Priority: Major
> Attachments: HIVE-20556.1.patch, HIVE-20556.10.patch, 
> HIVE-20556.11.patch, HIVE-20556.12.patch, HIVE-20556.13.patch, 
> HIVE-20556.2.patch, HIVE-20556.3.patch, HIVE-20556.4.patch, 
> HIVE-20556.5.patch, HIVE-20556.6.patch, HIVE-20556.7.patch, 
> HIVE-20556.8.patch, HIVE-20556.9.patch
>
>
> We have two options to do this
> 1) Use the current MTable and add a field for this value
> 2) Add an independent API call to the metastore that would return the TBL_ID.
> Option 1 is preferable.
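A minimal sketch of option 1 under stated assumptions; the class, field and
accessors below are illustrative stand-ins for the metastore table model, not
its real definition.

{code:java}
// Illustrative only: expose the TBLS surrogate key on the table model object.
public class TableModel {
  private long tblId;  // value of TBL_ID from the metastore TBLS table

  public long getTblId() {
    return tblId;
  }

  public void setTblId(long tblId) {
    this.tblId = tblId;
  }
}
{code}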



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20556) Expose an API to retrieve the TBL_ID from TBLS in the metastore tables

2018-09-21 Thread Jaume M (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jaume M updated HIVE-20556:
---
Status: Open  (was: Patch Available)

> Expose an API to retrieve the TBL_ID from TBLS in the metastore tables
> --
>
> Key: HIVE-20556
> URL: https://issues.apache.org/jira/browse/HIVE-20556
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore, Standalone Metastore
>Reporter: Jaume M
>Assignee: Jaume M
>Priority: Major
> Attachments: HIVE-20556.1.patch, HIVE-20556.10.patch, 
> HIVE-20556.11.patch, HIVE-20556.12.patch, HIVE-20556.13.patch, 
> HIVE-20556.2.patch, HIVE-20556.3.patch, HIVE-20556.4.patch, 
> HIVE-20556.5.patch, HIVE-20556.6.patch, HIVE-20556.7.patch, 
> HIVE-20556.8.patch, HIVE-20556.9.patch
>
>
> We have two options to do this
> 1) Use the current MTable and add a field for this value
> 2) Add an independent API call to the metastore that would return the TBL_ID.
> Option 1 is preferable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20620) manifest collisions when inserting into bucketed sorted MM tables with dynamic partitioning

2018-09-21 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624318#comment-16624318
 ] 

Sergey Shelukhin commented on HIVE-20620:
-

cc [~djaiswal]

> manifest collisions when inserting into bucketed sorted MM tables with 
> dynamic partitioning
> ---
>
> Key: HIVE-20620
> URL: https://issues.apache.org/jira/browse/HIVE-20620
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-20620.01.patch, HIVE-20620.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20555) HiveServer2: Preauthenticated subject for http transport is not retained for entire duration of http communication in some cases

2018-09-21 Thread Vaibhav Gumashta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-20555:

Fix Version/s: (was: 3.10)

> HiveServer2: Preauthenticated subject for http transport is not retained for 
> entire duration of http communication in some cases
> 
>
> Key: HIVE-20555
> URL: https://issues.apache.org/jira/browse/HIVE-20555
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.3.2, 3.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-20555.1.patch, HIVE-20555.1.patch, 
> HIVE-20555.1.patch
>
>
> As implemented in HIVE-8705, for http transport, we add the logged in 
> subject's credentials in the http header via a request interceptor. The 
> request interceptor doesn't seem to be getting used for some http traffic 
> (e.g. knox ssl in the same rpc). It would also be better to cache the logged 
> in subject for the duration of the whole session.
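A hedged sketch of the general pattern described above, with the suggested
per-session caching (the header name and token handling are illustrative, not
Hive's actual code): derive the credential once from the logged-in subject and
let an HttpClient request interceptor attach it to every outgoing request, so
later traffic does not depend on the thread's current subject.

{code:java}
import java.io.IOException;
import org.apache.http.HttpException;
import org.apache.http.HttpRequest;
import org.apache.http.HttpRequestInterceptor;
import org.apache.http.protocol.HttpContext;

// Illustrative interceptor: the header value is computed once per session
// and reused for every request sent over that session.
public class CachedAuthHeaderInterceptor implements HttpRequestInterceptor {
  private final String authHeaderValue;

  public CachedAuthHeaderInterceptor(String authHeaderValue) {
    this.authHeaderValue = authHeaderValue;
  }

  @Override
  public void process(HttpRequest request, HttpContext context)
      throws HttpException, IOException {
    request.addHeader("Authorization", authHeaderValue);
  }
}
{code}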



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20555) HiveServer2: Preauthenticated subject for http transport is not retained for entire duration of http communication in some cases

2018-09-21 Thread Vaibhav Gumashta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-20555:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   3.10
   4.0.0
   Status: Resolved  (was: Patch Available)

Pushed to branch-3, branch-3.1 and master. Thanks [~daijy]

> HiveServer2: Preauthenticated subject for http transport is not retained for 
> entire duration of http communication in some cases
> 
>
> Key: HIVE-20555
> URL: https://issues.apache.org/jira/browse/HIVE-20555
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.3.2, 3.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Fix For: 4.0.0, 3.10, 3.2.0
>
> Attachments: HIVE-20555.1.patch, HIVE-20555.1.patch, 
> HIVE-20555.1.patch
>
>
> As implemented in HIVE-8705, for http transport, we add the logged in 
> subject's credentials in the http header via a request interceptor. The 
> request interceptor doesn't seem to be getting used for some http traffic 
> (e.g. knox ssl in the same rpc). It would also be better to cache the logged 
> in subject for the duration of the whole session.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20555) HiveServer2: Preauthenticated subject for http transport is not retained for entire duration of http communication in some cases

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624302#comment-16624302
 ] 

Hive QA commented on HIVE-20555:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12940682/HIVE-20555.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14993 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13959/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13959/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13959/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12940682 - PreCommit-HIVE-Build

> HiveServer2: Preauthenticated subject for http transport is not retained for 
> entire duration of http communication in some cases
> 
>
> Key: HIVE-20555
> URL: https://issues.apache.org/jira/browse/HIVE-20555
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.3.2, 3.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-20555.1.patch, HIVE-20555.1.patch, 
> HIVE-20555.1.patch
>
>
> As implemented in HIVE-8705, for http transport, we add the logged in 
> subject's credentials in the http header via a request interceptor. The 
> request interceptor doesn't seem to be getting used for some http traffic 
> (e.g. knox ssl in the same rpc). It would also be better to cache the logged 
> in subject for the duration of the whole session.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20540) Vectorization : Support loading bucketed tables using sorted dynamic partition optimizer - II

2018-09-21 Thread Deepak Jaiswal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-20540:
--
Attachment: HIVE-20540.1.patch

> Vectorization : Support loading bucketed tables using sorted dynamic 
> partition optimizer - II
> -
>
> Key: HIVE-20540
> URL: https://issues.apache.org/jira/browse/HIVE-20540
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-20540.1.patch
>
>
> Follow-up to HIVE-20510 with the remaining issues:
>  
> 1. Avoid using Reflection.
> 2. In VectorizationContext, use the correct place to set up the VectorExpression; 
> it may be missed in certain cases.
> 3. In BucketNumExpression, make sure that a value is not overwritten before 
> it is processed. Use a flag to achieve this.
> cc [~gopalv]
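For item 3, a minimal sketch of the flag idea under stated assumptions (the
class and field names are illustrative, not the actual BucketNumExpression
code): the output slot refuses a new bucket number until the previous one has
been consumed.

{code:java}
// Illustrative only: guard the bucket-number slot with a "pending" flag so a
// value cannot be overwritten before it has been processed.
class BucketNumberSlot {
  private long value;
  private boolean pending;  // true while the last value is still unprocessed

  void set(long bucketNum) {
    if (pending) {
      throw new IllegalStateException("previous bucket number not yet processed");
    }
    value = bucketNum;
    pending = true;
  }

  long consume() {
    pending = false;
    return value;
  }
}
{code}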



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20540) Vectorization : Support loading bucketed tables using sorted dynamic partition optimizer - II

2018-09-21 Thread Deepak Jaiswal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-20540:
--
Status: Patch Available  (was: In Progress)

> Vectorization : Support loading bucketed tables using sorted dynamic 
> partition optimizer - II
> -
>
> Key: HIVE-20540
> URL: https://issues.apache.org/jira/browse/HIVE-20540
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-20540.1.patch
>
>
> Followup to HIVE-20510 with remaining issues,
>  
> 1. Avoid using Reflection.
> 2. In VectorizationContext, use correct place to setup the VectorExpression. 
> It may be missed in certain cases.
> 3. In BucketNumExpression, make sure that a value is not overwritten before 
> it is processed. Use a flag to achieve this.
> cc [~gopalv]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HIVE-20540) Vectorization : Support loading bucketed tables using sorted dynamic partition optimizer - II

2018-09-21 Thread Deepak Jaiswal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-20540 started by Deepak Jaiswal.
-
> Vectorization : Support loading bucketed tables using sorted dynamic 
> partition optimizer - II
> -
>
> Key: HIVE-20540
> URL: https://issues.apache.org/jira/browse/HIVE-20540
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-20540.1.patch
>
>
> Followup to HIVE-20510 with remaining issues,
>  
> 1. Avoid using Reflection.
> 2. In VectorizationContext, use correct place to setup the VectorExpression. 
> It may be missed in certain cases.
> 3. In BucketNumExpression, make sure that a value is not overwritten before 
> it is processed. Use a flag to achieve this.
> cc [~gopalv]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20538) Allow to store a key value together with a transaction.

2018-09-21 Thread Jaume M (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624280#comment-16624280
 ] 

Jaume M commented on HIVE-20538:


Published in https://reviews.apache.org/r/68805/ [~ekoifman]

> Allow to store a key value together with a transaction.
> ---
>
> Key: HIVE-20538
> URL: https://issues.apache.org/jira/browse/HIVE-20538
> Project: Hive
>  Issue Type: New Feature
>  Components: Standalone Metastore, Transactions
>Reporter: Jaume M
>Assignee: Jaume M
>Priority: Major
> Attachments: HIVE-20538.1.patch, HIVE-20538.1.patch, 
> HIVE-20538.2.patch, HIVE-20538.3.patch, HIVE-20538.4.patch
>
>
> This can be useful for example to know if a transaction has already happened.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20555) HiveServer2: Preauthenticated subject for http transport is not retained for entire duration of http communication in some cases

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624253#comment-16624253
 ] 

Hive QA commented on HIVE-20555:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
29s{color} | {color:blue} jdbc in master has 17 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
38s{color} | {color:blue} service in master has 48 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
17s{color} | {color:red} jdbc in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
10s{color} | {color:red} jdbc: The patch generated 3 new + 33 unchanged - 6 
fixed = 36 total (was 39) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} service: The patch generated 0 new + 3 unchanged - 1 
fixed = 3 total (was 4) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13959/dev-support/hive-personality.sh
 |
| git revision | master / cfdb433 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13959/yetus/patch-mvninstall-jdbc.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13959/yetus/diff-checkstyle-jdbc.txt
 |
| modules | C: jdbc service U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13959/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> HiveServer2: Preauthenticated subject for http transport is not retained for 
> entire duration of http communication in some cases
> 
>
> Key: HIVE-20555
> URL: https://issues.apache.org/jira/browse/HIVE-20555
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.3.2, 3.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-20555.1.patch, HIVE-20555.1.patch, 
> HIVE-20555.1.patch
>
>
> As implemented in HIVE-8705, for http transport, we add the logged in 
> subject's credentials in the 

[jira] [Commented] (HIVE-17684) HoS memory issues with MapJoinMemoryExhaustionHandler

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624244#comment-16624244
 ] 

Hive QA commented on HIVE-17684:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12940678/HIVE-17684.11.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14993 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13958/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13958/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13958/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12940678 - PreCommit-HIVE-Build

> HoS memory issues with MapJoinMemoryExhaustionHandler
> -
>
> Key: HIVE-17684
> URL: https://issues.apache.org/jira/browse/HIVE-17684
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Misha Dmitriev
>Priority: Major
> Attachments: HIVE-17684.01.patch, HIVE-17684.02.patch, 
> HIVE-17684.03.patch, HIVE-17684.04.patch, HIVE-17684.05.patch, 
> HIVE-17684.06.patch, HIVE-17684.07.patch, HIVE-17684.08.patch, 
> HIVE-17684.09.patch, HIVE-17684.10.patch, HIVE-17684.11.patch
>
>
> We have seen a number of memory issues due to the {{HashSinkOperator}} use of 
> the {{MapJoinMemoryExhaustionHandler}}. This handler is meant to detect 
> scenarios where the small table is taking too much space in memory, in which 
> case a {{MapJoinMemoryExhaustionError}} is thrown.
> The configs to control this logic are:
> {{hive.mapjoin.localtask.max.memory.usage}} (default 0.90)
> {{hive.mapjoin.followby.gby.localtask.max.memory.usage}} (default 0.55)
> The handler works by using the {{MemoryMXBean}} and uses the following logic 
> to estimate how much memory the {{HashMap}} is consuming: 
> {{MemoryMXBean#getHeapMemoryUsage().getUsed() / 
> MemoryMXBean#getHeapMemoryUsage().getMax()}}
> The issue is that {{MemoryMXBean#getHeapMemoryUsage().getUsed()}} can be 
> inaccurate. The value returned by this method includes all reachable and 
> unreachable memory on the heap, so there may be a bunch of garbage data, and 
> the JVM just hasn't taken the time to reclaim it all. This can lead to 
> intermittent failures of this check even though a simple GC would have 
> reclaimed enough space for the process to continue working.
> We should re-think the usage of {{MapJoinMemoryExhaustionHandler}} for HoS. 
> In Hive-on-MR this probably made sense to use because every Hive task was run 
> in a dedicated container, so a Hive Task could assume it created most of the 
> data on the heap. However, in Hive-on-Spark there can be multiple Hive Tasks 
> running in a single executor, each doing different things.
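A self-contained sketch of the check the description refers to, using the
standard MemoryMXBean API (the threshold is the documented default; the class
around it is illustrative): because getUsed() counts unreachable garbage as
well as live data, the ratio can cross the threshold even when a GC would
reclaim plenty of space.

{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Illustrative version of the heap-usage check described above.
public class HeapUsageCheck {
  public static void main(String[] args) {
    double maxMemoryUsage = 0.90;  // hive.mapjoin.localtask.max.memory.usage default
    MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
    double ratio = (double) heap.getUsed() / heap.getMax();  // getUsed() includes garbage
    if (ratio > maxMemoryUsage) {
      // this is the point where a MapJoinMemoryExhaustionError would be raised
      System.out.println("memory threshold exceeded: " + ratio);
    }
  }
}
{code}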



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20620) manifest collisions when inserting into bucketed sorted MM tables with dynamic partitioning

2018-09-21 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624242#comment-16624242
 ] 

Sergey Shelukhin commented on HIVE-20620:
-

Updated the test; the difference in the plan was the culprit. Gopal suggested the 
optimization that caused the difference.
Seems like the issue can be reproduced now, and the patch fixes it for me.

> manifest collisions when inserting into bucketed sorted MM tables with 
> dynamic partitioning
> ---
>
> Key: HIVE-20620
> URL: https://issues.apache.org/jira/browse/HIVE-20620
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-20620.01.patch, HIVE-20620.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20620) manifest collisions when inserting into bucketed sorted MM tables with dynamic partitioning

2018-09-21 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-20620:

Attachment: HIVE-20620.01.patch

> manifest collisions when inserting into bucketed sorted MM tables with 
> dynamic partitioning
> ---
>
> Key: HIVE-20620
> URL: https://issues.apache.org/jira/browse/HIVE-20620
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-20620.01.patch, HIVE-20620.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17043) Remove non unique columns from group by keys if not referenced later

2018-09-21 Thread Vineet Garg (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624240#comment-16624240
 ] 

Vineet Garg commented on HIVE-17043:


[~jcamachorodriguez] I introduced new logic to compute unique keys based on 
statistics. Now {{RelMdUniqueKeys}} is only used for computing keys based on 
constraints.

> Remove non unique columns from group by keys if not referenced later
> 
>
> Key: HIVE-17043
> URL: https://issues.apache.org/jira/browse/HIVE-17043
> Project: Hive
>  Issue Type: Sub-task
>  Components: Logical Optimizer
>Affects Versions: 3.0.0
>Reporter: Ashutosh Chauhan
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-17043.1.patch, HIVE-17043.2.patch, 
> HIVE-17043.3.patch, HIVE-17043.4.patch, HIVE-17043.5.patch
>
>
> Group by keys may be a mix of unique (or primary) keys and regular columns. 
> In such cases the presence of the regular columns won't alter the cardinality 
> of the groups. So, if the regular columns are not referenced later, they can 
> be dropped from the group by keys. Depending on the operator tree, this may 
> result in those columns not being read from disk at all in the best case. In 
> the worst case, we still avoid shuffling and sorting the regular columns from 
> mapper to reducer, which can be a substantial CPU and network saving.
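As a concrete illustration: if id is a unique (or primary) key and a plan
groups by (id, name) but only id and the aggregates are referenced afterwards,
the group by can be reduced to id alone; name never changes the group
cardinality, so it no longer has to be shuffled or sorted.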



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17043) Remove non unique columns from group by keys if not referenced later

2018-09-21 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-17043:
---
Attachment: HIVE-17043.5.patch

> Remove non unique columns from group by keys if not referenced later
> 
>
> Key: HIVE-17043
> URL: https://issues.apache.org/jira/browse/HIVE-17043
> Project: Hive
>  Issue Type: Sub-task
>  Components: Logical Optimizer
>Affects Versions: 3.0.0
>Reporter: Ashutosh Chauhan
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-17043.1.patch, HIVE-17043.2.patch, 
> HIVE-17043.3.patch, HIVE-17043.4.patch, HIVE-17043.5.patch
>
>
> Group by keys may be a mix of unique (or primary) keys and regular columns. 
> In such cases presence of regular column won't alter cardinality of groups. 
> So, if regular columns are not referenced later, they can be dropped from 
> group by keys. Depending on operator tree may result in those columns not 
> being read at all from disk in best case. In worst case, we will avoid 
> shuffling and sorting regular columns from mapper to reducer, which still 
> could be substantial CPU and network savings.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17043) Remove non unique columns from group by keys if not referenced later

2018-09-21 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-17043:
---
Status: Patch Available  (was: Open)

> Remove non unique columns from group by keys if not referenced later
> 
>
> Key: HIVE-17043
> URL: https://issues.apache.org/jira/browse/HIVE-17043
> Project: Hive
>  Issue Type: Sub-task
>  Components: Logical Optimizer
>Affects Versions: 3.0.0
>Reporter: Ashutosh Chauhan
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-17043.1.patch, HIVE-17043.2.patch, 
> HIVE-17043.3.patch, HIVE-17043.4.patch, HIVE-17043.5.patch
>
>
> Group by keys may be a mix of unique (or primary) keys and regular columns. 
> In such cases presence of regular column won't alter cardinality of groups. 
> So, if regular columns are not referenced later, they can be dropped from 
> group by keys. Depending on operator tree may result in those columns not 
> being read at all from disk in best case. In worst case, we will avoid 
> shuffling and sorting regular columns from mapper to reducer, which still 
> could be substantial CPU and network savings.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17043) Remove non unique columns from group by keys if not referenced later

2018-09-21 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-17043:
---
Status: Open  (was: Patch Available)

> Remove non unique columns from group by keys if not referenced later
> 
>
> Key: HIVE-17043
> URL: https://issues.apache.org/jira/browse/HIVE-17043
> Project: Hive
>  Issue Type: Sub-task
>  Components: Logical Optimizer
>Affects Versions: 3.0.0
>Reporter: Ashutosh Chauhan
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-17043.1.patch, HIVE-17043.2.patch, 
> HIVE-17043.3.patch, HIVE-17043.4.patch, HIVE-17043.5.patch
>
>
> Group by keys may be a mix of unique (or primary) keys and regular columns. 
> In such cases presence of regular column won't alter cardinality of groups. 
> So, if regular columns are not referenced later, they can be dropped from 
> group by keys. Depending on operator tree may result in those columns not 
> being read at all from disk in best case. In worst case, we will avoid 
> shuffling and sorting regular columns from mapper to reducer, which still 
> could be substantial CPU and network savings.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17684) HoS memory issues with MapJoinMemoryExhaustionHandler

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624234#comment-16624234
 ] 

Hive QA commented on HIVE-17684:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
11s{color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  3m 
38s{color} | {color:red} root in master failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
24s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
32s{color} | {color:blue} common in master has 65 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
9s{color} | {color:blue} ql in master has 2326 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
22s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  3m 
39s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  3m 39s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
16s{color} | {color:red} common: The patch generated 3 new + 424 unchanged - 0 
fixed = 427 total (was 424) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
28s{color} | {color:red} root: The patch generated 3 new + 424 unchanged - 0 
fixed = 427 total (was 424) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
38s{color} | {color:red} ql: The patch generated 5 new + 6 unchanged - 0 fixed 
= 11 total (was 6) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
21s{color} | {color:red} ql generated 1 new + 2325 unchanged - 1 fixed = 2326 
total (was 2326) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Class org.apache.hadoop.hive.ql.exec.HashTableSinkOperator defines 
non-transient non-serializable instance field memoryExhaustionChecker  In 
HashTableSinkOperator.java:instance field memoryExhaustionChecker  In 
HashTableSinkOperator.java |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  
xml  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13958/dev-support/hive-personality.sh
 |
| git revision | master / f404b0d |
| Default Java | 1.8.0_111 |
| compile | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13958/yetus/branch-compile-root.txt
 |
| findbugs | v3.0.0 |
| compile | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13958/yetus/patch-compile-root.txt
 |
| javac | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13958/yetus/patch-compile-root.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13958/yetus/diff-checkstyle-common.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13958/yetus/diff-checkstyle-root.txt
 |
| checkstyle | 

[jira] [Commented] (HIVE-20538) Allow to store a key value together with a transaction.

2018-09-21 Thread Eugene Koifman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624232#comment-16624232
 ] 

Eugene Koifman commented on HIVE-20538:
---

could you create an RB please

> Allow to store a key value together with a transaction.
> ---
>
> Key: HIVE-20538
> URL: https://issues.apache.org/jira/browse/HIVE-20538
> Project: Hive
>  Issue Type: New Feature
>  Components: Standalone Metastore, Transactions
>Reporter: Jaume M
>Assignee: Jaume M
>Priority: Major
> Attachments: HIVE-20538.1.patch, HIVE-20538.1.patch, 
> HIVE-20538.2.patch, HIVE-20538.3.patch, HIVE-20538.4.patch
>
>
> This can be useful for example to know if a transaction has already happened.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20620) manifest collisions when inserting into bucketed sorted MM tables with dynamic partitioning

2018-09-21 Thread Eugene Koifman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624226#comment-16624226
 ] 

Eugene Koifman commented on HIVE-20620:
---

There are some ACID tests that test more buckets than reducers 
(bucket_num_reducers_acid.*q), but these are on MR.

> manifest collisions when inserting into bucketed sorted MM tables with 
> dynamic partitioning
> ---
>
> Key: HIVE-20620
> URL: https://issues.apache.org/jira/browse/HIVE-20620
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-20620.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20444) Parameter is not properly quoted in DbNotificationListener.addWriteNotificationLog

2018-09-21 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-20444:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.2.0
   4.0.0
   Status: Resolved  (was: Patch Available)

Patch pushed to branch-3/master. Thanks [~maheshk114] [~sankarh] for review!

> Parameter is not properly quoted in 
> DbNotificationListener.addWriteNotificationLog
> --
>
> Key: HIVE-20444
> URL: https://issues.apache.org/jira/browse/HIVE-20444
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-20444.1.patch, HIVE-20444.2.patch, JDBCTest.java
>
>
> See exception:
> {code}
> 2018-08-22T04:44:22,758 INFO  [pool-8-thread-190]: 
> listener.DbNotificationListener 
> (DbNotificationListener.java:addWriteNotificationLog(765)) - Going to execute 
> insert  "WNL_WRITEID", "WNL_DATABASE", "WNL_TABLE", "WNL_PARTITION", "WNL_TABLE_OBJ", 
> "WNL_PARTITION_OBJ", "WNL_FILES", "WNL_EVENT_TIME") values 
> (50,124,1,'default','t1_default','','{"1":{"str":"t1_default"},"2":{"str":"default"},"3":{"str":"hrt_qa"},"4":{"i32":1534913061},"5":{"i32":0},"6":{"i32":0},"7":{"rec":{"1":{"lst":["rec",15,{"1":{"str":"t"},"2":{"str":"tinyint"}},{"1":{"str":"si"},"2":{"str":"smallint"}},{"1":{"str":"i"},"2":{"str":"int"}},{"1":{"str":"b"},"2":{"str":"bigint"}},{"1":{"str":"f"},"2":{"str":"double"}},{"1":{"str":"d"},"2":{"str":"double"}},{"1":{"str":"s"},"2":{"str":"varchar(25)"}},{"1":{"str":"dc"},"2":{"str":"decimal(38,18)"}},{"1":{"str":"bo"},"2":{"str":"varchar(5)"}},{"1":{"str":"v"},"2":{"str":"varchar(25)"}},{"1":{"str":"c"},"2":{"str":"char(25)"}},{"1":{"str":"ts"},"2":{"str":"timestamp"}},{"1":{"str":"dt"},"2":{"str":"date"}},{"1":{"str":"st"},"2":{"str":"string"}},{"1":{"str":"tz"},"2":{"str":"timestamp
>  with local time 
> zone('UTC')"}}]},"2":{"str":"hdfs://mycluster/warehouse/tablespace/managed/hive/t1_default"},"3":{"str":"org.apache.hadoop.mapred.TextInputFormat"},"4":{"str":"org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"},"5":{"tf":0},"6":{"i32":-1},"7":{"rec":{"2":{"str":"org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe"},"3":{"map":["str","str",1,{"serialization.format":"1"}]}}},"8":{"lst":["str",0]},"9":{"lst":["rec",0]},"10":{"map":["str","str",0,{}]},"11":{"rec":{"1":{"lst":["str",0]},"2":{"lst":["lst",0]},"3":{"map":["lst","str",0,{}]}}},"12":{"tf":0}}},"8":{"lst":["rec",0]},"9":{"map":["str","str",9,{"totalSize":"0","rawDataSize":"0","numRows":"0","transactional_properties":"insert_only","COLUMN_STATS_ACCURATE":"{\"BASIC_STATS\":\"true\",\"COLUMN_STATS\":{\"b\":\"true\",\"bo\":\"true\",\"c\":\"true\",\"d\":\"true\",\"dc\":\"true\",\"dt\":\"true\",\"f\":\"true\",\"i\":\"true\",\"s\":\"true\",\"si\":\"true\",\"st\":\"true\",\"t\":\"true\",\"ts\":\"true\",\"tz\":\"true\",\"v\":\"true\"}}","numFiles":"0","transient_lastDdlTime":"1534913062","bucketing_version":"2","transactional":"true"}]},"12":{"str":"MANAGED_TABLE"},"15":{"tf":0},"17":{"str":"hive"},"18":{"i32":1},"19":{"i64":1}}','null','hdfs://mycluster/warehouse/tablespace/managed/hive/t1_default/delta_001_001_/00_0###delta_001_001_',1534913062)>
> 2018-08-22T04:44:22,773 ERROR [pool-8-thread-190]: 
> metastore.RetryingHMSHandler (RetryingHMSHandler.java:invokeInternal(201)) - 
> MetaException(message:Unable to add write notification log 
> org.postgresql.util.PSQLException: ERROR: syntax error at or near "UTC"
>   Position: 1032
> at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2284)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2003)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:200)
> at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:424)
> at 
> org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:321)
> at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:313)
> at 
> com.zaxxer.hikari.pool.ProxyStatement.execute(ProxyStatement.java:92)
> at 
> com.zaxxer.hikari.pool.HikariProxyStatement.execute(HikariProxyStatement.java)
> at 
> org.apache.hive.hcatalog.listener.DbNotificationListener.addWriteNotificationLog(DbNotificationListener.java:766)
> at 
> org.apache.hive.hcatalog.listener.DbNotificationListener.onAcidWrite(DbNotificationListener.java:657)
> at 
> org.apache.hadoop.hive.metastore.MetaStoreListenerNotifier.lambda$static$12(MetaStoreListenerNotifier.java:249)
> at 
> 
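A generic JDBC sketch of the fix direction, not the actual DbNotificationListener
change (the table name below is illustrative; only the column names appear in the
log above): binding the serialized table object as a parameter lets the driver
handle quoting, so embedded strings such as "timestamp with local time
zone('UTC')" cannot break the statement.

{code:java}
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Illustrative only: bound parameters instead of string concatenation.
public class WriteNotificationInsertSketch {
  static void insert(Connection conn, long writeId, String tableObjJson) throws SQLException {
    String sql = "insert into \"WRITE_NOTIFICATION_LOG\" (\"WNL_WRITEID\", \"WNL_TABLE_OBJ\") "
        + "values (?, ?)";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
      ps.setLong(1, writeId);
      ps.setString(2, tableObjJson);  // the driver quotes/escapes the payload
      ps.execute();
    }
  }
}
{code}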

[jira] [Updated] (HIVE-20610) TestDbNotificationListener should not use /tmp directory

2018-09-21 Thread Bharathkrishna Guruvayoor Murali (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharathkrishna Guruvayoor Murali updated HIVE-20610:

Attachment: HIVE-20610.1.patch

> TestDbNotificationListener should not use /tmp directory
> 
>
> Key: HIVE-20610
> URL: https://issues.apache.org/jira/browse/HIVE-20610
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0, 4.0.0
>Reporter: Bharathkrishna Guruvayoor Murali
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-20610.1.patch
>
>
> Using /tmp directory creates exceptions for tests like dropTable :
> {code:java}
> 2018-09-19T06:42:04,818  INFO [main] metastore.HiveMetaStore: 0: drop_table : 
> tbl=hive.default.droptbl
> 2018-09-19T06:42:04,819  INFO [main] HiveMetaStore.audit: ugi=hiveptest   
> ip=unknown-ip-addr  cmd=drop_table : tbl=hive.default.droptbl   
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/.ICE-unix]: it still exists.
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/.XIM-unix]: it still exists.
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/.X11-unix]: it still exists.
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/hsperfdata_root]: it still exists.
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/.font-unix]: it still exists.
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/.Test-unix]: it still exists.
> 2018-09-19T06:42:05,072 ERROR [main] utils.FileUtils: Failed to delete 
> file:/tmp
> 2018-09-19T06:42:05,072 ERROR [main] utils.MetaStoreUtils: Got exception: 
> org.apache.hadoop.hive.metastore.api.MetaException Unable to delete 
> directory: file:/tmp
> org.apache.hadoop.hive.metastore.api.MetaException: Unable to delete 
> directory: file:/tmp
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreFsImpl.deleteDir(HiveMetaStoreFsImpl.java:45)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.Warehouse.deleteDir(Warehouse.java:365) 
> [hive-standalone-metastore-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.Warehouse.deleteDir(Warehouse.java:353) 
> [hive-standalone-metastore-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.deleteTableData(HiveMetaStore.java:2562)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:2523)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:2685)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_102]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_102]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_102]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_102]
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at com.sun.proxy.$Proxy33.drop_table_with_environment_context(Unknown 
> Source) [?:?]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.drop_table_with_environment_context(HiveMetaStoreClient.java:3204)
>  [hive-standalone-metastore-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:1492)
>  [hive-standalone-metastore-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:1432)
>  [hive-standalone-metastore-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropTable(TestDbNotificationListener.java:522)
>  [test-classes/:?]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_102]{code}
>  
>  
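A small sketch of the obvious alternative, not the actual patch: create a
dedicated scratch directory per test run instead of pointing the test warehouse
at /tmp, so cleanup never tries to delete unrelated files such as /tmp/.X11-unix.
How the path is wired into the test configuration is left out here.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative only: per-test scratch directory instead of /tmp.
public class TestScratchDir {
  public static void main(String[] args) throws IOException {
    Path warehouseDir = Files.createTempDirectory("hive-test-warehouse-");
    // the test would point its warehouse/scratch configuration at this path
    System.out.println("test warehouse dir = " + warehouseDir.toUri());
  }
}
{code}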



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20610) TestDbNotificationListener should not use /tmp directory

2018-09-21 Thread Bharathkrishna Guruvayoor Murali (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharathkrishna Guruvayoor Murali updated HIVE-20610:

Status: Patch Available  (was: Open)

> TestDbNotificationListener should not use /tmp directory
> 
>
> Key: HIVE-20610
> URL: https://issues.apache.org/jira/browse/HIVE-20610
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0, 4.0.0
>Reporter: Bharathkrishna Guruvayoor Murali
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-20610.1.patch
>
>
> Using /tmp directory creates exceptions for tests like dropTable :
> {code:java}
> 2018-09-19T06:42:04,818  INFO [main] metastore.HiveMetaStore: 0: drop_table : 
> tbl=hive.default.droptbl
> 2018-09-19T06:42:04,819  INFO [main] HiveMetaStore.audit: ugi=hiveptest   
> ip=unknown-ip-addr  cmd=drop_table : tbl=hive.default.droptbl   
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/.ICE-unix]: it still exists.
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/.XIM-unix]: it still exists.
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/.X11-unix]: it still exists.
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/hsperfdata_root]: it still exists.
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/.font-unix]: it still exists.
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/.Test-unix]: it still exists.
> 2018-09-19T06:42:05,072 ERROR [main] utils.FileUtils: Failed to delete 
> file:/tmp
> 2018-09-19T06:42:05,072 ERROR [main] utils.MetaStoreUtils: Got exception: 
> org.apache.hadoop.hive.metastore.api.MetaException Unable to delete 
> directory: file:/tmp
> org.apache.hadoop.hive.metastore.api.MetaException: Unable to delete 
> directory: file:/tmp
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreFsImpl.deleteDir(HiveMetaStoreFsImpl.java:45)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.Warehouse.deleteDir(Warehouse.java:365) 
> [hive-standalone-metastore-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.Warehouse.deleteDir(Warehouse.java:353) 
> [hive-standalone-metastore-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.deleteTableData(HiveMetaStore.java:2562)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:2523)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:2685)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_102]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_102]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_102]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_102]
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at com.sun.proxy.$Proxy33.drop_table_with_environment_context(Unknown 
> Source) [?:?]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.drop_table_with_environment_context(HiveMetaStoreClient.java:3204)
>  [hive-standalone-metastore-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:1492)
>  [hive-standalone-metastore-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:1432)
>  [hive-standalone-metastore-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropTable(TestDbNotificationListener.java:522)
>  [test-classes/:?]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_102]{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-14609) HS2 cannot drop a function whose associated jar file has been removed

2018-09-21 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624170#comment-16624170
 ] 

BELUGA BEHR edited comment on HIVE-14609 at 9/21/18 9:18 PM:
-

By the same token, I cannot {{describe function}} either.
{code:java}
0: jdbc:hive2://host> describe function row_sequence;
INFO  : Compiling 
command(queryId=hive_2018092113_3c26b2ae-9f0a-4a80-ba3c-a96b23fe8f9d): 
describe function row_sequence
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:tab_name, 
type:string, comment:from deserializer)], properties:null)
INFO  : Completed compiling 
command(queryId=hive_2018092113_3c26b2ae-9f0a-4a80-ba3c-a96b23fe8f9d); Time 
taken: 0.286 seconds
INFO  : Executing 
command(queryId=hive_2018092113_3c26b2ae-9f0a-4a80-ba3c-a96b23fe8f9d): 
describe function row_sequence
INFO  : Starting task [Stage-0:DDL] in serial mode
INFO  : converting to local hdfs://ns1/tmp/hive-contrib-1.1.0.jar
ERROR : Failed to read external resource hdfs://ns1/tmp/hive-contrib-1.1.0.jar
java.lang.RuntimeException: Failed to read external resource 
hdfs://ns1/tmp/hive-contrib-1.1.0.jar
at 
org.apache.hadoop.hive.ql.session.SessionState.downloadResource(SessionState.java:1442)
at 
org.apache.hadoop.hive.ql.session.SessionState.resolveAndDownload(SessionState.java:1398)
at 
org.apache.hadoop.hive.ql.session.SessionState.add_resources(SessionState.java:1322)
at 
org.apache.hadoop.hive.ql.session.SessionState.add_resources(SessionState.java:1308)
at 
org.apache.hadoop.hive.ql.exec.FunctionTask.addFunctionResources(FunctionTask.java:304)
at 
org.apache.hadoop.hive.ql.exec.Registry.registerToSessionRegistry(Registry.java:570)
at 
org.apache.hadoop.hive.ql.exec.Registry.getQualifiedFunctionInfo(Registry.java:556)
at 
org.apache.hadoop.hive.ql.exec.Registry.getFunctionInfo(Registry.java:308)
at 
org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfo(FunctionRegistry.java:471)
at 
org.apache.hadoop.hive.ql.exec.DDLTask.describeFunction(DDLTask.java:2907)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:385)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214)
at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:99)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2054)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1750)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1503)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1287)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1282)
at 
org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:236)
at 
org.apache.hive.service.cli.operation.SQLOperation.access$300(SQLOperation.java:89)
at 
org.apache.hive.service.cli.operation.SQLOperation$3$1.run(SQLOperation.java:301)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at 
org.apache.hive.service.cli.operation.SQLOperation$3.run(SQLOperation.java:314)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.FileNotFoundException: File does not exist: 
hdfs://ns1/tmp/hive-contrib-1.1.0.jar
at 
org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1270)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1262)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1262)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:340)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:292)
at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:2123)
at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:2092)
at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:2068)
at 
org.apache.hadoop.hive.ql.session.SessionState.downloadResource(SessionState.java:1428)
... 29 more

INFO  : Completed executing 
command(queryId=hive_2018092113_3c26b2ae-9f0a-4a80-ba3c-a96b23fe8f9d); Time 
taken: 0.383 seconds
INFO  : OK
+--+--+
|   

[jira] [Commented] (HIVE-14609) HS2 cannot drop a function whose associated jar file has been removed

2018-09-21 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624170#comment-16624170
 ] 

BELUGA BEHR commented on HIVE-14609:


By the same token, I cannot {{describe function}} either to figure out where 
the missing JAR file is.

{code}
0: jdbc:hive2://host> describe function row_sequence;
INFO  : Compiling 
command(queryId=hive_2018092113_3c26b2ae-9f0a-4a80-ba3c-a96b23fe8f9d): 
describe function row_sequence
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:tab_name, 
type:string, comment:from deserializer)], properties:null)
INFO  : Completed compiling 
command(queryId=hive_2018092113_3c26b2ae-9f0a-4a80-ba3c-a96b23fe8f9d); Time 
taken: 0.286 seconds
INFO  : Executing 
command(queryId=hive_2018092113_3c26b2ae-9f0a-4a80-ba3c-a96b23fe8f9d): 
describe function row_sequence
INFO  : Starting task [Stage-0:DDL] in serial mode
INFO  : converting to local hdfs://ns1/tmp/hive-contrib-1.1.0.jar
ERROR : Failed to read external resource hdfs://ns1/tmp/hive-contrib-1.1.0.jar
java.lang.RuntimeException: Failed to read external resource 
hdfs://ns1/tmp/hive-contrib-1.1.0.jar
at 
org.apache.hadoop.hive.ql.session.SessionState.downloadResource(SessionState.java:1442)
at 
org.apache.hadoop.hive.ql.session.SessionState.resolveAndDownload(SessionState.java:1398)
at 
org.apache.hadoop.hive.ql.session.SessionState.add_resources(SessionState.java:1322)
at 
org.apache.hadoop.hive.ql.session.SessionState.add_resources(SessionState.java:1308)
at 
org.apache.hadoop.hive.ql.exec.FunctionTask.addFunctionResources(FunctionTask.java:304)
at 
org.apache.hadoop.hive.ql.exec.Registry.registerToSessionRegistry(Registry.java:570)
at 
org.apache.hadoop.hive.ql.exec.Registry.getQualifiedFunctionInfo(Registry.java:556)
at 
org.apache.hadoop.hive.ql.exec.Registry.getFunctionInfo(Registry.java:308)
at 
org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfo(FunctionRegistry.java:471)
at 
org.apache.hadoop.hive.ql.exec.DDLTask.describeFunction(DDLTask.java:2907)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:385)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214)
at 
org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:99)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2054)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1750)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1503)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1287)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1282)
at 
org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:236)
at 
org.apache.hive.service.cli.operation.SQLOperation.access$300(SQLOperation.java:89)
at 
org.apache.hive.service.cli.operation.SQLOperation$3$1.run(SQLOperation.java:301)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at 
org.apache.hive.service.cli.operation.SQLOperation$3.run(SQLOperation.java:314)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.FileNotFoundException: File does not exist: 
hdfs://ns1/tmp/hive-contrib-1.1.0.jar
at 
org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1270)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1262)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1262)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:340)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:292)
at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:2123)
at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:2092)
at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:2068)
at 
org.apache.hadoop.hive.ql.session.SessionState.downloadResource(SessionState.java:1428)
... 29 more

INFO  : Completed executing 
command(queryId=hive_2018092113_3c26b2ae-9f0a-4a80-ba3c-a96b23fe8f9d); Time 
taken: 0.383 seconds
INFO  : OK
+--+--+
|   

[jira] [Commented] (HIVE-20538) Allow to store a key value together with a transaction.

2018-09-21 Thread Jaume M (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624159#comment-16624159
 ] 

Jaume M commented on HIVE-20538:


Can you review [~ekoifman]?

> Allow to store a key value together with a transaction.
> ---
>
> Key: HIVE-20538
> URL: https://issues.apache.org/jira/browse/HIVE-20538
> Project: Hive
>  Issue Type: New Feature
>  Components: Standalone Metastore, Transactions
>Reporter: Jaume M
>Assignee: Jaume M
>Priority: Major
> Attachments: HIVE-20538.1.patch, HIVE-20538.1.patch, 
> HIVE-20538.2.patch, HIVE-20538.3.patch, HIVE-20538.4.patch
>
>
> This can be useful, for example, to know whether a transaction has already happened.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20556) Expose an API to retrieve the TBL_ID from TBLS in the metastore tables

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624156#comment-16624156
 ] 

Hive QA commented on HIVE-20556:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12940676/HIVE-20556.12.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 200 failed/errored test(s), 15001 tests 
executed
*Failed tests:*
{noformat}
TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=194)

[druidmini_masking.q,druidmini_test1.q,druidkafkamini_basic.q,druidmini_joins.q,druid_timestamptz.q]
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[create_like] 
(batchId=267)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_blobstore_to_blobstore]
 (batchId=267)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter3] (batchId=22)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_rename_table] 
(batchId=33)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_table_stats_status]
 (batchId=57)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_filter] 
(batchId=9)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_groupby] 
(batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_limit] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_select] 
(batchId=66)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_table] 
(batchId=22)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_union] 
(batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[archive_excludeHadoop20] 
(batchId=70)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[archive_multi] 
(batchId=33)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_1] 
(batchId=23)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_2] 
(batchId=90)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_3] 
(batchId=59)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_8] 
(batchId=15)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket7] (batchId=34)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_rp_annotate_stats_groupby]
 (batchId=91)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_alter_list_bucketing_table1]
 (batchId=28)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_default_prop] 
(batchId=33)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_like2] 
(batchId=92)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_like] (batchId=57)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_like_tbl_props] 
(batchId=79)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_or_replace_view] 
(batchId=42)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_table_like_stats] 
(batchId=63)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cte_2] (batchId=59)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cte_4] (batchId=90)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dbtxnmgr_ddl1] 
(batchId=86)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[describe_table] 
(batchId=46)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[explain_dependency2] 
(batchId=76)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_cube_multi_gby] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[infer_bucket_sort_dyn_part]
 (batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input43] (batchId=2)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part10] 
(batchId=22)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part11] 
(batchId=76)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part12] 
(batchId=87)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part13] 
(batchId=74)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part1] 
(batchId=90)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part3] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part4] 
(batchId=67)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part6] 
(batchId=38)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part7] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part8] 
(batchId=70)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part9] 
(batchId=42)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_overwrite] 
(batchId=18)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[lock1] (batchId=7)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[lock2] (batchId=32)

[jira] [Commented] (HIVE-20620) manifest collisions when inserting into bucketed sorted MM tables with dynamic partitioning

2018-09-21 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624148#comment-16624148
 ] 

Sergey Shelukhin commented on HIVE-20620:
-

[~ashutoshc] can you take a look? It is a small change.

Unfortunately, try as I might, I cannot force a local repro of the original issue 
I see on a cluster, so the test doesn't fail without the fix.
On a cluster, the final stage had 16 reducers but was writing into 5 buckets. 
No matter what I do in q files, Tez always generates the correct number of 
reducers, the taskId in FileSinkOperator never changes, and each FSO writes its 
own files in an orderly manner; in the original repro, each reducer wrote files 
for multiple different buckets.
[~gopalv] [~djaiswal] do you know by any chance how to force Hive/Tez to use a 
number of reducers different from the number of buckets for an SMB table, in 
tests?

> manifest collisions when inserting into bucketed sorted MM tables with 
> dynamic partitioning
> ---
>
> Key: HIVE-20620
> URL: https://issues.apache.org/jira/browse/HIVE-20620
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-20620.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20620) manifest collisions when inserting into bucketed sorted MM tables with dynamic partitioning

2018-09-21 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624142#comment-16624142
 ] 

Sergey Shelukhin commented on HIVE-20620:
-

RB; the Java change is ~2 lines

> manifest collisions when inserting into bucketed sorted MM tables with 
> dynamic partitioning
> ---
>
> Key: HIVE-20620
> URL: https://issues.apache.org/jira/browse/HIVE-20620
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-20620.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20620) manifest collisions when inserting into bucketed sorted MM tables with dynamic partitioning

2018-09-21 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-20620:

Status: Patch Available  (was: Open)

> manifest collisions when inserting into bucketed sorted MM tables with 
> dynamic partitioning
> ---
>
> Key: HIVE-20620
> URL: https://issues.apache.org/jira/browse/HIVE-20620
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-20620.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20620) manifest collisions when inserting into bucketed sorted MM tables with dynamic partitioning

2018-09-21 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-20620:

Attachment: HIVE-20620.patch

> manifest collisions when inserting into bucketed sorted MM tables with 
> dynamic partitioning
> ---
>
> Key: HIVE-20620
> URL: https://issues.apache.org/jira/browse/HIVE-20620
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-20620.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17043) Remove non unique columns from group by keys if not referenced later

2018-09-21 Thread Vineet Garg (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624133#comment-16624133
 ] 

Vineet Garg commented on HIVE-17043:


[~jcamachorodriguez] I agree it is ugly. The problem with 
{{RelMdColumnUniqueness}} is that it only tells you whether a given set of 
columns is unique; for this optimization we need to know the set of unique keys 
(if there are any) for a given input. Therefore {{RelMdColumnUniqueness}} 
wouldn't really work here.

Another possible solution I could think of was calling {{getColumnOrigin}} on 
each group key to track lineage and build the set, then calling 
{{getTableOrigin}} to get to the base table, from which we could figure out the 
keys and drop the corresponding columns from the group sets. But this would be 
pretty expensive (calling getColumnOrigin on all the keys and then calling 
getTableOrigin).

I think we should keep RelMdUniqueKeys for determining unique keys based on the 
constraints; it seems like it is designed for this. We can write (preferably in 
a later patch) different logic/methods for getRowCount to use (which will be 
based on stats), since it currently only overrides project to determine 
uniqueness based on statistics.

Let me know what you think.



> Remove non unique columns from group by keys if not referenced later
> 
>
> Key: HIVE-17043
> URL: https://issues.apache.org/jira/browse/HIVE-17043
> Project: Hive
>  Issue Type: Sub-task
>  Components: Logical Optimizer
>Affects Versions: 3.0.0
>Reporter: Ashutosh Chauhan
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-17043.1.patch, HIVE-17043.2.patch, 
> HIVE-17043.3.patch, HIVE-17043.4.patch
>
>
> Group by keys may be a mix of unique (or primary) keys and regular columns. 
> In such cases presence of regular column won't alter cardinality of groups. 
> So, if regular columns are not referenced later, they can be dropped from 
> group by keys. Depending on operator tree may result in those columns not 
> being read at all from disk in best case. In worst case, we will avoid 
> shuffling and sorting regular columns from mapper to reducer, which still 
> could be substantial CPU and network savings.
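
A minimal sketch of the rewrite described above, using hypothetical tables that 
are not part of this issue: assuming {{customer(id)}} is declared as a primary 
key, the extra column {{c.name}} cannot change group cardinality, so it can be 
dropped from the group by keys when nothing downstream references it.

{code:sql}
-- Original query: groups by (c.id, c.name) but never references c.name later.
SELECT c.id, SUM(o.amount)
FROM customer c JOIN orders o ON c.id = o.customer_id
GROUP BY c.id, c.name;

-- Plan-equivalent form after the optimization: c.name is removed from the
-- group by keys because c.id alone already determines each group.
SELECT c.id, SUM(o.amount)
FROM customer c JOIN orders o ON c.id = o.customer_id
GROUP BY c.id;
{code}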



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20620) manifest collisions when inserting into bucketed sorted MM tables with dynamic partitioning

2018-09-21 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-20620:

Attachment: HIVE-20620.patch

> manifest collisions when inserting into bucketed sorted MM tables with 
> dynamic partitioning
> ---
>
> Key: HIVE-20620
> URL: https://issues.apache.org/jira/browse/HIVE-20620
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20620) manifest collisions when inserting into bucketed sorted MM tables with dynamic partitioning

2018-09-21 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-20620:

Attachment: (was: HIVE-20620.patch)

> manifest collisions when inserting into bucketed sorted MM tables with 
> dynamic partitioning
> ---
>
> Key: HIVE-20620
> URL: https://issues.apache.org/jira/browse/HIVE-20620
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20620) manifest collisions when inserting into bucketed sorted MM tables with dynamic partitioning

2018-09-21 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-20620:
---


> manifest collisions when inserting into bucketed sorted MM tables with 
> dynamic partitioning
> ---
>
> Key: HIVE-20620
> URL: https://issues.apache.org/jira/browse/HIVE-20620
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20613) CachedStore: Add more UT coverage (outside of .q files)

2018-09-21 Thread Vaibhav Gumashta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-20613:

Description: 
1. Add tests which will use the background thread for updating the cached data 
(database, table, partition, table stats, partition stats)
2. Add more tests for existing APIs: stats aggregation, listing partitions when 
partial specs are provided, testing the storage descriptor 
caching/deduplication (specially when tables/ptns are dropped/added), table col 
stats, partition col stats
3. Test 1. in a multithreaded scenario

  was:
1. Add tests which will use the background thread for updating the cached data 
(database, table, partition, table stats, partition stats)
2. Add more tests for existing APIs: stats aggregation, listing partitions when 
partial specs are provided, testing the storage descriptor 
caching/deduplication (specially when tables/ptns are dropped/added)
3. Test 1. in a multithreaded scenario


> CachedStore: Add more UT coverage (outside of .q files)
> ---
>
> Key: HIVE-20613
> URL: https://issues.apache.org/jira/browse/HIVE-20613
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
>
> 1. Add tests which will use the background thread for updating the cached 
> data (database, table, partition, table stats, partition stats)
> 2. Add more tests for existing APIs: stats aggregation, listing partitions 
> when partial specs are provided, testing the storage descriptor 
> caching/deduplication (specially when tables/ptns are dropped/added), table 
> col stats, partition col stats
> 3. Test 1. in a multithreaded scenario



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20613) CachedStore: Add more UT coverage (outside of .q files)

2018-09-21 Thread Vaibhav Gumashta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-20613:

Description: 
1. Add tests which will use the background thread for updating the cached data 
(database, table, partition, table stats, partition stats)
2. Add more tests for existing APIs: stats aggregation, listing partitions when 
partial specs are provided, testing the storage descriptor 
caching/deduplication (specially when tables/ptns are dropped/added)
3. Test 1. in a multithreaded scenario

  was:
1. Add tests which will use the background thread for updating the cached data 
(database, table, partition, table stats, partition stats)
2. Add more tests for existing APIs: stats aggregation, listing partitions when 
partial specs are provided, testing the storage descriptor 
caching/deduplication (specially when tables/ptns are dropped/added)


> CachedStore: Add more UT coverage (outside of .q files)
> ---
>
> Key: HIVE-20613
> URL: https://issues.apache.org/jira/browse/HIVE-20613
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
>
> 1. Add tests which will use the background thread for updating the cached 
> data (database, table, partition, table stats, partition stats)
> 2. Add more tests for existing APIs: stats aggregation, listing partitions 
> when partial specs are provided, testing the storage descriptor 
> caching/deduplication (specially when tables/ptns are dropped/added)
> 3. Test 1. in a multithreaded scenario



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20613) CachedStore: Add more UT coverage (outside of .q files)

2018-09-21 Thread Vaibhav Gumashta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-20613:

Description: 
1. Add tests which will use the background thread for updating the cached data 
(database, table, partition, table stats, partition stats)
2. Add more tests for existing APIs: stats aggregation, listing partitions when 
partial specs are provided, testing the storage descriptor 
caching/deduplication (specially when tables/ptns are dropped/added)

> CachedStore: Add more UT coverage (outside of .q files)
> ---
>
> Key: HIVE-20613
> URL: https://issues.apache.org/jira/browse/HIVE-20613
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
>
> 1. Add tests which will use the background thread for updating the cached 
> data (database, table, partition, table stats, partition stats)
> 2. Add more tests for existing APIs: stats aggregation, listing partitions 
> when partial specs are provided, testing the storage descriptor 
> caching/deduplication (specially when tables/ptns are dropped/added)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20614) CachedStore: Run a select q file tests with CachedStore enabled

2018-09-21 Thread Vaibhav Gumashta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta reassigned HIVE-20614:
---

Assignee: Vaibhav Gumashta

> CachedStore: Run a select q file tests with CachedStore enabled
> ---
>
> Key: HIVE-20614
> URL: https://issues.apache.org/jira/browse/HIVE-20614
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20613) CachedStore: Add more UT coverage (outside of .q files)

2018-09-21 Thread Vaibhav Gumashta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta reassigned HIVE-20613:
---

Assignee: Vaibhav Gumashta

> CachedStore: Add more UT coverage (outside of .q files)
> ---
>
> Key: HIVE-20613
> URL: https://issues.apache.org/jira/browse/HIVE-20613
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20430) CachedStore: bug fixes for TestEmbeddedHiveMetaStore, TestRemoteHiveMetaStore, TestMiniLlapCliDriver, TestMiniTezCliDriver, TestMinimrCliDriver

2018-09-21 Thread Vaibhav Gumashta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-20430:

Attachment: HIVE-20430.2.patch

> CachedStore: bug fixes for TestEmbeddedHiveMetaStore, 
> TestRemoteHiveMetaStore, TestMiniLlapCliDriver, TestMiniTezCliDriver, 
> TestMinimrCliDriver
> ---
>
> Key: HIVE-20430
> URL: https://issues.apache.org/jira/browse/HIVE-20430
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Affects Versions: 3.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-20430.1.patch, HIVE-20430.2.patch
>
>
> 1. getTable call needs to set TableType before returning
> 2. getTableObjectsByName should throw UnknownDBException when needed and 
> should not return null table objects
> 3. listTableNamesByFilter should fall back to ObjectStore till we have the 
> correct impl
> 4. listPartitionNamesPs and listPartitionsPsWithAuth are buggy
> 5. SharedCache.removePartition bug fix
> 6. removeTableColStats needs to remove all col stats when column name is null



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20556) Expose an API to retrieve the TBL_ID from TBLS in the metastore tables

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624117#comment-16624117
 ] 

Hive QA commented on HIVE-20556:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
22s{color} | {color:blue} standalone-metastore/metastore-common in master has 
28 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
57s{color} | {color:blue} ql in master has 2326 extant Findbugs warnings. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
14s{color} | {color:red} metastore-server in master failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
12s{color} | {color:red} metastore-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13957/dev-support/hive-personality.sh
 |
| git revision | master / f404b0d |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13957/yetus/branch-findbugs-standalone-metastore_metastore-server.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13957/yetus/whitespace-eol.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13957/yetus/patch-findbugs-standalone-metastore_metastore-server.txt
 |
| modules | C: standalone-metastore/metastore-common ql 
standalone-metastore/metastore-server U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13957/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Expose an API to retrieve the TBL_ID from TBLS in the metastore tables
> --
>
> Key: HIVE-20556
> URL: https://issues.apache.org/jira/browse/HIVE-20556
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore, Standalone Metastore
>Reporter: Jaume M
>Assignee: Jaume M
>Priority: Major
> Attachments: HIVE-20556.1.patch, HIVE-20556.10.patch, 
> HIVE-20556.11.patch, 

[jira] [Resolved] (HIVE-20446) CachedStore: bug fixes for q file tests: TestMiniLlapCliDriver, TestMiniTezCliDriver, TestMinimrCliDriver

2018-09-21 Thread Vaibhav Gumashta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta resolved HIVE-20446.
-
Resolution: Duplicate

Merging with HIVE-20430

> CachedStore: bug fixes for q file tests: TestMiniLlapCliDriver, 
> TestMiniTezCliDriver, TestMinimrCliDriver
> -
>
> Key: HIVE-20446
> URL: https://issues.apache.org/jira/browse/HIVE-20446
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Affects Versions: 3.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20430) CachedStore: bug fixes for TestEmbeddedHiveMetaStore, TestRemoteHiveMetaStore, TestMiniLlapCliDriver, TestMiniTezCliDriver, TestMinimrCliDriver

2018-09-21 Thread Vaibhav Gumashta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-20430:

Summary: CachedStore: bug fixes for TestEmbeddedHiveMetaStore, 
TestRemoteHiveMetaStore, TestMiniLlapCliDriver, TestMiniTezCliDriver, 
TestMinimrCliDriver  (was: CachedStore: bug fixes for TestEmbeddedHiveMetaStore 
& TestRemoteHiveMetaStore)

> CachedStore: bug fixes for TestEmbeddedHiveMetaStore, 
> TestRemoteHiveMetaStore, TestMiniLlapCliDriver, TestMiniTezCliDriver, 
> TestMinimrCliDriver
> ---
>
> Key: HIVE-20430
> URL: https://issues.apache.org/jira/browse/HIVE-20430
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Affects Versions: 3.1.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-20430.1.patch
>
>
> 1. getTable call needs to set TableType before returning
> 2. getTableObjectsByName should throw UnknownDBException when needed and 
> should not return null table objects
> 3. listTableNamesByFilter should fall back to ObjectStore till we have the 
> correct impl
> 4. listPartitionNamesPs and listPartitionsPsWithAuth are buggy
> 5. SharedCache.removePartition bug fix
> 6. removeTableColStats needs to remove all col stats when column name is null



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20444) Parameter is not properly quoted in DbNotificationListener.addWriteNotificationLog

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624067#comment-16624067
 ] 

Hive QA commented on HIVE-20444:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12940674/HIVE-20444.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14993 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13956/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13956/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13956/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12940674 - PreCommit-HIVE-Build

> Parameter is not properly quoted in 
> DbNotificationListener.addWriteNotificationLog
> --
>
> Key: HIVE-20444
> URL: https://issues.apache.org/jira/browse/HIVE-20444
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-20444.1.patch, HIVE-20444.2.patch, JDBCTest.java
>
>
> See exception:
> {code}
> 2018-08-22T04:44:22,758 INFO  [pool-8-thread-190]: 
> listener.DbNotificationListener 
> (DbNotificationListener.java:addWriteNotificationLog(765)) - Going to execute 
> insert  "WNL_WRITEID", "WNL_DATABASE", "WNL_TABLE", "WNL_PARTITION", "WNL_TABLE_OBJ", 
> "WNL_PARTITION_OBJ", "WNL_FILES", "WNL_EVENT_TIME") values 
> (50,124,1,'default','t1_default','','{"1":{"str":"t1_default"},"2":{"str":"default"},"3":{"str":"hrt_qa"},"4":{"i32":1534913061},"5":{"i32":0},"6":{"i32":0},"7":{"rec":{"1":{"lst":["rec",15,{"1":{"str":"t"},"2":{"str":"tinyint"}},{"1":{"str":"si"},"2":{"str":"smallint"}},{"1":{"str":"i"},"2":{"str":"int"}},{"1":{"str":"b"},"2":{"str":"bigint"}},{"1":{"str":"f"},"2":{"str":"double"}},{"1":{"str":"d"},"2":{"str":"double"}},{"1":{"str":"s"},"2":{"str":"varchar(25)"}},{"1":{"str":"dc"},"2":{"str":"decimal(38,18)"}},{"1":{"str":"bo"},"2":{"str":"varchar(5)"}},{"1":{"str":"v"},"2":{"str":"varchar(25)"}},{"1":{"str":"c"},"2":{"str":"char(25)"}},{"1":{"str":"ts"},"2":{"str":"timestamp"}},{"1":{"str":"dt"},"2":{"str":"date"}},{"1":{"str":"st"},"2":{"str":"string"}},{"1":{"str":"tz"},"2":{"str":"timestamp
>  with local time 
> zone('UTC')"}}]},"2":{"str":"hdfs://mycluster/warehouse/tablespace/managed/hive/t1_default"},"3":{"str":"org.apache.hadoop.mapred.TextInputFormat"},"4":{"str":"org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat"},"5":{"tf":0},"6":{"i32":-1},"7":{"rec":{"2":{"str":"org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe"},"3":{"map":["str","str",1,{"serialization.format":"1"}]}}},"8":{"lst":["str",0]},"9":{"lst":["rec",0]},"10":{"map":["str","str",0,{}]},"11":{"rec":{"1":{"lst":["str",0]},"2":{"lst":["lst",0]},"3":{"map":["lst","str",0,{}]}}},"12":{"tf":0}}},"8":{"lst":["rec",0]},"9":{"map":["str","str",9,{"totalSize":"0","rawDataSize":"0","numRows":"0","transactional_properties":"insert_only","COLUMN_STATS_ACCURATE":"{\"BASIC_STATS\":\"true\",\"COLUMN_STATS\":{\"b\":\"true\",\"bo\":\"true\",\"c\":\"true\",\"d\":\"true\",\"dc\":\"true\",\"dt\":\"true\",\"f\":\"true\",\"i\":\"true\",\"s\":\"true\",\"si\":\"true\",\"st\":\"true\",\"t\":\"true\",\"ts\":\"true\",\"tz\":\"true\",\"v\":\"true\"}}","numFiles":"0","transient_lastDdlTime":"1534913062","bucketing_version":"2","transactional":"true"}]},"12":{"str":"MANAGED_TABLE"},"15":{"tf":0},"17":{"str":"hive"},"18":{"i32":1},"19":{"i64":1}}','null','hdfs://mycluster/warehouse/tablespace/managed/hive/t1_default/delta_001_001_/00_0###delta_001_001_',1534913062)>
> 2018-08-22T04:44:22,773 ERROR [pool-8-thread-190]: 
> metastore.RetryingHMSHandler (RetryingHMSHandler.java:invokeInternal(201)) - 
> MetaException(message:Unable to add write notification log 
> org.postgresql.util.PSQLException: ERROR: syntax error at or near "UTC"
>   Position: 1032
> at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2284)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2003)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:200)
> at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:424)
> at 
> org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:321)
> at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:313)
> 
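
As a general illustration of the quoting problem in the title (a sketch only, 
not the actual fix in the attached patch; {{example_log}} and {{col_type}} are 
hypothetical names): a value that itself contains single quotes breaks a 
statement built by string concatenation unless the embedded quotes are escaped 
or the value is bound as a parameter.

{code:sql}
-- Built by naive string concatenation, the embedded quotes break the statement:
--   ... VALUES (..., 'timestamp with local time zone('UTC')', ...)
-- Escaping the embedded quotes (or, better, binding the value as a parameter)
-- keeps the literal intact:
INSERT INTO example_log (col_type)
VALUES ('timestamp with local time zone(''UTC'')');
{code}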

[jira] [Commented] (HIVE-12812) Enable mapred.input.dir.recursive by default to support union with aggregate function

2018-09-21 Thread Chaoyu Tang (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-12812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624059#comment-16624059
 ] 

Chaoyu Tang commented on HIVE-12812:


I cannot remember the exact reason why this patch was not committed years ago. 
It was probably because we were going to decommission MR soon and there was a 
regression in one test case. But as a workaround, you can always set the 
property to true on the command line (it does not have to be in hive-site.xml):
set mapred.input.dir.recursive=true
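
A minimal sketch of that workaround in a session, assuming hypothetical tables 
{{src_a}}, {{src_b}}, and {{union_out}} that are not part of this issue 
(depending on the version, related settings such as 
hive.mapred.supports.subdirectories may also be relevant):

{code:sql}
-- Enable recursive input dirs so the subdirectories written by the
-- union-remove optimization are picked up when the result is read back.
set mapred.input.dir.recursive=true;
set hive.optimize.union.remove=true;

-- Union of two aggregating subqueries; each branch writes its output into a
-- subdirectory of the final location when union-remove kicks in.
INSERT OVERWRITE TABLE union_out
SELECT u.key, u.cnt
FROM (
  SELECT key, COUNT(*) AS cnt FROM src_a GROUP BY key
  UNION ALL
  SELECT key, COUNT(*) AS cnt FROM src_b GROUP BY key
) u;
{code}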

> Enable mapred.input.dir.recursive by default to support union with aggregate 
> function
> -
>
> Key: HIVE-12812
> URL: https://issues.apache.org/jira/browse/HIVE-12812
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.1, 2.1.0
>Reporter: Chaoyu Tang
>Priority: Major
> Attachments: HIVE-12812.patch, HIVE-12812.patch, HIVE-12812.patch
>
>
> When union remove optimization is enabled, union query with aggregate 
> function writes its subquery intermediate results to subdirs which needs 
> mapred.input.dir.recursive to be enabled in order to be fetched. This 
> property is not defined by default in Hive and often ignored by user, which 
> causes the query failure and is hard to be debugged.
> So we need set mapred.input.dir.recursive to true whenever union remove 
> optimization is enabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20574) Column statistics give erraneous numDistinct

2018-09-21 Thread Ajay Jadhav (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Jadhav reassigned HIVE-20574:
--

Assignee: Ajay Jadhav

> Column statistics give erraneous numDistinct
> 
>
> Key: HIVE-20574
> URL: https://issues.apache.org/jira/browse/HIVE-20574
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore, Statistics
>Affects Versions: 2.3.2
> Environment: Amazon EMR (BigTop based) from emr-5.9.0 to emr-5.16.0.
>Reporter: Mikko Kivistö
>Assignee: Ajay Jadhav
>Priority: Major
>  Labels: Statistics, statsCollection
>
> 1) Download the parquet file to s3/hdfs (e.g. hdfs:///tmp/testi_parquet/) 
> using some tool (aws cli, hdfs command or anything)
>    - S3: s3://www.smartdatahub.io/data/test.parquet
>    - HTTP: [http://www.smartdatahub.io/data/test.parquet]
>    - or the attachment
> e.g. with the aws cli (wget/curl/distcp can also be used):
> {{aws s3 cp s3://www.smartdatahub.io/data/test.parquet .}}
> {{hdfs dfs -mkdir hdfs:///tmp/testi_parquet/}}
> {{hdfs dfs -put test.parquet hdfs:///tmp/testi_parquet/test.parquet}}
> 2) Create table default.testi_parquet2 on top of that using the schema 
> provided
> {{CREATE TABLE `default.testi_parquet2`(}}
> {{   `rakennustu` int, }}
> {{   `kohdenimi` string, }}
> {{   `tekstisuun` int, }}
> {{   `tekstikoko` float, }}
> {{   `tekstifont` string, }}
> {{   `buix_bid` int, }}
> {{   `paivitetty` string, }}
> {{   `datanomist` string, }}
> {{   `geom_geojson` string, }}
> {{   `geom` binary, }}
> {{   `extractdate` string)}}
> {{ ROW FORMAT SERDE }}
> {{   'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe' }}
> {{ STORED AS INPUTFORMAT }}
> {{   'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat' }}
> {{ OUTPUTFORMAT }}
> {{   'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'}}
> {{ LOCATION}}
> {{   'hdfs:///tmp/testi_parquet/';}}
> {{-- CHANGE THE LOCATION TO THE PREFIX/DIRECTORY YOU DOWNLOADED THE FILE 
> FROM STEP 1}}
> 3) To collect the values showing you the actual reality of the data: Query 
> the distinct count, min and max of column "tekstisuun"
> {\{ SELECT COUNT(DISTINCT tekstisuun), MAX(tekstisuun), MIN(tekstisuun) FROM 
> default.testi_parquet2; }}
> and note them  (min 0, max 0, distinct 1)
>  4) Compute statistics for the table using
> {{ANALYZE TABLE default.testi_parquet2 COMPUTE STATISTICS FOR COLUMNS;}}
> 5) See erroneous statistics entry for numDistincts: Query the statistics by 
> using "
> {{DESCRIBE FORMATTED default.testi_parquet2 tekstisuun}}
> " and note the ERRANEOUS numDistincts value: 2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20574) Column statistics give erraneous numDistinct

2018-09-21 Thread Ajay Jadhav (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624054#comment-16624054
 ] 

Ajay Jadhav commented on HIVE-20574:


Hive exposes this setting: 
[https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.stats.ndv.error]
 which determines the error tolerance for "distinct_count". The error tolerance 
provides a trade-off between accuracy and compute cost.

In order to get the correct count, I suggest setting 
{color:#FF}hive.stats.ndv.error = 0{color}.
I have tested this on an EMR cluster and it indeed improves the accuracy to 100%.

Another interesting property, if you are using partitions, is 
hive.metastore.stats.ndv.tuner; consider tuning it to be closer to 1. The 
default is 0.
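
A minimal sketch of the suggested session settings, reusing the table name from 
the reproduction steps quoted below:

{code:sql}
-- 0 trades compute cost for an exact NDV computation, per the comment above.
set hive.stats.ndv.error=0;

-- Recompute and re-check the column statistics.
ANALYZE TABLE default.testi_parquet2 COMPUTE STATISTICS FOR COLUMNS;
DESCRIBE FORMATTED default.testi_parquet2 tekstisuun;
{code}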

> Column statistics give erraneous numDistinct
> 
>
> Key: HIVE-20574
> URL: https://issues.apache.org/jira/browse/HIVE-20574
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore, Statistics
>Affects Versions: 2.3.2
> Environment: Amazon EMR (BigTop based) from emr-5.9.0 to emr-5.16.0.
>Reporter: Mikko Kivistö
>Priority: Major
>  Labels: Statistics, statsCollection
>
> 1) Download the parquet file to s3/hdfs (e.g. hdfs:///tmp/testi_parquet/) 
> using some tool (aws cli, hdfs command or anything)
>    - S3: s3://www.smartdatahub.io/data/test.parquet
>    - HTTP: [http://www.smartdatahub.io/data/test.parquet]
>    - or the attachment
> e.g. with the aws cli (wget/curl/distcp can also be used):
> {{aws s3 cp s3://www.smartdatahub.io/data/test.parquet .}}
> {{hdfs dfs -mkdir hdfs:///tmp/testi_parquet/}}
> {{hdfs dfs -put test.parquet hdfs:///tmp/testi_parquet/test.parquet}}
> 2) Create table default.testi_parquet2 on top of that using the schema 
> provided
> {{CREATE TABLE `default.testi_parquet2`(}}
> {{   `rakennustu` int, }}
> {{   `kohdenimi` string, }}
> {{   `tekstisuun` int, }}
> {{   `tekstikoko` float, }}
> {{   `tekstifont` string, }}
> {{   `buix_bid` int, }}
> {{   `paivitetty` string, }}
> {{   `datanomist` string, }}
> {{   `geom_geojson` string, }}
> {{   `geom` binary, }}
> {{   `extractdate` string)}}
> {{ ROW FORMAT SERDE }}
> {{   'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe' }}
> {{ STORED AS INPUTFORMAT }}
> {{   'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat' }}
> {{ OUTPUTFORMAT }}
> {{   'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'}}
> {{ LOCATION}}
> {{   'hdfs:///tmp/testi_parquet/';}}
> {{-- CHANGE THE LOCATION TO THE PREFIX/DIRECTORY YOU DOWNLOADED THE FILE 
> FROM STEP 1}}
> 3) To collect the values showing you the actual reality of the data: Query 
> the distinct count, min and max of column "tekstisuun"
> {\{ SELECT COUNT(DISTINCT tekstisuun), MAX(tekstisuun), MIN(tekstisuun) FROM 
> default.testi_parquet2; }}
> and note them  (min 0, max 0, distinct 1)
>  4) Compute statistics for the table using
> {{ANALYZE TABLE default.testi_parquet2 COMPUTE STATISTICS FOR COLUMNS;}}
> 5) See erroneous statistics entry for numDistincts: Query the statistics by 
> using "
> {{DESCRIBE FORMATTED default.testi_parquet2 tekstisuun}}
> " and note the ERRANEOUS numDistincts value: 2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20444) Parameter is not properly quoted in DbNotificationListener.addWriteNotificationLog

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624005#comment-16624005
 ] 

Hive QA commented on HIVE-20444:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
25s{color} | {color:blue} hcatalog/server-extensions in master has 3 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} hcatalog/server-extensions: The patch generated 2 new 
+ 3 unchanged - 0 fixed = 5 total (was 3) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} hcatalog/server-extensions generated 0 new + 2 
unchanged - 1 fixed = 2 total (was 3) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13956/dev-support/hive-personality.sh
 |
| git revision | master / f404b0d |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13956/yetus/diff-checkstyle-hcatalog_server-extensions.txt
 |
| modules | C: hcatalog/server-extensions U: hcatalog/server-extensions |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13956/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Parameter is not properly quoted in 
> DbNotificationListener.addWriteNotificationLog
> --
>
> Key: HIVE-20444
> URL: https://issues.apache.org/jira/browse/HIVE-20444
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-20444.1.patch, HIVE-20444.2.patch, JDBCTest.java
>
>
> See exception:
> {code}
> 2018-08-22T04:44:22,758 INFO  [pool-8-thread-190]: 
> listener.DbNotificationListener 
> (DbNotificationListener.java:addWriteNotificationLog(765)) - Going to execute 
> insert  "WNL_WRITEID", "WNL_DATABASE", "WNL_TABLE", "WNL_PARTITION", "WNL_TABLE_OBJ", 
> "WNL_PARTITION_OBJ", "WNL_FILES", "WNL_EVENT_TIME") values 
> 

[jira] [Commented] (HIVE-17300) WebUI query plan graphs

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623979#comment-16623979
 ] 

Hive QA commented on HIVE-17300:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12940783/HIVE-17300.9.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14995 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13955/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13955/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13955/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12940783 - PreCommit-HIVE-Build

> WebUI query plan graphs
> ---
>
> Key: HIVE-17300
> URL: https://issues.apache.org/jira/browse/HIVE-17300
> Project: Hive
>  Issue Type: Sub-task
>  Components: Web UI
>Affects Versions: 4.0.0
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
>  Labels: beginner, features, patch
> Attachments: HIVE-17300.3.patch, HIVE-17300.4.patch, 
> HIVE-17300.5.patch, HIVE-17300.6.patch, HIVE-17300.7.patch, 
> HIVE-17300.7.patch, HIVE-17300.8.patch, HIVE-17300.8.patch, 
> HIVE-17300.8.patch, HIVE-17300.8.patch, HIVE-17300.9.patch, HIVE-17300.patch, 
> complete_success.png, full_mapred_stats.png, graph_with_mapred_stats.png, 
> last_stage_error.png, last_stage_running.png, non_mapred_task_selected.png
>
>
> Hi all,
> I’m working on a feature of the Hive WebUI Query Plan tab that would provide 
> the option to display the query plan as a nice graph (scroll down for 
> screenshots). If you click on one of the graph’s stages, the plan for that 
> stage appears as text below. 
> Stages are color-coded if they have a status (Success, Error, Running), and 
> the rest are grayed out. Coloring is based on status already available in the 
> WebUI, under the Stages tab.
> There is an additional option to display stats for MapReduce tasks. This 
> includes the job’s ID, tracking URL (where the logs are found), and mapper 
> and reducer numbers/progress, among other info. 
> The library I’m using for the graph is called vis.js (http://visjs.org/). It 
> has an Apache license, and the only necessary file to be included from this 
> library is about 700 KB.
> I tried to keep server-side changes minimal, and graph generation is taken 
> care of by the client. Plans with more than a given number of stages 
> (default: 25) won't be displayed in order to preserve resources.
> I’d love to hear any and all input from the community about this feature: do 
> you think it’s useful, and is there anything important I’m missing?
> Thanks,
> Karen Coppage
> Review request: https://reviews.apache.org/r/61663/
> Any input is welcome!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18778) Needs to capture input/output entities in explain

2018-09-21 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-18778:
--
Attachment: HIVE-18778.6.patch

> Needs to capture input/output entities in explain
> -
>
> Key: HIVE-18778
> URL: https://issues.apache.org/jira/browse/HIVE-18778
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-18778-SparkPositive.patch, HIVE-18778.1.patch, 
> HIVE-18778.2.patch, HIVE-18778.3.patch, HIVE-18778.4.patch, 
> HIVE-18778.5.patch, HIVE-18778.6.patch, HIVE-18778_TestCliDriver.patch, 
> HIVE-18788_SparkNegative.patch, HIVE-18788_SparkPerf.patch
>
>
> With Sentry enabled, commands like {{explain drop table foo;}} fail with:
> {code}
> Error: Error while compiling statement: FAILED: SemanticException No valid 
> privileges
>  Required privilege( Table) not available in input privileges
>  The required privileges: (state=42000,code=4)
> {code}
> Sentry fails to authorize because the ExplainSemanticAnalyzer uses an 
> instance of DDLSemanticAnalyzer to analyze the explain query.
> {code}
> BaseSemanticAnalyzer sem = SemanticAnalyzerFactory.get(conf, input);
> sem.analyze(input, ctx);
> sem.validate()
> {code}
> The inputs/outputs entities for this query are set in the above code. 
> However, these are never set on the instance of ExplainSemanticAnalyzer 
> itself and thus are not propagated into the HookContext in the calling Driver 
> code.
> {code}
> sem.analyze(tree, ctx); --> this results in calling the above code that uses 
> DDLSA
> hookCtx.update(sem); --> sem is an instance of ExplainSemanticAnalyzer, this 
> code attempts to update the HookContext with the input/output info from ESA 
> which is never set.
> {code}
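
One possible direction, shown only as a sketch (it may differ from what the attached patches actually do), is to copy the entities collected by the delegate analyzer back onto the ExplainSemanticAnalyzer instance, so that the later hookCtx.update(sem) call sees them:

{code:java}
// Hypothetical sketch inside ExplainSemanticAnalyzer.analyzeInternal(); not the committed fix.
BaseSemanticAnalyzer sem = SemanticAnalyzerFactory.get(conf, input);
sem.analyze(input, ctx);
sem.validate();

// Propagate what the delegate (e.g. DDLSemanticAnalyzer) collected onto this analyzer,
// so hookCtx.update(this) in the Driver sees non-empty inputs/outputs.
inputs.addAll(sem.getInputs());
outputs.addAll(sem.getOutputs());
{code}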



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17300) WebUI query plan graphs

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623920#comment-16623920
 ] 

Hive QA commented on HIVE-17300:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
24s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
30s{color} | {color:blue} common in master has 65 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
39s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
52s{color} | {color:blue} ql in master has 2326 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
37s{color} | {color:blue} service in master has 48 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13955/dev-support/hive-personality.sh
 |
| git revision | master / f404b0d |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: common itests/hive-unit ql service U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13955/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> WebUI query plan graphs
> ---
>
> Key: HIVE-17300
> URL: https://issues.apache.org/jira/browse/HIVE-17300
> Project: Hive
>  Issue Type: Sub-task
>  Components: Web UI
>Affects Versions: 4.0.0
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
>  Labels: beginner, features, patch
> Attachments: HIVE-17300.3.patch, HIVE-17300.4.patch, 
> HIVE-17300.5.patch, HIVE-17300.6.patch, HIVE-17300.7.patch, 
> HIVE-17300.7.patch, HIVE-17300.8.patch, HIVE-17300.8.patch, 
> HIVE-17300.8.patch, HIVE-17300.8.patch, HIVE-17300.9.patch, HIVE-17300.patch, 
> complete_success.png, full_mapred_stats.png, graph_with_mapred_stats.png, 
> last_stage_error.png, last_stage_running.png, non_mapred_task_selected.png
>
>
> Hi all,
> I’m working on a feature of the 

[jira] [Commented] (HIVE-20612) Create new join multi-key correlation flag for CBO

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623859#comment-16623859
 ] 

Hive QA commented on HIVE-20612:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12940661/HIVE-20612.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 31 failed/errored test(s), 14991 tests 
executed
*Failed tests:*
{noformat}
TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=194)

[druidmini_masking.q,druidmini_test1.q,druidkafkamini_basic.q,druidmini_joins.q,druid_timestamptz.q]
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_alt_syntax] 
(batchId=84)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_cond_pushdown_2] 
(batchId=63)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_cond_pushdown_4] 
(batchId=89)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_cond_pushdown_unqual2]
 (batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_cond_pushdown_unqual4]
 (batchId=3)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[join_alt_syntax] 
(batchId=145)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[join_cond_pushdown_2]
 (batchId=136)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[join_cond_pushdown_4]
 (batchId=147)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[join_cond_pushdown_unqual2]
 (batchId=116)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[join_cond_pushdown_unqual4]
 (batchId=110)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query17] 
(batchId=266)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query24] 
(batchId=266)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query25] 
(batchId=266)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query29] 
(batchId=266)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query50] 
(batchId=266)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query54] 
(batchId=266)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query72] 
(batchId=266)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query85] 
(batchId=266)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query17] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query24] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query25] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query29] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query50] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query54] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query64] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query72] 
(batchId=264)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query85] 
(batchId=264)
org.apache.hadoop.hive.metastore.TestMetaStoreMetrics.testConnections 
(batchId=237)
org.apache.hadoop.hive.metastore.TestMetaStoreMetrics.testMetaDataCounts 
(batchId=237)
org.apache.hadoop.hive.metastore.TestMetaStoreMetrics.testMethodCounts 
(batchId=237)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13954/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13954/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13954/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 31 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12940661 - PreCommit-HIVE-Build

> Create new join multi-key correlation flag for CBO
> --
>
> Key: HIVE-20612
> URL: https://issues.apache.org/jira/browse/HIVE-20612
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 4.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-20612.patch
>
>
> Currently we reuse the flag on the Hive side. It would be good to have the flag 
> separated for debugging purposes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-20610) TestDbNotificationListener should not use /tmp directory

2018-09-21 Thread Bharathkrishna Guruvayoor Murali (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharathkrishna Guruvayoor Murali reassigned HIVE-20610:
---

Assignee: Bharathkrishna Guruvayoor Murali

> TestDbNotificationListener should not use /tmp directory
> 
>
> Key: HIVE-20610
> URL: https://issues.apache.org/jira/browse/HIVE-20610
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0, 4.0.0
>Reporter: Bharathkrishna Guruvayoor Murali
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
>
> Using /tmp directory creates exceptions for tests like dropTable :
> {code:java}
> 2018-09-19T06:42:04,818  INFO [main] metastore.HiveMetaStore: 0: drop_table : 
> tbl=hive.default.droptbl
> 2018-09-19T06:42:04,819  INFO [main] HiveMetaStore.audit: ugi=hiveptest   
> ip=unknown-ip-addr  cmd=drop_table : tbl=hive.default.droptbl   
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/.ICE-unix]: it still exists.
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/.XIM-unix]: it still exists.
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/.X11-unix]: it still exists.
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/hsperfdata_root]: it still exists.
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/.font-unix]: it still exists.
> 2018-09-19T06:42:05,072  WARN [main] fs.FileUtil: Failed to delete file or 
> dir [/tmp/.Test-unix]: it still exists.
> 2018-09-19T06:42:05,072 ERROR [main] utils.FileUtils: Failed to delete 
> file:/tmp
> 2018-09-19T06:42:05,072 ERROR [main] utils.MetaStoreUtils: Got exception: 
> org.apache.hadoop.hive.metastore.api.MetaException Unable to delete 
> directory: file:/tmp
> org.apache.hadoop.hive.metastore.api.MetaException: Unable to delete 
> directory: file:/tmp
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreFsImpl.deleteDir(HiveMetaStoreFsImpl.java:45)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.Warehouse.deleteDir(Warehouse.java:365) 
> [hive-standalone-metastore-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.Warehouse.deleteDir(Warehouse.java:353) 
> [hive-standalone-metastore-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.deleteTableData(HiveMetaStore.java:2562)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:2523)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:2685)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_102]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_102]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_102]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_102]
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
>  [hive-standalone-metastore-server-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at com.sun.proxy.$Proxy33.drop_table_with_environment_context(Unknown 
> Source) [?:?]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.drop_table_with_environment_context(HiveMetaStoreClient.java:3204)
>  [hive-standalone-metastore-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:1492)
>  [hive-standalone-metastore-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:1432)
>  [hive-standalone-metastore-common-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
>   at 
> org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropTable(TestDbNotificationListener.java:522)
>  [test-classes/:?]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_102]{code}
>  
>  
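
A minimal sketch of the direction the summary suggests (assumed, not taken from a patch): give the test its own scratch directory and place table/warehouse locations under it, so cleanup never has to walk /tmp:

{code:java}
import org.junit.Rule;
import org.junit.rules.TemporaryFolder;

// Hypothetical sketch: a per-test scratch directory instead of /tmp.
public class TestDbNotificationListenerDirs {

  @Rule
  public TemporaryFolder testDir = new TemporaryFolder(); // created fresh and removed per test

  // In setup, table/warehouse locations would be placed under testDir.getRoot(),
  // e.g. (shown only for illustration):
  // conf.setVar(HiveConf.ConfVars.METASTOREWAREHOUSE, testDir.getRoot().toURI().toString());
}
{code}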



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20385) Date: date + int fails to add days

2018-09-21 Thread Bruno Pusztahazi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Pusztahazi updated HIVE-20385:

Attachment: HIVE-20385.3.patch

> Date: date + int fails to add days
> --
>
> Key: HIVE-20385
> URL: https://issues.apache.org/jira/browse/HIVE-20385
> Project: Hive
>  Issue Type: Bug
>  Components: Types
>Affects Versions: 4.0.0
>Reporter: Gopal V
>Assignee: Bruno Pusztahazi
>Priority: Minor
> Attachments: HIVE-20385.1.patch, HIVE-20385.2.patch, 
> HIVE-20385.3.patch
>
>
> {code}
> select current_date + 5;
> FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments '5': No 
> matching method for class 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPDTIPlus with (date, int)
> {code}
> This works in Postgres 9.6 - http://sqlfiddle.com/#!17/9eecb/19253/0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20385) Date: date + int fails to add days

2018-09-21 Thread Bruno Pusztahazi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Pusztahazi updated HIVE-20385:

Status: Patch Available  (was: Open)

> Date: date + int fails to add days
> --
>
> Key: HIVE-20385
> URL: https://issues.apache.org/jira/browse/HIVE-20385
> Project: Hive
>  Issue Type: Bug
>  Components: Types
>Affects Versions: 4.0.0
>Reporter: Gopal V
>Assignee: Bruno Pusztahazi
>Priority: Minor
> Attachments: HIVE-20385.1.patch, HIVE-20385.2.patch, 
> HIVE-20385.3.patch
>
>
> {code}
> select current_date + 5;
> FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments '5': No 
> matching method for class 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPDTIPlus with (date, int)
> {code}
> This works in Postgres 9.6 - http://sqlfiddle.com/#!17/9eecb/19253/0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20385) Date: date + int fails to add days

2018-09-21 Thread Bruno Pusztahazi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Pusztahazi updated HIVE-20385:

Status: Open  (was: Patch Available)

> Date: date + int fails to add days
> --
>
> Key: HIVE-20385
> URL: https://issues.apache.org/jira/browse/HIVE-20385
> Project: Hive
>  Issue Type: Bug
>  Components: Types
>Affects Versions: 4.0.0
>Reporter: Gopal V
>Assignee: Bruno Pusztahazi
>Priority: Minor
> Attachments: HIVE-20385.1.patch, HIVE-20385.2.patch, 
> HIVE-20385.3.patch
>
>
> {code}
> select current_date + 5;
> FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments '5': No 
> matching method for class 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFOPDTIPlus with (date, int)
> {code}
> This works in Postgres 9.6 - http://sqlfiddle.com/#!17/9eecb/19253/0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20612) Create new join multi-key correlation flag for CBO

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623821#comment-16623821
 ] 

Hive QA commented on HIVE-20612:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
32s{color} | {color:blue} common in master has 65 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
50s{color} | {color:blue} ql in master has 2326 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
37s{color} | {color:red} ql: The patch generated 1 new + 169 unchanged - 0 
fixed = 170 total (was 169) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13954/dev-support/hive-personality.sh
 |
| git revision | master / f404b0d |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13954/yetus/diff-checkstyle-ql.txt
 |
| modules | C: common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13954/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Create new join multi-key correlation flag for CBO
> --
>
> Key: HIVE-20612
> URL: https://issues.apache.org/jira/browse/HIVE-20612
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 4.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-20612.patch
>
>
> Currently we reuse the flag on the Hive side. It would be good to have the flag 
> separated for debugging purposes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20599) CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623779#comment-16623779
 ] 

Hive QA commented on HIVE-20599:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12940780/HIVE-20599.1-branch-3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 27 failed/errored test(s), 14432 tests 
executed
*Failed tests:*
{noformat}
TestAlterTableMetadata - did not produce a TEST-*.xml file (likely timed out) 
(batchId=239)
TestAutoPurgeTables - did not produce a TEST-*.xml file (likely timed out) 
(batchId=239)
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=260)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=260)
TestLocationQueries - did not produce a TEST-*.xml file (likely timed out) 
(batchId=239)
TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=260)
TestMiniDruidKafkaCliDriver - did not produce a TEST-*.xml file (likely timed 
out) (batchId=260)
TestReplicationScenariosAcidTables - did not produce a TEST-*.xml file (likely 
timed out) (batchId=239)
TestSemanticAnalyzerHookLoading - did not produce a TEST-*.xml file (likely 
timed out) (batchId=239)
TestSparkStatistics - did not produce a TEST-*.xml file (likely timed out) 
(batchId=239)
TestTezPerfCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=260)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_mv] (batchId=85)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_all] (batchId=71)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[mm_all] 
(batchId=154)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez2]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_partitioned_2]
 (batchId=173)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_partitioned_3]
 (batchId=173)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_rewrite_part_2]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_with_masking]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_in]
 (batchId=173)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_assertion_type]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[constprog_semijoin]
 (batchId=189)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucket_map_join_tez2]
 (batchId=114)
org.apache.hadoop.hive.metastore.TestHiveMetaStoreAlterColumnPar.org.apache.hadoop.hive.metastore.TestHiveMetaStoreAlterColumnPar
 (batchId=234)
org.apache.hadoop.hive.ql.TestWarehouseExternalDir.testManagedPaths 
(batchId=237)
org.apache.hive.service.TestHS2ImpersonationWithRemoteMS.testImpersonation 
(batchId=245)
org.apache.hive.spark.client.rpc.TestRpc.testServerPort (batchId=312)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13953/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13953/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13953/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 27 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12940780 - PreCommit-HIVE-Build

> CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException
> ---
>
> Key: HIVE-20599
> URL: https://issues.apache.org/jira/browse/HIVE-20599
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Affects Versions: 3.1.0
>Reporter: Naresh P R
>Assignee: Naresh P R
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-20599-branch-3.patch, 
> HIVE-20599.1-branch-3.1.patch, HIVE-20599.1-branch-3.patch, HIVE-20599.1.patch
>
>
> SELECT CAST(from_utc_timestamp(timestamp '2018-05-02 15:30:30', 'PST') - 
> from_utc_timestamp(timestamp '1970-01-30 16:00:00', 'PST') AS STRING);
> throws below Exception
> {code:java}
> Error: Error while compiling statement: FAILED: SemanticException Line 0:-1 
> Wrong arguments ''PST'': No matching method for class 
> org.apache.hadoop.hive.ql.udf.UDFToString with (interval_day_time). Possible 
> choices: _FUNC_(bigint)  _FUNC_(binary)  _FUNC_(boolean)  _FUNC_(date)  
> _FUNC_(decimal(38,18))  

[jira] [Commented] (HIVE-20544) TOpenSessionReq logs password and username

2018-09-21 Thread Karen Coppage (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623776#comment-16623776
 ] 

Karen Coppage commented on HIVE-20544:
--

[~pvary], your solution worked! The password mask is part of the generated code in 
the new patch. Thanks so much for your input :)

> TOpenSessionReq logs password and username
> --
>
> Key: HIVE-20544
> URL: https://issues.apache.org/jira/browse/HIVE-20544
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
>  Labels: beginner, patch, security
> Attachments: HIVE-20544.1.patch, HIVE-20544.2.patch, 
> HIVE-20544.patch, non-solution.patch, working-solution.patch
>
>
> In 
> service-rpc/src/gen/thrift/gen-javabean/org/apache/hive/service/rpc/thrift/TOpenSessionReq,
>  if client protocol is unset, validate() and toString() prints both username 
> and password to logs.
> Logging a password is a security risk. We should hide the ***.
> =Edit= (no longer relevant, see comments)
> This issue is tricky since it is caused in a fully generated class. I've been 
> playing around and have found one working solution, but I'd truly appreciate 
> ideas for a more elegant solution or input.
> The problem:
>  TCLIService.thrift is the template for generating all classes in 
> service-rpc. Struct TOpenSessionReq is OpenSession()'s one parameter and is 
> defined thus:
> {noformat}
> struct TOpenSessionReq {
>   1: required TProtocolVersion client_protocol = 
> TProtocolVersion.HIVE_CLI_SERVICE_PROTOCOL_V10
>   2: optional string username
>   3: optional string password
>   4: optional map<string, string> configuration
> }
> {noformat}
> In the generated class TOpenSessionReq.java, client_protocol is checked by a 
> validate() method, which is called quite a few times; if client_protocol is 
> not set, it throws a TProtocolException, passing along a toString(). This 
> toString() gets the names and values of all fields, including username and 
> password.
> Working solution:
>  * Create a separate struct containing only the username and password, and 
> pass it to OpenSession() as a second parameter. Since all fields in the new 
> struct are "optional", the generated validate() is empty – toString() is 
> never used. This involves changing core classes and breaks the "Each function 
> should take exactly one parameter" coding convention (detailed at 
> service-rpc/if/TCLIService.thrift:27).
>  See working-solution.patch.
> What doesn't work:
>  * Making client_protocol optional instead of required. Apparently this will 
> break everything.
>  * Overwriting toString() – TOpenSessionReq is a struct.
>  * Creating two Thrift structs, one struct for required (TRequiredReq) and 
> one for optional (TOptionalReq) fields, and nesting them in struct 
> TOpenSessionReq. This doesn't work because validate() in TOpenSessionReq can 
> call TOptionalReq.toString(), which prints the password to logs. This will 
> happen if TRequiredReq.client_protocol isn't set.
>  See non-solution.patch
>  * Asking Thrift devs to change their code. I wrote them an email but have no 
> expectations.
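
For illustration only (the actual change lives in the generated Thrift code, per the comment above), the masking amounts to making sure the password never reaches the string that validate() may log. A hand-written equivalent of such a toString() would look roughly like this; the real version is produced by the Thrift generator, not written by hand:

{code:java}
// Hypothetical sketch of the masking idea; field names follow the struct above.
@Override
public String toString() {
  StringBuilder sb = new StringBuilder("TOpenSessionReq(");
  sb.append("client_protocol:").append(this.client_protocol);
  sb.append(", username:").append(this.username);
  sb.append(", password:").append(this.password == null ? "null" : "***"); // never log the value
  sb.append(")");
  return sb.toString();
}
{code}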



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20598) Fix typos in HiveAlgorithmsUtil calculations

2018-09-21 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-20598:

   Resolution: Fixed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

pushed to master. Thank you Ashutosh for reviewing the changes!

> Fix typos in HiveAlgorithmsUtil calculations
> 
>
> Key: HIVE-20598
> URL: https://issues.apache.org/jira/browse/HIVE-20598
> Project: Hive
>  Issue Type: Bug
>  Components: Logical Optimizer
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20598.01.patch
>
>
> HIVE-10343 have made the costs changeable by hiveconf settings; however there 
> was a method in which there was already a local variable named cpuCost. The 
> bottom line is that the cost of n-way joins calculated by this method 
> is computed as the product of the number of rows...
> https://github.com/apache/hive/blob/9c907769a63a6b23c91fdf0b3f3d0aa6387035dc/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/cost/HiveAlgorithmsUtil.java#L83



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20544) TOpenSessionReq logs password and username

2018-09-21 Thread Karen Coppage (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-20544:
-
Description: 
In 
service-rpc/src/gen/thrift/gen-javabean/org/apache/hive/service/rpc/thrift/TOpenSessionReq,
 if client protocol is unset, validate() and toString() prints both username 
and password to logs.

Logging a password is a security risk. We should hide the ***.

=Edit= (no longer relevant, see comments)

This issue is tricky since it is caused in a fully generated class. I've been 
playing around and have found one working solution, but I'd truly appreciate 
ideas for a more elegant solution or input.

The problem:
 TCLIService.thrift is the template for generating all classes in service-rpc. 
Struct TOpenSessionReq is OpenSession()'s one parameter and is defined thus:
{noformat}
struct TOpenSessionReq {
  1: required TProtocolVersion client_protocol = 
TProtocolVersion.HIVE_CLI_SERVICE_PROTOCOL_V10
  2: optional string username
  3: optional string password
  4: optional map<string, string> configuration
}
{noformat}
In the generated class TOpenSessionReq.java, client_protocol is checked by a 
validate() method, which is called quite a few times; if client_protocol is not 
set, it throws a TProtocolException, passing along a toString(). This 
toString() gets the names and values of all fields, including username and 
password.

Working solution:
 * Create a separate struct containing only the username and password, and pass 
it to OpenSession() as a second parameter. Since all fields in the new struct 
are "optional", the generated validate() is empty – toString() is never used. 
This involves changing core classes and breaks the "Each function should take 
exactly one parameter" coding convention (detailed at 
service-rpc/if/TCLIService.thrift:27).
 See working-solution.patch.

What doesn't work:
 * Making client_protocol optional instead of required. Apparently this will 
break everything.
 * Overwriting toString() – TOpenSessionReq is a struct.
 * Creating two Thrift structs, one struct for required (TRequiredReq) and one 
for optional (TOptionalReq) fields, and nesting them in struct TOpenSessionReq. 
This doesn't work because validate() in TOpenSessionReq can call 
TOptionalReq.toString(), which prints the password to logs. This will happen if 
TRequiredReq.client_protocol isn't set.
 See non-solution.patch
 * Asking Thrift devs to change their code. I wrote them an email but have no 
expectations.

  was:
In 
service-rpc/src/gen/thrift/gen-javabean/org/apache/hive/service/rpc/thrift/TOpenSessionReq,
 if client protocol is unset, validate() and toString() prints both username 
and password to logs.

Logging a password is a security risk. We should hide the ***.

=Edit=

This issue is tricky since it is caused in a fully generated class. I've been 
playing around and have found one working solution, but I'd truly appreciate 
ideas for a more elegant solution or input.

The problem:
 TCLIService.thrift is the template for generating all classes in service-rpc. 
Struct TOpenSessionReq is OpenSession()'s one parameter and is defined thus:
{noformat}
struct TOpenSessionReq {
  1: required TProtocolVersion client_protocol = 
TProtocolVersion.HIVE_CLI_SERVICE_PROTOCOL_V10
  2: optional string username
  3: optional string password
  4: optional map<string, string> configuration
}
{noformat}
In the generated class TOpenSessionReq.java, client_protocol is checked by a 
validate() method, which is called quite a few times; if client_protocol is not 
set, it throws a TProtocolException, passing along a toString(). This 
toString() gets the names and values of all fields, including username and 
password.

Working solution:
 * Create a separate struct containing only the username and password, and pass 
it to OpenSession() as a second parameter. Since all fields in the new struct 
are "optional", the generated validate() is empty – toString() is never used. 
This involves changing core classes and breaks the "Each function should take 
exactly one parameter" coding convention (detailed at 
service-rpc/if/TCLIService.thrift:27).
 See working-solution.patch.

What doesn't work:
 * Making client_protocol optional instead of required. Apparently this will 
break everything.
 * Overwriting toString() – TOpenSessionReq is a struct.
 * Creating two Thrift structs, one struct for required (TRequiredReq) and one 
for optional (TOptionalReq) fields, and nesting them in struct TOpenSessionReq. 
This doesn't work because validate() in TOpenSessionReq can call 
TOptionalReq.toString(), which prints the password to logs. This will happen if 
TRequiredReq.client_protocol isn't set.
 See non-solution.patch
 * Asking Thrift devs to change their code. I wrote them an email but have no 
expectations.


> TOpenSessionReq logs password and username
> --
>
> Key: HIVE-20544
>  

[jira] [Updated] (HIVE-20544) TOpenSessionReq logs password and username

2018-09-21 Thread Karen Coppage (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-20544:
-
Attachment: HIVE-20544.2.patch
Status: Patch Available  (was: In Progress)

> TOpenSessionReq logs password and username
> --
>
> Key: HIVE-20544
> URL: https://issues.apache.org/jira/browse/HIVE-20544
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
>  Labels: beginner, patch, security
> Attachments: HIVE-20544.1.patch, HIVE-20544.2.patch, 
> HIVE-20544.patch, non-solution.patch, working-solution.patch
>
>
> In 
> service-rpc/src/gen/thrift/gen-javabean/org/apache/hive/service/rpc/thrift/TOpenSessionReq,
>  if client protocol is unset, validate() and toString() prints both username 
> and password to logs.
> Logging a password is a security risk. We should hide the ***.
> =Edit=
> This issue is tricky since it is caused in a fully generated class. I've been 
> playing around and have found one working solution, but I'd truly appreciate 
> ideas for a more elegant solution or input.
> The problem:
>  TCLIService.thrift is the template for generating all classes in 
> service-rpc. Struct TOpenSessionReq is OpenSession()'s one parameter and is 
> defined thus:
> {noformat}
> struct TOpenSessionReq {
>   1: required TProtocolVersion client_protocol = 
> TProtocolVersion.HIVE_CLI_SERVICE_PROTOCOL_V10
>   2: optional string username
>   3: optional string password
>   4: optional map<string, string> configuration
> }
> {noformat}
> In the generated class TOpenSessionReq.java, client_protocol is checked by a 
> validate() method, which is called quite a few times; if client_protocol is 
> not set, it throws a TProtocolException, passing along a toString(). This 
> toString() gets the names and values of all fields, including username and 
> password.
> Working solution:
>  * Create a separate struct containing only the username and password, and 
> pass it to OpenSession() as a second parameter. Since all fields in the new 
> struct are "optional", the generated validate() is empty – toString() is 
> never used. This involves changing core classes and breaks the "Each function 
> should take exactly one parameter" coding convention (detailed at 
> service-rpc/if/TCLIService.thrift:27).
>  See working-solution.patch.
> What doesn't work:
>  * Making client_protocol optional instead of required. Apparently this will 
> break everything.
>  * Overwriting toString() – TOpenSessionReq is a struct.
>  * Creating two Thrift structs, one struct for required (TRequiredReq) and 
> one for optional (TOptionalReq) fields, and nesting them in struct 
> TOpenSessionReq. This doesn't work because validate() in TOpenSessionReq can 
> call TOptionalReq.toString(), which prints the password to logs. This will 
> happen if TRequiredReq.client_protocol isn't set.
>  See non-solution.patch
>  * Asking Thrift devs to change their code. I wrote them an email but have no 
> expectations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20551) Create PreparedStatement query dynamically when IN clause is used

2018-09-21 Thread Laszlo Pinter (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Pinter updated HIVE-20551:
-
Attachment: HIVE-20551.06.patch

> Create PreparedStatement query dynamically when IN clause is used
> -
>
> Key: HIVE-20551
> URL: https://issues.apache.org/jira/browse/HIVE-20551
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-20551.01.patch, HIVE-20551.02.patch, 
> HIVE-20551.03.patch, HIVE-20551.04.patch, HIVE-20551.05.patch, 
> HIVE-20551.06.patch
>
>
> In the MetaStoreDirectSql class, when an IN clause is used, the query statement 
> is created via string concatenation.
> Since the JDBC API allows only one literal per “?” parameter, 
> PreparedStatement doesn’t work for IN clause queries. To build the 
> PreparedStatement query dynamically based on the number of elements in the IN 
> clause, makeParams() should be used instead of concatenation. 
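
To make the idea concrete, a minimal sketch with assumed names (not the MetaStoreDirectSql code itself): generate one "?" per element, then bind each element, so the statement stays a PreparedStatement no matter how long the IN list is:

{code:java}
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;

// Hypothetical sketch: the table/column names are only illustrative.
static int countByIds(Connection conn, List<Long> ids) throws SQLException {
  if (ids.isEmpty()) {
    return 0; // "in ()" is not valid SQL, so short-circuit the empty case
  }
  StringBuilder placeholders = new StringBuilder();
  for (int i = 0; i < ids.size(); i++) {
    placeholders.append(i == 0 ? "?" : ", ?"); // one placeholder per element
  }
  String sql = "select count(*) from \"PARTITIONS\" where \"PART_ID\" in (" + placeholders + ")";
  try (PreparedStatement ps = conn.prepareStatement(sql)) {
    for (int i = 0; i < ids.size(); i++) {
      ps.setLong(i + 1, ids.get(i)); // JDBC parameter indexes are 1-based
    }
    try (ResultSet rs = ps.executeQuery()) {
      rs.next();
      return rs.getInt(1);
    }
  }
}
{code}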



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20599) CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException

2018-09-21 Thread Naresh P R (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623728#comment-16623728
 ] 

Naresh P R commented on HIVE-20599:
---

Rebased to branch-3 & attached new patch.

> CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException
> ---
>
> Key: HIVE-20599
> URL: https://issues.apache.org/jira/browse/HIVE-20599
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Affects Versions: 3.1.0
>Reporter: Naresh P R
>Assignee: Naresh P R
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-20599-branch-3.patch, 
> HIVE-20599.1-branch-3.1.patch, HIVE-20599.1-branch-3.patch, HIVE-20599.1.patch
>
>
> SELECT CAST(from_utc_timestamp(timestamp '2018-05-02 15:30:30', 'PST') - 
> from_utc_timestamp(timestamp '1970-01-30 16:00:00', 'PST') AS STRING);
> throws below Exception
> {code:java}
> Error: Error while compiling statement: FAILED: SemanticException Line 0:-1 
> Wrong arguments ''PST'': No matching method for class 
> org.apache.hadoop.hive.ql.udf.UDFToString with (interval_day_time). Possible 
> choices: _FUNC_(bigint)  _FUNC_(binary)  _FUNC_(boolean)  _FUNC_(date)  
> _FUNC_(decimal(38,18))  _FUNC_(double)  _FUNC_(float)  _FUNC_(int)  
> _FUNC_(smallint)  _FUNC_(string)  _FUNC_(timestamp)  _FUNC_(tinyint)  
> _FUNC_(void) (state=42000,code=4){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20599) CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623673#comment-16623673
 ] 

Hive QA commented on HIVE-20599:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 19s{color} 
| {color:red} 
/data/hiveptest/logs/PreCommit-HIVE-Build-13953/patches/PreCommit-HIVE-Build-13953.patch
 does not apply to master. Rebase required? Wrong Branch? See 
http://cwiki.apache.org/confluence/display/Hive/HowToContribute for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13953/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException
> ---
>
> Key: HIVE-20599
> URL: https://issues.apache.org/jira/browse/HIVE-20599
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Affects Versions: 3.1.0
>Reporter: Naresh P R
>Assignee: Naresh P R
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-20599-branch-3.patch, 
> HIVE-20599.1-branch-3.1.patch, HIVE-20599.1-branch-3.patch, HIVE-20599.1.patch
>
>
> SELECT CAST(from_utc_timestamp(timestamp '2018-05-02 15:30:30', 'PST') - 
> from_utc_timestamp(timestamp '1970-01-30 16:00:00', 'PST') AS STRING);
> throws below Exception
> {code:java}
> Error: Error while compiling statement: FAILED: SemanticException Line 0:-1 
> Wrong arguments ''PST'': No matching method for class 
> org.apache.hadoop.hive.ql.udf.UDFToString with (interval_day_time). Possible 
> choices: _FUNC_(bigint)  _FUNC_(binary)  _FUNC_(boolean)  _FUNC_(date)  
> _FUNC_(decimal(38,18))  _FUNC_(double)  _FUNC_(float)  _FUNC_(int)  
> _FUNC_(smallint)  _FUNC_(string)  _FUNC_(timestamp)  _FUNC_(tinyint)  
> _FUNC_(void) (state=42000,code=4){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20609) Create SSD cache dir if it doesnt exist already

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623663#comment-16623663
 ] 

Hive QA commented on HIVE-20609:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12940656/HIVE-20609.02.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13952/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13952/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13952/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL 
https://issues.apache.org/jira/secure/attachment/12940656/HIVE-20609.02.patch 
was found in seen patch url's cache and a test was probably run already on it. 
Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12940656 - PreCommit-HIVE-Build

> Create SSD cache dir if it doesnt exist already
> ---
>
> Key: HIVE-20609
> URL: https://issues.apache.org/jira/browse/HIVE-20609
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.0.1
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 3.0.1
>
> Attachments: HIVE-20609.01.patch, HIVE-20609.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20599) CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623661#comment-16623661
 ] 

Hive QA commented on HIVE-20599:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12940653/HIVE-20599-branch-3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 115 failed/errored test(s), 14432 tests 
executed
*Failed tests:*
{noformat}
TestAddPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestAddPartitionsFromPartSpec - did not produce a TEST-*.xml file (likely timed 
out) (batchId=230)
TestAdminUser - did not produce a TEST-*.xml file (likely timed out) 
(batchId=236)
TestAggregateStatsCache - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestAlterPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestAlterTableMetadata - did not produce a TEST-*.xml file (likely timed out) 
(batchId=252)
TestAppendPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestAutoPurgeTables - did not produce a TEST-*.xml file (likely timed out) 
(batchId=252)
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=273)
TestCachedStore - did not produce a TEST-*.xml file (likely timed out) 
(batchId=236)
TestCatalogCaching - did not produce a TEST-*.xml file (likely timed out) 
(batchId=236)
TestCatalogNonDefaultClient - did not produce a TEST-*.xml file (likely timed 
out) (batchId=228)
TestCatalogNonDefaultSvr - did not produce a TEST-*.xml file (likely timed out) 
(batchId=236)
TestCatalogOldClient - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestCatalogs - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestCheckConstraint - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestDataSourceProviderFactory - did not produce a TEST-*.xml file (likely timed 
out) (batchId=238)
TestDatabases - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestDeadline - did not produce a TEST-*.xml file (likely timed out) 
(batchId=236)
TestDefaultConstraint - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestDropPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=273)
TestEmbeddedHiveMetaStore - did not produce a TEST-*.xml file (likely timed 
out) (batchId=231)
TestExchangePartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestFMSketchSerialization - did not produce a TEST-*.xml file (likely timed 
out) (batchId=238)
TestFilterHooks - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestForeignKey - did not produce a TEST-*.xml file (likely timed out) 
(batchId=230)
TestFunctions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestGetPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestGetTableMeta - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestHLLNoBias - did not produce a TEST-*.xml file (likely timed out) 
(batchId=238)
TestHLLSerialization - did not produce a TEST-*.xml file (likely timed out) 
(batchId=238)
TestHdfsUtils - did not produce a TEST-*.xml file (likely timed out) 
(batchId=236)
TestHiveAlterHandler - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestHiveMetaStoreGetMetaConf - did not produce a TEST-*.xml file (likely timed 
out) (batchId=236)
TestHiveMetaStorePartitionSpecs - did not produce a TEST-*.xml file (likely 
timed out) (batchId=230)
TestHiveMetaStoreSchemaMethods - did not produce a TEST-*.xml file (likely 
timed out) (batchId=236)
TestHiveMetaStoreTimeout - did not produce a TEST-*.xml file (likely timed out) 
(batchId=238)
TestHiveMetaStoreTxns - did not produce a TEST-*.xml file (likely timed out) 
(batchId=238)
TestHiveMetaStoreWithEnvironmentContext - did not produce a TEST-*.xml file 
(likely timed out) (batchId=233)
TestHiveMetastoreCli - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestHyperLogLog - did not produce a TEST-*.xml file (likely timed out) 
(batchId=238)
TestHyperLogLogDense - did not produce a TEST-*.xml file (likely timed out) 
(batchId=238)
TestHyperLogLogMerge - did not produce a TEST-*.xml file (likely timed out) 
(batchId=238)
TestHyperLogLogSparse - did not produce a TEST-*.xml file (likely timed out) 
(batchId=238)
TestJSONMessageDeserializer - did not produce a TEST-*.xml file (likely timed 
out) (batchId=236)
TestListPartitions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=228)
TestLocationQueries - did not produce a TEST-*.xml file (likely timed out) 
(batchId=252)
TestLockRequestBuilder - did not produce a TEST-*.xml file (likely timed 

[jira] [Updated] (HIVE-17300) WebUI query plan graphs

2018-09-21 Thread Karen Coppage (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-17300:
-
Attachment: HIVE-17300.9.patch
Status: Patch Available  (was: Open)

> WebUI query plan graphs
> ---
>
> Key: HIVE-17300
> URL: https://issues.apache.org/jira/browse/HIVE-17300
> Project: Hive
>  Issue Type: Sub-task
>  Components: Web UI
>Affects Versions: 4.0.0
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
>  Labels: beginner, features, patch
> Attachments: HIVE-17300.3.patch, HIVE-17300.4.patch, 
> HIVE-17300.5.patch, HIVE-17300.6.patch, HIVE-17300.7.patch, 
> HIVE-17300.7.patch, HIVE-17300.8.patch, HIVE-17300.8.patch, 
> HIVE-17300.8.patch, HIVE-17300.8.patch, HIVE-17300.9.patch, HIVE-17300.patch, 
> complete_success.png, full_mapred_stats.png, graph_with_mapred_stats.png, 
> last_stage_error.png, last_stage_running.png, non_mapred_task_selected.png
>
>
> Hi all,
> I’m working on a feature of the Hive WebUI Query Plan tab that would provide 
> the option to display the query plan as a nice graph (scroll down for 
> screenshots). If you click on one of the graph’s stages, the plan for that 
> stage appears as text below. 
> Stages are color-coded if they have a status (Success, Error, Running), and 
> the rest are grayed out. Coloring is based on status already available in the 
> WebUI, under the Stages tab.
> There is an additional option to display stats for MapReduce tasks. This 
> includes the job’s ID, tracking URL (where the logs are found), and mapper 
> and reducer numbers/progress, among other info. 
> The library I’m using for the graph is called vis.js (http://visjs.org/). It 
> has an Apache license, and the only necessary file to be included from this 
> library is about 700 KB.
> I tried to keep server-side changes minimal, and graph generation is taken 
> care of by the client. Plans with more than a given number of stages 
> (default: 25) won't be displayed in order to preserve resources.
> I’d love to hear any and all input from the community about this feature: do 
> you think it’s useful, and is there anything important I’m missing?
> Thanks,
> Karen Coppage
> Review request: https://reviews.apache.org/r/61663/
> Any input is welcome!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17300) WebUI query plan graphs

2018-09-21 Thread Karen Coppage (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-17300:
-
Status: Open  (was: Patch Available)

> WebUI query plan graphs
> ---
>
> Key: HIVE-17300
> URL: https://issues.apache.org/jira/browse/HIVE-17300
> Project: Hive
>  Issue Type: Sub-task
>  Components: Web UI
>Affects Versions: 4.0.0
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
>  Labels: beginner, features, patch
> Attachments: HIVE-17300.3.patch, HIVE-17300.4.patch, 
> HIVE-17300.5.patch, HIVE-17300.6.patch, HIVE-17300.7.patch, 
> HIVE-17300.7.patch, HIVE-17300.8.patch, HIVE-17300.8.patch, 
> HIVE-17300.8.patch, HIVE-17300.8.patch, HIVE-17300.patch, 
> complete_success.png, full_mapred_stats.png, graph_with_mapred_stats.png, 
> last_stage_error.png, last_stage_running.png, non_mapred_task_selected.png
>
>
> Hi all,
> I’m working on a feature of the Hive WebUI Query Plan tab that would provide 
> the option to display the query plan as a nice graph (scroll down for 
> screenshots). If you click on one of the graph’s stages, the plan for that 
> stage appears as text below. 
> Stages are color-coded if they have a status (Success, Error, Running), and 
> the rest are grayed out. Coloring is based on status already available in the 
> WebUI, under the Stages tab.
> There is an additional option to display stats for MapReduce tasks. This 
> includes the job’s ID, tracking URL (where the logs are found), and mapper 
> and reducer numbers/progress, among other info. 
> The library I’m using for the graph is called vis.js (http://visjs.org/). It 
> has an Apache license, and the only necessary file to be included from this 
> library is about 700 KB.
> I tried to keep server-side changes minimal, and graph generation is taken 
> care of by the client. Plans with more than a given number of stages 
> (default: 25) won't be displayed in order to preserve resources.
> I’d love to hear any and all input from the community about this feature: do 
> you think it’s useful, and is there anything important I’m missing?
> Thanks,
> Karen Coppage
> Review request: https://reviews.apache.org/r/61663/
> Any input is welcome!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20599) CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException

2018-09-21 Thread Naresh P R (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naresh P R updated HIVE-20599:
--
Attachment: HIVE-20599.1-branch-3.patch

> CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException
> ---
>
> Key: HIVE-20599
> URL: https://issues.apache.org/jira/browse/HIVE-20599
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Affects Versions: 3.1.0
>Reporter: Naresh P R
>Assignee: Naresh P R
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-20599-branch-3.patch, 
> HIVE-20599.1-branch-3.1.patch, HIVE-20599.1-branch-3.patch, HIVE-20599.1.patch
>
>
> SELECT CAST(from_utc_timestamp(timestamp '2018-05-02 15:30:30', 'PST') - 
> from_utc_timestamp(timestamp '1970-01-30 16:00:00', 'PST') AS STRING);
> throws the exception below
> {code:java}
> Error: Error while compiling statement: FAILED: SemanticException Line 0:-1 
> Wrong arguments ''PST'': No matching method for class 
> org.apache.hadoop.hive.ql.udf.UDFToString with (interval_day_time). Possible 
> choices: _FUNC_(bigint)  _FUNC_(binary)  _FUNC_(boolean)  _FUNC_(date)  
> _FUNC_(decimal(38,18))  _FUNC_(double)  _FUNC_(float)  _FUNC_(int)  
> _FUNC_(smallint)  _FUNC_(string)  _FUNC_(timestamp)  _FUNC_(tinyint)  
> _FUNC_(void) (state=42000,code=4){code}
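
A possible workaround sketch (untested here, not part of the patch): avoid producing an interval_day_time at all by converting both timestamps to epoch seconds with to_unix_timestamp, since CAST from BIGINT to STRING is among the supported signatures listed in the error. The output is a plain count of seconds rather than a formatted interval string:

{code:sql}
-- convert both timestamps to epoch seconds, subtract as BIGINT, then cast;
-- unlike the original query, this yields a number of seconds, not a
-- "days hh:mm:ss" interval representation
SELECT CAST(
         to_unix_timestamp(from_utc_timestamp(timestamp '2018-05-02 15:30:30', 'PST'))
       - to_unix_timestamp(from_utc_timestamp(timestamp '1970-01-30 16:00:00', 'PST'))
       AS STRING);
{code}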



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20617) Fix type of constants in IN expressions to have correct type

2018-09-21 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-20617:

Status: Patch Available  (was: Open)

> Fix type of constants in IN expressions to have correct type
> 
>
> Key: HIVE-20617
> URL: https://issues.apache.org/jira/browse/HIVE-20617
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-20617.01.patch
>
>
> In statements like {{struct(a,b) IN (const struct('x','y'), ... )}} the 
> comparison in UDFIn may fail: if a or b is of char/varchar type, the 
> constants will retain string type - especially after PointlookupOptimizer 
> compaction.
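
A minimal repro sketch of the statement shape being described; the table, columns, and values below are hypothetical, not taken from the report:

{code:sql}
-- hypothetical table: a and b are CHAR/VARCHAR, while the struct constants
-- in the IN list below default to string type
CREATE TABLE t (a CHAR(2), b VARCHAR(10));
INSERT INTO TABLE t VALUES ('x', 'y');

-- the pattern from the description: struct(a,b) IN (constant structs);
-- if the constants keep string type after PointlookupOptimizer rewrites
-- the predicate, the comparison in UDFIn can fail or miss rows
SELECT * FROM t
WHERE struct(a, b) IN (struct('x', 'y'), struct('p', 'q'));
{code}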



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20617) Fix type of constants in IN expressions to have correct type

2018-09-21 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-20617:

Attachment: HIVE-20617.01.patch

> Fix type of constants in IN expressions to have correct type
> 
>
> Key: HIVE-20617
> URL: https://issues.apache.org/jira/browse/HIVE-20617
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-20617.01.patch
>
>
> In statements like {{struct(a,b) IN (const struct('x','y'), ... )}} the 
> comparison in UDFIn may fail: if a or b is of char/varchar type, the 
> constants will retain string type - especially after PointlookupOptimizer 
> compaction.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20599) CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623560#comment-16623560
 ] 

Hive QA commented on HIVE-20599:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 10s{color} 
| {color:red} 
/data/hiveptest/logs/PreCommit-HIVE-Build-13951/patches/PreCommit-HIVE-Build-13951.patch
 does not apply to master. Rebase required? Wrong Branch? See 
http://cwiki.apache.org/confluence/display/Hive/HowToContribute for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13951/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> CAST(INTERVAL_DAY_TIME AS STRING) is throwing SemanticException
> ---
>
> Key: HIVE-20599
> URL: https://issues.apache.org/jira/browse/HIVE-20599
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Affects Versions: 3.1.0
>Reporter: Naresh P R
>Assignee: Naresh P R
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-20599-branch-3.patch, 
> HIVE-20599.1-branch-3.1.patch, HIVE-20599.1.patch
>
>
> SELECT CAST(from_utc_timestamp(timestamp '2018-05-02 15:30:30', 'PST') - 
> from_utc_timestamp(timestamp '1970-01-30 16:00:00', 'PST') AS STRING);
> throws the exception below
> {code:java}
> Error: Error while compiling statement: FAILED: SemanticException Line 0:-1 
> Wrong arguments ''PST'': No matching method for class 
> org.apache.hadoop.hive.ql.udf.UDFToString with (interval_day_time). Possible 
> choices: _FUNC_(bigint)  _FUNC_(binary)  _FUNC_(boolean)  _FUNC_(date)  
> _FUNC_(decimal(38,18))  _FUNC_(double)  _FUNC_(float)  _FUNC_(int)  
> _FUNC_(smallint)  _FUNC_(string)  _FUNC_(timestamp)  _FUNC_(tinyint)  
> _FUNC_(void) (state=42000,code=4){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20095) Fix jdbc external table feature

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623543#comment-16623543
 ] 

Hive QA commented on HIVE-20095:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12940639/HIVE-20095.9.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14994 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=167)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/13950/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/13950/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-13950/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12940639 - PreCommit-HIVE-Build

> Fix jdbc external table feature
> ---
>
> Key: HIVE-20095
> URL: https://issues.apache.org/jira/browse/HIVE-20095
> Project: Hive
>  Issue Type: Bug
>Reporter: Jonathan Doron
>Assignee: Jonathan Doron
>Priority: Major
> Attachments: HIVE-20095.1.patch, HIVE-20095.2.patch, 
> HIVE-20095.3.patch, HIVE-20095.4.patch, HIVE-20095.5.patch, 
> HIVE-20095.6.patch, HIVE-20095.7.patch, HIVE-20095.7.patch, 
> HIVE-20095.8.patch, HIVE-20095.8.patch, HIVE-20095.9.patch
>
>
> It seems like the committed code for HIVE-19161 
> (7584b3276bebf64aa006eaa162c0a6264d8fcb56) reverted some of the HIVE-18423 
> updates, and therefore some of the external table queries are not working 
> correctly.
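
For context, a rough sketch of the kind of JDBC external table this feature drives. The handler class and TBLPROPERTIES keys follow the JdbcStorageHandler documentation on the Hive wiki; the connection details are placeholders rather than anything from this issue:

{code:sql}
-- placeholder external table backed by a MySQL table via JdbcStorageHandler
CREATE EXTERNAL TABLE student_jdbc (
  name STRING,
  age  INT,
  gpa  DOUBLE
)
STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
TBLPROPERTIES (
  "hive.sql.database.type" = "MYSQL",
  "hive.sql.jdbc.driver"   = "com.mysql.jdbc.Driver",
  "hive.sql.jdbc.url"      = "jdbc:mysql://localhost/sample",
  "hive.sql.dbcp.username" = "hive",
  "hive.sql.dbcp.password" = "hive",
  "hive.sql.table"         = "STUDENT"
);

-- external table queries of this shape are what the issue reports as broken
SELECT name, age FROM student_jdbc WHERE gpa > 3.0;
{code}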



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20095) Fix jdbc external table feature

2018-09-21 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623468#comment-16623468
 ] 

Hive QA commented on HIVE-20095:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
21s{color} | {color:blue} jdbc-handler in master has 8 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
57s{color} | {color:blue} ql in master has 2326 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m  
9s{color} | {color:red} jdbc-handler: The patch generated 2 new + 24 unchanged 
- 1 fixed = 26 total (was 25) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
29s{color} | {color:red} jdbc-handler generated 3 new + 8 unchanged - 0 fixed = 
11 total (was 8) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:jdbc-handler |
|  |  Exception is caught when Exception is not thrown in 
org.apache.hive.storage.jdbc.JdbcSerDe.initialize(Configuration, Properties)  
At JdbcSerDe.java:is not thrown in 
org.apache.hive.storage.jdbc.JdbcSerDe.initialize(Configuration, Properties)  
At JdbcSerDe.java:[line 114] |
|  |  
org.apache.hive.storage.jdbc.dao.GenericJdbcDatabaseAccessor.getColumnTypes(Configuration)
 may fail to clean up java.sql.ResultSet  Obligation to clean up resource 
created at GenericJdbcDatabaseAccessor.java:up java.sql.ResultSet  Obligation 
to clean up resource created at GenericJdbcDatabaseAccessor.java:[line 115] is 
not discharged |
|  |  
org.apache.hive.storage.jdbc.dao.GenericJdbcDatabaseAccessor.getColumnTypes(Configuration)
 may fail to clean up java.sql.Statement  Obligation to clean up resource 
created at GenericJdbcDatabaseAccessor.java:up java.sql.Statement  Obligation 
to clean up resource created at GenericJdbcDatabaseAccessor.java:[line 114] is 
not discharged |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13950/dev-support/hive-personality.sh
 |
| git revision | master / bd453b8 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13950/yetus/diff-checkstyle-jdbc-handler.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13950/yetus/whitespace-eol.txt
 |
| findbugs | 
