[jira] [Commented] (HIVE-21240) JSON SerDe Re-Write

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16770039#comment-16770039
 ] 

Hive QA commented on HIVE-21240:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12958954/HIVE-24240.8.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 15806 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniDruidKafkaCliDriver.testCliDriver[druidkafkamini_basic]
 (batchId=275)
org.apache.hadoop.hive.cli.TestMiniHiveKafkaCliDriver.testCliDriver[kafka_storage_handler]
 (batchId=275)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16109/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16109/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16109/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12958954 - PreCommit-HIVE-Build

> JSON SerDe Re-Write
> ---
>
> Key: HIVE-21240
> URL: https://issues.apache.org/jira/browse/HIVE-21240
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 4.0.0, 3.1.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21240.1.patch, HIVE-21240.1.patch, 
> HIVE-21240.2.patch, HIVE-21240.3.patch, HIVE-21240.4.patch, 
> HIVE-21240.5.patch, HIVE-21240.6.patch, HIVE-21240.7.patch, 
> HIVE-24240.8.patch, HIVE-24240.8.patch, HIVE-24240.8.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The JSON SerDe has a few issues; I will link them to this JIRA.
> * Use the Jackson tree parser instead of parsing manually
> * Added support for base-64 encoded data (the expected format when using JSON)
> * Added support to skip blank lines (returns all columns as null values)
> * The current JSON parser accepts, but does not apply, custom timestamp formats 
> in most cases
> * Added some unit tests
> * Added a cache for column-name to column-index lookups, which are currently 
> O(n) per column for each row processed (see the sketch after this list)
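
As a rough illustration of the direction described in the list above, here is a minimal sketch (not the actual patch; the class name JsonRowSketch and its fields are hypothetical). It parses a row with Jackson's tree model, treats a blank line as an all-null row, and replaces repeated column-name searches with a precomputed name-to-index map:

{code:java}
// A minimal sketch only, not the HIVE-21240 patch. It shows the general shape:
// parse each row with Jackson's tree model, treat blank lines as all-null rows,
// and cache column-name -> column-index lookups instead of scanning per field.
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class JsonRowSketch {
  private static final ObjectMapper MAPPER = new ObjectMapper();

  // Built once from the table's column names, so per-row lookups are O(1).
  private final Map<String, Integer> columnIndex = new HashMap<>();

  public JsonRowSketch(List<String> columnNames) {
    for (int i = 0; i < columnNames.size(); i++) {
      columnIndex.put(columnNames.get(i).toLowerCase(), i);
    }
  }

  public Object[] parse(String line) throws Exception {
    Object[] row = new Object[columnIndex.size()];
    if (line == null || line.trim().isEmpty()) {
      return row;                               // blank line: all columns null
    }
    JsonNode root = MAPPER.readTree(line);      // Jackson tree parser
    root.fields().forEachRemaining(field -> {
      Integer idx = columnIndex.get(field.getKey().toLowerCase());
      if (idx != null && !field.getValue().isNull()) {
        row[idx] = field.getValue().asText();   // a real SerDe converts per column type
      }
    });
    return row;
  }
}
{code}

A real SerDe would additionally convert each value to its declared column type (presumably including the base-64 decoding mentioned above for binary data) instead of returning text.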



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21240) JSON SerDe Re-Write

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16770034#comment-16770034
 ] 

Hive QA commented on HIVE-21240:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
41s{color} | {color:blue} serde in master has 197 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
48s{color} | {color:blue} ql in master has 2262 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
35s{color} | {color:blue} hcatalog/core in master has 29 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} serde: The patch generated 0 new + 4 unchanged - 25 
fixed = 4 total (was 29) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} ql: The patch generated 0 new + 6 unchanged - 5 
fixed = 6 total (was 11) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} The patch core passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} serde generated 0 new + 193 unchanged - 4 fixed = 
193 total (was 197) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} ql in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} core in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16109/dev-support/hive-personality.sh
 |
| git revision | master / aaf01ae |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: serde ql hcatalog/core U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16109/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> JSON SerDe Re-Write
> ---
>
> Key: HIVE-21240
> URL: https://issues.apache.org/jira/browse/HIVE-21240
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 4.0.0, 3.1.1
>Reporter: BELU

[jira] [Commented] (HIVE-21270) A UDTF to show schema (column names and types) of given query

2019-02-15 Thread Mani M (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16770029#comment-16770029
 ] 

Mani M commented on HIVE-21270:
---

Run checkstyle on your local machine first, before submitting the patch.

In pom.xml, comment out all modules other than ql, run the Maven checkstyle 
aggregate goal, and check for issues in the checkstyle result XML in the target 
directory.

Then uncomment the previously commented lines in pom.xml before creating the 
patch file.

> A UDTF to show schema (column names and types) of given query
> -
>
> Key: HIVE-21270
> URL: https://issues.apache.org/jira/browse/HIVE-21270
> Project: Hive
>  Issue Type: New Feature
>  Components: UDF
>Affects Versions: 4.0.0
>Reporter: Shubham Chaurasia
>Assignee: Shubham Chaurasia
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21270.1.patch, HIVE-21270.2.patch, 
> HIVE-21270.3.patch, HIVE-21270.4.patch, HIVE-21270.5.patch, HIVE-21270.6.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> We can get ResultSet metadata using {{ResultSet#getMetaData()}}, but JDBC 
> provides no way of getting the nested data types (of columns) associated with 
> it. This UDTF helps to retrieve each column name and its data type.
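
To make the JDBC limitation above concrete, here is a small illustration using plain JDBC (the connection URL, table name, and query are placeholders): {{ResultSetMetaData}} reports each column's JDBC-level type name, but not the nested structure of complex types, which is what the proposed UDTF exposes.

{code:java}
// Hedged illustration of the limitation described above; URL and query are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class SchemaViaJdbc {
  public static void main(String[] args) throws Exception {
    try (Connection conn =
             DriverManager.getConnection("jdbc:hive2://localhost:10000/default");
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery("SELECT * FROM some_table LIMIT 0")) {
      ResultSetMetaData md = rs.getMetaData();
      for (int i = 1; i <= md.getColumnCount(); i++) {
        // For a column declared as array<struct<a:int,b:string>>, this prints a
        // JDBC-level type name; the nested element/field types are not exposed here.
        System.out.println(md.getColumnName(i) + " : " + md.getColumnTypeName(i));
      }
    }
  }
}
{code}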



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21270) A UDTF to show schema (column names and types) of given query

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16770026#comment-16770026
 ] 

Hive QA commented on HIVE-21270:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12958953/HIVE-21270.6.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15801 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[show_functions] 
(batchId=79)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16108/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16108/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16108/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12958953 - PreCommit-HIVE-Build

> A UDTF to show schema (column names and types) of given query
> -
>
> Key: HIVE-21270
> URL: https://issues.apache.org/jira/browse/HIVE-21270
> Project: Hive
>  Issue Type: New Feature
>  Components: UDF
>Affects Versions: 4.0.0
>Reporter: Shubham Chaurasia
>Assignee: Shubham Chaurasia
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21270.1.patch, HIVE-21270.2.patch, 
> HIVE-21270.3.patch, HIVE-21270.4.patch, HIVE-21270.5.patch, HIVE-21270.6.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> We can get ResultSet metadata using {{ResultSet#getMetaData()}}, but JDBC 
> provides no way of getting the nested data types (of columns) associated with 
> it. This UDTF helps to retrieve each column name and its data type.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21270) A UDTF to show schema (column names and types) of given query

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16770021#comment-16770021
 ] 

Hive QA commented on HIVE-21270:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
44s{color} | {color:blue} ql in master has 2262 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
39s{color} | {color:red} ql: The patch generated 4 new + 83 unchanged - 0 fixed 
= 87 total (was 83) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
58s{color} | {color:red} ql generated 1 new + 2262 unchanged - 0 fixed = 2263 
total (was 2262) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 22s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Found reliance on default encoding in 
org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSQLSchema.process(Object[]):in
 
org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSQLSchema.process(Object[]):
 String.getBytes()  At GenericUDTFGetSQLSchema.java:[line 77] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16108/dev-support/hive-personality.sh
 |
| git revision | master / aaf01ae |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16108/yetus/diff-checkstyle-ql.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16108/yetus/new-findbugs-ql.html
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16108/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.
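
For context on the FindBugs finding above (reliance on the default encoding in {{GenericUDTFGetSQLSchema.process}} via {{String.getBytes()}}): the conventional remedy for this pattern (DM_DEFAULT_ENCODING) is to pass an explicit charset. A minimal, generic sketch, not necessarily how the patch will address it:

{code:java}
// Generic sketch of the usual fix for the DM_DEFAULT_ENCODING FindBugs pattern:
// make the charset explicit instead of relying on the JVM default.
import java.nio.charset.StandardCharsets;

public class ExplicitCharsetExample {
  static byte[] toBytes(String typeName) {
    // typeName.getBytes() depends on the platform default charset;
    // specifying UTF-8 makes the behavior deterministic across JVMs.
    return typeName.getBytes(StandardCharsets.UTF_8);
  }
}
{code}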



> A UDTF to show schema (column names and types) of given query
> -
>
> Key: HIVE-21270
> URL: https://issues.apache.org/jira/browse/HIVE-21270
> Project: Hive
>  Issue Type: New Feature
>  Components: UDF
>Affects Versions: 4.0.0
>Reporter: Shubham Chaurasia
>Assignee: Shubham Chaurasia
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21270.1.patch, HIVE-21270.2.patch, 
> HIVE-21270.3.patch, HIVE-21270.4.patch, HIVE-21270.5.patch, HIVE-21270.6.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> We can get ResultSet metadata using {{ResultSet#getMetaData()}}, but JDBC 
> provides no way of getting the nested data types (of columns) associated with 
> it. This UDTF helps to retrieve each column name and its data type.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21279) Avoid moving/rename operation in FileSink op for SELECT queries

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16770015#comment-16770015
 ] 

Hive QA commented on HIVE-21279:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12958943/HIVE-21279.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 46 failed/errored test(s), 15797 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin47] (batchId=64)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smb_mapjoin_47] 
(batchId=31)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union_remove_15] 
(batchId=93)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union_remove_16] 
(batchId=80)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union_remove_18] 
(batchId=7)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union_remove_25] 
(batchId=96)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union_remove_plan] 
(batchId=6)
org.apache.hadoop.hive.cli.TestContribCliDriver.testCliDriver[dboutput] 
(batchId=270)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[results_cache_diff_fs]
 (batchId=155)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_1]
 (batchId=177)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_2]
 (batchId=182)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_capacity]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation2]
 (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation]
 (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_lifetime]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_quoted_identifiers]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_temptable]
 (batchId=183)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_transactional]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_dynamic_partition_pruning_2]
 (batchId=192)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_dynamic_partition_pruning_4]
 (batchId=192)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[cbo_union] 
(batchId=147)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[column_access_stats]
 (batchId=137)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[optimize_nullscan] 
(batchId=148)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[spark_combine_equivalent_work_2]
 (batchId=136)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[table_access_keys_stats]
 (batchId=144)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[union13] 
(batchId=136)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[union24] 
(batchId=139)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[union32] 
(batchId=124)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[union34] 
(batchId=116)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[union8] 
(batchId=137)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[union_date] 
(batchId=132)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[union_null] 
(batchId=149)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[union_script] 
(batchId=142)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[union_top_level] 
(batchId=139)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[union_view] 
(batchId=117)
org.apache.hive.hcatalog.streaming.TestStreaming.testStreamBucketingMatchesRegularBucketing
 (batchId=216)
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testAddJarConstructorUnCaching 
(batchId=264)
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testEmptyResultsetThriftSerializeInTasks
 (batchId=264)
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testEnableThriftSerializeInTasks 
(batchId=264)
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testFloatCast2DoubleThriftSerializeInTasks
 (batchId=264)
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testJoinThriftSerializeInTasks 
(batchId=264)
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testParallelCompilation (batchId=264)
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testParallelCompilation2 (batchId=264)
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testParallelCompilation3 (batchId=264)
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testParallelCompilation4 (batchId=264)
org.apache.hive.jdbc.TestJdbcWithMiniHS2.testSelectThriftSerializeInTasks 
(batchId=264)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16107/testReport
Console output: https://b

[jira] [Commented] (HIVE-21279) Avoid moving/rename operation in FileSink op for SELECT queries

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16770006#comment-16770006
 ] 

Hive QA commented on HIVE-21279:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
59s{color} | {color:blue} ql in master has 2262 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
46s{color} | {color:red} ql: The patch generated 51 new + 795 unchanged - 2 
fixed = 846 total (was 797) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m  
9s{color} | {color:red} ql generated 1 new + 2261 unchanged - 1 fixed = 2262 
total (was 2262) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Dead store to fs in 
org.apache.hadoop.hive.ql.cache.results.QueryResultsCache.moveResultsToCacheDirectory(FetchWork,
 Path)  At 
QueryResultsCache.java:org.apache.hadoop.hive.ql.cache.results.QueryResultsCache.moveResultsToCacheDirectory(FetchWork,
 Path)  At QueryResultsCache.java:[line 813] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16107/dev-support/hive-personality.sh
 |
| git revision | master / aaf01ae |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16107/yetus/diff-checkstyle-ql.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16107/yetus/new-findbugs-ql.html
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16107/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Avoid moving/rename operation in FileSink op for SELECT queries
> ---
>
> Key: HIVE-21279
> URL: https://issues.apache.org/jira/browse/HIVE-21279
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21279.1.patch
>
>
> Currently, at the end of a job, the FileSink operator moves/renames the temp 
> directory to another directory from which FetchTask fetches results. This is 
> done to avoid fetching potentially partial/invalid files written by 
> failed/runaway tasks. This operation is expensive on cloud storage. It could 
> be avoided if FetchTask were passed the set of files to read instead of the 
> whole directory.
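
A minimal sketch of the idea, using the Hadoop FileSystem API; the class and method names (SelectFetchPaths, committedFiles) are hypothetical and not taken from the attached patch:

{code:java}
// Illustrative sketch only (not the HIVE-21279 patch): instead of renaming the
// temporary output directory, collect the final task output files and hand the
// explicit list to the fetch side.
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class SelectFetchPaths {
  // Returns the data files written by successful tasks, skipping hidden or
  // in-progress files, so no directory move is required.
  static List<Path> committedFiles(FileSystem fs, Path tmpDir) throws IOException {
    List<Path> files = new ArrayList<>();
    for (FileStatus st : fs.listStatus(tmpDir)) {
      String name = st.getPath().getName();
      if (st.isFile() && !name.startsWith("_") && !name.startsWith(".")) {
        files.add(st.getPath());   // pass these to FetchTask instead of renaming tmpDir
      }
    }
    return files;
  }
}
{code}

The fetch side would then read exactly these paths, so the temp directory never needs to be renamed.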



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21278) Fix ambiguity in grammar warnings at compilation time

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1676#comment-1676
 ] 

Hive QA commented on HIVE-21278:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12958942/HIVE-21278.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 12 failed/errored test(s), 15797 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[check_constraint_aggregate]
 (batchId=100)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[check_constraint_max_length]
 (batchId=100)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[check_constraint_nonboolean_expr]
 (batchId=99)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[check_constraint_qual_name]
 (batchId=101)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[check_constraint_subquery]
 (batchId=99)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[check_constraint_tbl_level]
 (batchId=100)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[check_constraint_temporary_udf]
 (batchId=100)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[check_constraint_udtf]
 (batchId=99)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[check_constraint_violation]
 (batchId=99)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[check_constraint_window_fun]
 (batchId=99)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[create_external_with_check_constraint]
 (batchId=99)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_subquery_chain]
 (batchId=99)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16106/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16106/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16106/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 12 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12958942 - PreCommit-HIVE-Build

> Fix ambiguity in grammar warnings at compilation time
> -
>
> Key: HIVE-21278
> URL: https://issues.apache.org/jira/browse/HIVE-21278
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21278.patch
>
>
> These are the warnings at compilation time:
> {code}
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_DATETIME" using multiple 
> alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_DATE {LPAREN, StringLiteral}" 
> using multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_UNIONTYPE LESSTHAN" using 
> multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK {KW_EXISTS, KW_TINYINT}" using 
> multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_STRUCT LESSTHAN" using multiple 
> alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): IdentifiersParser.g:424:5:
> Decision can match input such as "KW_UNKNOWN" using multiple alternatives: 1, 
> 10
> As a result, alternative(s) 10 were disabled for that input
> {code}
> This means that multiple parser rules can match certain query text, possibly 
> leading to unexpected errors at parsing time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21270) A UDTF to show schema (column names and types) of given query

2019-02-15 Thread Mani M (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769993#comment-16769993
 ] 

Mani M commented on HIVE-21270:
---

Since this function is used to get the schema for a SQL query, will any 
exception be raised for a SQL semantic error?

> A UDTF to show schema (column names and types) of given query
> -
>
> Key: HIVE-21270
> URL: https://issues.apache.org/jira/browse/HIVE-21270
> Project: Hive
>  Issue Type: New Feature
>  Components: UDF
>Affects Versions: 4.0.0
>Reporter: Shubham Chaurasia
>Assignee: Shubham Chaurasia
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21270.1.patch, HIVE-21270.2.patch, 
> HIVE-21270.3.patch, HIVE-21270.4.patch, HIVE-21270.5.patch, HIVE-21270.6.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> We can get ResultSet metadata using {{ResultSet#getMetaData()}}, but JDBC 
> provides no way of getting the nested data types (of columns) associated with 
> it. This UDTF helps to retrieve each column name and its data type.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21278) Fix ambiguity in grammar warnings at compilation time

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769988#comment-16769988
 ] 

Hive QA commented on HIVE-21278:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
57s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  1m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16106/dev-support/hive-personality.sh
 |
| git revision | master / aaf01ae |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16106/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Fix ambiguity in grammar warnings at compilation time
> -
>
> Key: HIVE-21278
> URL: https://issues.apache.org/jira/browse/HIVE-21278
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21278.patch
>
>
> These are the warnings at compilation time:
> {code}
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_DATETIME" using multiple 
> alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_DATE {LPAREN, StringLiteral}" 
> using multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_UNIONTYPE LESSTHAN" using 
> multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK {KW_EXISTS, KW_TINYINT}" using 
> multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_STRUCT LESSTHAN" using multiple 
> alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): IdentifiersParser.g:424:5:
> Decision can match input such as "KW_UNKNOWN" using multiple alternatives: 1, 
> 10
> As a result, alternative(s) 10 were disabled for that input
> {code}
> This means that multiple parser rules can match certain query text, possibly 
> leading to unexpected errors at parsing time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21254) Pre-upgrade tool should handle exceptions and skip db/tables

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769985#comment-16769985
 ] 

Hive QA commented on HIVE-21254:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12958937/HIVE-21254.8.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 15799 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.jdbc.TestSSL.testMetastoreWithSSL (batchId=260)
org.apache.hive.jdbc.TestSSL.testSSLConnectionWithProperty (batchId=260)
org.apache.hive.jdbc.TestSSL.testSSLConnectionWithURL (batchId=260)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16105/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16105/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16105/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12958937 - PreCommit-HIVE-Build

> Pre-upgrade tool should handle exceptions and skip db/tables
> 
>
> Key: HIVE-21254
> URL: https://issues.apache.org/jira/browse/HIVE-21254
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21254.1.patch, HIVE-21254.2.patch, 
> HIVE-21254.3.patch, HIVE-21254.4.patch, HIVE-21254.5.patch, 
> HIVE-21254.6.patch, HIVE-21254.7.patch, HIVE-21254.8.patch
>
>
> When an exception like AccessControlException is thrown, the pre-upgrade tool 
> fails. If the hive user does not have read access to a database or table (some 
> external tables deny read access to hive), the pre-upgrade tool should just 
> assume they are external tables and move on without failing the pre-upgrade 
> process. 
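
A minimal sketch of the behavior described above (not the actual HIVE-21254 patch; the TableCheck hook and class name are hypothetical): catch access errors per table, log them, and continue instead of aborting the whole pre-upgrade run.

{code:java}
// Sketch only: skip unreadable (likely external) tables instead of failing.
import org.apache.hadoop.security.AccessControlException;

public class PreUpgradeScanSketch {
  interface TableCheck { void run() throws Exception; }   // hypothetical per-table hook

  static void checkTableSafely(String db, String table, TableCheck check) {
    try {
      check.run();
    } catch (AccessControlException e) {
      // The hive user cannot read this table: treat it as external and skip it.
      System.err.println("Skipping " + db + "." + table + ": " + e.getMessage());
    } catch (Exception e) {
      // Any other per-table failure is also logged and skipped rather than fatal.
      System.err.println("Skipping " + db + "." + table + " due to error: " + e.getMessage());
    }
  }
}
{code}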



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21240) JSON SerDe Re-Write

2019-02-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-21240:
---
Attachment: HIVE-24240.8.patch

> JSON SerDe Re-Write
> ---
>
> Key: HIVE-21240
> URL: https://issues.apache.org/jira/browse/HIVE-21240
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 4.0.0, 3.1.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21240.1.patch, HIVE-21240.1.patch, 
> HIVE-21240.2.patch, HIVE-21240.3.patch, HIVE-21240.4.patch, 
> HIVE-21240.5.patch, HIVE-21240.6.patch, HIVE-21240.7.patch, 
> HIVE-24240.8.patch, HIVE-24240.8.patch, HIVE-24240.8.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The JSON SerDe has a few issues; I will link them to this JIRA.
> * Use the Jackson tree parser instead of parsing manually
> * Added support for base-64 encoded data (the expected format when using JSON)
> * Added support to skip blank lines (returns all columns as null values)
> * The current JSON parser accepts, but does not apply, custom timestamp formats 
> in most cases
> * Added some unit tests
> * Added a cache for column-name to column-index lookups, which are currently 
> O(n) per column for each row processed



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21240) JSON SerDe Re-Write

2019-02-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-21240:
---
Status: Open  (was: Patch Available)

> JSON SerDe Re-Write
> ---
>
> Key: HIVE-21240
> URL: https://issues.apache.org/jira/browse/HIVE-21240
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 3.1.1, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21240.1.patch, HIVE-21240.1.patch, 
> HIVE-21240.2.patch, HIVE-21240.3.patch, HIVE-21240.4.patch, 
> HIVE-21240.5.patch, HIVE-21240.6.patch, HIVE-21240.7.patch, 
> HIVE-24240.8.patch, HIVE-24240.8.patch, HIVE-24240.8.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The JSON SerDe has a few issues; I will link them to this JIRA.
> * Use the Jackson tree parser instead of parsing manually
> * Added support for base-64 encoded data (the expected format when using JSON)
> * Added support to skip blank lines (returns all columns as null values)
> * The current JSON parser accepts, but does not apply, custom timestamp formats 
> in most cases
> * Added some unit tests
> * Added a cache for column-name to column-index lookups, which are currently 
> O(n) per column for each row processed



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21240) JSON SerDe Re-Write

2019-02-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-21240:
---
Status: Patch Available  (was: Open)

> JSON SerDe Re-Write
> ---
>
> Key: HIVE-21240
> URL: https://issues.apache.org/jira/browse/HIVE-21240
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 3.1.1, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21240.1.patch, HIVE-21240.1.patch, 
> HIVE-21240.2.patch, HIVE-21240.3.patch, HIVE-21240.4.patch, 
> HIVE-21240.5.patch, HIVE-21240.6.patch, HIVE-21240.7.patch, 
> HIVE-24240.8.patch, HIVE-24240.8.patch, HIVE-24240.8.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The JSON SerDe has a few issues; I will link them to this JIRA.
> * Use the Jackson tree parser instead of parsing manually
> * Added support for base-64 encoded data (the expected format when using JSON)
> * Added support to skip blank lines (returns all columns as null values)
> * The current JSON parser accepts, but does not apply, custom timestamp formats 
> in most cases
> * Added some unit tests
> * Added a cache for column-name to column-index lookups, which are currently 
> O(n) per column for each row processed



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21270) A UDTF to show schema (column names and types) of given query

2019-02-15 Thread Shubham Chaurasia (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shubham Chaurasia updated HIVE-21270:
-
Attachment: HIVE-21270.6.patch

> A UDTF to show schema (column names and types) of given query
> -
>
> Key: HIVE-21270
> URL: https://issues.apache.org/jira/browse/HIVE-21270
> Project: Hive
>  Issue Type: New Feature
>  Components: UDF
>Affects Versions: 4.0.0
>Reporter: Shubham Chaurasia
>Assignee: Shubham Chaurasia
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21270.1.patch, HIVE-21270.2.patch, 
> HIVE-21270.3.patch, HIVE-21270.4.patch, HIVE-21270.5.patch, HIVE-21270.6.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> We can get ResultSet metadata using {{ResultSet#getMetaData()}}, but JDBC 
> provides no way of getting the nested data types (of columns) associated with 
> it. This UDTF helps to retrieve each column name and its data type.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21254) Pre-upgrade tool should handle exceptions and skip db/tables

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769967#comment-16769967
 ] 

Hive QA commented on HIVE-21254:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
21s{color} | {color:blue} upgrade-acid/pre-upgrade in master has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} upgrade-acid/pre-upgrade: The patch generated 33 new + 
267 unchanged - 8 fixed = 300 total (was 275) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16105/dev-support/hive-personality.sh
 |
| git revision | master / aaf01ae |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16105/yetus/diff-checkstyle-upgrade-acid_pre-upgrade.txt
 |
| modules | C: upgrade-acid/pre-upgrade U: upgrade-acid/pre-upgrade |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16105/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Pre-upgrade tool should handle exceptions and skip db/tables
> 
>
> Key: HIVE-21254
> URL: https://issues.apache.org/jira/browse/HIVE-21254
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21254.1.patch, HIVE-21254.2.patch, 
> HIVE-21254.3.patch, HIVE-21254.4.patch, HIVE-21254.5.patch, 
> HIVE-21254.6.patch, HIVE-21254.7.patch, HIVE-21254.8.patch
>
>
> When an exception like AccessControlException is thrown, the pre-upgrade tool 
> fails. If the hive user does not have read access to a database or table (some 
> external tables deny read access to hive), the pre-upgrade tool should just 
> assume they are external tables and move on without failing the pre-upgrade 
> process. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16924) Support distinct in presence of Group By

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-16924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769956#comment-16769956
 ] 

Hive QA commented on HIVE-16924:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12958930/HIVE-16924.08.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15796 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16104/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16104/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16104/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12958930 - PreCommit-HIVE-Build

> Support distinct in presence of Group By 
> -
>
> Key: HIVE-16924
> URL: https://issues.apache.org/jira/browse/HIVE-16924
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Planning
>Reporter: Carter Shanklin
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-16924.01.patch, HIVE-16924.02.patch, 
> HIVE-16924.03.patch, HIVE-16924.04.patch, HIVE-16924.05.patch, 
> HIVE-16924.06.patch, HIVE-16924.07.patch, HIVE-16924.08.patch
>
>
> {code:sql}
> create table e011_01 (c1 int, c2 smallint);
> insert into e011_01 values (1, 1), (2, 2);
> {code}
> These queries should work:
> {code:sql}
> select distinct c1, count(*) from e011_01 group by c1;
> select distinct c1, avg(c2) from e011_01 group by c1;
> {code}
> Currently, you get:
> FAILED: SemanticException 1:52 SELECT DISTINCT and GROUP BY can not be in the 
> same query. Error encountered near token 'c1'



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16924) Support distinct in presence of Group By

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-16924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769955#comment-16769955
 ] 

Hive QA commented on HIVE-16924:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
46s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
50s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
58s{color} | {color:blue} ql in master has 2262 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
40s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
46s{color} | {color:red} ql: The patch generated 13 new + 639 unchanged - 13 
fixed = 652 total (was 652) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  2m  
6s{color} | {color:red} root: The patch generated 13 new + 647 unchanged - 13 
fixed = 660 total (was 660) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
11s{color} | {color:green} ql generated 0 new + 2260 unchanged - 2 fixed = 2260 
total (was 2262) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16104/dev-support/hive-personality.sh
 |
| git revision | master / aaf01ae |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16104/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16104/yetus/diff-checkstyle-root.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16104/yetus/whitespace-eol.txt
 |
| modules | C: ql . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16104/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Support distinct in presence of Group By 
> -
>
> Key: HIVE-16924
> URL: https://issues.apache.org/jira/browse/HIVE-16924
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Planning
>Reporter: Carter Shanklin
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-16924.01.patch, HIVE-16924.02.patch, 
> HIVE-16924.03.patch, HIVE-16924.04.patch, HIVE-16924.05.patch, 
> HIVE-16924.06.patch, HIVE-16924.07.patch, HIVE-16924.08.patch
>
>
> {code:sql}
> create table e011_01 (c1 int, c2 smallint);
> insert into e011_01 values (1, 1), (2, 2);
> {code}
> These

[jira] [Commented] (HIVE-21198) Introduce a database object reference class

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769898#comment-16769898
 ] 

Hive QA commented on HIVE-21198:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12958923/HIVE-21198.1.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 298 failed/errored test(s), 15797 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ctas_blobstore_to_blobstore]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ctas_blobstore_to_hdfs]
 (batchId=278)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ctas_hdfs_to_blobstore]
 (batchId=278)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[allcolref_in_udf] 
(batchId=57)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter1] (batchId=93)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_partition_onto_nocurrent_db]
 (batchId=95)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_partition_update_status]
 (batchId=97)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_table_update_status]
 (batchId=86)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_table_update_status_disable_bitvector]
 (batchId=85)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_table] 
(batchId=23)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[authorization_9] 
(batchId=61)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[authorization_non_id] 
(batchId=1)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[authorization_revoke_table_priv]
 (batchId=75)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_7] 
(batchId=69)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_SortUnionTransposeRule]
 (batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_with_constraints] 
(batchId=74)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cross_product_check_1] 
(batchId=51)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cross_product_check_2] 
(batchId=96)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ctas] (batchId=7)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ctas_colname] 
(batchId=64)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ctas_uses_database_location]
 (batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cte_3] (batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cte_mat_3] (batchId=26)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cte_mat_4] (batchId=6)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cte_mat_5] (batchId=3)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[decimal_join2] 
(batchId=42)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[decimal_serde] 
(batchId=91)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[distinct_66] (batchId=11)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[exec_parallel_column_stats]
 (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[explain_ddl] (batchId=51)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_duplicate_key] 
(batchId=7)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join41] (batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join42] (batchId=26)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join_filters_overlap] 
(batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[lateral_view_outer] 
(batchId=46)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_10] (batchId=46)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_12] (batchId=1)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_13] (batchId=80)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_1] (batchId=91)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_1_newdb] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_3] (batchId=59)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_4] (batchId=29)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_5] (batchId=31)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_8] (batchId=8)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_9] (batchId=86)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_disablecbo_1] 
(batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_disablecbo_3] 
(batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_disablecbo_4] 
(batchId=7)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_mv] (batchId=88)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[merge3] (batchId=64)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_all] (batchId

[jira] [Commented] (HIVE-21198) Introduce a database object reference class

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769868#comment-16769868
 ] 

Hive QA commented on HIVE-21198:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
18s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
32s{color} | {color:blue} storage-api in master has 48 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
4s{color} | {color:blue} ql in master has 2262 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
38s{color} | {color:blue} hcatalog/core in master has 29 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
32s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} storage-api: The patch generated 1 new + 5 unchanged - 
0 fixed = 6 total (was 5) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
58s{color} | {color:red} ql: The patch generated 34 new + 1764 unchanged - 51 
fixed = 1798 total (was 1815) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m  
7s{color} | {color:red} ql generated 1 new + 2260 unchanged - 2 fixed = 2261 
total (was 2262) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Dead store to hiveConf in 
org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeAltertableSkewedby(TableName,
 ASTNode)  At 
DDLSemanticAnalyzer.java:org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeAltertableSkewedby(TableName,
 ASTNode)  At DDLSemanticAnalyzer.java:[line 4110] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16103/dev-support/hive-personality.sh
 |
| git revision | master / aaf01ae |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16103/yetus/diff-checkstyle-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16103/yetus/diff-checkstyle-ql.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16103/yetus/new-findbugs-ql.html
 |
| modules | C: storage-api ql hcatalog/core U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16103/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Introduce a database object reference class
> ---

[jira] [Updated] (HIVE-21279) Avoid moving/rename operation in FileSink op for SELECT queries

2019-02-15 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21279:
---
Attachment: HIVE-21279.1.patch

> Avoid moving/rename operation in FileSink op for SELECT queries
> ---
>
> Key: HIVE-21279
> URL: https://issues.apache.org/jira/browse/HIVE-21279
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21279.1.patch
>
>
> Currently, at the end of a job, the FileSink operator moves/renames the temp 
> directory to another directory from which FetchTask fetches results. This is 
> done to avoid fetching potentially partial/invalid files left by failed/runaway 
> tasks. This operation is expensive on cloud storage. It could be avoided if 
> FetchTask were passed the set of files to read instead of a whole directory.
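A minimal sketch of the approach described above, assuming a hypothetical FileListFetch helper and RowConsumer callback (neither is Hive's actual FetchTask/FileSinkOperator API): the fetch side iterates an explicit list of committed files, so no final directory rename is needed and stray files from failed tasks are never read.

{code:java}
// Sketch only, under the stated assumptions; names here are illustrative, not Hive's API.
import java.util.List;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class FileListFetch {
  private final List<Path> committedFiles;   // produced by successful tasks only

  FileListFetch(List<Path> committedFiles) {
    this.committedFiles = committedFiles;
  }

  void fetch(FileSystem fs, RowConsumer consumer) throws Exception {
    // No directory listing and no final rename: partial files written by
    // failed/runaway tasks are simply never in this list.
    for (Path file : committedFiles) {
      try (java.io.InputStream in = fs.open(file)) {
        consumer.consume(in);
      }
    }
  }

  interface RowConsumer {
    void consume(java.io.InputStream in) throws Exception;
  }
}
{code}

The design choice is that correctness comes from the producer side publishing only committed file names, rather than from an atomic rename of the whole output directory.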



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19117) hiveserver2 org.apache.thrift.transport.TTransportException error when running 2nd query after minute of inactivity

2019-02-15 Thread t oo (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769865#comment-16769865
 ] 

t oo commented on HIVE-19117:
-

Any idea Mr V?

> hiveserver2 org.apache.thrift.transport.TTransportException error when 
> running 2nd query after minute of inactivity
> ---
>
> Key: HIVE-19117
> URL: https://issues.apache.org/jira/browse/HIVE-19117
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2, Metastore, Thrift API
>Affects Versions: 2.1.1
> Environment: * Hive 2.1.1 with hive.server2.transport.mode set to 
> binary (sample JDBC string is jdbc:hive2://remotehost:1/default)
>  * Hadoop 2.8.3
>  * Metastore using MySQL
>  * Java 8
>Reporter: t oo
>Priority: Blocker
>
> I make a JDBC connection from my SQL tool (i.e. Squirrel SQL, Oracle SQL 
> Developer) to HiveServer2 (running on a remote server) with port 1.
> I am able to run some queries successfully. I then do something else (not in 
> the SQL tool) for 1-2 minutes and then return to my SQL tool and attempt to 
> run a query but I get this error: 
> {code:java}
> org.apache.thrift.transport.TTransportException: java.net.SocketException: 
> Software caused connection abort: socket write error{code}
> If I now disconnect and reconnect in my SQL tool I can run queries again. But 
> does anyone know what HiveServer2 settings I should change to prevent the 
> error? I assume something in hive-site.xml
> From the hiveserver2 logs below, one can see an exact 1 minute gap from the 
> 30th min to the 31st min where the disconnect happens.
> {code:java}
> 2018-04-05T03:30:41,706 INFO [HiveServer2-Handler-Pool: Thread-36] 
> session.SessionState: Resetting thread name to HiveServer2-Handler-Pool: 
> Thread-36
>  2018-04-05T03:30:41,712 INFO [HiveServer2-Handler-Pool: Thread-36] 
> session.SessionState: Updating thread name to 
> c81ec0f9-7a9d-46b6-9708-e7d78520a48a HiveServer2-Handler-Pool: Thread-36
>  2018-04-05T03:30:41,712 INFO [HiveServer2-Handler-Pool: Thread-36] 
> session.SessionState: Resetting thread name to HiveServer2-Handler-Pool: 
> Thread-36
>  2018-04-05T03:30:41,718 INFO [HiveServer2-Handler-Pool: Thread-36] 
> session.SessionState: Updating thread name to 
> c81ec0f9-7a9d-46b6-9708-e7d78520a48a HiveServer2-Handler-Pool: Thread-36
>  2018-04-05T03:30:41,719 INFO [HiveServer2-Handler-Pool: Thread-36] 
> session.SessionState: Resetting thread name to HiveServer2-Handler-Pool: 
> Thread-36
>  2018-04-05T03:31:41,232 INFO [HiveServer2-Handler-Pool: Thread-36] 
> thrift.ThriftCLIService: Session disconnected without closing properly.
>  2018-04-05T03:31:41,233 INFO [HiveServer2-Handler-Pool: Thread-36] 
> thrift.ThriftCLIService: Closing the session: SessionHandle 
> [c81ec0f9-7a9d-46b6-9708-e7d78520a48a]
>  2018-04-05T03:31:41,233 INFO [HiveServer2-Handler-Pool: Thread-36] 
> service.CompositeService: Session closed, SessionHandle 
> [c81ec0f9-7a9d-46b6-9708-e7d78520a48a], current sessions:0
>  2018-04-05T03:31:41,233 INFO [HiveServer2-Handler-Pool: Thread-36] 
> session.SessionState: Updating thread name to 
> c81ec0f9-7a9d-46b6-9708-e7d78520a48a HiveServer2-Handler-Pool: Thread-36
>  2018-04-05T03:31:41,233 INFO [HiveServer2-Handler-Pool: Thread-36] 
> session.SessionState: Resetting thread name to HiveServer2-Handler-Pool: 
> Thread-36
>  2018-04-05T03:31:41,233 INFO [HiveServer2-Handler-Pool: Thread-36] 
> session.SessionState: Updating thread name to 
> c81ec0f9-7a9d-46b6-9708-e7d78520a48a HiveServer2-Handler-Pool: Thread-36
>  2018-04-05T03:31:41,233 INFO [HiveServer2-Handler-Pool: Thread-36] 
> session.HiveSessionImpl: Operation log session directory is deleted: 
> /var/hive/hs2log/tmp/c81ec0f9-7a9d-46b6-9708-e7d78520a48a
>  2018-04-05T03:31:41,233 INFO [HiveServer2-Handler-Pool: Thread-36] 
> session.SessionState: Resetting thread name to HiveServer2-Handler-Pool: 
> Thread-36
>  2018-04-05T03:31:41,236 INFO [HiveServer2-Handler-Pool: Thread-36] 
> session.SessionState: Deleted directory: 
> /var/hive/scratch/tmp/anonymous/c81ec0f9-7a9d-46b6-9708-e7d78520a48a on fs 
> with scheme file
>  2018-04-05T03:31:41,236 INFO [HiveServer2-Handler-Pool: Thread-36] 
> session.SessionState: Deleted directory: 
> /var/hive/ec2-user/c81ec0f9-7a9d-46b6-9708-e7d78520a48a on fs with scheme file
>  2018-04-05T03:31:41,236 INFO [HiveServer2-Handler-Pool: Thread-36] 
> hive.metastore: Closed a connection to metastore, current connections: 1{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21279) Avoid moving/rename operation in FileSink op for SELECT queries

2019-02-15 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg reassigned HIVE-21279:
--


> Avoid moving/rename operation in FileSink op for SELECT queries
> ---
>
> Key: HIVE-21279
> URL: https://issues.apache.org/jira/browse/HIVE-21279
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21279.1.patch
>
>
> Currently, at the end of a job, the FileSink operator moves/renames the temp 
> directory to another directory from which FetchTask fetches results. This is 
> done to avoid fetching potentially partial/invalid files left by failed/runaway 
> tasks. This operation is expensive on cloud storage. It could be avoided if 
> FetchTask were passed the set of files to read instead of a whole directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21279) Avoid moving/rename operation in FileSink op for SELECT queries

2019-02-15 Thread Vineet Garg (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-21279:
---
Status: Patch Available  (was: Open)

> Avoid moving/rename operation in FileSink op for SELECT queries
> ---
>
> Key: HIVE-21279
> URL: https://issues.apache.org/jira/browse/HIVE-21279
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-21279.1.patch
>
>
> Currently, at the end of a job, the FileSink operator moves/renames the temp 
> directory to another directory from which FetchTask fetches results. This is 
> done to avoid fetching potentially partial/invalid files left by failed/runaway 
> tasks. This operation is expensive on cloud storage. It could be avoided if 
> FetchTask were passed the set of files to read instead of a whole directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21278) Fix ambiguity in grammar warnings at compilation time

2019-02-15 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-21278:
---
Summary: Fix ambiguity in grammar warnings at compilation time  (was: 
Ambiguity in grammar causes )

> Fix ambiguity in grammar warnings at compilation time
> -
>
> Key: HIVE-21278
> URL: https://issues.apache.org/jira/browse/HIVE-21278
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21278.patch
>
>
> These are the warnings at compilation time:
> {code}
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_DATETIME" using multiple 
> alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_DATE {LPAREN, StringLiteral}" 
> using multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_UNIONTYPE LESSTHAN" using 
> multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK {KW_EXISTS, KW_TINYINT}" using 
> multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_STRUCT LESSTHAN" using multiple 
> alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): IdentifiersParser.g:424:5:
> Decision can match input such as "KW_UNKNOWN" using multiple alternatives: 1, 
> 10
> As a result, alternative(s) 10 were disabled for that input
> {code}
> This means that multiple parser rules can match certain query text, possibly 
> leading to unexpected errors at parsing time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19848) Implement HiveServer2WebUI authentication (Spark has HTTP Basic Auth for its UIs)

2019-02-15 Thread t oo (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769860#comment-16769860
 ] 

t oo commented on HIVE-19848:
-

:(

> Implement HiveServer2WebUI authentication (Spark has HTTP Basic Auth for its 
> UIs)
> -
>
> Key: HIVE-19848
> URL: https://issues.apache.org/jira/browse/HIVE-19848
> Project: Hive
>  Issue Type: New Feature
>  Components: Web UI
>Affects Versions: 2.3.2
>Reporter: t oo
>Priority: Major
>
> Implement HiveServer2WebUI authentication (Spark has HTTP Basic Auth for its 
> UIs)
> We are using Hive on EC2s without EMR/HDFS/Kerberos



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19919) HiveServer2 - expose queryable data dictionary (ie Oracles' ALL_TAB_COLUMNS)

2019-02-15 Thread t oo (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769861#comment-16769861
 ] 

t oo commented on HIVE-19919:
-

gentle ping

> HiveServer2 - expose queryable data dictionary (ie Oracles' ALL_TAB_COLUMNS)
> 
>
> Key: HIVE-19919
> URL: https://issues.apache.org/jira/browse/HIVE-19919
> Project: Hive
>  Issue Type: New Feature
>  Components: SQL
>Affects Versions: 3.0.0, 2.3.2
>Reporter: t oo
>Priority: Major
>
> All major DB vendors have a table like information_schema.columns, 
> all_tab_columns or syscolumns containing table_name, column_name, data_type, 
> col_order. Adding this feature to HiveServer2 would be very convenient for 
> users.
> This information is currently only available in the MySQL metastore (i.e. TBLS 
> and COLS) but should be exposed through the HiveServer2 1 port connection, 
> saving users from needing 2 connections (1 to see data, 1 to see 
> metadata). For security reasons, too, MySQL can then be firewalled from end-users.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HIVE-21278) Ambiguity in grammar causes

2019-02-15 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-21278 started by Jesus Camacho Rodriguez.
--
> Ambiguity in grammar causes 
> 
>
> Key: HIVE-21278
> URL: https://issues.apache.org/jira/browse/HIVE-21278
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>
> These are the warnings at compilation time:
> {code}
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_DATETIME" using multiple 
> alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_DATE {LPAREN, StringLiteral}" 
> using multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_UNIONTYPE LESSTHAN" using 
> multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK {KW_EXISTS, KW_TINYINT}" using 
> multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_STRUCT LESSTHAN" using multiple 
> alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): IdentifiersParser.g:424:5:
> Decision can match input such as "KW_UNKNOWN" using multiple alternatives: 1, 
> 10
> As a result, alternative(s) 10 were disabled for that input
> {code}
> This means that multiple parser rules can match certain query text, possibly 
> leading to unexpected errors at parsing time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21278) Ambiguity in grammar causes

2019-02-15 Thread Jesus Camacho Rodriguez (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769846#comment-16769846
 ] 

Jesus Camacho Rodriguez commented on HIVE-21278:


Ambiguity is easy to fix by just following the SQL spec. In particular, the 
condition in a check constraint should always be within parentheses, and 
'unknown' is a reserved keyword.

> Ambiguity in grammar causes 
> 
>
> Key: HIVE-21278
> URL: https://issues.apache.org/jira/browse/HIVE-21278
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>
> These are the warnings at compilation time:
> {code}
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_DATETIME" using multiple 
> alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_DATE {LPAREN, StringLiteral}" 
> using multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_UNIONTYPE LESSTHAN" using 
> multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK {KW_EXISTS, KW_TINYINT}" using 
> multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_STRUCT LESSTHAN" using multiple 
> alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): IdentifiersParser.g:424:5:
> Decision can match input such as "KW_UNKNOWN" using multiple alternatives: 1, 
> 10
> As a result, alternative(s) 10 were disabled for that input
> {code}
> This means that multiple parser rules can match certain query text, possibly 
> leading to unexpected errors at parsing time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21278) Ambiguity in grammar causes

2019-02-15 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-21278:
---
Attachment: HIVE-21278.patch

> Ambiguity in grammar causes 
> 
>
> Key: HIVE-21278
> URL: https://issues.apache.org/jira/browse/HIVE-21278
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21278.patch
>
>
> These are the warnings at compilation time:
> {code}
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_DATETIME" using multiple 
> alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_DATE {LPAREN, StringLiteral}" 
> using multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_UNIONTYPE LESSTHAN" using 
> multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK {KW_EXISTS, KW_TINYINT}" using 
> multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_STRUCT LESSTHAN" using multiple 
> alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): IdentifiersParser.g:424:5:
> Decision can match input such as "KW_UNKNOWN" using multiple alternatives: 1, 
> 10
> As a result, alternative(s) 10 were disabled for that input
> {code}
> This means that multiple parser rules can match certain query text, possibly 
> leading to unexpected errors at parsing time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21278) Ambiguity in grammar causes

2019-02-15 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-21278:
---
Status: Patch Available  (was: In Progress)

> Ambiguity in grammar causes 
> 
>
> Key: HIVE-21278
> URL: https://issues.apache.org/jira/browse/HIVE-21278
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>
> These are the warnings at compilation time:
> {code}
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_DATETIME" using multiple 
> alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_DATE {LPAREN, StringLiteral}" 
> using multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_UNIONTYPE LESSTHAN" using 
> multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK {KW_EXISTS, KW_TINYINT}" using 
> multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_STRUCT LESSTHAN" using multiple 
> alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): IdentifiersParser.g:424:5:
> Decision can match input such as "KW_UNKNOWN" using multiple alternatives: 1, 
> 10
> As a result, alternative(s) 10 were disabled for that input
> {code}
> This means that multiple parser rules can match certain query text, possibly 
> leading to unexpected errors at parsing time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21278) Ambiguity in grammar causes

2019-02-15 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez reassigned HIVE-21278:
--


> Ambiguity in grammar causes 
> 
>
> Key: HIVE-21278
> URL: https://issues.apache.org/jira/browse/HIVE-21278
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>
> These are the warnings at compilation time:
> {code}
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_DATETIME" using multiple 
> alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_DATE {LPAREN, StringLiteral}" 
> using multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_UNIONTYPE LESSTHAN" using 
> multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK {KW_EXISTS, KW_TINYINT}" using 
> multiple alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): org/apache/hadoop/hive/ql/parse/HiveParser.g:2439:5:
> Decision can match input such as "KW_CHECK KW_STRUCT LESSTHAN" using multiple 
> alternatives: 1, 2
> As a result, alternative(s) 2 were disabled for that input
> warning(200): IdentifiersParser.g:424:5:
> Decision can match input such as "KW_UNKNOWN" using multiple alternatives: 1, 
> 10
> As a result, alternative(s) 10 were disabled for that input
> {code}
> This means that multiple parser rules can match certain query text, possibly 
> leading to unexpected errors at parsing time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19959) 'Hive on Spark' error - org.apache.hive.com.esotericsoftware.kryo.KryoException: Encountered unregistered class ID: 109

2019-02-15 Thread t oo (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769838#comment-16769838
 ] 

t oo commented on HIVE-19959:
-

[~xuefuz] did u encounter this?

> 'Hive on Spark' error - 
> org.apache.hive.com.esotericsoftware.kryo.KryoException: Encountered 
> unregistered class ID: 109
> ---
>
> Key: HIVE-19959
> URL: https://issues.apache.org/jira/browse/HIVE-19959
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 2.3.2, 2.3.3
> Environment: env: Hive 2.3.3, Spark 2.0.0 in standalone mode, scratch 
> dir on S3, Hive table on S3, Hadoop 2.8.3 installed, no HDFS setup
>Reporter: t oo
>Priority: Blocker
>
> Connecting via beeline and running SELECT * works, but when running select 
> count(*) I get the error below:
> 18/05/01 07:41:37 INFO Utilities: Open file to read in plan: 
> s3a://redacted/tmp/31f5ffb5-f318-45f1-b07d-1fac0b406c89/hive_2018-05-01_07-41-09_102_7250900080631620338-
> 2/-mr-10004/bbb93046-5d8f-4b6e-888e-c86bfeb57e3f/map.xml
> 18/05/01 07:41:37 INFO PerfLogger:  from=org.apache.hadoop.hive.ql.exec.Utilities>
> 18/05/01 07:41:37 INFO Utilities: Deserializing MapWork via kryo
> 18/05/01 07:41:37 ERROR Utilities: Failed to load plan: 
> s3a://redacted/tmp/31f5ffb5-f318-45f1-b07d-1fac0b406c89/hive_2018-05-01_07-41-09_102_7250900080631620338-
> 2/-mr-10004/bbb93046-5d8f-4b6e-888e-c86bfeb57e3f/map.xml: 
> org.apache.hive.com.esotericsoftware.kryo.KryoException: Encountered 
> unregistered class ID: 109
> Serialization trace:
> properties (org.apache.hadoop.hive.ql.plan.PartitionDesc)
> aliasToPartnInfo (org.apache.hadoop.hive.ql.plan.MapWork)
> org.apache.hive.com.esotericsoftware.kryo.KryoException: Encountered 
> unregistered class ID: 109
> Serialization trace:
> properties (org.apache.hadoop.hive.ql.plan.PartitionDesc)
> aliasToPartnInfo (org.apache.hadoop.hive.ql.plan.MapWork)
> at 
> org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:119)
> at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:610)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer$ObjectField.read(FieldSerializer.java:599)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:221)
> at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:729)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:134)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:17)
> at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:648)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer$ObjectField.read(FieldSerializer.java:605)
> at 
> org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:221)
> at 
> org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:626)
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.deserializeObjectByKryo(Utilities.java:1082)
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.deserializePlan(Utilities.java:973)
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.deserializePlan(Utilities.java:987)
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:423)
> at org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:302)
> at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:268)
> at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:484)
> at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:477)
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:715)
> at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:246)
> at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:209)
> at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:102)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
> at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
> at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
> at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
> at org.apache.spark.scheduler.Task.run(Task.scala:85)
> at org.apache.spark.executor

[jira] [Commented] (HIVE-20606) hive3.1 beeline to dns complaining about ssl on ip

2019-02-15 Thread t oo (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769837#comment-16769837
 ] 

t oo commented on HIVE-20606:
-

[~krisden] - did u fix?

> hive3.1 beeline to dns complaining about ssl on ip
> --
>
> Key: HIVE-20606
> URL: https://issues.apache.org/jira/browse/HIVE-20606
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline, HiveServer2
>Affects Versions: 3.1.0
>Reporter: t oo
>Priority: Blocker
>
> Why is beeline complaining about the IP when I use the DNS name in the 
> connection? I have a valid cert/JKS for the DNS name. The exact same beeline 
> command worked when running on Hive 2.3.2, but this is Hive 3.1.0.
> [ec2-user@ip-10-1-2-3 logs]$ $HIVE_HOME/bin/beeline
>  SLF4J: Class path contains multiple SLF4J bindings.
>  SLF4J: Found binding in 
> [jar:file:/usr/lib/apache-hive-3.1.0-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in 
> [jar:file:/usr/lib/hadoop-2.7.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: See [http://www.slf4j.org/codes.html#multiple_bindings] for an 
> explanation.
>  SLF4J: Actual binding is of type 
> [org.apache.logging.slf4j.Log4jLoggerFactory]
>  Beeline version 3.1.0 by Apache Hive
>  beeline> !connect 
> jdbc:hive2://mydns:1/default;ssl=true;sslTrustStore=/home/ec2-user/spark_home/conf/app-trust-nonprd.jks;trustStorePassword=changeit
>  userhere passhere
>  Connecting to 
> jdbc:hive2://mydns:1/default;ssl=true;sslTrustStore=/home/ec2-user/spark_home/conf/app-trust-nonprd.jks;trustStorePassword=changeit
>  18/09/20 04:49:06 [main]: WARN jdbc.HiveConnection: Failed to connect to 
> mydns:1
>  Unknown HS2 problem when communicating with Thrift server.
>  Error: Could not open client transport with JDBC Uri: 
> jdbc:hive2://mydns:1/default;ssl=true;sslTrustStore=/home/ec2-user/spark_home/conf/app-trust-nonprd.jks;trustStorePassword=changeit:
>  javax.net.ssl.SSLHandshakeException: 
> java.security.cert.CertificateException: No subject alternative names 
> matching IP address 10.1.2.3 found (state=08S01,code=0)
>  beeline>
>  
>  
>  
>  
>  
>  
>  
>  
>  
>  
> hiveserver2 logs:
> 2018-09-20T04:50:16,245 ERROR [HiveServer2-Handler-Pool: Thread-79] 
> server.TThreadPoolServer: Error occurred during processing of message.
> java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: 
> javax.net.ssl.SSLHandshakeException: Remote host closed connection during 
> handshake
>  at 
> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
>  ~[hive-exec-3.1.0.jar:3.1.0]
>  at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
>  ~[hive-exec-3.1.0.jar:3.1.0]
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  ~[?:1.8.0_181]
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  ~[?:1.8.0_181]
>  at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
> Caused by: org.apache.thrift.transport.TTransportException: 
> javax.net.ssl.SSLHandshakeException: Remote host closed connection during 
> handshake
>  at 
> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
>  ~[hive-exec-3.1.0.jar:3.1.0]
>  at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86) 
> ~[hive-exec-3.1.0.jar:3.1.0]
>  at 
> org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:178)
>  ~[hive-exec-3.1.0.jar:3.1.0]
>  at 
> org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
>  ~[hive-exec-3.1.0.jar:3.1.0]
>  at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271) 
> ~[hive-exec-3.1.0.jar:3.1.0]
>  at 
> org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
>  ~[hive-exec-3.1.0.jar:3.1.0]
>  at 
> org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
>  ~[hive-exec-3.1.0.jar:3.1.0]
>  ... 4 more
> Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection 
> during handshake
>  at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1002) 
> ~[?:1.8.0_181]
>  at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1385)
>  ~[?:1.8.0_181]
>  at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:938) 
> ~[?:1.8.0_181]
>  at sun.security.ssl.AppInputStream.read(AppInputStream.java:105) 
> ~[?:1.8.0_181]
>  at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) 
> ~[?:1.8.0_181]
>  at java.io.BufferedInputStream.read1(BufferedInputStream.java:286) 
> ~[?:1.8.0_181]
>  at java.io.BufferedInputStream.read(Bu

[jira] [Commented] (HIVE-21240) JSON SerDe Re-Write

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769834#comment-16769834
 ] 

Hive QA commented on HIVE-21240:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12958920/HIVE-24240.8.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15806 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniHiveKafkaCliDriver.testCliDriver[kafka_storage_handler]
 (batchId=275)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16102/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16102/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16102/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12958920 - PreCommit-HIVE-Build

> JSON SerDe Re-Write
> ---
>
> Key: HIVE-21240
> URL: https://issues.apache.org/jira/browse/HIVE-21240
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 4.0.0, 3.1.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21240.1.patch, HIVE-21240.1.patch, 
> HIVE-21240.2.patch, HIVE-21240.3.patch, HIVE-21240.4.patch, 
> HIVE-21240.5.patch, HIVE-21240.6.patch, HIVE-21240.7.patch, 
> HIVE-24240.8.patch, HIVE-24240.8.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The JSON SerDe has a few issues, I will link them to this JIRA.
> * Use Jackson Tree parser instead of manually parsing
> * Added support for base-64 encoded data (the expected format when using JSON)
> * Added support to skip blank lines (returns all columns as null values)
> * Current JSON parser accepts, but does not apply, custom timestamp formats 
> in most cases
> * Added some unit tests
> * Added cache for column-name to column-index searches, currently O\(n\) for 
> each row processed, for each column in the row
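
Regarding the column-name to column-index cache in the last bullet above: a minimal, illustrative sketch (not the actual JsonSerDe code; the class and method names are hypothetical) of replacing a repeated O(n) scan with a one-time map build, so per-row lookups become O(1).

{code:java}
// Illustrative sketch only, assuming the names below.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class ColumnIndexCache {
  private final Map<String, Integer> indexByName = new HashMap<>();

  ColumnIndexCache(List<String> columnNames) {
    for (int i = 0; i < columnNames.size(); i++) {
      // Hive column names are case-insensitive, so normalize on insert and lookup.
      indexByName.put(columnNames.get(i).toLowerCase(), i);
    }
  }

  /** Returns the column's position, or -1 if the JSON field matches no column. */
  int indexOf(String fieldName) {
    Integer idx = indexByName.get(fieldName.toLowerCase());
    return idx == null ? -1 : idx;
  }
}
{code}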



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21221) Make HS2 and LLAP consistent - Bring up LLAP WebUI in test mode if WebUI port is configured

2019-02-15 Thread Deepak Jaiswal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-21221:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to master.

> Make HS2 and LLAP consistent - Bring up LLAP WebUI in test mode if WebUI port 
> is configured
> ---
>
> Key: HIVE-21221
> URL: https://issues.apache.org/jira/browse/HIVE-21221
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Reporter: Oliver Draese
>Assignee: Oliver Draese
>Priority: Trivial
>  Labels: llap
> Attachments: HIVE-21221.1.patch, HIVE-21221.patch
>
>
> When HiveServer2 comes up, it skips the start of the WebUI if
> 1) hive.in.test is set to true
> AND
> 2) the WebUI port is not specified or default (hive.server2.webui.port)
>  
> Right now, on LLAP daemon start, only condition 1) above (hive is in test) is 
> checked.
> 
> The LLAP daemon startup code (which skips WebUI creation) should be consistent 
> with HS2; therefore, if a port other than the default is specified, the 
> WebUI should also be started in test mode.
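
A hedged sketch of the intended start-up condition; the names are illustrative and not the actual HS2/LLAP code. The point is that HS2 and the LLAP daemon would share the same rule: in test mode, an explicitly configured, non-default port opts the WebUI back in.

{code:java}
// Hypothetical helper showing the shared condition, under the assumptions above.
final class WebUiStartPolicy {
  static boolean shouldStartWebUi(boolean inTest, int configuredPort, int defaultPort) {
    if (!inTest) {
      return true;                       // normal deployments always get a WebUI
    }
    // In test mode, start the WebUI only if a non-default port was configured.
    return configuredPort > 0 && configuredPort != defaultPort;
  }
}
{code}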



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21240) JSON SerDe Re-Write

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769805#comment-16769805
 ] 

Hive QA commented on HIVE-21240:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
46s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
41s{color} | {color:blue} serde in master has 197 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
58s{color} | {color:blue} ql in master has 2262 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
38s{color} | {color:blue} hcatalog/core in master has 29 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} serde: The patch generated 0 new + 4 unchanged - 25 
fixed = 4 total (was 29) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} ql: The patch generated 0 new + 6 unchanged - 5 
fixed = 6 total (was 11) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} The patch core passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} serde generated 0 new + 193 unchanged - 4 fixed = 
193 total (was 197) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
12s{color} | {color:green} ql in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} core in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16102/dev-support/hive-personality.sh
 |
| git revision | master / 4cf320f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: serde ql hcatalog/core U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16102/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> JSON SerDe Re-Write
> ---
>
> Key: HIVE-21240
> URL: https://issues.apache.org/jira/browse/HIVE-21240
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 4.0.0, 3.1.1
>Reporter: BELU

[jira] [Commented] (HIVE-20849) Review of ConstantPropagateProcFactory

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769777#comment-16769777
 ] 

Hive QA commented on HIVE-20849:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12958922/HIVE-20849.6.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16101/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16101/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16101/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-02-15 21:48:28.398
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-16101/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-02-15 21:48:28.402
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 4cf320f HIVE-18890: Lower Logging for "Table not found" Error
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 4cf320f HIVE-18890: Lower Logging for "Table not found" Error
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-02-15 21:48:29.563
+ rm -rf ../yetus_PreCommit-HIVE-Build-16101
+ mkdir ../yetus_PreCommit-HIVE-Build-16101
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-16101
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-16101/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConstantPropagateProcFactory.java:
 does not exist in index
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConstantPropagateProcFactory.java:219
Falling back to three-way merge...
Applied patch to 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConstantPropagateProcFactory.java'
 with conflicts.
Going to apply patch with: git apply -p1
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConstantPropagateProcFactory.java:219
Falling back to three-way merge...
Applied patch to 
'ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConstantPropagateProcFactory.java'
 with conflicts.
U 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/ConstantPropagateProcFactory.java
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf yetus_PreCommit-HIVE-Build-16101
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12958922 - PreCommit-HIVE-Build

> Review of ConstantPropagateProcFactory
> --
>
> Key: HIVE-20849
> URL: https://issues.apache.org/jira/browse/HIVE-20849
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Affects Versions: 3.1.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20849.1.patch, HIVE-20849.1.patch, 
> HIVE-20849.2.patch, HIVE-20849.3.patch, HIVE-20849.4.patch, 
> HIVE-20849.5.patch, HIVE-20849.6.patch, HIVE-20849.6.patch
>
>
> I was looking at this class because it blasts a lot of useless (to an admin) 
> information to the logs.  Especially if the table has a lot of columns, I see 
> big blocks of logging that are meaningless to me.  I request that the logging 
> be toned down to debug, along with some other improvements to the code.
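
Illustrative of the requested change only, not the actual ConstantPropagateProcFactory patch: move bulk per-column diagnostics from info to debug and guard any expensive message construction behind isDebugEnabled(), so wide tables do not flood the logs. The class and message below are hypothetical.

{code:java}
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class LoggingExample {
  private static final Logger LOG = LoggerFactory.getLogger(LoggingExample.class);

  static void logColumnSummary(List<String> columns) {
    if (LOG.isDebugEnabled()) {
      // Only built when debug logging is on.
      LOG.debug("Propagating constants over {} columns: {}", columns.size(), columns);
    }
  }
}
{code}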



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19430) ObjectStore.cleanNotificationEvents OutOfMemory on large number of pending events

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769776#comment-16769776
 ] 

Hive QA commented on HIVE-19430:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12958907/HIVE-19430.03.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15797 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16100/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16100/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16100/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12958907 - PreCommit-HIVE-Build

> ObjectStore.cleanNotificationEvents OutOfMemory on large number of pending 
> events
> -
>
> Key: HIVE-19430
> URL: https://issues.apache.org/jira/browse/HIVE-19430
> Project: Hive
>  Issue Type: Bug
>Reporter: Thejas M Nair
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-19430.01.patch, HIVE-19430.02.patch, 
> HIVE-19430.03.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> If there are large number of events that haven't been cleaned up for some 
> reason, then ObjectStore.cleanNotificationEvents() can run out of memory 
> while it loads all the events to be deleted.
> It should fetch events in batches.
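As a rough sketch of the batching idea described above (the class, helper methods, and batch size are illustrative, not the actual patch):

{code:java}
import java.util.List;

// Hypothetical sketch: delete expired events in fixed-size chunks instead of
// loading every expired event into memory at once.
public abstract class BatchedEventCleaner {

  private static final int BATCH_SIZE = 1000;  // assumed; would be configurable in practice

  public void cleanNotificationEvents(int olderThanSecs) {
    while (true) {
      // Fetch at most BATCH_SIZE expired event ids (stand-in for the real query).
      List<Long> eventIds = fetchExpiredEventIds(olderThanSecs, BATCH_SIZE);
      if (eventIds.isEmpty()) {
        break;                  // nothing left to clean
      }
      deleteEvents(eventIds);   // delete and commit this chunk before fetching the next
    }
  }

  protected abstract List<Long> fetchExpiredEventIds(int olderThanSecs, int limit);

  protected abstract void deleteEvents(List<Long> eventIds);
}
{code}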



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-14269) Performance optimizations for data on S3

2019-02-15 Thread t oo (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-14269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769769#comment-16769769
 ] 

t oo commented on HIVE-14269:
-

gentle ping

> Performance optimizations for data on S3
> 
>
> Key: HIVE-14269
> URL: https://issues.apache.org/jira/browse/HIVE-14269
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Sergio Peña
>Assignee: Sergio Peña
>Priority: Major
>
> Working with tables that reside on Amazon S3 (or any other object store) 
> has several performance impacts when reading or writing data, and also 
> consistency issues.
> This JIRA is an umbrella task to monitor all the performance improvements 
> that can be done in Hive to work better with S3 data.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21254) Pre-upgrade tool should handle exceptions and skip db/tables

2019-02-15 Thread Prasanth Jayachandran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-21254:
-
Attachment: HIVE-21254.8.patch

> Pre-upgrade tool should handle exceptions and skip db/tables
> 
>
> Key: HIVE-21254
> URL: https://issues.apache.org/jira/browse/HIVE-21254
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21254.1.patch, HIVE-21254.2.patch, 
> HIVE-21254.3.patch, HIVE-21254.4.patch, HIVE-21254.5.patch, 
> HIVE-21254.6.patch, HIVE-21254.7.patch, HIVE-21254.8.patch
>
>
> When exceptions like AccessControlException are thrown, the pre-upgrade tool 
> fails. If the hive user does not have read access to a database or its tables 
> (some external tables deny read access to hive), the pre-upgrade tool should 
> just assume they are external tables and move on without failing the 
> pre-upgrade process.
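A minimal sketch of the skip-and-continue behavior described above (class and method names are hypothetical, not the actual tool code):

{code:java}
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical sketch: keep scanning tables even when one of them cannot be
// read (e.g. AccessControlException from HDFS); treat it as external and skip.
public class PreUpgradeScanSketch {

  private static final Logger LOG = LoggerFactory.getLogger(PreUpgradeScanSketch.class);

  interface TableChecker {
    void check(String dbName, String tableName) throws Exception;
  }

  void scanTables(String dbName, List<String> tableNames, TableChecker checker) {
    for (String table : tableNames) {
      try {
        checker.check(dbName, table);
      } catch (Exception e) {
        LOG.warn("Skipping {}.{}; assuming external table: {}", dbName, table, e.getMessage());
      }
    }
  }
}
{code}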



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21254) Pre-upgrade tool should handle exceptions and skip db/tables

2019-02-15 Thread Prasanth Jayachandran (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769763#comment-16769763
 ] 

Prasanth Jayachandran commented on HIVE-21254:
--

Another unrelated failure.

> Pre-upgrade tool should handle exceptions and skip db/tables
> 
>
> Key: HIVE-21254
> URL: https://issues.apache.org/jira/browse/HIVE-21254
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-21254.1.patch, HIVE-21254.2.patch, 
> HIVE-21254.3.patch, HIVE-21254.4.patch, HIVE-21254.5.patch, 
> HIVE-21254.6.patch, HIVE-21254.7.patch, HIVE-21254.8.patch
>
>
> When exceptions like AccessControlException are thrown, the pre-upgrade tool 
> fails. If the hive user does not have read access to a database or its tables 
> (some external tables deny read access to hive), the pre-upgrade tool should 
> just assume they are external tables and move on without failing the 
> pre-upgrade process.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19430) ObjectStore.cleanNotificationEvents OutOfMemory on large number of pending events

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769737#comment-16769737
 ] 

Hive QA commented on HIVE-19430:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
40s{color} | {color:blue} standalone-metastore/metastore-common in master has 
29 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
13s{color} | {color:blue} standalone-metastore/metastore-server in master has 
181 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
28s{color} | {color:blue} hcatalog/server-extensions in master has 3 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
23s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 1 new + 403 unchanged - 0 fixed = 404 total (was 403) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16100/dev-support/hive-personality.sh
 |
| git revision | master / 4cf320f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16100/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt
 |
| modules | C: standalone-metastore/metastore-common 
standalone-metastore/metastore-server hcatalog/server-extensions U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16100/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> ObjectStore.cleanNotificationEvents OutOfMemory on large number of pending 
> events
> -
>
> Key: HIVE-19430
> URL: https://issues.apache.org/jira/browse/HIVE-19430
> Project: Hive
>  Issue Type: Bug
>Reporter: Thejas M Nair
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-19430.01.patch, HIVE-19430.02.patch, 
> HIVE-19430.03.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> If there are large

[jira] [Comment Edited] (HIVE-21232) LLAP: Add a cache-miss friendly split affinity provider

2019-02-15 Thread slim bouguerra (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16767793#comment-16767793
 ] 

slim bouguerra edited comment on HIVE-21232 at 2/15/19 9:01 PM:


[~gopalv] please re-upload to rerun the tests, and please adjust the 
documentation (Javadocs) to the new behavior (it would be very helpful).

Also, please clean up the unused code:

{code}

23 import java.util.SortedSet;
24 import java.util.TreeSet;

{code}

overall +1

 


was (Author: bslim):
[~gopalv] please re-upload to rerun the tests, and please adjust the 
documentation (Javadocs) to the new behavior (it would be very helpful).

overall +1

 

> LLAP: Add a cache-miss friendly split affinity provider
> ---
>
> Key: HIVE-21232
> URL: https://issues.apache.org/jira/browse/HIVE-21232
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: Gopal V
>Assignee: Gopal V
>Priority: Major
> Attachments: HIVE-21232.1.patch
>
>
> If one of the LLAP nodes have data-locality, preferring that over another 
> does have advantages for the first query or a more general cache-miss.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21275) Lower Logging Level in Operator Class

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769702#comment-16769702
 ] 

Hive QA commented on HIVE-21275:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12958898/HIVE-21275.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 15797 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16099/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16099/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16099/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12958898 - PreCommit-HIVE-Build

> Lower Logging Level in Operator Class
> -
>
> Key: HIVE-21275
> URL: https://issues.apache.org/jira/browse/HIVE-21275
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-21275.1.patch
>
>
> There is an incredible amount of logging generated by the {{Operator}} during 
> the Q-Tests.
> I counted more than 1 *million* lines of pretty useless logging.  Please 
> lower to TRACE level.
> {code}
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> {code}
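A sketch of one possible shape for the change (not the actual patch): emit the per-group message at TRACE and guard it, so DEBUG-level Q-test runs no longer produce these lines.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch only: the per-group message is logged at TRACE and guarded, so it
// costs nothing (and prints nothing) when the logger is at DEBUG or above.
public class GroupLoggingSketch {

  private static final Logger LOG = LoggerFactory.getLogger(GroupLoggingSketch.class);

  public void startGroup(String operatorName) {
    if (LOG.isTraceEnabled()) {
      LOG.trace("{}: Starting group", operatorName);
    }
    // ... actual group-start work would go here ...
  }
}
{code}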



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21275) Lower Logging Level in Operator Class

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769655#comment-16769655
 ] 

Hive QA commented on HIVE-21275:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
9s{color} | {color:blue} ql in master has 2262 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16099/dev-support/hive-personality.sh
 |
| git revision | master / 4cf320f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16099/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Lower Logging Level in Operator Class
> -
>
> Key: HIVE-21275
> URL: https://issues.apache.org/jira/browse/HIVE-21275
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-21275.1.patch
>
>
> There is an incredible amount of logging generated by the {{Operator}} during 
> the Q-Tests.
> I counted more than 1 *million* lines of pretty useless logging.  Please 
> lower to TRACE level.
> {code}
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16924) Support distinct in presence of Group By

2019-02-15 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-16924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-16924:
--
Status: Open  (was: Patch Available)

> Support distinct in presence of Group By 
> -
>
> Key: HIVE-16924
> URL: https://issues.apache.org/jira/browse/HIVE-16924
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Planning
>Reporter: Carter Shanklin
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-16924.01.patch, HIVE-16924.02.patch, 
> HIVE-16924.03.patch, HIVE-16924.04.patch, HIVE-16924.05.patch, 
> HIVE-16924.06.patch, HIVE-16924.07.patch, HIVE-16924.08.patch
>
>
> {code:sql}
> create table e011_01 (c1 int, c2 smallint);
> insert into e011_01 values (1, 1), (2, 2);
> {code}
> These queries should work:
> {code:sql}
> select distinct c1, count(*) from e011_01 group by c1;
> select distinct c1, avg(c2) from e011_01 group by c1;
> {code}
> Currently, you get : 
> FAILED: SemanticException 1:52 SELECT DISTINCT and GROUP BY can not be in the 
> same query. Error encountered near token 'c1'



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16924) Support distinct in presence of Group By

2019-02-15 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-16924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-16924:
--
Attachment: HIVE-16924.08.patch

> Support distinct in presence of Group By 
> -
>
> Key: HIVE-16924
> URL: https://issues.apache.org/jira/browse/HIVE-16924
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Planning
>Reporter: Carter Shanklin
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-16924.01.patch, HIVE-16924.02.patch, 
> HIVE-16924.03.patch, HIVE-16924.04.patch, HIVE-16924.05.patch, 
> HIVE-16924.06.patch, HIVE-16924.07.patch, HIVE-16924.08.patch
>
>
> {code:sql}
> create table e011_01 (c1 int, c2 smallint);
> insert into e011_01 values (1, 1), (2, 2);
> {code}
> These queries should work:
> {code:sql}
> select distinct c1, count(*) from e011_01 group by c1;
> select distinct c1, avg(c2) from e011_01 group by c1;
> {code}
> Currently, you get : 
> FAILED: SemanticException 1:52 SELECT DISTINCT and GROUP BY can not be in the 
> same query. Error encountered near token 'c1'



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16924) Support distinct in presence of Group By

2019-02-15 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-16924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-16924:
--
Status: Patch Available  (was: Open)

> Support distinct in presence of Group By 
> -
>
> Key: HIVE-16924
> URL: https://issues.apache.org/jira/browse/HIVE-16924
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Planning
>Reporter: Carter Shanklin
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-16924.01.patch, HIVE-16924.02.patch, 
> HIVE-16924.03.patch, HIVE-16924.04.patch, HIVE-16924.05.patch, 
> HIVE-16924.06.patch, HIVE-16924.07.patch, HIVE-16924.08.patch
>
>
> {code:sql}
> create table e011_01 (c1 int, c2 smallint);
> insert into e011_01 values (1, 1), (2, 2);
> {code}
> These queries should work:
> {code:sql}
> select distinct c1, count(*) from e011_01 group by c1;
> select distinct c1, avg(c2) from e011_01 group by c1;
> {code}
> Currently, you get : 
> FAILED: SemanticException 1:52 SELECT DISTINCT and GROUP BY can not be in the 
> same query. Error encountered near token 'c1'



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21260) Hive replication to a target with hive.strict.managed.tables enabled is failing when used HMS on postgres.

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769612#comment-16769612
 ] 

Hive QA commented on HIVE-21260:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12958893/HIVE-21260.02.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15797 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.hcatalog.mapreduce.TestHCatMutableNonPartitioned.testHCatNonPartitionedTable[3]
 (batchId=214)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16098/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16098/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16098/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12958893 - PreCommit-HIVE-Build

> Hive replication to a target with hive.strict.managed.tables enabled is 
> failing when used HMS on postgres.
> --
>
> Key: HIVE-21260
> URL: https://issues.apache.org/jira/browse/HIVE-21260
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21260.01.patch, HIVE-21260.02.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Missing quotes in the SQL string are causing an SQL execution error on Postgres.
>  
> {code:java}
> metastore.RetryingHMSHandler (RetryingHMSHandler.java:invokeInternal(201)) - 
> MetaException(message:Unable to update transaction database 
> org.postgresql.util.PSQLException: ERROR: relat
> ion "database_params" does not exist
> Position: 25
> at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2284)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2003)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:200)
> at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:424)
> at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:321)
> at org.postgresql.jdbc.PgStatement.executeQuery(PgStatement.java:284)
> at com.zaxxer.hikari.pool.ProxyStatement.executeQuery(ProxyStatement.java:108)
> at 
> com.zaxxer.hikari.pool.HikariProxyStatement.executeQuery(HikariProxyStatement.java)
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.updateReplId(TxnHandler.java:907)
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.commitTxn(TxnHandler.java:1023)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.commit_txn(HiveMetaStore.java:7703)
> at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
> at com.sun.proxy.$Proxy39.commit_txn(Unknown Source)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$commit_txn.getResult(ThriftHiveMetastore.java:18730)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$commit_txn.getResult(ThriftHiveMetastore.java:18714)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:636)
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:631)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:631)
> at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
> at 
> java.ut
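For context on the "missing quotes" point in the description above (a generic illustration, not the actual metastore change): Postgres folds unquoted identifiers to lower case, so SQL strings built against the upper-case metastore schema need quoted identifiers.

{code:java}
// Generic illustration only (not the actual metastore code). Postgres folds
// unquoted identifiers to lower case, so DATABASE_PARAMS is looked up as
// "database_params" and fails; quoting preserves the original case.
public class IdentifierQuotingSketch {
  public static void main(String[] args) {
    String unquoted = "SELECT PARAM_VALUE FROM DATABASE_PARAMS WHERE PARAM_KEY = ?";             // fails
    String quoted = "SELECT \"PARAM_VALUE\" FROM \"DATABASE_PARAMS\" WHERE \"PARAM_KEY\" = ?";   // works
    System.out.println(unquoted);
    System.out.println(quoted);
  }
}
{code}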

[jira] [Commented] (HIVE-19430) ObjectStore.cleanNotificationEvents OutOfMemory on large number of pending events

2019-02-15 Thread Vihang Karajgaonkar (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769595#comment-16769595
 ] 

Vihang Karajgaonkar commented on HIVE-19430:


Thanks for providing the patch. Looks good to me. One suggestion related to the 
log message.

"LOG.warn("Exception received while cleaning notifications. More details can be 
found in debug mode", ex);"

The log message above is not super helpful, since the logging level is not 
necessarily set to debug when the exception occurs. Since you are already 
attaching the exception trace to the warning, maybe you can remove the "More 
details can be found ..." statement.

> ObjectStore.cleanNotificationEvents OutOfMemory on large number of pending 
> events
> -
>
> Key: HIVE-19430
> URL: https://issues.apache.org/jira/browse/HIVE-19430
> Project: Hive
>  Issue Type: Bug
>Reporter: Thejas M Nair
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-19430.01.patch, HIVE-19430.02.patch, 
> HIVE-19430.03.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> If there are large number of events that haven't been cleaned up for some 
> reason, then ObjectStore.cleanNotificationEvents() can run out of memory 
> while it loads all the events to be deleted.
> It should fetch events in batches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21260) Hive replication to a target with hive.strict.managed.tables enabled is failing when used HMS on postgres.

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769586#comment-16769586
 ] 

Hive QA commented on HIVE-21260:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
25s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m 
13s{color} | {color:blue} standalone-metastore/metastore-server in master has 
181 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
11s{color} | {color:blue} ql in master has 2262 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
43s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16098/dev-support/hive-personality.sh
 |
| git revision | master / 4cf320f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: standalone-metastore/metastore-server ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16098/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Hive replication to a target with hive.strict.managed.tables enabled is 
> failing when used HMS on postgres.
> --
>
> Key: HIVE-21260
> URL: https://issues.apache.org/jira/browse/HIVE-21260
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21260.01.patch, HIVE-21260.02.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Missing quotes in the SQL string are causing an SQL execution error on Postgres.
>  
> {code:java}
> metastore.RetryingHMSHandler (RetryingHMSHandler.java:invokeInternal(201)) - 
> MetaException(message:Unable to update transaction database 
> org.postgresql.util.PSQLException:

[jira] [Updated] (HIVE-21198) Introduce a database object reference class

2019-02-15 Thread David Lavati (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Lavati updated HIVE-21198:

Attachment: HIVE-21198.1.patch
Status: Patch Available  (was: In Progress)

I haven't dug down to every tiny corner and it probably doesn't make sense 
everywhere, but this covers a fairly significant part of the codebase.

> Introduce a database object reference class
> ---
>
> Key: HIVE-21198
> URL: https://issues.apache.org/jira/browse/HIVE-21198
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: David Lavati
>Priority: Major
> Attachments: HIVE-21198.1.patch
>
>
> There are many places in which "{databasename}.{tablename}" is passed as a 
> single string; there are some places where they travel as 2 separate 
> arguments.
> The idea would be to introduce a simple immutable class with 2 fields and 
> pass this information together. Making this better is a prerequisite if we 
> ever want to enable dots in table names; see 
> HIVE-16907, HIVE-21151
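A minimal sketch of such a value class (the name and accessors are illustrative; the real patch may differ):

{code:java}
import java.util.Objects;

// Illustrative immutable (database, table) pair that can be passed around as
// one object instead of a concatenated "db.table" string.
public final class DatabaseObjectRef {

  private final String db;
  private final String table;

  public DatabaseObjectRef(String db, String table) {
    this.db = Objects.requireNonNull(db);
    this.table = Objects.requireNonNull(table);
  }

  public String getDb() {
    return db;
  }

  public String getTable() {
    return table;
  }

  /** For display only; parsing "db.table" back apart is exactly what this class avoids. */
  @Override
  public String toString() {
    return db + "." + table;
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof DatabaseObjectRef)) {
      return false;
    }
    DatabaseObjectRef other = (DatabaseObjectRef) o;
    return db.equals(other.db) && table.equals(other.table);
  }

  @Override
  public int hashCode() {
    return Objects.hash(db, table);
  }
}
{code}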



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21240) JSON SerDe Re-Write

2019-02-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-21240:
---
Attachment: HIVE-24240.8.patch

> JSON SerDe Re-Write
> ---
>
> Key: HIVE-21240
> URL: https://issues.apache.org/jira/browse/HIVE-21240
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 4.0.0, 3.1.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21240.1.patch, HIVE-21240.1.patch, 
> HIVE-21240.2.patch, HIVE-21240.3.patch, HIVE-21240.4.patch, 
> HIVE-21240.5.patch, HIVE-21240.6.patch, HIVE-21240.7.patch, 
> HIVE-24240.8.patch, HIVE-24240.8.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The JSON SerDe has a few issues, I will link them to this JIRA.
> * Use Jackson Tree parser instead of manually parsing
> * Added support for base-64 encoded data (the expected format when using JSON)
> * Added support to skip blank lines (returns all columns as null values)
> * Current JSON parser accepts, but does not apply, custom timestamp formats 
> in most cases
> * Added some unit tests
> * Added cache for column-name to column-index searches, currently O\(n\) for 
> each row processed, for each column in the row
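As a small illustration of the Jackson tree approach from the first bullet (a standalone example, not the SerDe code; the field names are made up):

{code:java}
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Standalone illustration: parse one JSON row into a Jackson tree and read
// fields by name, including a base-64 encoded binary field.
public class JsonRowSketch {

  private static final ObjectMapper MAPPER = new ObjectMapper();

  public static void main(String[] args) throws Exception {
    String line = "{\"name\":\"alice\",\"age\":7,\"blob\":\"aGVsbG8=\"}";
    JsonNode root = MAPPER.readTree(line);

    String name = root.path("name").asText(null);   // missing field yields null
    int age = root.path("age").asInt(0);
    byte[] blob = root.path("blob").binaryValue();  // decodes the base-64 text
    System.out.println(name + " " + age + " " + blob.length);
  }
}
{code}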



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20849) Review of ConstantPropagateProcFactory

2019-02-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-20849:
---
Attachment: HIVE-20849.6.patch

> Review of ConstantPropagateProcFactory
> --
>
> Key: HIVE-20849
> URL: https://issues.apache.org/jira/browse/HIVE-20849
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Affects Versions: 3.1.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20849.1.patch, HIVE-20849.1.patch, 
> HIVE-20849.2.patch, HIVE-20849.3.patch, HIVE-20849.4.patch, 
> HIVE-20849.5.patch, HIVE-20849.6.patch, HIVE-20849.6.patch
>
>
> I was looking at this class because it blasts a lot of useless (to an admin) 
> information to the logs.  Especially if the table has a lot of columns, I see 
> big blocks of logging that are meaningless to me.  I request that the logging 
> be toned down to debug, along with some other improvements to the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20849) Review of ConstantPropagateProcFactory

2019-02-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-20849:
---
Status: Patch Available  (was: Open)

> Review of ConstantPropagateProcFactory
> --
>
> Key: HIVE-20849
> URL: https://issues.apache.org/jira/browse/HIVE-20849
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Affects Versions: 3.1.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20849.1.patch, HIVE-20849.1.patch, 
> HIVE-20849.2.patch, HIVE-20849.3.patch, HIVE-20849.4.patch, 
> HIVE-20849.5.patch, HIVE-20849.6.patch, HIVE-20849.6.patch
>
>
> I was looking at this class because it blasts a lot of useless (to an admin) 
> information to the logs.  Especially if the table has a lot of columns, I see 
> big blocks of logging that are meaningless to me.  I request that the logging 
> be toned down to debug, along with some other improvements to the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-20849) Review of ConstantPropagateProcFactory

2019-02-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-20849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-20849:
---
Status: Open  (was: Patch Available)

> Review of ConstantPropagateProcFactory
> --
>
> Key: HIVE-20849
> URL: https://issues.apache.org/jira/browse/HIVE-20849
> Project: Hive
>  Issue Type: Improvement
>  Components: Logical Optimizer
>Affects Versions: 3.1.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20849.1.patch, HIVE-20849.1.patch, 
> HIVE-20849.2.patch, HIVE-20849.3.patch, HIVE-20849.4.patch, 
> HIVE-20849.5.patch, HIVE-20849.6.patch, HIVE-20849.6.patch
>
>
> I was looking at this class because it blasts a lot of useless (to an admin) 
> information to the logs.  Especially if the table has a lot of columns, I see 
> big blocks of logging that are meaningless to me.  I request that the logging 
> be toned down to debug, along with some other improvements to the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-20895) Utilize Switch Statements in JdbcColumn Class

2019-02-15 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-20895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769569#comment-16769569
 ] 

BELUGA BEHR commented on HIVE-20895:


[~pvary] [~ngangam] Please consider this for inclusion in the project. :)

> Utilize Switch Statements in JdbcColumn Class
> -
>
> Key: HIVE-20895
> URL: https://issues.apache.org/jira/browse/HIVE-20895
> Project: Hive
>  Issue Type: Improvement
>  Components: JDBC
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-20895.1.patch, HIVE-20895.2.patch, 
> HIVE-20895.3.patch, HIVE-20895.4.patch, HIVE-20895.5.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21240) JSON SerDe Re-Write

2019-02-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-21240:
---
Status: Patch Available  (was: Open)

> JSON SerDe Re-Write
> ---
>
> Key: HIVE-21240
> URL: https://issues.apache.org/jira/browse/HIVE-21240
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 3.1.1, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21240.1.patch, HIVE-21240.1.patch, 
> HIVE-21240.2.patch, HIVE-21240.3.patch, HIVE-21240.4.patch, 
> HIVE-21240.5.patch, HIVE-21240.6.patch, HIVE-21240.7.patch, 
> HIVE-24240.8.patch, HIVE-24240.8.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The JSON SerDe has a few issues, I will link them to this JIRA.
> * Use Jackson Tree parser instead of manually parsing
> * Added support for base-64 encoded data (the expected format when using JSON)
> * Added support to skip blank lines (returns all columns as null values)
> * Current JSON parser accepts, but does not apply, custom timestamp formats 
> in most cases
> * Added some unit tests
> * Added cache for column-name to column-index searches, currently O\(n\) for 
> each row processed, for each column in the row



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21240) JSON SerDe Re-Write

2019-02-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-21240:
---
Status: Open  (was: Patch Available)

> JSON SerDe Re-Write
> ---
>
> Key: HIVE-21240
> URL: https://issues.apache.org/jira/browse/HIVE-21240
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 3.1.1, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21240.1.patch, HIVE-21240.1.patch, 
> HIVE-21240.2.patch, HIVE-21240.3.patch, HIVE-21240.4.patch, 
> HIVE-21240.5.patch, HIVE-21240.6.patch, HIVE-21240.7.patch, 
> HIVE-24240.8.patch, HIVE-24240.8.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The JSON SerDe has a few issues, I will link them to this JIRA.
> * Use Jackson Tree parser instead of manually parsing
> * Added support for base-64 encoded data (the expected format when using JSON)
> * Added support to skip blank lines (returns all columns as null values)
> * Current JSON parser accepts, but does not apply, custom timestamp formats 
> in most cases
> * Added some unit tests
> * Added cache for column-name to column-index searches, currently O\(n\) for 
> each row processed, for each column in the row



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16924) Support distinct in presence of Group By

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-16924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769554#comment-16769554
 ] 

Hive QA commented on HIVE-16924:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12958897/HIVE-16924.07.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 15796 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[distinct_groupby] 
(batchId=46)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitions
 (batchId=264)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitionsUnionAll
 (batchId=264)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomNonExistent
 (batchId=264)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighBytesRead 
(batchId=264)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighShuffleBytes
 (batchId=264)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerSlowQueryElapsedTime
 (batchId=264)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerSlowQueryExecutionTime
 (batchId=264)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16097/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16097/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16097/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12958897 - PreCommit-HIVE-Build

> Support distinct in presence of Group By 
> -
>
> Key: HIVE-16924
> URL: https://issues.apache.org/jira/browse/HIVE-16924
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Planning
>Reporter: Carter Shanklin
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-16924.01.patch, HIVE-16924.02.patch, 
> HIVE-16924.03.patch, HIVE-16924.04.patch, HIVE-16924.05.patch, 
> HIVE-16924.06.patch, HIVE-16924.07.patch
>
>
> {code:sql}
> create table e011_01 (c1 int, c2 smallint);
> insert into e011_01 values (1, 1), (2, 2);
> {code}
> These queries should work:
> {code:sql}
> select distinct c1, count(*) from e011_01 group by c1;
> select distinct c1, avg(c2) from e011_01 group by c1;
> {code}
> Currently, you get : 
> FAILED: SemanticException 1:52 SELECT DISTINCT and GROUP BY can not be in the 
> same query. Error encountered near token 'c1'



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16924) Support distinct in presence of Group By

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-16924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769551#comment-16769551
 ] 

Hive QA commented on HIVE-16924:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
51s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
51s{color} | {color:blue} ql in master has 2262 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m  
2s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
50s{color} | {color:red} ql: The patch generated 13 new + 639 unchanged - 13 
fixed = 652 total (was 652) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  2m 
10s{color} | {color:red} root: The patch generated 13 new + 647 unchanged - 13 
fixed = 660 total (was 660) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
13s{color} | {color:green} ql generated 0 new + 2260 unchanged - 2 fixed = 2260 
total (was 2262) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16097/dev-support/hive-personality.sh
 |
| git revision | master / 34db82b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16097/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16097/yetus/diff-checkstyle-root.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16097/yetus/whitespace-eol.txt
 |
| modules | C: ql . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16097/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Support distinct in presence of Group By 
> -
>
> Key: HIVE-16924
> URL: https://issues.apache.org/jira/browse/HIVE-16924
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Planning
>Reporter: Carter Shanklin
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-16924.01.patch, HIVE-16924.02.patch, 
> HIVE-16924.03.patch, HIVE-16924.04.patch, HIVE-16924.05.patch, 
> HIVE-16924.06.patch, HIVE-16924.07.patch
>
>
> {code:sql}
> create table e011_01 (c1 int, c2 smallint);
> insert into e011_01 values (1, 1), (2, 2);
> {code}
> These queries should work:

[jira] [Commented] (HIVE-18890) Lower Logging for "Table not found" Error

2019-02-15 Thread Andrew Sherman (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769512#comment-16769512
 ] 

Andrew Sherman commented on HIVE-18890:
---

Pushed to master. Thanks [~mnarayanan2018] for your contribution.

> Lower Logging for "Table not found" Error
> -
>
> Key: HIVE-18890
> URL: https://issues.apache.org/jira/browse/HIVE-18890
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: Manoj Narayanan
>Priority: Minor
> Attachments: HIVE-18890.1.patch
>
>
> https://github.com/apache/hive/blob/7cb31c03052b815665b3231f2e513b9e65d3ff8c/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java#L1105
> {code:java}
> // Get the table from metastore
> org.apache.hadoop.hive.metastore.api.Table tTable = null;
> try {
>   tTable = getMSC().getTable(dbName, tableName);
> } catch (NoSuchObjectException e) {
>   if (throwException) {
> LOG.error("Table " + tableName + " not found: " + e.getMessage());
> throw new InvalidTableException(tableName);
>   }
>   return null;
> } catch (Exception e) {
>   throw new HiveException("Unable to fetch table " + tableName + ". " + 
> e.getMessage(), e);
> }
> {code}
> We should throw an exception or log it, but not both. Right [~mdrob] ? ;)
> And in this case, we are generating scary ERROR level logging in the 
> HiveServer2 logs needlessly.  This should not be reported as an application 
> error.  It is a simple user error, indicated by catching the 
> _NoSuchObjectException_ Throwable, that can always be ignored by the service. 
>  It is most likely a simple user typo of the table name.  However, the more 
> serious general _Exception_ is not logged.  This is backwards.
> Please remove the _error_ level logging for the user error... or lower it to 
> _debug_ level logging.
> Please add _error_ level logging for the general Exception case, unless this 
> Exception is captured somewhere further up the stack and logged there at 
> ERROR level.
> {code}
> -- Sample log messages found in HS2 logs
> 2018-03-02 10:26:40,363  ERROR hive.ql.metadata.Hive: 
> [HiveServer2-Handler-Pool: Thread-4467]: Table default not found: 
> default.default table not found
> 2018-03-02 10:26:40,367  ERROR hive.ql.metadata.Hive: 
> [HiveServer2-Handler-Pool: Thread-4467]: Table default not found: 
> default.default table not found
> {code}
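A sketch of the requested behavior (stand-in types, not the committed fix): the user-level "table not found" case is logged quietly, while the genuinely unexpected failure is the one that gets ERROR-level logging.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch only: NoSuchObjectException / InvalidTableException / HiveException
// below are stand-ins for the real Hive types.
public class TableLookupLoggingSketch {

  private static final Logger LOG = LoggerFactory.getLogger(TableLookupLoggingSketch.class);

  static class NoSuchObjectException extends Exception { }
  static class InvalidTableException extends RuntimeException {
    InvalidTableException(String table) { super(table); }
  }
  static class HiveException extends RuntimeException {
    HiveException(String msg, Throwable t) { super(msg, t); }
  }

  interface MetastoreClient {
    Object getTable(String dbName, String tableName) throws Exception;
  }

  Object getTable(MetastoreClient msc, String dbName, String tableName, boolean throwException) {
    try {
      return msc.getTable(dbName, tableName);
    } catch (NoSuchObjectException e) {
      // User error (e.g. a typo'd table name): keep it quiet.
      LOG.debug("Table {} not found: {}", tableName, e.getMessage());
      if (throwException) {
        throw new InvalidTableException(tableName);
      }
      return null;
    } catch (Exception e) {
      // Unexpected failure: this is the case worth an ERROR entry (unless a caller logs it).
      LOG.error("Unable to fetch table {}", tableName, e);
      throw new HiveException("Unable to fetch table " + tableName, e);
    }
  }
}
{code}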



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21240) JSON SerDe Re-Write

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769495#comment-16769495
 ] 

Hive QA commented on HIVE-21240:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12958891/HIVE-24240.8.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15806 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniHiveKafkaCliDriver.testCliDriver[kafka_storage_handler]
 (batchId=275)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16096/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16096/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16096/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12958891 - PreCommit-HIVE-Build

> JSON SerDe Re-Write
> ---
>
> Key: HIVE-21240
> URL: https://issues.apache.org/jira/browse/HIVE-21240
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 4.0.0, 3.1.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21240.1.patch, HIVE-21240.1.patch, 
> HIVE-21240.2.patch, HIVE-21240.3.patch, HIVE-21240.4.patch, 
> HIVE-21240.5.patch, HIVE-21240.6.patch, HIVE-21240.7.patch, HIVE-24240.8.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The JSON SerDe has a few issues, I will link them to this JIRA.
> * Use Jackson Tree parser instead of manually parsing
> * Added support for base-64 encoded data (the expected format when using JSON)
> * Added support to skip blank lines (returns all columns as null values)
> * Current JSON parser accepts, but does not apply, custom timestamp formats 
> in most cases
> * Added some unit tests
> * Added cache for column-name to column-index searches, currently O\(n\) for 
> each row processed, for each column in the row



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19430) ObjectStore.cleanNotificationEvents OutOfMemory on large number of pending events

2019-02-15 Thread Sankar Hariappan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769472#comment-16769472
 ] 

Sankar Hariappan commented on HIVE-19430:
-

+1 for 03.patch, pending tests.

> ObjectStore.cleanNotificationEvents OutOfMemory on large number of pending 
> events
> -
>
> Key: HIVE-19430
> URL: https://issues.apache.org/jira/browse/HIVE-19430
> Project: Hive
>  Issue Type: Bug
>Reporter: Thejas M Nair
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-19430.01.patch, HIVE-19430.02.patch, 
> HIVE-19430.03.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> If there are large number of events that haven't been cleaned up for some 
> reason, then ObjectStore.cleanNotificationEvents() can run out of memory 
> while it loads all the events to be deleted.
> It should fetch events in batches.
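
A rough sketch of that batching idea, kept independent of the real ObjectStore/JDO code; the EventStore interface and its two methods are hypothetical stand-ins for the actual metastore queries.

{code:java}
import java.util.List;

/** Illustrative sketch only: delete old events in fixed-size batches instead of loading them all. */
class BatchedEventCleaner {
  private static final int BATCH_SIZE = 1000;

  /** Hypothetical abstraction over the metastore's notification-event storage. */
  interface EventStore {
    List<Long> findEventIdsOlderThan(long minAllowedEventTimeSecs, int limit);
    void deleteEventsById(List<Long> eventIds);
  }

  void clean(EventStore store, long minAllowedEventTimeSecs) {
    while (true) {
      List<Long> batch = store.findEventIdsOlderThan(minAllowedEventTimeSecs, BATCH_SIZE);
      if (batch.isEmpty()) {
        break;                       // nothing left to clean up
      }
      store.deleteEventsById(batch); // at most BATCH_SIZE ids are held in memory at once
    }
  }
}
{code}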



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21260) Hive replication to a target with hive.strict.managed.tables enabled is failing when used HMS on postgres.

2019-02-15 Thread Sankar Hariappan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769470#comment-16769470
 ] 

Sankar Hariappan commented on HIVE-21260:
-

+1 for 02.patch, pending tests.

> Hive replication to a target with hive.strict.managed.tables enabled is 
> failing when used HMS on postgres.
> --
>
> Key: HIVE-21260
> URL: https://issues.apache.org/jira/browse/HIVE-21260
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21260.01.patch, HIVE-21260.02.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Missing quotes in sql string is causing sql execution error for postgres.
>  
> {code:java}
> metastore.RetryingHMSHandler (RetryingHMSHandler.java:invokeInternal(201)) - 
> MetaException(message:Unable to update transaction database 
> org.postgresql.util.PSQLException: ERROR: relat
> ion "database_params" does not exist
> Position: 25
> at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2284)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2003)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:200)
> at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:424)
> at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:321)
> at org.postgresql.jdbc.PgStatement.executeQuery(PgStatement.java:284)
> at com.zaxxer.hikari.pool.ProxyStatement.executeQuery(ProxyStatement.java:108)
> at 
> com.zaxxer.hikari.pool.HikariProxyStatement.executeQuery(HikariProxyStatement.java)
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.updateReplId(TxnHandler.java:907)
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.commitTxn(TxnHandler.java:1023)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.commit_txn(HiveMetaStore.java:7703)
> at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
> at com.sun.proxy.$Proxy39.commit_txn(Unknown Source)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$commit_txn.getResult(ThriftHiveMetastore.java:18730)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$commit_txn.getResult(ThriftHiveMetastore.java:18714)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:636)
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:631)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:631)
> at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> ){code}
>  
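
The error above is the usual PostgreSQL identifier-folding problem: an unquoted DATABASE_PARAMS is looked up as database_params. Below is a minimal sketch of the quoting fix when the SQL string is built in Java; the helper and the column names are illustrative, not the metastore's actual code.

{code:java}
/** Illustrative sketch only: quote identifiers so PostgreSQL keeps their case. */
public class IdentifierQuotingExample {
  // Hypothetical helper; the metastore has its own quoting utilities.
  static String quote(String identifier) {
    return "\"" + identifier + "\"";
  }

  public static void main(String[] args) {
    // Unquoted: PostgreSQL folds the name to database_params and fails as in the stack trace.
    String broken = "SELECT PARAM_VALUE FROM DATABASE_PARAMS WHERE PARAM_KEY = ?";
    // Quoted: the upper-case table created by the metastore schema scripts is found.
    String fixed = "SELECT " + quote("PARAM_VALUE") + " FROM " + quote("DATABASE_PARAMS")
        + " WHERE " + quote("PARAM_KEY") + " = ?";
    System.out.println(broken);
    System.out.println(fixed);
  }
}
{code}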



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21240) JSON SerDe Re-Write

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769466#comment-16769466
 ] 

Hive QA commented on HIVE-21240:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
47s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
41s{color} | {color:blue} serde in master has 197 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
55s{color} | {color:blue} ql in master has 2262 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
37s{color} | {color:blue} hcatalog/core in master has 29 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} serde: The patch generated 0 new + 4 unchanged - 25 
fixed = 4 total (was 29) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} ql: The patch generated 0 new + 6 unchanged - 5 
fixed = 6 total (was 11) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} The patch core passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} serde generated 0 new + 193 unchanged - 4 fixed = 
193 total (was 197) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
27s{color} | {color:green} ql in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} core in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16096/dev-support/hive-personality.sh
 |
| git revision | master / 34db82b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: serde ql hcatalog/core U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16096/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> JSON SerDe Re-Write
> ---
>
> Key: HIVE-21240
> URL: https://issues.apache.org/jira/browse/HIVE-21240
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 4.0.0, 3.1.1
> Reporter: BELUGA BEHR

[jira] [Updated] (HIVE-19430) ObjectStore.cleanNotificationEvents OutOfMemory on large number of pending events

2019-02-15 Thread Ashutosh Bapat (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Bapat updated HIVE-19430:
--
Status: In Progress  (was: Patch Available)

> ObjectStore.cleanNotificationEvents OutOfMemory on large number of pending 
> events
> -
>
> Key: HIVE-19430
> URL: https://issues.apache.org/jira/browse/HIVE-19430
> Project: Hive
>  Issue Type: Bug
>Reporter: Thejas M Nair
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-19430.01.patch, HIVE-19430.02.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> If there are large number of events that haven't been cleaned up for some 
> reason, then ObjectStore.cleanNotificationEvents() can run out of memory 
> while it loads all the events to be deleted.
> It should fetch events in batches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19430) ObjectStore.cleanNotificationEvents OutOfMemory on large number of pending events

2019-02-15 Thread Ashutosh Bapat (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Bapat updated HIVE-19430:
--
Attachment: HIVE-19430.03.patch
Status: Patch Available  (was: In Progress)

Attaching the .02 patch again to trigger ptest. The testcases that failed in 
the last ptest run are passing for me locally.

> ObjectStore.cleanNotificationEvents OutOfMemory on large number of pending 
> events
> -
>
> Key: HIVE-19430
> URL: https://issues.apache.org/jira/browse/HIVE-19430
> Project: Hive
>  Issue Type: Bug
>Reporter: Thejas M Nair
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-19430.01.patch, HIVE-19430.02.patch, 
> HIVE-19430.03.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> If there are large number of events that haven't been cleaned up for some 
> reason, then ObjectStore.cleanNotificationEvents() can run out of memory 
> while it loads all the events to be deleted.
> It should fetch events in batches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21217) Optimize range calculation for PTF

2019-02-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21217?focusedWorklogId=199294&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-199294
 ]

ASF GitHub Bot logged work on HIVE-21217:
-

Author: ASF GitHub Bot
Created on: 15/Feb/19 15:39
Start Date: 15/Feb/19 15:39
Worklog Time Spent: 10m 
  Work Description: pvary commented on pull request #538: HIVE-21217: 
Optimize range calculation for PTF
URL: https://github.com/apache/hive/pull/538#discussion_r257274581
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/udf/ptf/ValueBoundaryScanner.java
 ##
 @@ -44,10 +49,207 @@ public ValueBoundaryScanner(BoundaryDef start, BoundaryDef end, boolean nullsLas
     this.nullsLast = nullsLast;
   }
 
+  public abstract Object computeValue(Object row) throws HiveException;
+
+  /**
+   * Checks if the distance of v2 to v1 is greater than the given amt.
+   * @return True if the value of v1 - v2 is greater than amt or either value is null.
+   */
+  public abstract boolean isDistanceGreater(Object v1, Object v2, int amt);
+
+  /**
+   * Checks if the values of v1 or v2 are the same.
+   * @return True if both values are the same or both are nulls.
+   */
+  public abstract boolean isEqual(Object v1, Object v2);
+
   public abstract int computeStart(int rowIdx, PTFPartition p) throws HiveException;
 
   public abstract int computeEnd(int rowIdx, PTFPartition p) throws HiveException;
 
+  /**
+   * Checks and maintains cache content - optimizes cache window to always be around current row
+   * thereby makes it follow the current progress.
+   * @param rowIdx current row
+   * @param p current partition for the PTF operator
+   * @throws HiveException
+   */
+  public void handleCache(int rowIdx, PTFPartition p) throws HiveException {
+    BoundaryCache cache = p.getBoundaryCache();
+    if (cache == null) {
+      return;
+    }
+
+    //Start of partition
+    if (rowIdx == 0) {
+      cache.clear();
+    }
+    if (cache.isComplete()) {
+      return;
+    }
+
+    int cachePos = cache.approxCachePositionOf(rowIdx);
+
+    if (cache.isEmpty()) {
+      fillCacheUntilEndOrFull(rowIdx, p);
+    } else if (cachePos > 50 && cachePos <= 75) {
+      if (!start.isPreceding() && end.isFollowing()) {
+        cache.evictHalf();
+        fillCacheUntilEndOrFull(rowIdx, p);
+      }
+    } else if (cachePos > 75 && cachePos <= 95) {
+      if (start.isPreceding() && end.isFollowing()) {
+        cache.evictHalf();
+        fillCacheUntilEndOrFull(rowIdx, p);
+      }
+    } else if (cachePos >= 95) {
+      if (start.isPreceding() && !end.isFollowing()) {
+        cache.evictHalf();
+        fillCacheUntilEndOrFull(rowIdx, p);
+      }
+
+    }
+  }
+
+  /**
+   * Inserts values into cache starting from rowIdx in the current partition p. Stops if cache
+   * reaches its maximum size or we get out of rows in p.
+   * @param rowIdx
+   * @param p
+   * @throws HiveException
+   */
+  private void fillCacheUntilEndOrFull(int rowIdx, PTFPartition p) throws HiveException {
+    BoundaryCache cache = p.getBoundaryCache();
+    if (cache == null || p.size() <= 0) {
+      return;
+    }
+
+    //If we continue building cache
+    Map.Entry<Integer, Object> ceilingEntry = cache.getMaxEntry();
+    if (ceilingEntry != null) {
+      rowIdx = ceilingEntry.getKey();
+    }
+
+    Object rowVal = null;
+    Object lastRowVal = null;
+
+    while (rowIdx < p.size()) {
+      rowVal = computeValue(p.getAt(rowIdx));
+      if (!isEqual(rowVal, lastRowVal)){
+        if (!cache.putIfNotFull(rowIdx, rowVal)){
 
 Review comment:
   Shouldn't we detect that the cache is full before continuing to read? Do we end up reading the lines for rowVal twice?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 199294)
Time Spent: 20m  (was: 10m)

> Optimize range calculation for PTF
> --
>
> Key: HIVE-21217
> URL: https://issues.apache.org/jira/browse/HIVE-21217
> Project: Hive
>  Issue Type: Improvement
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21217.0.patch, HIVE-21217.1.patch, 
> HIVE-21217.2.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> During window function execution Hive has to iterate on neighbouring rows of 
> the current row to find the beginning and end of the proper range (on which 
> the aggregation will be executed).
> When we're using range based window

[jira] [Work logged] (HIVE-21217) Optimize range calculation for PTF

2019-02-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21217?focusedWorklogId=199293&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-199293
 ]

ASF GitHub Bot logged work on HIVE-21217:
-

Author: ASF GitHub Bot
Created on: 15/Feb/19 15:39
Start Date: 15/Feb/19 15:39
Worklog Time Spent: 10m 
  Work Description: pvary commented on pull request #538: HIVE-21217: 
Optimize range calculation for PTF
URL: https://github.com/apache/hive/pull/538#discussion_r257278647
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/udf/ptf/ValueBoundaryScanner.java
 ##
 @@ -44,10 +49,207 @@ public ValueBoundaryScanner(BoundaryDef start, BoundaryDef end, boolean nullsLas
     this.nullsLast = nullsLast;
   }
 
+  public abstract Object computeValue(Object row) throws HiveException;
+
+  /**
+   * Checks if the distance of v2 to v1 is greater than the given amt.
+   * @return True if the value of v1 - v2 is greater than amt or either value is null.
+   */
+  public abstract boolean isDistanceGreater(Object v1, Object v2, int amt);
+
+  /**
+   * Checks if the values of v1 or v2 are the same.
+   * @return True if both values are the same or both are nulls.
+   */
+  public abstract boolean isEqual(Object v1, Object v2);
+
   public abstract int computeStart(int rowIdx, PTFPartition p) throws HiveException;
 
   public abstract int computeEnd(int rowIdx, PTFPartition p) throws HiveException;
 
+  /**
+   * Checks and maintains cache content - optimizes cache window to always be around current row
+   * thereby makes it follow the current progress.
+   * @param rowIdx current row
+   * @param p current partition for the PTF operator
+   * @throws HiveException
+   */
+  public void handleCache(int rowIdx, PTFPartition p) throws HiveException {
+    BoundaryCache cache = p.getBoundaryCache();
+    if (cache == null) {
+      return;
+    }
+
+    //Start of partition
+    if (rowIdx == 0) {
+      cache.clear();
+    }
+    if (cache.isComplete()) {
+      return;
+    }
+
+    int cachePos = cache.approxCachePositionOf(rowIdx);
+
+    if (cache.isEmpty()) {
+      fillCacheUntilEndOrFull(rowIdx, p);
+    } else if (cachePos > 50 && cachePos <= 75) {
 
 Review comment:
   This is strange to me. Do we know the size of the window in advance? Shouldn't we size our cache accordingly? If the window size is 5, then we should cache 7 values (1 before, 1 after, and the 5 values).
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 199293)
Time Spent: 20m  (was: 10m)

> Optimize range calculation for PTF
> --
>
> Key: HIVE-21217
> URL: https://issues.apache.org/jira/browse/HIVE-21217
> Project: Hive
>  Issue Type: Improvement
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21217.0.patch, HIVE-21217.1.patch, 
> HIVE-21217.2.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> During window function execution Hive has to iterate on neighbouring rows of 
> the current row to find the beginning and end of the proper range (on which 
> the aggregation will be executed).
> When we're using range based windows and have many rows with a certain key 
> value this can take a lot of time. (e.g. partition size of 80M, in which we 
> have 2 ranges of 40M rows according to the orderby column: within these 40M 
> rowsets we're doing 40M x 40M/2 steps.. which is of n^2 time complexity)
> I propose to introduce a cache that keeps track of already calculated range 
> ends so it can be reused in future scans.
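
A minimal sketch of the proposed cache, assuming its essential content is a mapping from the first row index of each range to that range's order-by value; the real BoundaryCache in the patch carries more state, this only shows the lookup that replaces the repeated linear scans.

{code:java}
import java.util.Map;
import java.util.TreeMap;

/** Illustrative sketch only: remembers where each distinct order-by value starts in a partition. */
class RangeBoundarySketch {
  // first row index of a range -> order-by value of that range
  private final TreeMap<Integer, Object> boundaries = new TreeMap<>();

  void recordBoundary(int rowIdx, Object orderByValue) {
    boundaries.put(rowIdx, orderByValue);
  }

  /** Start of the range containing rowIdx, or -1 if that part of the partition has not been scanned. */
  int rangeStart(int rowIdx) {
    Map.Entry<Integer, Object> floor = boundaries.floorEntry(rowIdx);
    return floor == null ? -1 : floor.getKey();
  }
}
{code}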



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21217) Optimize range calculation for PTF

2019-02-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21217?focusedWorklogId=199295&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-199295
 ]

ASF GitHub Bot logged work on HIVE-21217:
-

Author: ASF GitHub Bot
Created on: 15/Feb/19 15:39
Start Date: 15/Feb/19 15:39
Worklog Time Spent: 10m 
  Work Description: pvary commented on pull request #538: HIVE-21217: 
Optimize range calculation for PTF
URL: https://github.com/apache/hive/pull/538#discussion_r257280122
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/udf/ptf/ValueBoundaryScanner.java
 ##
 @@ -44,10 +49,207 @@ public ValueBoundaryScanner(BoundaryDef start, BoundaryDef end, boolean nullsLas
     this.nullsLast = nullsLast;
   }
 
+  public abstract Object computeValue(Object row) throws HiveException;
+
+  /**
+   * Checks if the distance of v2 to v1 is greater than the given amt.
+   * @return True if the value of v1 - v2 is greater than amt or either value is null.
+   */
+  public abstract boolean isDistanceGreater(Object v1, Object v2, int amt);
+
+  /**
+   * Checks if the values of v1 or v2 are the same.
+   * @return True if both values are the same or both are nulls.
+   */
+  public abstract boolean isEqual(Object v1, Object v2);
+
   public abstract int computeStart(int rowIdx, PTFPartition p) throws HiveException;
 
   public abstract int computeEnd(int rowIdx, PTFPartition p) throws HiveException;
 
+  /**
+   * Checks and maintains cache content - optimizes cache window to always be around current row
+   * thereby makes it follow the current progress.
+   * @param rowIdx current row
+   * @param p current partition for the PTF operator
+   * @throws HiveException
+   */
+  public void handleCache(int rowIdx, PTFPartition p) throws HiveException {
+    BoundaryCache cache = p.getBoundaryCache();
+    if (cache == null) {
+      return;
+    }
+
+    //Start of partition
+    if (rowIdx == 0) {
+      cache.clear();
+    }
+    if (cache.isComplete()) {
+      return;
+    }
+
+    int cachePos = cache.approxCachePositionOf(rowIdx);
+
+    if (cache.isEmpty()) {
+      fillCacheUntilEndOrFull(rowIdx, p);
+    } else if (cachePos > 50 && cachePos <= 75) {
+      if (!start.isPreceding() && end.isFollowing()) {
+        cache.evictHalf();
+        fillCacheUntilEndOrFull(rowIdx, p);
+      }
+    } else if (cachePos > 75 && cachePos <= 95) {
+      if (start.isPreceding() && end.isFollowing()) {
+        cache.evictHalf();
+        fillCacheUntilEndOrFull(rowIdx, p);
+      }
+    } else if (cachePos >= 95) {
+      if (start.isPreceding() && !end.isFollowing()) {
+        cache.evictHalf();
+        fillCacheUntilEndOrFull(rowIdx, p);
+      }
+
+    }
+  }
+
+  /**
+   * Inserts values into cache starting from rowIdx in the current partition p. Stops if cache
+   * reaches its maximum size or we get out of rows in p.
+   * @param rowIdx
+   * @param p
+   * @throws HiveException
+   */
+  private void fillCacheUntilEndOrFull(int rowIdx, PTFPartition p) throws HiveException {
+    BoundaryCache cache = p.getBoundaryCache();
+    if (cache == null || p.size() <= 0) {
+      return;
+    }
+
+    //If we continue building cache
+    Map.Entry<Integer, Object> ceilingEntry = cache.getMaxEntry();
+    if (ceilingEntry != null) {
+      rowIdx = ceilingEntry.getKey();
+    }
+
+    Object rowVal = null;
+    Object lastRowVal = null;
+
+    while (rowIdx < p.size()) {
+      rowVal = computeValue(p.getAt(rowIdx));
+      if (!isEqual(rowVal, lastRowVal)){
+        if (!cache.putIfNotFull(rowIdx, rowVal)){
+          break;
+        }
+      }
+      lastRowVal = rowVal;
+      ++rowIdx;
+
+    }
+    //Signaling end of all rows in a partition
+    if (cache.putIfNotFull(rowIdx, null)) {
+      cache.setComplete(true);
+    }
+  }
+
+  /**
+   * Uses cache content to jump backwards if possible. If not, it steps one back.
+   * @param r
+   * @param p
+   * @return pair of (row we stepped/jumped onto ; row value at this position)
+   * @throws HiveException
+   */
+  protected Pair<Integer, Object> skipOrStepBack(int r, PTFPartition p) throws HiveException {
+    Object rowVal = null;
+    BoundaryCache cache = p.getBoundaryCache();
+
+    Map.Entry<Integer, Object> floorEntry = null;
+    Map.Entry<Integer, Object> ceilingEntry = null;
+
+    if (cache != null) {
+      floorEntry = cache.floorEntry(r);
+      ceilingEntry = cache.ceilingEntry(r);
+    }
+
+    if (floorEntry != null && ceilingEntry != null) {
+      r = floorEntry.getKey() - 1;
+      floorEntry = cache.floorEntry(r);
+      if (floorEntry != null) {
+        rowVal = floorEntry.getValue();
+      } else if (r >= 0){
+        rowVal = computeValue(p.getAt(r));
+      }
+    } else {
+      r--;
+      if (r >= 0) {
+        rowVal = computeValue(p.getAt(r));
+      }
+    }
+    return new ImmutablePair<>(r, rowVal);
+  }
+
+  /**
+   * Uses cache content to jump forward if possible.

[jira] [Commented] (HIVE-21270) A UDTF to show schema (column names and types) of given query

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769420#comment-16769420
 ] 

Hive QA commented on HIVE-21270:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12958884/HIVE-21270.5.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16095/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16095/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16095/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-02-15 15:30:38.672
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-16095/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-02-15 15:30:38.676
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 34db82b HIVE-21239: Beeline help LDAP connection example 
incorrect (Zoltan Chovan via Peter Vary)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 34db82b HIVE-21239: Beeline help LDAP connection example 
incorrect (Zoltan Chovan via Peter Vary)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-02-15 15:30:39.821
+ rm -rf ../yetus_PreCommit-HIVE-Build-16095
+ mkdir ../yetus_PreCommit-HIVE-Build-16095
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-16095
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-16095/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java:530
error: repository lacks the necessary blob to fall back on 3-way merge.
error: ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java: patch 
does not apply
error: 
ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDTFGetSchema.java: 
does not exist in index
error: 
ql/src/test/org/apache/hadoop/hive/ql/udf/generic/TestGenericUDTFGetSchema.java:
 does not exist in index
error: patch failed: ql/src/test/results/clientpositive/show_functions.q.out:109
error: repository lacks the necessary blob to fall back on 3-way merge.
error: ql/src/test/results/clientpositive/show_functions.q.out: patch does not 
apply
fatal: git apply: bad git-diff - inconsistent old filename on line 19
fatal: git apply: bad git-diff - inconsistent old filename on line 19
The patch does not appear to apply with p0, p1, or p2
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf yetus_PreCommit-HIVE-Build-16095
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12958884 - PreCommit-HIVE-Build

> A UDTF to show schema (column names and types) of given query
> -
>
> Key: HIVE-21270
> URL: https://issues.apache.org/jira/browse/HIVE-21270
> Project: Hive
>  Issue Type: New Feature
>  Components: UDF
>Affects Versions: 4.0.0
>Reporter: Shubham Chaurasia
>Assignee: Shubham Chaurasia
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21270.1.patch, HIVE-21270.2.patch, 
> HIVE-21270.3.patch, HIVE-21270.4.patch, HIVE-21270.5.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> We can get ResultSet metadata using \{{ResultSet#getMetaData()}} but JDBC 
> provides no way of getting nested data types (of columns) associated with it. 
>
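
A small sketch of the limitation being described, using plain JDBC; the connection URL and query are placeholders. Per the issue text, the per-column metadata returned here does not expose the nested element types of complex columns.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class PrintResultSetTypes {
  public static void main(String[] args) throws Exception {
    // Placeholder URL and query, for illustration only.
    try (Connection conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default");
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery("SELECT * FROM some_table LIMIT 0")) {
      ResultSetMetaData md = rs.getMetaData();
      for (int i = 1; i <= md.getColumnCount(); i++) {
        // Prints one flat type name per column; the nested types of complex columns
        // (e.g. the struct fields inside an array) are not available through this API.
        System.out.println(md.getColumnName(i) + " : " + md.getColumnTypeName(i));
      }
    }
  }
}
{code}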

[jira] [Commented] (HIVE-19430) ObjectStore.cleanNotificationEvents OutOfMemory on large number of pending events

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769419#comment-16769419
 ] 

Hive QA commented on HIVE-19430:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12958879/HIVE-19430.02.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 15767 tests 
executed
*Failed tests:*
{noformat}
TestMiniLlapLocalCliDriver - did not produce a TEST-*.xml file (likely timed 
out) (batchId=181)

[vector_windowing_expressions.q,tez_union_group_by.q,materialized_view_rewrite_no_join_opt_2.q,vector_like_2.q,llap_acid.q,sqlmerge.q,tez_dynpart_hashjoin_1.q,schema_evol_orc_acid_part_update_llap_io.q,vector_windowing_gby.q,vector_binary_join_groupby.q,runtime_stats_hs2.q,lateral_view.q,optimize_nullscan.q,vectorization_decimal_date.q,schema_evol_orc_nonvec_table_llap_io.q,udaf_all_keyword.q,acid_vectorization_original.q,tez_fsstat.q,vector_fullouter_mapjoin_1_optimized_passthru.q,stats11.q,vector_mapjoin_reduce.q,join_acid_non_acid.q,empty_join.q,vector_groupby_grouping_window.q,auto_join21.q,tez_input_counters.q,vector_groupby_sort_11.q,schema_evol_orc_nonvec_part_all_complex_llap_io.q,orc_ppd_timestamp.q,vector_decimal_1.q]
org.apache.hadoop.hive.metastore.TestObjectStore.testMasterKeyOps (batchId=230)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16094/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16094/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16094/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12958879 - PreCommit-HIVE-Build

> ObjectStore.cleanNotificationEvents OutOfMemory on large number of pending 
> events
> -
>
> Key: HIVE-19430
> URL: https://issues.apache.org/jira/browse/HIVE-19430
> Project: Hive
>  Issue Type: Bug
>Reporter: Thejas M Nair
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-19430.01.patch, HIVE-19430.02.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> If there are large number of events that haven't been cleaned up for some 
> reason, then ObjectStore.cleanNotificationEvents() can run out of memory 
> while it loads all the events to be deleted.
> It should fetch events in batches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19430) ObjectStore.cleanNotificationEvents OutOfMemory on large number of pending events

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769384#comment-16769384
 ] 

Hive QA commented on HIVE-19430:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  2m 
33s{color} | {color:blue} standalone-metastore/metastore-common in master has 
29 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m  
7s{color} | {color:blue} standalone-metastore/metastore-server in master has 
181 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
28s{color} | {color:blue} hcatalog/server-extensions in master has 3 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
20s{color} | {color:red} standalone-metastore/metastore-server: The patch 
generated 1 new + 403 unchanged - 0 fixed = 404 total (was 403) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16094/dev-support/hive-personality.sh
 |
| git revision | master / 34db82b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16094/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt
 |
| modules | C: standalone-metastore/metastore-common 
standalone-metastore/metastore-server hcatalog/server-extensions U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16094/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> ObjectStore.cleanNotificationEvents OutOfMemory on large number of pending 
> events
> -
>
> Key: HIVE-19430
> URL: https://issues.apache.org/jira/browse/HIVE-19430
> Project: Hive
>  Issue Type: Bug
>Reporter: Thejas M Nair
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-19430.01.patch, HIVE-19430.02.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> If there are large number of events that haven't been cleaned up for some 
> reason, then ObjectStore.cleanNotificationEvents() can run out of memory 
> while it loads all the events to be deleted.
> It should fetch events in batches.

[jira] [Updated] (HIVE-21217) Optimize range calculation for PTF

2019-02-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-21217:
--
Labels: pull-request-available  (was: )

> Optimize range calculation for PTF
> --
>
> Key: HIVE-21217
> URL: https://issues.apache.org/jira/browse/HIVE-21217
> Project: Hive
>  Issue Type: Improvement
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21217.0.patch, HIVE-21217.1.patch, 
> HIVE-21217.2.patch
>
>
> During window function execution Hive has to iterate on neighbouring rows of 
> the current row to find the beginning and end of the proper range (on which 
> the aggregation will be executed).
> When we're using range based windows and have many rows with a certain key 
> value this can take a lot of time. (e.g. partition size of 80M, in which we 
> have 2 ranges of 40M rows according to the orderby column: within these 40M 
> rowsets we're doing 40M x 40M/2 steps.. which is of n^2 time complexity)
> I propose to introduce a cache that keeps track of already calculated range 
> ends so it can be reused in future scans.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21275) Lower Logging Level in Operator Class

2019-02-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-21275:
---
Fix Version/s: 3.2.0
   4.0.0

> Lower Logging Level in Operator Class
> -
>
> Key: HIVE-21275
> URL: https://issues.apache.org/jira/browse/HIVE-21275
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-21275.1.patch
>
>
> There is an incredible amount of logging generated by the {{Operator}} during 
> the Q-Tests.
> I counted more than 1 *million* lines of pretty useless logging.  Please 
> lower to TRACE level.
> {code}
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> {code}
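
In SLF4J terms the request is just a level change for these per-group messages, so they no longer appear when the Q-Tests run at DEBUG; the class below is illustrative, not the actual Operator code.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/** Illustrative sketch only: per-group messages logged at TRACE instead of DEBUG. */
class GroupLoggingExample {
  private static final Logger LOG = LoggerFactory.getLogger(GroupLoggingExample.class);

  void startGroup(String operatorName) {
    // Before: LOG.debug("Starting group") fired for every group under the DEBUG level.
    if (LOG.isTraceEnabled()) {
      LOG.trace("{}: Starting group", operatorName);
    }
  }
}
{code}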



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21275) Lower Logging Level in Operator Class

2019-02-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-21275:
---
Description: 
There is an incredible amount of logging generated by the {{Operator}} during 
the Q-Tests.

I counted more than 1 *million* lines of pretty useless logging.  Please lower 
to TRACE level.

{code}
2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
group
2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
Starting group
2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
group
2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
Starting group
2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
group
2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
Starting group
2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
group
2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
Starting group
{code}

  was:
There is an incredible amount of logging generated by the {{Operator}} during 
the Q-Tests.

I counted more than 1 *million* lines of pretty useless logging.  Please lower 
to TRACE level.




> Lower Logging Level in Operator Class
> -
>
> Key: HIVE-21275
> URL: https://issues.apache.org/jira/browse/HIVE-21275
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Priority: Minor
>
> There is an incredible amount of logging generated by the {{Operator}} during 
> the Q-Tests.
> I counted more than 1 *million* lines of pretty useless logging.  Please 
> lower to TRACE level.
> {code}
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16924) Support distinct in presence of Group By

2019-02-15 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-16924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-16924:
--
Attachment: HIVE-16924.07.patch

> Support distinct in presence of Group By 
> -
>
> Key: HIVE-16924
> URL: https://issues.apache.org/jira/browse/HIVE-16924
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Planning
>Reporter: Carter Shanklin
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-16924.01.patch, HIVE-16924.02.patch, 
> HIVE-16924.03.patch, HIVE-16924.04.patch, HIVE-16924.05.patch, 
> HIVE-16924.06.patch, HIVE-16924.07.patch
>
>
> {code:sql}
> create table e011_01 (c1 int, c2 smallint);
> insert into e011_01 values (1, 1), (2, 2);
> {code}
> These queries should work:
> {code:sql}
> select distinct c1, count(*) from e011_01 group by c1;
> select distinct c1, avg(c2) from e011_01 group by c1;
> {code}
> Currently, you get : 
> FAILED: SemanticException 1:52 SELECT DISTINCT and GROUP BY can not be in the 
> same query. Error encountered near token 'c1'



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21275) Lower Logging Level in Operator Class

2019-02-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-21275:
---
Status: Patch Available  (was: Open)

Here is a patch.  Hopefully it will measurably cut down on Q-Test run time.

> Lower Logging Level in Operator Class
> -
>
> Key: HIVE-21275
> URL: https://issues.apache.org/jira/browse/HIVE-21275
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-21275.1.patch
>
>
> There is an incredible amount of logging generated by the {{Operator}} during 
> the Q-Tests.
> I counted more than 1 *million* lines of pretty useless logging.  Please 
> lower to TRACE level.
> {code}
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16924) Support distinct in presence of Group By

2019-02-15 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-16924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-16924:
--
Attachment: (was: HIVE-16924.01.patch)

> Support distinct in presence of Group By 
> -
>
> Key: HIVE-16924
> URL: https://issues.apache.org/jira/browse/HIVE-16924
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Planning
>Reporter: Carter Shanklin
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-16924.01.patch, HIVE-16924.02.patch, 
> HIVE-16924.03.patch, HIVE-16924.04.patch, HIVE-16924.05.patch, 
> HIVE-16924.06.patch, HIVE-16924.07.patch
>
>
> {code:sql}
> create table e011_01 (c1 int, c2 smallint);
> insert into e011_01 values (1, 1), (2, 2);
> {code}
> These queries should work:
> {code:sql}
> select distinct c1, count(*) from e011_01 group by c1;
> select distinct c1, avg(c2) from e011_01 group by c1;
> {code}
> Currently, you get : 
> FAILED: SemanticException 1:52 SELECT DISTINCT and GROUP BY can not be in the 
> same query. Error encountered near token 'c1'



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21217) Optimize range calculation for PTF

2019-02-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21217?focusedWorklogId=199264&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-199264
 ]

ASF GitHub Bot logged work on HIVE-21217:
-

Author: ASF GitHub Bot
Created on: 15/Feb/19 14:40
Start Date: 15/Feb/19 14:40
Worklog Time Spent: 10m 
  Work Description: szlta commented on pull request #538: HIVE-21217: 
Optimize range calculation for PTF
URL: https://github.com/apache/hive/pull/538
 
 
   @pvary 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 199264)
Time Spent: 10m
Remaining Estimate: 0h

> Optimize range calculation for PTF
> --
>
> Key: HIVE-21217
> URL: https://issues.apache.org/jira/browse/HIVE-21217
> Project: Hive
>  Issue Type: Improvement
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21217.0.patch, HIVE-21217.1.patch, 
> HIVE-21217.2.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> During window function execution Hive has to iterate on neighbouring rows of 
> the current row to find the beginning and end of the proper range (on which 
> the aggregation will be executed).
> When we're using range based windows and have many rows with a certain key 
> value this can take a lot of time. (e.g. partition size of 80M, in which we 
> have 2 ranges of 40M rows according to the orderby column: within these 40M 
> rowsets we're doing 40M x 40M/2 steps.. which is of n^2 time complexity)
> I propose to introduce a cache that keeps track of already calculated range 
> ends so it can be reused in future scans.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-21275) Lower Logging Level in Operator Class

2019-02-15 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769372#comment-16769372
 ] 

BELUGA BEHR edited comment on HIVE-21275 at 2/15/19 2:39 PM:
-

Here is a patch.  Hopefully it will measurably cut down on Q-Test run time.

Please consider for 4.x and 3.x lines.


was (Author: belugabehr):
Here is a patch.  Hopefully will measurably cut down on Q-Test run time.

> Lower Logging Level in Operator Class
> -
>
> Key: HIVE-21275
> URL: https://issues.apache.org/jira/browse/HIVE-21275
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-21275.1.patch
>
>
> There is an incredible amount of logging generated by the {{Operator}} during 
> the Q-Tests.
> I counted more than 1 *million* lines of pretty useless logging.  Please 
> lower to TRACE level.
> {code}
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21275) Lower Logging Level in Operator Class

2019-02-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-21275:
---
Attachment: HIVE-21275.1.patch

> Lower Logging Level in Operator Class
> -
>
> Key: HIVE-21275
> URL: https://issues.apache.org/jira/browse/HIVE-21275
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HIVE-21275.1.patch
>
>
> There is an incredible amount of logging generated by the {{Operator}} during 
> the Q-Tests.
> I counted more than 1 *million* lines of pretty useless logging.  Please 
> lower to TRACE level.
> {code}
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21275) Lower Logging Level in Operator Class

2019-02-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR reassigned HIVE-21275:
--

Assignee: BELUGA BEHR

> Lower Logging Level in Operator Class
> -
>
> Key: HIVE-21275
> URL: https://issues.apache.org/jira/browse/HIVE-21275
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 4.0.0, 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
>
> There is an incredible amount of logging generated by the {{Operator}} during 
> the Q-Tests.
> I counted more than 1 *million* lines of pretty useless logging.  Please 
> lower to TRACE level.
> {code}
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.JoinOperator: Starting 
> group
> 2019-02-14T14:25:31,612 DEBUG [pool-69-thread-1] exec.FileSinkOperator: 
> Starting group
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21260) Hive replication to a target with hive.strict.managed.tables enabled is failing when used HMS on postgres.

2019-02-15 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-21260:
---
Attachment: HIVE-21260.02.patch

> Hive replication to a target with hive.strict.managed.tables enabled is 
> failing when used HMS on postgres.
> --
>
> Key: HIVE-21260
> URL: https://issues.apache.org/jira/browse/HIVE-21260
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21260.01.patch, HIVE-21260.02.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Missing quotes in sql string is causing sql execution error for postgres.
>  
> {code:java}
> metastore.RetryingHMSHandler (RetryingHMSHandler.java:invokeInternal(201)) - 
> MetaException(message:Unable to update transaction database 
> org.postgresql.util.PSQLException: ERROR: relat
> ion "database_params" does not exist
> Position: 25
> at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2284)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2003)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:200)
> at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:424)
> at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:321)
> at org.postgresql.jdbc.PgStatement.executeQuery(PgStatement.java:284)
> at com.zaxxer.hikari.pool.ProxyStatement.executeQuery(ProxyStatement.java:108)
> at 
> com.zaxxer.hikari.pool.HikariProxyStatement.executeQuery(HikariProxyStatement.java)
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.updateReplId(TxnHandler.java:907)
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.commitTxn(TxnHandler.java:1023)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.commit_txn(HiveMetaStore.java:7703)
> at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
> at com.sun.proxy.$Proxy39.commit_txn(Unknown Source)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$commit_txn.getResult(ThriftHiveMetastore.java:18730)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$commit_txn.getResult(ThriftHiveMetastore.java:18714)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:636)
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:631)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:631)
> at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> ){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21260) Hive replication to a target with hive.strict.managed.tables enabled is failing when used HMS on postgres.

2019-02-15 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-21260:
---
Attachment: (was: HIVE-21260.02.patch)

> Hive replication to a target with hive.strict.managed.tables enabled is 
> failing when used HMS on postgres.
> --
>
> Key: HIVE-21260
> URL: https://issues.apache.org/jira/browse/HIVE-21260
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21260.01.patch, HIVE-21260.02.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Missing quotes in the SQL string are causing an SQL execution error on Postgres.
>  
> {code:java}
> metastore.RetryingHMSHandler (RetryingHMSHandler.java:invokeInternal(201)) - 
> MetaException(message:Unable to update transaction database 
> org.postgresql.util.PSQLException: ERROR: relat
> ion "database_params" does not exist
> Position: 25
> at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2284)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2003)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:200)
> at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:424)
> at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:321)
> at org.postgresql.jdbc.PgStatement.executeQuery(PgStatement.java:284)
> at com.zaxxer.hikari.pool.ProxyStatement.executeQuery(ProxyStatement.java:108)
> at 
> com.zaxxer.hikari.pool.HikariProxyStatement.executeQuery(HikariProxyStatement.java)
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.updateReplId(TxnHandler.java:907)
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.commitTxn(TxnHandler.java:1023)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.commit_txn(HiveMetaStore.java:7703)
> at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
> at com.sun.proxy.$Proxy39.commit_txn(Unknown Source)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$commit_txn.getResult(ThriftHiveMetastore.java:18730)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$commit_txn.getResult(ThriftHiveMetastore.java:18714)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:636)
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:631)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:631)
> at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> ){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21260) Hive replication to a target with hive.strict.managed.tables enabled is failing when used HMS on postgres.

2019-02-15 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-21260:
---
Attachment: HIVE-21260.02.patch

> Hive replication to a target with hive.strict.managed.tables enabled is 
> failing when used HMS on postgres.
> --
>
> Key: HIVE-21260
> URL: https://issues.apache.org/jira/browse/HIVE-21260
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21260.01.patch, HIVE-21260.02.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Missing quotes in the SQL string are causing an SQL execution error on Postgres.
>  
> {code:java}
> metastore.RetryingHMSHandler (RetryingHMSHandler.java:invokeInternal(201)) - 
> MetaException(message:Unable to update transaction database 
> org.postgresql.util.PSQLException: ERROR: relat
> ion "database_params" does not exist
> Position: 25
> at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2284)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2003)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:200)
> at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:424)
> at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:321)
> at org.postgresql.jdbc.PgStatement.executeQuery(PgStatement.java:284)
> at com.zaxxer.hikari.pool.ProxyStatement.executeQuery(ProxyStatement.java:108)
> at 
> com.zaxxer.hikari.pool.HikariProxyStatement.executeQuery(HikariProxyStatement.java)
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.updateReplId(TxnHandler.java:907)
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.commitTxn(TxnHandler.java:1023)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.commit_txn(HiveMetaStore.java:7703)
> at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
> at com.sun.proxy.$Proxy39.commit_txn(Unknown Source)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$commit_txn.getResult(ThriftHiveMetastore.java:18730)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$commit_txn.getResult(ThriftHiveMetastore.java:18714)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:636)
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:631)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:631)
> at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> ){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21001) Upgrade to calcite-1.18

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769350#comment-16769350
 ] 

Hive QA commented on HIVE-21001:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
50s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
17s{color} | {color:blue} ql in master has 2262 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
29s{color} | {color:blue} accumulo-handler in master has 21 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
35s{color} | {color:blue} hbase-handler in master has 15 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
44s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
38s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
47s{color} | {color:red} ql: The patch generated 3 new + 566 unchanged - 26 
fixed = 569 total (was 592) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  3m 
46s{color} | {color:red} root: The patch generated 3 new + 7811 unchanged - 26 
fixed = 7814 total (was 7837) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  9m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  findbugs  
checkstyle  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-16092/dev-support/hive-personality.sh
 |
| git revision | master / 34db82b |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16092/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16092/yetus/diff-checkstyle-root.txt
 |
| modules | C: ql accumulo-handler hbase-handler . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-16092/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Upgrade to calcite-1.18
> ---
>
> Key: HIVE-21001
> URL: https://issues.apache.org/jira/browse/HIVE-21001
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21001.01.patch, HIVE-21001.01.patch, 
> HIVE-21001.02.patch, HIVE-21001

[jira] [Updated] (HIVE-16924) Support distinct in presence of Group By

2019-02-15 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-16924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-16924:
--
Status: Patch Available  (was: Open)

> Support distinct in presence of Group By 
> -
>
> Key: HIVE-16924
> URL: https://issues.apache.org/jira/browse/HIVE-16924
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Planning
>Reporter: Carter Shanklin
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-16924.01.patch, HIVE-16924.02.patch, 
> HIVE-16924.03.patch, HIVE-16924.04.patch, HIVE-16924.05.patch, 
> HIVE-16924.06.patch, HIVE-16924.07.patch
>
>
> {code:sql}
> create table e011_01 (c1 int, c2 smallint);
> insert into e011_01 values (1, 1), (2, 2);
> {code}
> These queries should work:
> {code:sql}
> select distinct c1, count(*) from e011_01 group by c1;
> select distinct c1, avg(c2) from e011_01 group by c1;
> {code}
> Currently, you get: 
> FAILED: SemanticException 1:52 SELECT DISTINCT and GROUP BY can not be in the 
> same query. Error encountered near token 'c1'
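
For readers hitting this today, a hedged sketch of an equivalent formulation that current Hive already accepts, with DISTINCT applied over a derived table instead of in the same query block as the GROUP BY; the query strings are illustrative and not part of the patch:

{code:java}
public class DistinctOverGroupBySketch {
  // The query the issue asks to allow directly.
  static final String DESIRED =
      "select distinct c1, count(*) from e011_01 group by c1";

  // An equivalent form that parses today: the aggregation happens in a
  // derived table and DISTINCT is applied on top of it.
  static final String WORKAROUND =
      "select distinct c1, cnt from "
          + "(select c1, count(*) as cnt from e011_01 group by c1) t";

  public static void main(String[] args) {
    System.out.println("desired:    " + DESIRED);
    System.out.println("workaround: " + WORKAROUND);
  }
}
{code}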



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16924) Support distinct in presence of Group By

2019-02-15 Thread Miklos Gergely (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-16924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-16924:
--
Attachment: HIVE-16924.01.patch

> Support distinct in presence of Group By 
> -
>
> Key: HIVE-16924
> URL: https://issues.apache.org/jira/browse/HIVE-16924
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Planning
>Reporter: Carter Shanklin
>Assignee: Miklos Gergely
>Priority: Major
> Attachments: HIVE-16924.01.patch, HIVE-16924.01.patch, 
> HIVE-16924.02.patch, HIVE-16924.03.patch, HIVE-16924.04.patch, 
> HIVE-16924.05.patch, HIVE-16924.06.patch
>
>
> {code:sql}
> create table e011_01 (c1 int, c2 smallint);
> insert into e011_01 values (1, 1), (2, 2);
> {code}
> These queries should work:
> {code:sql}
> select distinct c1, count(*) from e011_01 group by c1;
> select distinct c1, avg(c2) from e011_01 group by c1;
> {code}
> Currently, you get: 
> FAILED: SemanticException 1:52 SELECT DISTINCT and GROUP BY can not be in the 
> same query. Error encountered near token 'c1'



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21260) Hive replication to a target with hive.strict.managed.tables enabled is failing when used HMS on postgres.

2019-02-15 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-21260:
---
Status: Patch Available  (was: Open)

> Hive replication to a target with hive.strict.managed.tables enabled is 
> failing when used HMS on postgres.
> --
>
> Key: HIVE-21260
> URL: https://issues.apache.org/jira/browse/HIVE-21260
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21260.01.patch, HIVE-21260.02.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Missing quotes in the SQL string are causing an SQL execution error on Postgres.
>  
> {code:java}
> metastore.RetryingHMSHandler (RetryingHMSHandler.java:invokeInternal(201)) - 
> MetaException(message:Unable to update transaction database 
> org.postgresql.util.PSQLException: ERROR: relat
> ion "database_params" does not exist
> Position: 25
> at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2284)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2003)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:200)
> at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:424)
> at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:321)
> at org.postgresql.jdbc.PgStatement.executeQuery(PgStatement.java:284)
> at com.zaxxer.hikari.pool.ProxyStatement.executeQuery(ProxyStatement.java:108)
> at 
> com.zaxxer.hikari.pool.HikariProxyStatement.executeQuery(HikariProxyStatement.java)
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.updateReplId(TxnHandler.java:907)
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.commitTxn(TxnHandler.java:1023)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.commit_txn(HiveMetaStore.java:7703)
> at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
> at com.sun.proxy.$Proxy39.commit_txn(Unknown Source)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$commit_txn.getResult(ThriftHiveMetastore.java:18730)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$commit_txn.getResult(ThriftHiveMetastore.java:18714)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:636)
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:631)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:631)
> at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> ){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21001) Upgrade to calcite-1.18

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769345#comment-16769345
 ] 

Hive QA commented on HIVE-21001:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12958876/HIVE-21001.29.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16093/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16093/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16093/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL 
https://issues.apache.org/jira/secure/attachment/12958876/HIVE-21001.29.patch 
was found in seen patch url's cache and a test was probably run already on it. 
Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12958876 - PreCommit-HIVE-Build

> Upgrade to calcite-1.18
> ---
>
> Key: HIVE-21001
> URL: https://issues.apache.org/jira/browse/HIVE-21001
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21001.01.patch, HIVE-21001.01.patch, 
> HIVE-21001.02.patch, HIVE-21001.03.patch, HIVE-21001.04.patch, 
> HIVE-21001.05.patch, HIVE-21001.06.patch, HIVE-21001.06.patch, 
> HIVE-21001.07.patch, HIVE-21001.08.patch, HIVE-21001.08.patch, 
> HIVE-21001.08.patch, HIVE-21001.09.patch, HIVE-21001.09.patch, 
> HIVE-21001.09.patch, HIVE-21001.10.patch, HIVE-21001.11.patch, 
> HIVE-21001.12.patch, HIVE-21001.13.patch, HIVE-21001.15.patch, 
> HIVE-21001.16.patch, HIVE-21001.17.patch, HIVE-21001.18.patch, 
> HIVE-21001.18.patch, HIVE-21001.19.patch, HIVE-21001.20.patch, 
> HIVE-21001.21.patch, HIVE-21001.22.patch, HIVE-21001.22.patch, 
> HIVE-21001.22.patch, HIVE-21001.23.patch, HIVE-21001.24.patch, 
> HIVE-21001.26.patch, HIVE-21001.26.patch, HIVE-21001.26.patch, 
> HIVE-21001.26.patch, HIVE-21001.26.patch, HIVE-21001.27.patch, 
> HIVE-21001.28.patch, HIVE-21001.29.patch, HIVE-21001.29.patch
>
>
> XLEAR LIBRARY CACHE 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21001) Upgrade to calcite-1.18

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769344#comment-16769344
 ] 

Hive QA commented on HIVE-21001:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12958876/HIVE-21001.29.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 105 failed/errored test(s), 15797 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[char_udf1] (batchId=98)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[constprog_dp] 
(batchId=20)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[constprog_when_case] 
(batchId=63)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_sort_1_23] 
(batchId=85)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_sort_skew_1_23] 
(batchId=9)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[leadlag] (batchId=61)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[list_bucket_dml_9] 
(batchId=91)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[nested_column_pruning] 
(batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_nested_column_pruning]
 (batchId=54)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_ppd_char] 
(batchId=11)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_15]
 (batchId=94)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_4] 
(batchId=47)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_6] 
(batchId=44)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[pcs] (batchId=54)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join_filter] 
(batchId=62)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_udf_case] 
(batchId=47)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppr_allchildsarenull] 
(batchId=75)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppr_pushdown3] 
(batchId=30)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[semijoin4] (batchId=93)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[semijoin5] (batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[timestamp_ints_casts] 
(batchId=1)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union_remove_6_subq] 
(batchId=42)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_date_1] 
(batchId=23)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_decimal_math_funcs]
 (batchId=25)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_15] 
(batchId=70)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_4] 
(batchId=23)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_6] 
(batchId=29)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_case] 
(batchId=62)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_casts] 
(batchId=89)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_math_funcs] 
(batchId=22)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_string_funcs] 
(batchId=63)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_timestamp_ints_casts]
 (batchId=53)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_expressions]
 (batchId=195)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_extractTime]
 (batchId=195)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_floorTime]
 (batchId=195)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[constraints_optimization]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_1]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[kryo] 
(batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[lineage3] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_rewrite_4]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_rewrite_6]
 (batchId=178)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_rewrite_7]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_rewrite_8]
 (batchId=174)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_ppd_varchar]
 (batchId=178)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation2]
 (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_lifetime]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[semijoin6] 
(batchId=182)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subqu

[jira] [Updated] (HIVE-21260) Hive replication to a target with hive.strict.managed.tables enabled is failing when used HMS on postgres.

2019-02-15 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-21260:
---
Status: Open  (was: Patch Available)

> Hive replication to a target with hive.strict.managed.tables enabled is 
> failing when used HMS on postgres.
> --
>
> Key: HIVE-21260
> URL: https://issues.apache.org/jira/browse/HIVE-21260
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21260.01.patch, HIVE-21260.02.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Missing quotes in the SQL string are causing an SQL execution error on Postgres.
>  
> {code:java}
> metastore.RetryingHMSHandler (RetryingHMSHandler.java:invokeInternal(201)) - 
> MetaException(message:Unable to update transaction database 
> org.postgresql.util.PSQLException: ERROR: relat
> ion "database_params" does not exist
> Position: 25
> at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2284)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2003)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:200)
> at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:424)
> at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:321)
> at org.postgresql.jdbc.PgStatement.executeQuery(PgStatement.java:284)
> at com.zaxxer.hikari.pool.ProxyStatement.executeQuery(ProxyStatement.java:108)
> at 
> com.zaxxer.hikari.pool.HikariProxyStatement.executeQuery(HikariProxyStatement.java)
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.updateReplId(TxnHandler.java:907)
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.commitTxn(TxnHandler.java:1023)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.commit_txn(HiveMetaStore.java:7703)
> at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
> at com.sun.proxy.$Proxy39.commit_txn(Unknown Source)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$commit_txn.getResult(ThriftHiveMetastore.java:18730)
> at 
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$commit_txn.getResult(ThriftHiveMetastore.java:18714)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:636)
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:631)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at 
> org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:631)
> at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> ){code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21240) JSON SerDe Re-Write

2019-02-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-21240:
---
Status: Patch Available  (was: Open)

Fixed checkstyle issues.

> JSON SerDe Re-Write
> ---
>
> Key: HIVE-21240
> URL: https://issues.apache.org/jira/browse/HIVE-21240
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 3.1.1, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21240.1.patch, HIVE-21240.1.patch, 
> HIVE-21240.2.patch, HIVE-21240.3.patch, HIVE-21240.4.patch, 
> HIVE-21240.5.patch, HIVE-21240.6.patch, HIVE-21240.7.patch, HIVE-24240.8.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The JSON SerDe has a few issues, I will link them to this JIRA.
> * Use Jackson Tree parser instead of manually parsing
> * Added support for base-64 encoded data (the expected format when using JSON)
> * Added support to skip blank lines (returns all columns as null values)
> * Current JSON parser accepts, but does not apply, custom timestamp formats 
> in most cases
> * Added some unit tests
> * Added cache for column-name to column-index searches, currently O\(n\) for 
> each row processed, for each column in the row
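
As a reading aid for the first two bullets above, a minimal, self-contained sketch of Jackson tree parsing plus base-64 decoding; the field names and sample line are invented, and this is not the patch's actual SerDe code:

{code:java}
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JacksonTreeSketch {
  public static void main(String[] args) throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    // One JSON "row"; the payload field carries base-64 encoded binary data.
    String line = "{\"name\":\"alice\",\"payload\":\"aGVsbG8=\"}";

    JsonNode root = mapper.readTree(line); // tree parsing, no hand-rolled tokenizer
    String name = root.get("name").asText();
    byte[] payload = Base64.getDecoder().decode(root.get("payload").asText());

    System.out.println(name + " -> " + new String(payload, StandardCharsets.UTF_8));
  }
}
{code}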



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21240) JSON SerDe Re-Write

2019-02-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-21240:
---
Status: Open  (was: Patch Available)

> JSON SerDe Re-Write
> ---
>
> Key: HIVE-21240
> URL: https://issues.apache.org/jira/browse/HIVE-21240
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 3.1.1, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21240.1.patch, HIVE-21240.1.patch, 
> HIVE-21240.2.patch, HIVE-21240.3.patch, HIVE-21240.4.patch, 
> HIVE-21240.5.patch, HIVE-21240.6.patch, HIVE-21240.7.patch, HIVE-24240.8.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The JSON SerDe has a few issues, I will link them to this JIRA.
> * Use Jackson Tree parser instead of manually parsing
> * Added support for base-64 encoded data (the expected format when using JSON)
> * Added support to skip blank lines (returns all columns as null values)
> * Current JSON parser accepts, but does not apply, custom timestamp formats 
> in most cases
> * Added some unit tests
> * Added cache for column-name to column-index searches, currently O\(n\) for 
> each row processed, for each column in the row



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21240) JSON SerDe Re-Write

2019-02-15 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated HIVE-21240:
---
Attachment: HIVE-24240.8.patch

> JSON SerDe Re-Write
> ---
>
> Key: HIVE-21240
> URL: https://issues.apache.org/jira/browse/HIVE-21240
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 4.0.0, 3.1.1
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-21240.1.patch, HIVE-21240.1.patch, 
> HIVE-21240.2.patch, HIVE-21240.3.patch, HIVE-21240.4.patch, 
> HIVE-21240.5.patch, HIVE-21240.6.patch, HIVE-21240.7.patch, HIVE-24240.8.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The JSON SerDe has a few issues, I will link them to this JIRA.
> * Use Jackson Tree parser instead of manually parsing
> * Added support for base-64 encoded data (the expected format when using JSON)
> * Added support to skip blank lines (returns all columns as null values)
> * Current JSON parser accepts, but does not apply, custom timestamp formats 
> in most cases
> * Added some unit tests
> * Added cache for column-name to column-index searches, currently O\(n\) for 
> each row processed, for each column in the row



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19430) ObjectStore.cleanNotificationEvents OutOfMemory on large number of pending events

2019-02-15 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769288#comment-16769288
 ] 

Hive QA commented on HIVE-19430:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12958867/HIVE-19430.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 15 failed/errored test(s), 15797 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[timestamptz_2] 
(batchId=86)
org.apache.hadoop.hive.metastore.TestObjectStore.catalogs (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDatabaseOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDeprecatedConfigIsOverwritten
 (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropParitionsCleanup
 (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropPartitionsCacheCrossSession
 (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSqlErrorMetrics 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testEmptyTrustStoreProps 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testMasterKeyOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testMaxEventResponse 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testPartitionOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testQueryCloseOnError 
(batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testRoleOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testTableOps (batchId=230)
org.apache.hadoop.hive.metastore.TestObjectStore.testUseSSLProperty 
(batchId=230)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/16091/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16091/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16091/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 15 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12958867 - PreCommit-HIVE-Build

> ObjectStore.cleanNotificationEvents OutOfMemory on large number of pending 
> events
> -
>
> Key: HIVE-19430
> URL: https://issues.apache.org/jira/browse/HIVE-19430
> Project: Hive
>  Issue Type: Bug
>Reporter: Thejas M Nair
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-19430.01.patch, HIVE-19430.02.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> If there are a large number of events that haven't been cleaned up for some 
> reason, then ObjectStore.cleanNotificationEvents() can run out of memory 
> while it loads all the events to be deleted.
> It should fetch the events in batches.
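
A minimal sketch of the batching idea, with an in-memory queue standing in for the metastore backend; this is an assumption about the approach, not the ObjectStore API:

{code:java}
import java.util.ArrayDeque;
import java.util.Deque;

public class BatchedEventCleanupSketch {
  private static final int BATCH_SIZE = 1000; // assumed batch size

  public static void main(String[] args) {
    // Stand-in for the pending notification events in the metastore.
    Deque<Long> pendingEventIds = new ArrayDeque<>();
    for (long id = 0; id < 10_500; id++) {
      pendingEventIds.add(id);
    }

    int deleted;
    do {
      deleted = deleteBatch(pendingEventIds, BATCH_SIZE);
      System.out.println("deleted " + deleted + " events in this batch");
    } while (deleted == BATCH_SIZE);
  }

  // Removes at most batchSize events and reports how many were removed,
  // so memory use is bounded by the batch size rather than by the backlog.
  private static int deleteBatch(Deque<Long> events, int batchSize) {
    int count = 0;
    while (count < batchSize && !events.isEmpty()) {
      events.poll();
      count++;
    }
    return count;
  }
}
{code}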



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21270) A UDTF to show schema (column names and types) of given query

2019-02-15 Thread Shubham Chaurasia (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769291#comment-16769291
 ] 

Shubham Chaurasia commented on HIVE-21270:
--

[~rmsm...@gmail.com] Thanks for the review. I have addressed the review 
comments in the new patch.

 

*Function Invocation & Output*

Say the table is: 
{code:java}
create  table t3(c1 int, c2 float, c3 double, c4 string, c5 date, c6 
array, c7 struct, c8 map){code}
Then the get_sql_schema function
{code:java}
select get_sql_schema('select * from t3'){code}
returns the following output:

{code:java}
+-----------+-----------+
| col_name  | col_type  |
+-----------+-----------+
| t3.c1     | int       |
| t3.c2     | float     |
| t3.c3     | double    |
| t3.c4     | string    |
| t3.c5     | date      |
| t3.c6     | array     |
| t3.c7     | struct    |
| t3.c8     | map       |
+-----------+-----------+{code}

> A UDTF to show schema (column names and types) of given query
> -
>
> Key: HIVE-21270
> URL: https://issues.apache.org/jira/browse/HIVE-21270
> Project: Hive
>  Issue Type: New Feature
>  Components: UDF
>Affects Versions: 4.0.0
>Reporter: Shubham Chaurasia
>Assignee: Shubham Chaurasia
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21270.1.patch, HIVE-21270.2.patch, 
> HIVE-21270.3.patch, HIVE-21270.4.patch, HIVE-21270.5.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> We can get ResultSet metadata using {{ResultSet#getMetaData()}}, but JDBC 
> provides no way of getting the nested data types (of columns) associated with it. 
> This UDTF helps to retrieve each column name and its data type.
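
A hedged sketch of how the function would be consumed over JDBC, mirroring the invocation shown in the comment above; the URL is a placeholder and the two-column output follows that example rather than a settled API:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class GetSqlSchemaSketch {
  public static void main(String[] args) throws Exception {
    String url = "jdbc:hive2://localhost:10000/default"; // placeholder HiveServer2 URL
    try (Connection conn = DriverManager.getConnection(url);
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery("select get_sql_schema('select * from t3')")) {
      while (rs.next()) {
        // Column labels follow the example output above.
        System.out.println(rs.getString("col_name") + " : " + rs.getString("col_type"));
      }
    }
  }
}
{code}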



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

